Pro: Windows Server 2008, Server Administrator
Question No: 11 – (Topic 1)
You need to ensure that Admin1 can administer the Web servers to meet the company's technical requirements. To which group should you add Admin1?
A. the Administrators local group on each Web server
B. the Backup Operators domain local group
C. the Backup Operators local group on each Web server
D. the Domain Admins global group
Question No: 12 – (Topic 1)
Your company has a main office and a branch office. Your network contains a single Active Directory domain.
An Active Directory site exists for each office. All domain controllers run Windows Server 2008 R2. You plan to modify the DNS infrastructure. You need to plan the new DNS infrastructure to meet the following requirements:
->Ensure that the DNS service is available even if a single server fails
->Encrypt the synchronization data that is sent between DNS servers
->Support dynamic updates to all DNS servers

What should you include in your plan?
A. Install the DNS Server server role on two servers. Create a primary zone on the DNS server in the main office. Create a secondary zone on the DNS server in the branch office.
B. Install the DNS Server server role on a domain controller in the main office and on a domain controller in the branch office. Configure DNS to use Active Directory integrated zones.
C. Install the DNS Server server role on a domain controller in the main office and on a Read-only Domain Controller (RODC) in the branch office. Configure DNS to use Active Directory integrated zones.
D. Install the DNS Server server role on two servers. Create a primary zone and a GlobalNames zone on the DNS server in the main office. Create a GlobalNames zone on the DNS server in the branch office.
Answer: B

Explanation:
In an ADI primary zone, rather than keeping the old zone file on a disk, the DNS records are stored in the AD, and Active Directory replication is used rather than the old problematic zone transfer. If all DNS servers were to die or become inaccessible, you could simply install DNS on any domain controller (DC) in the domain. The records would be automatically populated and your DNS server would be up without the messy import/export tasks of standard DNS zone files.
Windows 2000 and 2003 allow you to put a standard secondary zone (read only) on a member server and use one of the ADI primary servers as the master.
When you decide which replication scope to choose, consider that the broader the replication scope, the greater the network traffic caused by replication. For example, if you decide to have AD DS-integrated DNS zone data replicated to all DNS servers in the forest, this will produce greater network traffic than replicating the DNS zone data to all DNS servers in a single AD DS domain in that forest.
AD DS-integrated DNS zone data that is stored in an application directory partition is not replicated to the global catalog for the forest. The domain controller that contains the global catalog can also host application directory partitions, but it will not replicate this data to its global catalog.
AD DS-integrated DNS zone data that is stored in a domain partition is replicated to all domain controllers in its AD DS domain, and a portion of this data is stored in the global catalog. This setting is used to support Windows 2000.
If an application directory partition's replication scope replicates across AD DS sites, replication will occur with the same intersite replication schedule as is used for domain partition data.
By default, the Net Logon service registers domain controller locator (Locator) DNS resource records for the application directory partitions that are hosted on a domain controller in the same manner as it registers domain controller locator (Locator) DNS resource records for the domain partition that is hosted on a domain controller.
Close integration with other Windows services, including AD DS, WINS (if enabled), and DHCP (including DHCPv6) ensures that Windows 2008 DNS is dynamic and requires little or no manual configuration. Windows 2008 DNS is fully compliant with the dynamic update protocol defined in RFC 2136. Computers running the DNS Client service register their host names and IPv4 and IPv6 addresses (although not link-local IPv6 addresses) dynamically. You can configure the DNS Server and DNS Client services to perform secure dynamic updates. This ensures that only authenticated users with the appropriate rights can update resource records on the DNS server. Figure 2-22 shows a zone being configured to allow only secure dynamic updates.
Figure 2-22 Allowing only secure dynamic updates

MORE INFO: Dynamic update protocol
For more information about the dynamic update protocol, see http://www.ietf.org/rfc/rfc2136.txt and http://www.ietf.org/rfc/rfc3007

NOTE: Secure dynamic updates
Secure dynamic updates are only available for zones that are integrated with AD DS.
Question No: 13 – (Topic 1)
Your company has Windows Server 2008 R2 file servers.
You need to recommend a data recovery strategy that meets the following requirements:
->Backups must have a minimal impact on performance.
->All data volumes on the file server must be backed up daily.
->If a disk fails, the recovery strategy must allow individual files to be restored.
->Users must be able to retrieve previous versions of files without the intervention of an administrator.

What should you recommend?
A. Deploy File Server Resource Manager (FSRM). Use Windows Server Backup to perform a daily backup to an external disk.
B. Deploy Windows Automated Installation Kit (Windows AIK). Enable shadow copies for the volumes that contain shared user data. Store the shadow copies on a separate physical disk.
C. Use Windows Server Backup to perform a daily backup to an external disk. Enable shadow copies for the volumes that contain shared user data. Store the shadow copies on a separate physical disk.
D. Use Windows Server Backup to perform a daily backup to a remote network share. Enable shadow copies for the volumes that contain shared user data. Store the shadow copies in the default location.
Answer: C

Explanation:
Shadow Copies of Shared Folders
Implementing Shadow Copies of Shared Folders will reduce an administrator’s restoration workload dramatically because it almost entirely eliminates the need for administrator intervention in the recovery of deleted, modified, or corrupted user files. Shadow Copies of Shared Folders works by taking snapshots of files stored in shared folders as they exist at a particular point in time. This point in time is dictated by a schedule; the default schedule for Shadow Copies of Shared Folders takes snapshots at 7:00 A.M. and 12:00 P.M. every weekday. Multiple schedules can be applied to a volume, and the default schedule is actually two schedules applied at the same time.
To enable Shadow Copies of Shared Folders, open Computer Management from the Administrative Tools menu, right-click the Shared Folders node, click All Tasks and then click Configure Shadow Copies. This will bring up the Shadow Copies dialog box, shown in Figure 12-1. This dialog box allows you to enable and disable Shadow Copies on a per- volume basis. It allows you to edit the Shadow Copy of Shared Folder settings for a particular volume. It also allows you to create a shadow copy of a particular volume manually.
Figure 12-1 Enabling Shadow Copies
Enabling Shadow Copies on a volume will automatically generate an initial shadow copy for that volume.
Clicking Settings launches the dialog box shown in Figure 12-2. From this dialog box, you can configure the storage area, the maximum size of the copy store, and the schedule of when copies are taken. Clicking Schedules allows you to configure how often shadow copies are generated. On volumes hosting file shares that contain files that are updated frequently, you would use a frequent shadow copy schedule. On a volume hosting file shares where files are updated less frequently, you should configure a less frequent shadow copy schedule.
Figure 12-2 Shadow Copy settings
When a volume regularly experiences intense read and write operations, such as a commonly used file share, you can mitigate the performance impact of Shadow Copies of Shared Folders by storing the shadow copy data on a separate volume. If a volume has less space available than the set limit, the service will remove the oldest shadow copies that it has stored as a way of freeing up space. Finally, no matter how much free space is available, a maximum of 64 shadow copies can be stored on any one volume. When you consider how scheduling might be configured for a volume, you will realize how this directly influences the length of shadow copy data retention. Where space is available, a schedule where shadow copies are taken once every Monday, Wednesday, and Friday allows shadow copies from 21 weeks previously to be retrieved. The default schedule allows for the retrieval of up to 6 weeks of previous shadow copies.
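The retention figures above follow directly from the 64-copies-per-volume ceiling. A minimal sketch of the arithmetic (plain Python; the copy counts per schedule are taken from the text, the function name is just for illustration):

```python
# Windows keeps at most 64 shadow copies per volume; once the ceiling is
# reached, the oldest copy is deleted to make room for each new one.
MAX_COPIES_PER_VOLUME = 64

def retention_weeks(copies_per_week: int) -> int:
    """Full weeks of history retrievable before the oldest copies age out."""
    return MAX_COPIES_PER_VOLUME // copies_per_week

# Mon/Wed/Fri schedule: 3 copies per week -> about 21 weeks of history.
print(retention_weeks(3))   # 21

# Default schedule: 7:00 A.M. and 12:00 P.M. every weekday -> 10 copies
# per week -> about 6 weeks of history.
print(retention_weeks(10))  # 6
```

The outputs match the 21-week and 6-week figures quoted above, and show why a more frequent schedule trades retention depth for recovery granularity.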
When planning the deployment of Shadow Copies of Shared Folders, it is important to remember that you configure settings on a per-volume basis. This means that the storage area, maximum size, and schedules for different volumes can be completely separate. If
you plan shares in such a way that each volume hosts a single share, you can optimize the shadow copy settings for that share based on how the data is used, rather than trying to compromise in finding an effective schedule for very different shared folder usage patterns.
Quick Check
1. On what basis (server, volume, share, disk, or folder) are Shadow Copies of Shared Folders enabled?
2. What happens to shadow copy data when the volume that hosts it begins to run out of space?
Quick Check Answers
1. Shadow Copies of Shared Folders are enabled on a per-volume basis.
2. The oldest shadow copy data is automatically deleted when volumes begin to run out of space.
Question No: 14 – (Topic 1)
Your network consists of three Active Directory forests. Forest trust relationships exist between all forests. Each forest contains one domain. All domain controllers run Windows Server 2008 R2.
Your company has three network administrators. Each network administrator manages a forest and the Group Policy objects (GPOs) within that forest.
You need to create standard GPOs that the network administrators in each forest will use. The GPOs must meet the following requirements:
->The GPOs must only contain settings for either user configurations or computer configurations.
->The number of GPOs must be minimized.
Which two actions should you perform? (Each correct answer presents part of the solution. Choose two.)
A. Export the new GPOs to .cab files. Ensure that the .cab files are available to the network administrator in each forest.
B. Create two new GPOs. Configure both GPOs to use the required user configurations and the required computer configurations.
C. Create two new GPOs. Configure one GPO to use the required user configuration. Configure the other GPO to use the required computer configuration.
D. Back up the Sysvol folder that is located on the domain controller where the new GPOs were created. Provide the backup to the network administrator in each forest.
Answer: A,C

Explanation:
Export a GPO to a File
Applies To: Windows 7, Windows Server 2008, Windows Server 2008 R2
You can export a controlled Group Policy object (GPO) to a CAB file so that you can copy it to a domain in another forest and import the GPO into Advanced Group Policy Management (AGPM) in that domain. For information about how to import GPO settings into a new or existing GPO, see Import a GPO from a File.
A user account with the Editor or AGPM Administrator (Full Control) role or necessary permissions in Advanced Group Policy Management (AGPM) is required to complete this procedure. Review the details in "Additional considerations" in this topic.
To export a GPO to a file
1. In the Group Policy Management Console tree, click Change Control in the forest and domain in which you want to manage GPOs.
2. On the Contents tab, click the Controlled tab to display the controlled GPOs.
3. Right-click the GPO, and then click Export to.
4. Enter a file name for the file to which you want to export the GPO, and then click Export. If the file does not exist, it is created. If it already exists, it is replaced.
By default, you must be an Editor or an AGPM Administrator (Full Control) to perform this procedure. Specifically, you must have List Contents, Read Settings, and Export GPO permissions for the GPO.
Group Policy sections
Each GPO is built from two sections:
Computer configuration contains the settings that are applied to the computer, before user logon.
User configuration contains the settings that are applied to the user after logon. You cannot choose to apply a setting to a single user; all users who log on, including administrators, are affected by the settings.
Question No: 15 – (Topic 1)
Your network consists of a single Active Directory domain. All domain controllers run Windows Server 2008 R2. The network contains 100 servers and 5,000 client computers. The client computers run either Windows XP Service Pack 1 or Windows 7.
You need to plan a VPN solution that meets the following requirements:
->Stores VPN passwords as encrypted text
->Supports Suite B cryptographic algorithms
->Supports automatic enrollment of certificates
->Supports client computers that are configured as members of a workgroup

What should you include in your plan?
A. Upgrade the client computers to Windows XP Service Pack 3. Implement a standalone certification authority (CA). Implement an IPsec VPN that uses certificate-based authentication.
B. Upgrade the client computers to Windows XP Service Pack 3. Implement an enterprise certification authority (CA) that is based on Windows Server 2008 R2. Implement an IPsec VPN that uses Kerberos authentication.
C. Upgrade the client computers to Windows 7. Implement an enterprise certification authority (CA) that is based on Windows Server 2008 R2. Implement an IPsec VPN that uses preshared keys.
D. Upgrade the client computers to Windows 7. Implement an enterprise certification authority (CA) that is based on Windows Server 2008 R2. Implement an IPsec VPN that uses certificate-based authentication.
Answer: D

Explanation:
This is as close as I could get to an answer to this.
In essence, Enterprise CAs are fully integrated into a Windows Server 2008 environment. This type of CA makes the issuing and management of certificates for Active Directory clients as simple as possible.
Standalone CAs do not require Active Directory. When certificate requests are submitted to Standalone CAs, the requestor must provide all relevant identifying information and manually specify the type of certificate needed. This process occurs automatically with an Enterprise CA. By default, Standalone CA requests require administrator approval.
Administrator intervention is necessary because there is no automated method of verifying a requestor’s credentials. Standalone CAs do not use certificate templates, limiting the ability for administrators to customize certificates for specific organizational needs.
L2TP/IPsec: L2TP connections use encryption provided by IPsec. L2TP/IPsec is the protocol that you need to deploy if you are supporting Windows XP remote access clients, because these clients cannot use SSTP. L2TP/IPsec provides per-packet data origin authentication, data integrity, replay protection, and data confidentiality.
L2TP/IPsec connections use two levels of authentication. Computer-level authentication occurs either using digital certificates issued by a CA trusted by the client and VPN server or through the deployment of pre-shared keys. PPP authentication protocols are then used for user-level authentication. L2TP/IPsec supports all of the
VPN authentication protocols available on Windows Server 2008.
Supports Suite B cryptographic algorithms
When using the Certificate Templates console, note that you cannot configure the autoenrollment permission for a level 1 certificate template. Level 1 certificate templates have Windows 2000 as their minimum supported CA. Level 2 certificate templates have Windows Server 2003 as a minimum supported CA. Level 2 certificate templates are also the minimum level of certificate template that supports autoenrollment. Level 3 certificate templates are supported only by client computers running Windows Server 2008 or Windows Vista. Level 3 certificate templates allow administrators to configure advanced Suite B cryptographic settings. These settings are not required to allow certificate autoenrollment, and most administrators find level 2 certificate templates are adequate for their organizational needs.
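The template-level constraints above can be condensed into a small lookup. This is only an illustrative sketch (the dictionary and function names are invented, not a Windows API); the capability values come from the text:

```python
# Certificate template levels and their capabilities, as described above.
# Keys and field names here are illustrative, not real Windows identifiers.
TEMPLATE_LEVELS = {
    1: {"min_ca": "Windows 2000",        "autoenrollment": False, "suite_b": False},
    2: {"min_ca": "Windows Server 2003", "autoenrollment": True,  "suite_b": False},
    3: {"min_ca": "Windows Server 2008 / Windows Vista clients",
                                         "autoenrollment": True,  "suite_b": True},
}

def minimum_level(needs_autoenrollment: bool, needs_suite_b: bool) -> int:
    """Lowest template level that satisfies the stated requirements."""
    for level in sorted(TEMPLATE_LEVELS):
        caps = TEMPLATE_LEVELS[level]
        if needs_autoenrollment and not caps["autoenrollment"]:
            continue
        if needs_suite_b and not caps["suite_b"]:
            continue
        return level
    raise ValueError("no template level satisfies the requirements")

# Autoenrollment alone: level 2 templates are sufficient.
print(minimum_level(needs_autoenrollment=True, needs_suite_b=False))  # 2
# Suite B algorithms, as this question's VPN scenario requires: level 3.
print(minimum_level(needs_autoenrollment=True, needs_suite_b=True))   # 3
```

This makes the exam reasoning explicit: autoenrollment only pushes you to level 2, but the Suite B requirement forces level 3 templates, which in turn require a Windows Server 2008 enterprise CA and Windows Vista or later clients.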
Question No: 16 – (Topic 1)
Your network consists of a single Active Directory domain. All domain controllers run Windows Server 2008 R2.
All client computers run Windows 7. All user accounts are stored in an organizational unit (OU) named Staff. All client computer accounts are stored in an OU named Clients. You plan to deploy a new Application.
You need to ensure that the Application deployment meets the following requirements:
->Users must access the Application from an icon on the Start menu.
->The Application must be available to remote users when they are offline.
What should you do?
A. Publish the Application to users in the Staff OU.
B. Publish the Application to users in the Clients OU.
C. Assign the Application to computers in the Staff OU.
D. Assign the Application to computers in the Clients OU.
Answer: D

Explanation:
Group policy objects can be applied either to users or to computers. Deploying applications through the Active Directory is also done through the use of group policies, and therefore applications are deployed either on a per user basis or on a per computer basis.
There are two different ways that you can deploy an application through the Active Directory. You can either publish the application or you can assign the application. You can only publish applications to users, but you can assign applications to either users or to computers. The application is deployed in a different manner depending on which of these methods you use.
Publishing an application doesn’t actually install the application, but rather makes it
available to users. For example, suppose that you were to publish Microsoft Office. Publishing is a group policy setting, so it would not take effect until the next time that the user logs in. When the user does log in, though, they will not initially notice anything different. However, if the user were to open the Control Panel and click on the Add / Remove Programs option, they will find that Microsoft Office is now on the list. A user can then choose to install Microsoft Office on their machine.
One thing to keep in mind is that regardless of which deployment method you use, Windows does not perform any sort of software metering. Therefore, it will be up to you to make sure that you have enough licenses for the software that you are installing.
Assigning an application to a user works differently than publishing an application. Again, assigning an application is a group policy action, so the assignment won’t take effect until the next time that the user logs in.
When the user does log in, they will see that the new application has been added to the Start menu and / or to the desktop.
Although a menu option or an icon for the application exists, the software hasn’t actually been installed though.
To avoid overwhelming the server containing the installation package, the software is not actually installed until the user attempts to use it for the first time.
This is also where the self-healing feature comes in. Whenever a user attempts to use the application, Windows always does a quick check to make sure that the application hasn’t been damaged. If files or registry settings are missing, they are automatically replaced.
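The install-on-first-use and self-healing behavior described above amounts to lazy installation plus an integrity check on every launch. The following is a language-neutral sketch of that pattern only (plain Python; the class, method, and file names are invented for illustration and this is not the Windows Installer API):

```python
class AdvertisedApp:
    """Models an assigned ('advertised') application: only the shortcut
    exists until first use, and every launch re-verifies the install."""

    def __init__(self, name, required_files):
        self.name = name
        self.required_files = set(required_files)  # expected on-disk payload
        self.installed_files = set()               # nothing installed yet

    def launch(self):
        # First use (or a damaged install): restore whatever is missing
        # from the distribution point before the application runs.
        missing = self.required_files - self.installed_files
        if missing:
            self.installed_files |= missing        # install / self-heal
            return f"{self.name}: installed/repaired {len(missing)} file(s)"
        return f"{self.name}: started"

office = AdvertisedApp("Office", {"word.exe", "excel.exe"})
print(office.launch())                      # first launch installs the payload
office.installed_files.discard("word.exe")  # simulate a damaged install
print(office.launch())                      # self-healing restores the file
print(office.launch())                      # subsequent launches just start
```

The key design point the sketch captures is that the check runs at every launch, which is why a damaged assigned application repairs itself without administrator intervention.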
Assigning an application to a computer works similarly to assigning an application to a user. The main difference is that the assignment is linked to the computer rather than to the user, so it takes effect the next time that the computer is rebooted. Assigning an application to a computer also differs from user assignments in that the deployment process actually installs the application rather than just the application’s icon. Because assigning installs the application the next time the computer reboots, the application will be available at the next logon regardless of which user logs in. Also, because the application is being assigned to a computer, the GPO needs to be linked to the Clients OU, as this is where the computer accounts are located.
Assigning Software to a group:
1. Create a folder to hold the Windows Installer package on a server. Share the folder by applying permissions that let users and computers read and run these files. Then, copy the MSI package files into this location.
2. From a Windows Server 2003-based computer in the domain, log on as a domain administrator, and then start Active Directory Users and Computers.
3. In Active Directory Users and Computers, right-click the container to which you want to link the GPOs, and then click Properties.
4. Click the Group Policy tab, and then click New to create a new GPO for installing the Windows Installer package. Give the new GPO a descriptive name.
5. Click the new GPO, and then click Edit. The Group Policy Object Editor starts.
6. Right-click the Software Settings folder under either Computer Configuration or User Configuration, point to New, and then click Package.
Question No: 17 – (Topic 1)
Your company has a main office and a branch office. Your network contains a single Active Directory domain.
You install 25 Windows Server 2008 R2 member servers in the branch office.
You need to recommend a storage solution that meets the following requirements:
->Encrypts all data on the hard disks
->Allows the operating system to start only when the authorized user is present
What should you recommend?
A. Encrypting File System (EFS)
B. File Server Resource Manager (FSRM)
C. Windows BitLocker Drive Encryption (BitLocker)
D. Windows System Resource Manager (WSRM)
Answer: C

Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
Planning BitLocker Deployment
Windows BitLocker Drive Encryption (BitLocker) is a feature that debuted in Windows Vista Enterprise and Ultimate Editions and is available in all versions of Windows Server 2008. BitLocker serves two purposes:
protecting server data through full volume encryption and providing an integrity-checking mechanism to ensure that the boot environment has not been tampered with.
Encrypting the entire operating system and data volumes means that not only are the operating system and data protected, but so are paging files, applications, and application
configuration data. In the event that a server is stolen or a hard disk drive removed from a server by third parties for their own nefarious purposes, BitLocker ensures that these third parties cannot recover any useful data. The drawback is that if the BitLocker keys for a server are lost and the boot environment is compromised, the data stored on that server will be unrecoverable.
To support integrity checking, BitLocker requires a computer to have a chip capable of supporting the Trusted Platform Module (TPM) 1.2 or later standard. A computer must also have a BIOS that supports the TPM standard. When BitLocker is implemented in these conditions and in the event that the condition of a startup component has changed, BitLocker-protected volumes are locked and cannot be unlocked unless the person doing the unlocking has the correct digital keys. Protected startup components include the BIOS, Master Boot Record, Boot Sector, Boot Manager, and Windows Loader.
From a systems administration perspective, it is important to disable BitLocker during maintenance periods when any of these components are being altered. For example, you must disable BitLocker during a BIOS upgrade. If you do not, the next time the computer starts, BitLocker will lock the volumes and you will need to initiate the recovery process. The recovery process involves entering a 48-digit recovery password that is generated and saved to a specified location when running the BitLocker setup wizard. This password should be stored securely because without it the recovery process cannot occur. You can also configure BitLocker to save recovery data directly to Active Directory; this is the recommended management method in enterprise environments.
You can also implement BitLocker without a TPM chip. When implemented in this manner there is no startup integrity check. A key is stored on a removable USB memory device, which must be present and supported by the computer’s BIOS each time the computer starts up. After the computer has successfully started, the removable USB memory device can be removed and should then be stored in a secure location. Configuring a computer running Windows Server 2008 to use a removable USB memory device as a BitLocker startup key is covered in the second practice at the end of this lesson.
BitLocker Volume Configuration
One of the most important things to remember is that a computer must be configured to support BitLocker prior to the installation of Windows Server 2008. The procedure for this is detailed at the start of Practice 2 at the end of this lesson, but involves creating a separate 1.5-GB partition, formatting it, and making it active as the System partition prior to creating a larger partition, formatting it, and then installing the Windows Server 2008 operating system. Figure 1-6 shows a volume configuration that supports BitLocker. If a computer’s volumes are not correctly configured prior to the installation of Windows Server 2008, you will need to perform a completely new installation of Windows Server 2008 after repartitioning the volume correctly. For this reason you should partition the hard disk drives of all computers in the environment on which you are going to install Windows Server 2008 with the assumption that at some stage in the future you might need to deploy BitLocker.
If BitLocker is not deployed, it has cost you only a few extra minutes of configuration time. If you later decide to deploy BitLocker, you will have saved many hours of work reconfiguring the server to support full hard drive encryption.
Figure 1-6 Partition scheme that supports BitLocker
The necessity of having specifically configured volumes makes BitLocker difficult to implement on Windows Server 2008 computers that have been upgraded from Windows Server 2003. The necessary partition scheme would have had to be introduced prior to the installation of Windows Server 2003, which in most cases would have occurred before most people were aware of BitLocker.
BitLocker Group Policies
BitLocker group policies are located under the Computer Configuration\Policies\ Administrative Templates\Windows Components\BitLocker Drive Encryption node of a Windows Server 2008 Group Policy object. In the event that the computers you want to deploy BitLocker on do not have TPM chips, you can use the Control Panel Setup: Enable Advanced Startup Options policy, which is shown in Figure 1-7. When this policy is enabled and configured, you can implement BitLocker without a TPM being present. You can also configure this policy to require that a startup code be entered if a TPM chip is present, providing another layer of security.
Figure 1-7 Allowing BitLocker without the TPM chip
Other BitLocker policies include:
Turn On BitLocker Backup To Active Directory Domain Services: When this policy is enabled, a computer’s recovery key is stored in Active Directory and can be recovered by an authorized administrator.
Control Panel Setup: Configure Recovery Folder: When enabled, this policy sets the default folder to which computer recovery keys can be stored.
Control Panel Setup: Configure Recovery Options: When enabled, this policy can be used to disable the recovery password and the recovery key. If both the recovery password and the recovery key are disabled, the policy that backs up the recovery key to Active Directory must be enabled.
Configure Encryption Method: This policy allows the administrator to specify the properties of the AES encryption method used to protect the hard disk drive.
Prevent Memory Overwrite On Restart: This policy speeds up restarts, but increases the risk of BitLocker being compromised.
Configure TPM Platform Validation Profile: This policy configures how the TPM security hardware protects the BitLocker encryption key.
Encrypting File System vs. BitLocker
Although both technologies implement encryption, there is a big difference between Encrypting File System (EFS) and BitLocker. EFS is used to encrypt individual files and folders and can be used to encrypt these items for different users. BitLocker encrypts the whole hard disk drive. A user with legitimate credentials can log on to a file server that is protected by BitLocker and will be able to read any files that she has permissions for. This user will not, however, be able to read files that have been EFS-encrypted for other users, even if she is granted permission, because you can only read EFS-encrypted files if you have the appropriate digital certificate. EFS allows organizations to protect sensitive shared files from the eyes of support staff who might be required to change file and folder permissions as a part of their job task, but should not actually be able to review the contents of the file itself. BitLocker provides a transparent form of encryption, visible only when the server is compromised. EFS provides an opaque form of encryption: the contents of files that are visible to the person who encrypted them are not visible to anyone else, regardless of what file and folder permissions are set.
Turning Off BitLocker
In some instances you may need to remove BitLocker from a computer. For example, the environment in which the computer is located has been made much more secure and the overhead from the BitLocker process is causing performance problems. Alternatively, you may need to temporarily disable BitLocker so that you can perform maintenance on startup files or the computer’s BIOS. As Figure 1-8 shows, you have two options for removing BitLocker from a computer on which it has been implemented: disable BitLocker or decrypt the drive.
Figure 1-8 Options for removing BitLocker
Disabling BitLocker removes BitLocker protection without decrypting the encrypted volumes. This is useful if a TPM chip is present, but it is necessary to update a computer’s BIOS or startup files. If you do not disable
BitLocker when performing this type of maintenance, BitLocker (when implemented with a TPM chip) will lock the computer because the diagnostics will detect that the computer has been tampered with. When you disable BitLocker, a plaintext key is written to the hard disk drive. This allows the encrypted hard disk drive to be read, but the presence of the plaintext key means that the computer is insecure. Disabling BitLocker using this method provides no performance increase because the data remains encrypted; it is just encrypted in an insecure way. When BitLocker is re-enabled, this plaintext key is removed and the computer is again secure.
Exam Tip: Keep in mind the conditions under which you might need to disable BitLocker. Also remember the limitations of BitLocker without a TPM 1.2 chip.
Select Decrypt The Drive when you want to completely remove BitLocker from a computer. This process is as time-consuming as performing the initial drive encryption, perhaps more so because more data might be stored on the computer than when the initial encryption occurred. After the decryption process is finished, the computer is returned to its pre-encrypted state and the data stored on it is no longer protected by BitLocker.
Decrypting the drive will not decrypt EFS-encrypted files stored on the hard disk drive.
Question No: 18 – (Topic 1)
A company has servers that run Windows Server 2008 R2. Administrators use a graphic-intensive Application to remotely manage the network. You are designing a remote network administration solution.
You need to ensure that authorized administrators can connect to internal servers over the Internet from computers that run Windows 7 or Windows Vista. Device redirection enforcement must be enabled for these connections.
What should you recommend? (More than one answer choice may achieve the goal. Select the BEST answer.)
Deploy and configure a server with the Remote Desktop Web Access server role. Enable Forms-based authentication. Ensure that administrators use RDC 6.1 when accessing internal servers remotely.
Deploy and configure a server with the Remote Desktop Web Access server role. Enable Forms-based authentication. Ensure that administrators use RDC 7.0 when accessing internal servers remotely.
Deploy and configure a server with the Remote Desktop Gateway server role. Ensure that administrators use RDC 7.0 when accessing internal servers remotely.
Deploy and configure a server with the Remote Desktop Gateway server role. Ensure that administrators use RDC 6.1 when accessing internal servers remotely.
Answer: C Explanation:
http://windows.microsoft.com/en-us/windows7/What-is-a-Remote-Desktop-Gateway-server
A Remote Desktop Gateway (RD Gateway) server is a type of gateway that enables authorized users to connect to remote computers on a corporate network from any computer with an Internet connection. RD Gateway uses the Remote Desktop Protocol (RDP) along with the HTTPS protocol to help create a more secure, encrypted connection.
http://technet.microsoft.com/en-us/library/dd560672(v=ws.10).aspx
Device redirection enforcement
An RD Gateway server running Windows Server 2008 R2 includes the option to allow remote desktop clients to only connect to RD Session Host servers that enforce device redirection. RDC 7.0 is required for device redirection to be enforced by the RD Session Host server running Windows Server 2008 R2.
Device redirection enforcement is configured on the Device Redirection tab of the RD CAP by using Remote Desktop Gateway Manager.
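On the client side, routing an RDP connection through an RD Gateway can be captured in a saved .rdp file. A hedged sketch, in which the host names are hypothetical placeholders; `gatewayusagemethod:i:1` forces all connections through the gateway:

```
full address:s:server01.internal.example.com
gatewayhostname:s:rdgateway.example.com
gatewayusagemethod:i:1
gatewayprofileusagemethod:i:1
gatewaycredentialssource:i:4
```

Device redirection enforcement itself is configured server-side in the RD CAP, as noted above; these client settings only ensure that the gateway is used for the connection.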
Question No: 19 – (Topic 1)
Your network consists of a single Active Directory domain. All domain controllers run Windows Server 2008 R2. There are five Windows Server 2003 SP2 servers that have the Terminal Server component installed. A firewall server runs Microsoft Internet Security and Acceleration (ISA) Server 2006.
You need to create a remote access strategy for the Remote Desktop Services servers that meets the following requirements:
• Restricts access to specific users
• Minimizes the number of open ports on the firewall
• Encrypts all remote connections to the Remote Desktop Services servers
What should you do?
Implement SSL bridging on the ISA Server. Require authentication on all inbound connections to the ISA Server.
Implement port forwarding on the ISA Server. Require authentication on all inbound connections to the ISA Server.
Upgrade a Windows Server 2003 SP2 server to Windows Server 2008 R2. On the Windows Server 2008 R2 server, implement the Remote Desktop Gateway (RD Gateway) role service, and configure a Remote Desktop resource authorization policy (RD RAP).
Upgrade a Windows Server 2003 SP2 server to Windows Server 2008 R2. On the Windows Server 2008 R2 server, implement the Remote Desktop Gateway (RD Gateway) role service, and configure a Remote Desktop connection authorization policy (RD CAP).
Answer: D Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
Terminal Services Gateway
TS Gateway allows Internet clients secure, encrypted access to Terminal Servers behind your organization's firewall without having to deploy a Virtual Private Network (VPN) solution. This means that you can have users interacting with their corporate desktop or applications from the comfort of their homes without the problems that occur when VPNs are configured to run over multiple Network Address Translation (NAT) gateways and the firewalls of multiple vendors.
TS Gateway works using RDP over Secure Hypertext Transfer Protocol (HTTPS), which is the same protocol used by Microsoft Office Outlook 2007 to access corporate Exchange Server 2007 Client Access Servers over the Internet. TS Gateway Servers can be configured with connection authorization policies and resource authorization policies as a way of differentiating access to Terminal Servers and network resources.
Connection authorization policies allow access based on a set of conditions specified by the administrator; resource authorization policies grant access to specific Terminal Server resources based on user account properties.
Connection Authorization Policies
Terminal Services connection authorization policies (TS-CAPs) specify which users are allowed to connect through the TS Gateway Server to resources located on your organization’s internal network. This is usually done by specifying a local group on the TS Gateway Server or a group within Active Directory. Groups can include user or computer accounts. You can also use TS-CAPs to specify whether remote clients use password or smart-card authentication to access internal network resources through the TS Gateway Server. You can use TS-CAPs in conjunction with NAP; this scenario is covered in more detail by the next lesson.
Question No: 20 – (Topic 1)
Your company has a main office and a branch office. The offices connect by using WAN links. The network consists of a single Active Directory domain. An Active Directory site exists for each office. Servers in both offices run Windows Server 2008 R2 Enterprise. You plan to deploy a failover cluster solution to service users in both offices.
You need to plan a failover cluster to meet the following requirements:
• Maintain the availability of services if a single server fails
• Minimize the number of servers required
What should you include in your plan?
Deploy a failover cluster that contains one node in each office.
Deploy a failover cluster that contains two nodes in each office.
In the main office, deploy a failover cluster that contains one node. In the branch office, deploy a failover cluster that contains one node.
In the main office, deploy a failover cluster that contains two nodes. In the branch office, deploy a failover cluster that contains two nodes.
Answer: A Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
Failover Clustering
Failover clustering is a technology that allows another server to continue to service client requests in the event that the original server fails. Clustering is covered in more detail in Chapter 11, "Clustering and High Availability." You deploy failover clustering on mission-critical servers to ensure that important resources are available even if a server hosting those resources fails.
Failover clustering
The Failover Clustering feature enables multiple servers to work together to increase the availability of services and applications. If one of the clustered servers (or nodes) fails, another node provides the required service through failover. Failover Clustering is available in the Windows Server 2008 Enterprise and Datacenter editions, but not in the Standard or Web editions.
Failover clustering – Formerly known as server clustering, Failover Clustering creates a logical grouping of servers, also known as nodes, that can service requests for applications with shared data stores.
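As a rough illustration of answer A's single cluster spanning both sites, the FailoverClusters PowerShell module (available from Windows Server 2008 R2 onward) can create a two-node cluster with one node in each office. The node names and IP address below are hypothetical placeholders:

```
# PowerShell, Windows Server 2008 R2 and later
Import-Module FailoverClusters

# Run the cluster validation tests against the prospective nodes first
Test-Cluster -Node MainOfficeNode, BranchOfficeNode

# Create a two-node failover cluster: one node per office keeps
# services available if a single server fails, while minimizing
# the total number of servers deployed
New-Cluster -Name Cluster1 -Node MainOfficeNode, BranchOfficeNode `
    -StaticAddress 192.168.1.50
```

A multi-site cluster like this also needs a quorum configuration (and, in practice, replicated storage) appropriate for two nodes in separate sites; those details are beyond what the question tests.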