FAQ

Check our FAQ before adding questions.


Articles in Category: Top 10 most voted

ESX Host Profiles with vSphere

This vSphere feature lets you capture and restore ESX host profiles, which is very useful when a VMware administrator needs to restore the settings of an ESX host or cluster. A host profile contains the complete configuration of an ESX server and allows you to revert to the settings captured in a previously created profile.



You can open the Host Profiles view from Home > Management > Host Profiles.

Right-click the empty space in the left pane and select Create Profile.



The Create Profile wizard appears as shown. Select the Create Profile option here. You can also import a previously exported profile backup if you have one; host profile files use the .vpf file extension.



Specify the reference host and click Next to continue.


Enter the profile name and description here. I prefer to use the same name as my ESX server.



A summary page is shown for the host profile you are about to create. Click Finish and your host profile is created successfully.

Now we will apply the host profile to an ESX host to restore its settings.


We need to attach the host or cluster to the profile. In this case, I will attach a single host to this profile.


Select the ESX host and click Attach. The host profile is then attached to the ESX host.

To test this, I removed the NFS mount point from my ESX host; applying the host profile successfully restored the original configuration that existed before I removed the NFS mount from my ESX server.


Select Check Compliance to verify the ESX host settings against the profile configuration.



Non-compliant means the configuration on the ESX server and the host profile do not match.

Now you can right-click and select Apply to restore the ESX server to the host profile configuration previously captured.


Here is the summary after the host profile has been applied. I now have my previous configuration back after the successful ESX host profile restoration.

vSphere 5 - How to Install and Configure VMware ESXi 5
In this video tutorial we will guide you through the installation and configuration of VMware ESXi 5
Upgrading to vSphere 4.1

vSphere 4.1 Upgrade Prerequisites

An upgrade to vSphere 4.1 also requires the following:

  • vCenter running on a 64-bit server with a 64-bit OS.
    • In case your vCenter doesn't run on a 64-bit server, you can use the VMware vCenter Data Migration Tool to remedy the situation. More about this tool in a short while.
  • A 32-bit DSN for vCenter Update Manager
  • Compatible operating systems for vCenter Server, which include:
    • Windows XP Pro SP2, 64-bit
    • Windows Server 2003 SP1, 64-bit
    • Windows Server 2008, 64-bit

       

vSphere 4.1 Upgrade Process

The basic upgrade process entails the following general steps:

  1. Upgrade vCenter
  2. Upgrade vSphere Client
  3. Upgrade ESX / ESXi servers using VUM (vCenter Update Manager) or vihostupdate (see the command sketch after this list)
  4. Upgrade VMware Tools on each VM
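
As a rough sketch of the vihostupdate route (run from the vSphere CLI or vMA; the host name and upgrade bundle path below are placeholders, not values taken from this article):

# List the bulletins already installed on the host
vihostupdate --server esxhost.example.com --username root --query

# Install the 4.1 upgrade bundle
vihostupdate --server esxhost.example.com --username root --install --bundle /tmp/upgrade-bundle-4.1.zip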

 vSphere 4.1 vCenter Server Data Migration Tool

As mentioned earlier, VMware has a tool for those who don't have vCenter running on a 64-bit server. Since there is no in-place upgrade path for vCenter Server installation on 32-bit systems, you will have to migrate your vCenter from your 32-bit system to a 64-bit system. This is where you'll need the Migration Tool. It will help you migrate the following items from one server to another:

  • vCenter Server and its configurations,
  • vCenter Update Manager and its configurations,
  • VMware Orchestrator and its configurations, and
  • the default SQL Express 2005 database that comes with vCenter Server.

How to use the vCenter Agent Pre-Upgrade Check Tool

The vCenter Agent Pre-Upgrade Check Tool, which has just been introduced in vSphere 4.1 and can be found in the vCenter Server installation media, is used for performing diagnostic checks prior to in-place upgrades from vCenter 4.0 to 4.1.

Again, it only performs diagnostic checks, so don't expect it to fix any issues. In fact, it is still possible to encounter issues even if the Pre-Upgrade Check Tool doesn't find any during its diagnostic run. However, this tool can help you a lot in checking whether all the major prerequisites for the upgrade have already been met.

So assuming you're in your version 4.0 vCenter Server and you've already inserted the vCenter 4.1 installation media, double-click the installation media icon (in Start > Computer) to start the autorun program and to launch the splash screen.

On the splash screen, click Agent Pre-Upgrade Check.

vCenter Server Splash Screen

When the Agent Pre-check Wizard welcome screen appears, just click the Next button.

Agent Pre-check Wizard Welcome Screen

In the Select Database window, select the DSN you want to connect to from the drop-down list and enter the appropriate User name and Password. In our case, we selected the Use Virtual Center Credentials option, which prompted the wizard to populate the User name and Password fields automatically.

Pre-agent Upgrade Checker Wizard

Click Next.

In the next window, you'll be given the option to scan either all ESX/ESXi servers or specific ESX/ESXi servers. Choose Standard Mode to scan all servers and click Next.

Scan All ESX/ESXi Servers

In the following window, click the Run precheck button to start scanning; that should only take a few minutes. If everything goes well, you should see something like this:

ESX/ESXi Server Scan Complete

Click Next.

You'll then see your ESX servers with a Pass notice right beside each one. If you want to generate a printable report, click the View Report button.

VMware Agent Upgrade Checker Status Results

If there are no errors, you'll see a simple summary just like this:

VMware Agent Upgrade Checker Results

Close that window to go back to the AgentUpdateChecker status + results window and click the Next button. You'll then be informed that you have successfully completed the Upgrade Pre-check Wizard.

If you don't see much going on at your end - just like what you've seen here - then you should be happy. Barring any unforeseen events, that means your system is completely ready for the upgrade.

ESX host in a VMware High Availability cluster fails to enter maintenance mode and stops at 2%
ESX host in a VMware High Availability cluster fails to enter maintenance mode and stops at 2%
 
Symptoms
 
• An ESX host fails to enter maintenance mode in a VMware High Availability (HA) or DRS cluster
• Hosts fail to migrate when attempting to enter maintenance mode
• The progress indicator remains at 2% indefinitely
• Trying to remediate a host results in a timeout error when the host attempts to enter maintenance mode
 
Purpose
 
This article provides an explanation for the behavior and includes a workaround.
Note: The workaround provided may take a significant amount of time to complete.
 
Resolution
 
This is normal behavior for a VMware HA/DRS cluster that is using strict admission control. Disabling strict admission control (allowing virtual machines to power on even if they violate constraints) should allow a host to enter maintenance mode in this situation. A bug was discovered that would not allow a host to enter maintenance mode even if strict admission control was disabled. This was resolved in VirtualCenter 2.5 Update 3, and disabling strict admission control should now allow hosts to enter maintenance mode correctly.
 
To work around the issue, temporarily disable VMware HA in the cluster settings. You will then be able to put the ESX Server host into Maintenance mode and do the work required. You can then re-enable HA on your cluster.
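
If you prefer to script the maintenance-mode step itself, here is a minimal vSphere CLI sketch (assuming vicfg-hostops from the vSphere CLI/vMA is available; the vCenter and host names are placeholders):

# Ask vCenter to put the host into maintenance mode
vicfg-hostops --server vcenter.example.com --vihost esxhost.example.com --operation enter

# Take the host out of maintenance mode when the work is done
vicfg-hostops --server vcenter.example.com --vihost esxhost.example.com --operation exit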
 
For detailed steps, see VMware High Availability: Concepts, Implementation, and Best Practices.
Note: DRS needs to be enabled on your cluster in Fully Automated mode if you want VirtualCenter to migrate your running virtual machines automatically to other hosts when placing your host in Maintenance Mode.
Configure VMware ESXi 4.1 Networking

VMware ESXi 4.1 Networking

One of the most interesting parts of configuring a fresh ESXi host is the network. You have an ESXi host with 4 or more NICs and network-based shared storage like iSCSI or NFS. On top of that, you want to configure for redundancy, provide vMotion and FT, and enable cluster features like HA and DRS.

 

There are a few preconditions:

 

  • I leave aside the topic of how to distribute on-board NICs and NICs on expansion cards (the ESXi server in this example is virtual)
  • Shared storage is iSCSI
  • Try to fit in FT
  • Everything must be redundant; assume we have two stacked switches for management, vMotion and LAN traffic and two stacked switches for iSCSI traffic. It is best practice to have separate switches for storage traffic. The switches have been configured correctly (trunk ports etc.)

 

The first part of the design is the Management network. There are two options for creating a virtual switch: a vNetwork Standard Switch (vSS) or a vNetwork Distributed Switch (vDS). Besides the required license (Enterprise Plus), there is debate over whether you should go for a 100% vDS solution or a hybrid approach (a combination of vSS and vDS). However, for the Management network, we prefer a vSS. The other vSwitches can be a vSS or a vDS.

 

In a vSphere High Availability (HA) cluster, the “heartbeat” network plays a very important role and with that, the Management network in ESXi. NIC teaming provides redundancy. The preferred scenario is a Single Management Network with vmnics in Active/Standby configuration. It is also common practice to combine the Management network with the vMotion Network. This results in the following design.

 

 

Management Network
VLAN 2
Management Traffic is Enabled
vmk0: 192.168.2.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No

 

vMotion
VLAN 21
vMotion is Enabled
vmk1: 192.168.21.53
vmnic1 Active / vmnic0 Standby
Load balancing: Use explicit failover order
Failback: No

 

Needless to say, vmnic0 is connected to the first switch in the stack and vmnic1 is connected to the second switch in the stack.
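
For reference, here is a minimal CLI sketch of this first part (it assumes the default vSwitch0 with its existing Management Network port group; the NIC teaming order and failover policy would still be set in the vSphere Client):

# Tag the existing Management Network port group with VLAN 2
/usr/bin/vicfg-vswitch -v 2 -p "Management Network" vSwitch0

# Add a vMotion port group on VLAN 21 and its VMkernel NIC
/usr/bin/vicfg-vswitch -A vMotion vSwitch0
/usr/bin/vicfg-vswitch -v 21 -p vMotion vSwitch0
/usr/bin/vicfg-vmknic -a -i 192.168.21.53 -n 255.255.255.0 -p vMotion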

 

The second part of the design is the Storage network. Another recommendation from the HA and DRS Deepdive is to avoid the chance of a split-brain scenario. A split-brain situation can occur during an HA incident where a virtual machine is restarted on another host while not being powered down on the original host. So for all network-based storage, iSCSI included, it is recommended to create a secondary Management Network on the same vSwitch as the storage network to detect a storage outage.

 

As a Storage Adapter to connect to our iSCSI storage, we will use the iSCSI Software adapter. We configure two VMkernel NICs for redundancy and load balancing. Both VMkernel NICs will be assigned to the iSCSI Software adapter.

 

For iSCSI networks, it is also recommended to enable Jumbo frames. For more information on Jumbo frames, see this link.
Unfortunately, in vSphere 4.1 it is not possible to use the vSphere Client to create a virtual switch with Jumbo frames enabled. You will have to use a CLI. In this example I used the vSphere Management Assistant (vMA).

 

 

iSCSI1
VLAN
vmk2: 192.168.50.53
vmnic2 Active / vmnic3 Unused

 

iSCSI2
VLAN
vmk3: 192.168.50.63
vmnic3 Active / vmnic2 Unused

 

Management Network2
VLAN
Management Traffic is Enabled
vmk4: 192.168.50.73
vmnic2 Active / vmnic3 Active
Load balancing: Use explicit failover order
Failback: No

 

Now for the actual code. I assume that the iSCSI Software adapter has already been set up; this can be done with the vSphere Client.

 

# vSwitch1
# -a add new vSwitch
/usr/bin/vicfg-vswitch -a vSwitch1

 

# -m set MTU value
/usr/bin/vicfg-vswitch vSwitch1 -m 9000

 

# -A add portgroup
/usr/bin/vicfg-vswitch -A iSCSI1 vSwitch1
/usr/bin/vicfg-vswitch -A iSCSI2 vSwitch1

 

# -a add VMkernel nic, requires -i IP address, -n Netmask and -p Portgroup. -m set MTU is optional
/usr/bin/vicfg-vmknic -a -i 192.168.50.53 -n 255.255.255.0 -m 9000 -p iSCSI1

 

/usr/bin/vicfg-vmknic -a -i 192.168.50.63 -n 255.255.255.0 -m 9000 -p iSCSI2

 

# -L bind physical NIC to vSwitch
/usr/bin/vicfg-vswitch vSwitch1 -L vmnic2
/usr/bin/vicfg-vswitch vSwitch1 -L vmnic3

 

# -N unlink physical NIC from portgroup
/usr/bin/vicfg-vswitch -p iSCSI1 -N vmnic3 vSwitch1
/usr/bin/vicfg-vswitch -p iSCSI2 -N vmnic2 vSwitch1

 

# add 2nd management portgroup
/usr/bin/vicfg-vswitch -A "Management Network2" vSwitch1

 

# configure VMkernel nic
/usr/bin/vicfg-vmknic -a -i 192.168.50.73 -n 255.255.255.0 -m 9000 -p "Management Network2"

 

# To bind the virtual nics to the Software iSCSI adapter
esxcli --server= --username=root --password= swiscsi nic add -n vmk2 -d

 

esxcli --server= --username=root --password= swiscsi nic add -n vmk3 -d

 

# To check the Software iSCSI adapter
esxcli --server= --username=root --password= swiscsi nic list
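
As a quick sanity check (not part of the original write-up, but using the same vCLI tools), you can list the resulting configuration and confirm the MTU values and port groups:

# Verify the vSwitch, its port groups and the MTU setting
/usr/bin/vicfg-vswitch -l

# Verify the VMkernel NICs and their IP/MTU settings
/usr/bin/vicfg-vmknic -l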

 

An extra recommendation from the HA and DRS Deepdive is to specify an additional Isolation address. In case you are using network based storage like iSCSI, a good choice is the IP address of the storage device, in this example: 192.168.50.11

 

 

The third and final part is the vSwitch for the Virtual Machine networks. In case your VMs run on VLANs, create a vSwitch and add a Port group for every VLAN. Label each port group with a name that reflects the VLAN. In this example, 2 Port groups have been created. There are several options for the load balancing policy. A recommended choice is "Use explicit failover order" instead of the default "originating virtual port id".

 

 

VM Network31
VLAN 31

 

VM Network32
VLAN 32

 

All adapters Active/Active
Load balancing: Use explicit failover order
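
A hedged CLI sketch of this third vSwitch follows (the vSwitch name vSwitch2 and the uplinks vmnic4/vmnic5 are assumptions based on the six-NIC layout, not values stated above; adjust to your host):

# Create the VM network vSwitch and link its uplinks
/usr/bin/vicfg-vswitch -a vSwitch2
/usr/bin/vicfg-vswitch vSwitch2 -L vmnic4
/usr/bin/vicfg-vswitch vSwitch2 -L vmnic5

# Add one port group per VLAN
/usr/bin/vicfg-vswitch -A "VM Network31" vSwitch2
/usr/bin/vicfg-vswitch -v 31 -p "VM Network31" vSwitch2
/usr/bin/vicfg-vswitch -A "VM Network32" vSwitch2
/usr/bin/vicfg-vswitch -v 32 -p "VM Network32" vSwitch2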

 

What is missing in this design so far is Fault Tolerance (FT). It is recommended to have FT on a separate network. One possibility is to add two extra physical NICs and create an extra vSwitch. With 6 NICs in total, I consider the following layout also possible.

 

 

Management Network
VLAN 2
Management Traffic is Enabled
vmk0: 192.168.2.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No

 

vMotion
VLAN 21
vMotion is Enabled
vmk0: 192.168.21.53
vmnic1 Active / vmnic0 Standby
Load balancing: Use explicit failover order
Failback: No

 

FaultTolerance
VLAN 22
Fault Tolerance Logging is Enabled
vmk0: 192.168.22.53
vmnic0 Active / vmnic1 Standby
Load balancing: Use explicit failover order
Failback: No

 

Switch Port Configuration ‐ TRUNK Mode with VLAN Pruning
Switch Port Configuration ‐ TRUNK Mode with VLAN Pruning
 
Note: VMware recommends utilizing VLAN technology to establish dedicated subnets for ESX management, VMotion, and iSCSI network traffic. Allowing only the required VLAN traffic through on the physical switch ports connecting to ESX reduces TCP/IP overhead. Native VLANs do not tag the outgoing VLAN packets toward the ESX NICs, and if the same VLAN ID is used to configure the vSwitch port group, the vSwitch drops any packet that is not tagged for it, causing the connection to fail. Unnecessary VLAN traffic on a TRUNK port that connects to ESX can cause major performance issues.
 
Note: Do not use the Native VLAN ID of a physical switch as a VLAN on ESX portgroups.
Resolution
This is a Cisco Switch port TRUNK sample configuration.
 
Apply the following commands to Cisco Switch command line:
 
• interface GigabitEthernet1/1
• description VMware ESX - Trunk A - NIC 0 – Port Description
• switchport trunk encapsulation dot1q – ESX only supports dot1q and not ISL
• switchport trunk allowed vlan 100,200 – Allowed VLANs
• switchport mode trunk – Enables Trunk
• switchport nonegotiate – ESX does not support DTP (Dynamic Trunking Protocol); when configuring a trunk port, set it to nonegotiate
• spanning-tree portfast trunk – Enables PortFast on the interface when it is in trunk mode
 
Sample of ESX vSwitch configuration for VST mode:
• esxcfg-vswitch [options] [vswitch[:ports]]
• esxcfg-vswitch -v [VLANID] -p [port group name] [vswitch[:ports]]
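
For example, to tag an ESX port group with one of the VLANs allowed on the trunk above (the vSwitch and port group names here are placeholders, not from the sample configuration):

esxcfg-vswitch -v 100 -p "VM Network 100" vSwitch0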
Disabling Large Page Support in ESX Server

Disabling Large Page Support in ESX Server

ESX Server 3.5 and ESX Server 3i v3.5 enable large page support by default. When a virtual machine requests a large page, the ESX Server kernel tries to find a free machine large page. Both the virtual machine monitor and the guest operating system can request large pages. The virtual machine monitor requests large pages to improve its own performance. The guest operating system or an application running on the guest operating system requests large pages just as it would when running on a native machine. The ESX Server kernel supplies large pages to the virtual machine opportunistically. When there is no free large page available, ESX Server emulates a guest operating system large page using small machine pages.

 

In ESX Server 3.5 and ESX Server 3i v3.5, large pages cannot be shared as copy‐on‐write pages. This means, the ESX Server page sharing technique might share less memory when large pages are used instead of small pages. In order to recover from non-sharable large pages, ESX Server uses a “share‐before‐swap” technique. When free machine memory is low and before swapping happens, the ESX Server kernel attempts to share identical small pages even if they are parts of large pages. As a result, the candidate large pages on the host machine are broken into small pages. In rare cases, you might experience performance issues with large pages. If this happens, you can disable large page support for the entire ESX Server host or for the individual virtual machine.

 

To disable the large page support for the entire ESX Server host, take the following steps using the VMware Infrastructure Client:

 

1. In the left pane of the VI Client, choose the ESX Server host.
2. In the right pane of the VI Client, click the Configuration tab.
3. Choose Software > Advanced Settings. The Advanced Settings dialog box opens.
4. In the left pane of the Advanced Settings dialog box, choose Mem.
5. In the right pane of the Advanced Settings dialog box, set Mem.AllocGuestLargePage to 0.
6. Click OK to close the Advanced Settings dialog box.
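
If you would rather make the same change from the service console, here is a minimal sketch (assuming classic ESX Server 3.5 with esxcfg-advcfg available; ESX Server 3i has no service console, so use the VI Client there):

# Disable large page allocation for guests
esxcfg-advcfg -s 0 /Mem/AllocGuestLargePage

# Confirm the current value
esxcfg-advcfg -g /Mem/AllocGuestLargePage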

To disable guest operating system large page support for a virtual machine, add the following line to the virtual machine's configuration file (*.vmx): monitor_control.disable_mmu_largepages = TRUE

VMware Fault Tolerance

Fault Tolerance was introduced as a new feature in vSphere that provided something that was missing in VMware Infrastructure 3 (VI3), the ability to have continuous availability for a virtual machine in case of a host failure. High Availability (HA) was a feature introduced in VI3 to protect against host failures, but it caused the VM to go down for a short period of time while it was restarted on another host. FT takes that to the next level and guarantees the VM stays operational during a host failure by keeping a secondary copy of it running on another host server. If a host fails, the secondary VM becomes the primary VM and a new secondary is created on another functional host.

The primary VM and secondary VM stay in sync with each other by using a technology called Record/Replay that was first introduced with VMware Workstation. Record/Replay works by recording the computer execution on a VM and saving it as a log file. It can then take that recorded information and replay it on another VM to have a replica copy that is a duplicate of the original VM.

 

II. Power to the processors

The technology behind the Record/Replay functionality is built in to certain models of Intel and AMD processors. VMware calls it vLockstep. This technology required Intel and AMD to make changes to both the performance counter architecture and virtualization hardware assists (Intel VT and AMD-V) that are inside the physical processors. Because of this, only newer processors support the FT feature. This includes the third-gen AMD Opteron based on the AMD Barcelona, Budapest and Shanghai processor families, and Intel Xeon processors based on the Penryn and Nehalem micro-architectures and their successors. VMware has published a knowledgebase article that provides more details on this.

 

III. But how does it do that?

FT works by creating a secondary VM on another ESX host that shares the same virtual disk file as the primary VM, and then transferring the CPU and virtual device inputs from the primary VM (record) to the secondary VM (replay) via a FT logging network interface card (NIC) so it is in sync with the primary VM and ready to take over in case of a failure. While both the primary and secondary VMs receive the same inputs, only the primary VM produces output such as disk writes and network transmits. The secondary VM’s output is suppressed by the hypervisor and is not on the network until it becomes a primary VM, so essentially both VMs function as a single VM.

It’s important to note that not everything that happens on the primary VM is copied to the secondary VM. There are certain actions and instructions that are not relevant to the secondary VM, and to record everything would take up a huge amount of disk space and processing power. Instead, only non-deterministic events are recorded, which include inputs to the VM (disk reads, received network traffic, keystrokes, mouse clicks, etc.,) and certain CPU events (RDTSC, interrupts, etc.). Inputs are then fed to the secondary VM at the same execution point so it is in exactly the same state as the primary VM.

The information from the primary VM is copied to the secondary VM using a special logging network that is configured on each host server. This requires a dedicated gigabit NIC for the FT Logging traffic (although not a hard requirement, this is highly recommended). You could use a shared NIC for FT Logging for small or test/dev environments and for testing the feature. The information that is sent over the FT Logging network between the hosts can be very intensive depending on the operation of the VM.

VMware has a formula that you can use to determine this:

VMware FT logging bandwidth ~= (Avg disk reads (MB/s) x 8 + Avg network input (Mbps)) x 1.2 [20% headroom]

To get the VM statistics needed for this formula you need to use the performance metrics that are supplied in the vSphere client. The 20% headroom is to allow for CPU events that also need to be transmitted and are not included in the formula. Note that disk or network writes are not used by FT as these do not factor in to the state of the virtual machine.
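
As a worked example, a VM averaging 10 MB/s of disk reads and 20 Mbps of incoming network traffic would need roughly (10 x 8 + 20) x 1.2 = 120 Mbps of FT logging bandwidth, which fits comfortably within a dedicated gigabit NIC.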

As you can see, disk reads will typically take up the most bandwidth. If you have a VM that does a lot of disk reading you can reduce the amount of disk read traffic across the FT Logging network by using a special VM parameter. By adding a replay.logReadData = checksum parameter to the VMX file of the VM, this will cause the secondary VM to read data directly from the shared disk, instead of having it transmitted over the FT logging network. For more information on this see this knowledgebase article.

 

IV. Every rose has its thorn

While Fault Tolerance is a useful technology, it does have many requirements and limitations that you should be aware of. Perhaps the biggest is that it currently only supports single vCPU VMs, which is unfortunate as many big enterprise applications that would benefit from FT usually need multiple vCPUs (vSMP). Don't let this discourage you from running FT, however, as you may find that some applications will run just fine with one vCPU on some of the newer, faster processors that are available as detailed here. Also, VMware has mentioned that support for vSMP will come in a future release. It's no easy task trying to keep a single vCPU in lockstep between hosts, and VMware developers need more time to develop methods to try and keep multiple vCPUs in lockstep between hosts. Additional requirements for VMs and hosts are as follows:

Host requirements:

  • CPUs: Only recent HV-compatible processors (AMD Barcelona+, Intel Harpertown+); processors must be the same family
  • All hosts must be running the same build of VMware ESX
  • Storage: shared storage (FC, iSCSI, or NAS)
  • Hosts must be in an HA-enabled cluster
  • Network and storage redundancy to improve reliability: NIC teaming, storage multipathing
  • Separate VMotion NIC and FT logging NIC, each Gigabit Ethernet (10 GbE recommended). Hence, a minimum of 4 NICs (VMotion, FT Logging, two for VM traffic/Service Console)
  • CPU clock speeds between the two ESX hosts must be within 400 MHz of each other.

VM requirements:

  • VMs must be single-processor (no vSMP)
  • All VM disks must be "thick" (fully allocated) and not thin; if a VM has a thin disk it will be converted to thick when FT is enabled.
  • No non-replayable devices (USB, sound, physical CD-ROM, physical floppy, physical Raw Device Mappings)
  • Make sure paravirtualization is not enabled by default (Ubuntu Linux 7/8 and SUSE Linux 10)
  • Most guest operating systems are supported, with the following exceptions that apply only to hosts with third-generation AMD Opteron processors (i.e. Barcelona, Budapest, Shanghai): Windows XP (32-bit), Windows 2000, Solaris 10 (32-bit). See this KB article for more.

In addition to these requirements your hosts must also be licensed to use the FT feature, which is only included in the Advanced, Enterprise and Enterprise Plus editions of vSphere.

 

V. How to use Fault Tolerance in your environment

Now that you know what FT does, you’ll need to decide how you will use it in your environment. Because of high overhead and limitations of FT you will want to use it sparingly. FT could be used in some cases to replace existing Microsoft Cluster Server (MSCS) implementations, but it’s important to note what FT does not do, which is to protect against application failure on a VM. It only protects against a host failure.

If protection for application failure is something you need, then a solution like MSCS would be better for you. FT is only meant to keep a VM running if there is a problem with the underlying host hardware. If protecting against an operating system failure is something you need, then VMware High Availability (HA) is what you want, as it can detect unresponsive VMs and restart them on the same host server.

FT and HA can be used together to provide maximum protection. If both the primary host and secondary host failed at the same time, HA would restart the VM on another operable host and spawn a new secondary VM.

 

VI. Important notes

One important thing to note: if you experience an OS failure on the primary VM, like a Windows Blue Screen Of Death (BSOD), the secondary VM will also experience the failure, as it is an identical copy of the primary. However, the HA virtual machine monitoring feature will detect this, restart the primary VM, and then spawn a new secondary VM.

Another important note: FT does not protect against a storage failure. Since the VMs on both hosts use the same storage and virtual disk file, it is a single point of failure. Therefore it's important to have as much redundancy as possible to prevent this, such as dual storage adapters in your host servers attached to separate switches (known as multipathing). If a path to the SAN fails on one host, FT will detect this and switch over to the secondary VM, but this is not a desirable situation. Furthermore, if there were a complete SAN failure or a problem with the VM's LUN, the FT feature would not protect against this.

 

VII. So should you actually use FT? Enter SiteSurvey

Now that you've read all this, you might be wondering if you meet the many requirements to use FT in your own environment. VMware provides a utility called SiteSurvey that will look at your infrastructure and see if it is capable of running FT. It is available as either a Windows or Linux download, and once you install and run it, you will be prompted to connect to a vCenter Server. Once it connects to the vCenter Server you can choose from your available clusters to generate a SiteSurvey report that shows whether or not your hosts support FT and whether the hosts and VMs meet the individual prerequisites for the feature.

You can also click on links in the report that will give you detailed information about all the prerequisites along with compatible CPU charts. These links go to VMware’s website and display the help document for the SiteSurvey utility, which is full of great information, including some of the following prerequisites for FT.

  • The vLockstep technology used by FT requires the physical processor extensions added to the latest processors from Intel and AMD. In order to run FT, a host must have an FT-capable processor, and both hosts running an FT VM pair must be in the same processor family.
  • When ESX hosts are used together in an FT cluster, their processor speeds must be matched fairly closely to ensure that the hosts can stay in sync. VMware SiteSurvey will flag any CPU speeds that are different by more than 400 MHz.
  • ESX hosts running the FT VM pair must be running at least ESX 4.0, and must be running the same build number of ESX.
  • FT requires each member of the FT cluster to have a minimum of two NICs with speeds of at least 1 Gb per second. Each NIC must also be on the same network.
  • FT requires each member of the FT cluster to have two virtual NICs, one for logging and one for VMotion. VMware SiteSurvey will flag ESX hosts which do not contain at least two virtual NICs.
  • ESX hosts used together as an FT cluster must share storage for the protected VMs. For this reason VMware SiteSurvey lists the shared storage it detects for each ESX host and flags hosts that do not have shared storage in common. In addition, an FT-protected VM must itself be stored on shared storage and any disks connected to it must be shared storage.
  • At this time, FT only supports single-processor virtual machines. VMware SiteSurvey flags virtual machines that are configured with more than one processor. To use FT with those VMs, you must reconfigure them as single-CPU VMs.
  • FT will not work with virtual disks backed with thin-provisioned storage or disks that do not have clustering features enabled. When you turn on FT, the conversion to the appropriate disk format is performed by default.
  • Snapshots must be removed before FT can be enabled on a virtual machine. In addition, it is not possible to take snapshots of virtual machines on which FT is enabled.
  • FT is not supported with virtual machines that have CD-ROM or floppy virtual devices backed by a physical or remote device. To use FT with a virtual machine with this issue, remove the CD-ROM or floppy virtual device or reconfigure the backing with an ISO installed on shared storage.
  • Physical RDM is not supported with FT. You may only use virtual RDMs.
  • Paravirtualized guests are not supported with FT. To use FT with a virtual machine with this issue, reconfigure the virtual machine without a VMI ROM.
  • N_Port ID Virtualization (NPIV) is not supported with FT. To use FT with a virtual machine with this issue, disable the NPIV configuration of the virtual machine.

Below is some sample output from the SiteSurvey utility showing host and VM compatibility with FT and what features and components are compatible or not:

Another method for checking to see if your hosts meet the FT requirements is to use the vCenter Server Profile Compliance tool. To use this method, select your cluster in the left pane of the vSphere Client, then in the right pane select the Profile Compliance tab. Click the Check Compliance Now link and it will begin checking your hosts for compliance including FT as shown below:

 

VIII. Are we there yet? Turning on Fault Tolerance

Once you meet the requirements, implementing FT is fairly simple. A prerequisite for enabling FT is that your cluster must have HA enabled. You simply select a VM in your cluster, right-click on it, select Fault Tolerance and then select “Turn On Fault Tolerance.”

A secondary VM will then be created on another host. Once it’s complete you will see a new Fault Tolerance section on the Summary tab of the VM that will display information including FT status, secondary VM location (host), CPU and memory in use by the secondary VM, the secondary VM lag time (how far behind the primary it is in seconds) and the bandwidth in use for FT logging.

Once you have enabled FT there are alarms available that you can use to check for specific conditions such as FT state, latency, secondary VM status and more.

 

IX. Fault Tolerance tips and tricks

Some additional tips and tidbits that will help you understand and implement FT are listed below.

  • Before you enable FT, be aware of one important limitation: VMware currently recommends that you do not use FT in a cluster that consists of a mix of ESX and ESXi hosts. The reason is that ESX hosts might become incompatible with ESXi hosts for FT purposes after they are patched, even when patched to the same level. This is a result of the patching process and will be resolved in a future release so that compatible ESX and ESXi versions are able to interoperate with FT even though patch numbers do not match exactly. Until this is resolved, you will need to take this into consideration if you plan on using FT and make sure the clusters that will have FT-enabled VMs consist of only ESX or only ESXi hosts, not both.
  • VMware spent a lot of time working with Intel/AMD to refine their physical processors so VMware could implement its vLockstep technology, which replicates non-deterministic transactions between the processors by reproducing the CPU instructions on the other processor. All data is synchronized so there is no loss of data or transactions between the two systems. In the event of a hardware failure you may have an IP packet retransmitted, but there is no interruption in service or data loss, as the secondary VM can always reproduce execution of the primary VM up to its last output.
  • FT does not use a specific CPU feature but requires specific CPU families to function. vLockstep is more of a software solution that relies on some of the underlying functionality of the processors. The software level records the CPU instructions at the VM level and relies on the processor to do so; it has to be very accurate in terms of timing, and VMware needed the processors to be modified by Intel and AMD to ensure complete accuracy. The SiteSurvey utility simply looks for certain CPU models and families, but not specific CPU features, to determine if a CPU is compatible with FT. In the future, VMware may update its CPU ID utility to also report if a CPU is FT capable.
  • Currently there is a restriction that hosts must be running the same build of ESX/ESXi; this is a hard restriction and cannot be avoided. You can use FT between ESX and ESXi as long as they are the same build. Future releases may allow hosts to have different builds.
  • VMotion is supported on FT-enabled VMs, but you cannot VMotion both VMs at the same time. Storage VMotion is not supported on FT-enabled VMs. FT is compatible with Distributed Resource Scheduler (DRS) but will not automatically move the FT-enabled VMs between hosts to ensure reliability. This may change in a future release of FT.
  • In the case of a split-brain scenario (i.e. loss of network connectivity between hosts), the secondary VM may try to become the primary, resulting in two primary VMs running at the same time. This is prevented by using a lock on a special FT file; once a failure is detected, both VMs will try to rename this file, and if the secondary succeeds it becomes the primary and spawns a new secondary. If the secondary fails because the primary is still running and already has the file locked, the secondary VM is killed and a new secondary is spawned on another host.
  • You can use FT on a vCenter Server running as a VM as long as it is running with a single vCPU.
  • There is no limit to the number of FT-enabled hosts in a cluster, but you cannot have FT-enabled VMs span clusters. A future release may support FT-enabled VMs spanning clusters.
  • There is an API for FT that provides the ability to script certain actions, like disabling/enabling FT using PowerShell.
  • The four FT-enabled VM limit is per host, not per cluster, and is not a hard limit, but is recommended for optimal performance.
  • The current version of FT is designed to be used between hosts in the same data center, and is not designed to work over wide area network (WAN) links between data centers due to latency issues and failover complications between sites. Future versions may be engineered to allow for FT usage between external data centers.
  • Be aware that the secondary VM can slow down the primary VM if it is not getting enough CPU resources to keep up. This is noticeable by a lag time of several seconds or more. To resolve this, try setting a CPU reservation on the primary VM; it will also be applied to the secondary VM and will ensure they run at the same CPU speed. If the secondary VM slows down to the point that it is severely impacting the performance of the primary VM, FT between the two will cease and a new secondary will be found on another host.
  • When FT is enabled, any memory limits on the primary VM will be removed and a memory reservation will be set equal to the amount of RAM assigned to the VM. You will be unable to change memory limits, shares or reservations on the primary VM while FT is enabled.
  • Patching hosts can be tricky when using the FT feature because of the requirement that the hosts must have the same build level. There are two methods you can use to accomplish this. The simplest method is to temporarily disable FT on any VMs that are using it, update all the hosts in the cluster to the same build level and then re-enable FT on the VMs. This method requires FT to be disabled for a longer period of time; a workaround, if you have four or more hosts in your cluster, is to VMotion your FT-enabled VMs so they are all on half your ESX hosts. Then update the hosts without the FT VMs so they are at the same build level. Once that is complete, disable FT on the VMs, VMotion them to the updated hosts, re-enable FT, and a new secondary will be spawned on one of the updated hosts that has the same build level. Once all the FT VMs are moved and re-enabled, update the remaining hosts so they are at the same build level, and then VMotion the VMs so they are balanced among your hosts.
Adding SAN Storage to VMware Host Environment

Adding SAN Storage to VMware Host Environment

 
Configure SAN Space (LUNS)
 
LUNs are units of storage provisioned from a FAS system directly to the ESX Server. LUNs can be accessed by the ESX Server in two fashions. The first and most common method is as storage to hold Virtual Disk Files for multiple Virtual Machines. This type of usage will be referred to as a VMFS (Virtual Machine File System) LUN. The second method is as a Raw Device Mapping (RDM). With an RDM, the LUN is connected to the ESX Server and passed directly to a Virtual Machine to use with its native file system (such as NTFS or EXT3).

VMFS LUNs are the traditional method for providing storage to Virtual Machines. VMFS LUNs provide for simplicity: once storage is provisioned to the ESX Server, it can be utilized by the VMware Administrator without intervention from the storage administrator. In addition, the built-in storage functionality of ESX Server can be leveraged, such as VMware snapshots and clones.
 

Provisioning Storage Steps


1. The SAN/storage administrator configures and assigns storage to the respective farms within the VMware environment (FARM1 and FARM2).

2. The storage assigned to a farm can only be seen by the farm it is assigned to.

3. Once the storage has been assigned, log into Virtual Center to begin the configuration process to add storage to the farms.

4. In Virtual Center, select an ESX host server within one of the farms to which you plan to add the storage.

5. Towards the top of the Virtual Center screen you will see a set of tabs. Select the "Configuration" tab from the menu.

6. Once in the "Configuration" menu, towards the left-hand part of the screen you will see an option for "Storage (SCSI, SAN and NFS)". Select this option.

7. From this screen you will see, towards the top right, an option to "Add Storage". Select this option and choose the storage type (Disk/LUN). After this step you should see the preconfigured storage space that the storage team allocated to this particular farm.

8. This space will be seen by all the ESX host machines within the farm. VMware can allocate up to 2TB of space per virtual machine. By default a 1MB block size is set for each storage allocation, which equals a maximum virtual disk size of 256GB. You will want to change the block size during the configuration if you need to allocate more than 256GB of space to one virtual machine. We normally choose a 4MB block size, which allows us to provision up to 1TB of disk space to a virtual machine instance (see the block-size note after these steps).

9. After you have finished defining the storage space, go to each ESX host in the farm, select the "Configuration" tab and then select "Storage (SCSI, SAN and NFS)". Select "Refresh" within that screen and you should see the new storage appear.

10. Now you can build new virtual machines, or select existing ones to add storage to, from this newly provisioned storage.
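
For reference (a general VMFS-3 rule of thumb rather than something specific to this environment), the block size chosen when the datastore is formatted caps the largest single virtual disk file you can create on it:

1MB block size - 256GB maximum file size
2MB block size - 512GB maximum file size
4MB block size - 1TB maximum file size
8MB block size - 2TB maximum file size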

 

 

Configuring and installing Microsoft SQL Server on a Virtual Machine

Configuring and installing Microsoft SQL Server on a Virtual Machine

Prior to creating snapshots and schedules, a Microsoft administrator needs to configure and install Microsoft SQL Server on a virtual machine.
After installing Microsoft SQL Server 2005 on a virtual machine, make sure of the following:

  • The SQL Server VSS Writer service is started (a quick check is shown after this list).
  • The SQL Servers tab on the VxVI GUI does not appear if the virtual machine does not have any SQL components on it.
  • If, after having created a component, the SQL Servers tab is not shown in the VxVI GUI, refresh the virtual machine from the VxVI GUI.
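
A quick way to confirm the first item (a hedged sketch; the Windows service name for the SQL Server VSS Writer is typically SQLWriter) is to run the following from a command prompt inside the virtual machine:

sc query SQLWriter
vssadmin list writers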
To configure and install Microsoft SQL Server on a virtual machine:

  1. Log on to the VxVI Management Server Console as an Administrator.
  2. Create a storage repository of type VxVM on the available physical disks using the Managing > Storage tab, and click on the Create Storage Repository link. A storage repository is a collection of physical disks, equivalent to a Volume Manager (VxVM) disk group.
  3. Add new read-write virtual disk(s) of the required size on the virtual machine from the storage repository by using Managing > Servers > Virtual Machines for the appropriate virtual machine, and right-clicking to access the Add Virtual Disk wizard. Adding a virtual disk on the virtual machine creates a virtual disk of the size specified from the VxVM storage repository and exposes the virtual disk to the virtual machine.
  4. Navigate to Managing > Servers > Virtual Machines, and click on the appropriate virtual machine to access the Console tab for that virtual machine.
  5. Click on the Console tab.
  6. Log on to the virtual machine. You will see the virtual disk that you added in step 3 in the Computer Management console on the virtual machine.
  7. Install Microsoft SQL Server on the virtual machine. By installing Microsoft SQL Server on the virtual machine, you will be creating a VSS SQL component.
  8. Specify the virtual disk for the component and subcomponents (.mdf database files and .ldf log files) of the Microsoft SQL Server database. It is recommended that the subcomponents be created on separate virtual disks. However, all of the virtual disks used in the component must be created from the same storage repository. Do not place subcomponents of different components on the same virtual disk. The size of the virtual disk is determined based on the space required for the component.

After having configured and installed Microsoft SQL Server on a virtual machine, proceed with the steps for setting up components for creating snapshots.

Add VMware Server to VirtualCenter

Add VMware Server to VirtualCenter

The ability to add VMware Server 2.0 Beta 2 to VirtualCenter, a new feature in VMware Server 2.0 Beta 2, serves as evidence of VMware's strategy of positioning Server 2.0 as a transitional product for getting IT departments into ESX Server and the VMware Infrastructure Suite. This new functionality is more than just bait to get you into ESX, though.

By adding Server 2.0 to VirtualCenter, you can configure and manage VMware Server 2.0 servers and VMware ESX Server systems from a single interface – the VMware Infrastructure Client (VI Client). This is all managed by a single VirtualCenter server. I'll explain how in this tip.

 

The first step is to obtain a copy of VMware Server 2.0 Beta 2 or greater. You can download it from the VMware Server Beta download site. I recommend installing it on a test system with plenty of RAM (2GB or more), although that isn't absolutely necessary. Once installed, you will just need to know the IP address or DNS name of the test server to move on to the next step. You will also need the administrative username and password that allows someone to log in to VMware Web Access on that server.

 

About VMware Server 2.0 and VirtualCenter

VMware Server 2.0 Beta 2 gives you the ability to manage ESX servers and VMware Server systems with a single interface and provides a number of benefits:

 

  • The ability to ease administration of these different virtualization platforms – such as seeing all of your VM guests in a single place or on a single VC map.
  • The ability to more easily migrate virtual machines from ESX to VMware Server and vice versa (given that the VMs have the proper virtual hardware configuration).
  • The connection that VMware needs to get users to more easily move from their free product to their commercial product – a feature that other virtualization vendors do not have.

 

Let's first assume that you already have VirtualCenter installed in a test environment. Note: I do not recommend adding a Beta server to your production VirtualCenter system that is already managing your VMware ESX Server production systems. If you do not have VirtualCenter running in a test environment, you can download an evaluation version by visiting the evaluate Virtual Infrastructure Suite website.

 

VirtualCenter for VMware Server has been available for some time. But VirtualCenter for VMware Server only manages VMware Server. What I am discussing here is using the VirtualCenter edition included in the Virtual Infrastructure Suite which can manage not only VMware ESX Server (as it has always done), but VMware Server 2.0 as well.

 

How to add VMware Server 2.0 to VirtualCenter

Adding VMware Server 2.0 to VirtualCenter is very easy. First, log in to your VirtualCenter with the VI Client, right-click on your data center, and click Add Host:

 

This is the most important part. When you add a traditional VMware ESX Server to VirtualCenter, all you enter is the IP address or domain name and username/password credentials. Unlike adding a traditional ESX Server to VirtualCenter, when you add a VMware Server 2.0 system to VirtualCenter you need one more crucial piece – the port number. Enter the Hostname or IP address with a colon and 8333 (the port number). Next, enter the username and password. Here is what it looks like:

 

I added VMware Server 2.0 beta 2 as the new VirtualCenter host to be managed.

 

I added it to my default Virtual data center:

 

And here was the final confirmation screen:

 

As you can see, the laptop running VMware Server 2.0 has been added to my VirtualCenter server. This is the same VC server that is also controlling my two other ESX Servers.

 

I can view the summary of this VMware Server system, see its configuration, resources, datastore and networks. Of course, all of these things can also be modified with the VI Client, as you can see from the tabs across the top.

 

I went into the configuration tab where I can modify the memory, storage, networking, time, VM startup/shutdown, advanced settings and snapshots of this laptop running VMware Server 2.0:

 

 

 

Upgrading Windows Server 2008 R2

Upgrading Windows Server 2008 R2

 

 

Supported Upgrade Scenarios

 

From Windows Server 2003 (SP2, R2)              Upgrade to Windows Server 2008 R2

Datacenter                                       Datacenter

Enterprise                                       Enterprise, Datacenter

Standard                                         Standard, Enterprise

 

Windows Server 2008 R2 introduces a new command-line utility, DISM, the Deployment Image Servicing and Management tool. One of DISM's many useful features is the ability to use its edition-servicing commands to upgrade an R2 installation without requiring install media. This is functionally equivalent to Windows Anytime Upgrade in a Windows 7 client install, and can be performed on both an online or offline image, and on both full Server and Server Core installations.

 

Upgrades using the edition-servicing method are quick and don't require a full reinstall of the operating system.  Deployed roles and features, and other characteristics (machine name, user and admin accounts, etc.) are carried forward.  Because the target editions are staged within the image, only the changes necessary to move from one edition to the next are applied. The upgrade options are limited to edition families and are irreversible – you can't downgrade once you've gone up an edition. Additionally, you can't move from full Server to Server Core (or vice versa).

 

The supported upgrade paths are:

 

  • Windows Server 2008 R2 Standard -> Windows Server 2008 R2 Enterprise -> Windows Server 2008 R2 Datacenter
  • Windows Server 2008 R2 Standard Server Core -> Windows Server 2008 R2 Enterprise Server Core -> Windows Server 2008 R2 Datacenter Server Core
  • Windows Server 2008 R2 Foundation -> Windows Server 2008 R2 Standard

 

The tool essential for this process, DISM.exe, is included in every installation of Windows Server 2008 R2, and its general usage for online and offline servicing is documented on TechNet here: http://technet.microsoft.com/en-us/library/dd744380(WS.10).aspx


 

One scenario that we sometimes use internally is the online upgrading of Hyper‐V hosts. If you decide that you want to move from Enterprise’s 4 VM limit to Datacenter’s support for an unlimited number of VMs, you can migrate the VMs to another host, upgrade the old host in less than thirty minutes, and then immediately migrate the VMs back once the process is complete. There’s no need to take the whole server offline or rebuild from scratch.

 

The syntax for DISM is fairly straightforward. From an elevated command prompt, you can query for the current edition, for possible target editions, and initiate the upgrade. To upgrade, you need to provide a valid 25‐character product key for the edition to which you’re upgrading.

 

To determine the installed edition, run:

 

DISM /online /Get-CurrentEdition

 

To check the possible target editions, run:

 

DISM /online /Get-TargetEditions

 

Finally, to initiate an upgrade, run:

 

DISM /online /Set-Edition:<target edition> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

 

So, for example, to upgrade to Windows Server 2008 R2 Datacenter from a downlevel edition, you would run:

 

DISM /online /Set-Edition:ServerDatacenter /ProductKey:ABCDE-ABCDE-ABCDE-ABCDE-ABCDE

 

 

After running the /Set‐Edition command, DISM will prepare the operating system for the edition servicing operation, then reboot twice while it applies the changes to the operating system. After the final reboot, you’ll be running the new edition!
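
Once the server is back up on the new edition, you can confirm the result by re-running the same query used earlier:

DISM /online /Get-CurrentEdition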

 

UPDATE: One important note, as I'm reminded by Xaegr in the comments, is that the server can't be a DC at the time of the upgrade. If you demote the DC using dcpromo, you can upgrade and then re-promote it (you may need to migrate FSMO roles, etc., in order to successfully demote).

Exchange 2010 DAG using Server 2008 R2

Exchange 2010 DAG using Server 2008 R2 – Step 1

This document guides you through the entire process: preparing Server 2008 R2 for Exchange 2010 RTM, installing Exchange 2010 RTM, creating databases, creating a DAG, adding our nodes to the DAG, and then replicating our databases between both servers.

Guest Virtual Machines

One Server 2008 R2 Enterprise (Standard can be used) RTM x64 Domain Controller.

Two Server 2008 R2 Enterprise (Enterprise Required) RTM x64 (x64 required) Member Servers where Exchange 2010 RTM will be installed with the Mailbox, Client Access Server, and Hub Transport Server roles.

One Server 2008 Enterprise (Standard can be used) RTM x64 server that will be our File Share Witness (FSW) server.  This box will serve no purpose in this lab other than FSW.

Assumptions

  • You have a domain that contains at least one Server 2003 SP2 Domain Controller (DC).
  • You have configured the IP settings accordingly for all servers to be on the same subnet which includes the public NICs for both Failover Cluster nodes. I have provided the IP scheme of my lab below, but this will vary depending on your needs and VMware configuration.

Computer Names

DAG Node 1 – SHUD-EXC01

DAG Node 2 – SHUD-EXC02

Domain Controller – SHUD-DC01

FSW Server – SHUD-OCSFE01

Configuration of  Exchange 2010 DAG Nodes

Processor: 4

Memory: 1024MB

Network Type - MAPI NIC (MAPI Network)

Network Type - Replication NIC (Replication Network)

Virtual Disk Type – System Volume (C:\): 50GB Dynamic

Storage Note: In a real-world environment, depending on the needs of the business and environment, it is best practice to install your database and logs on separate disks/spindles, both of which are separate from the spindles that the C:\ partition utilizes. We will be installing the Exchange 2010 RTM databases/logs on the same disks/spindles for simplicity's sake in this lab.  While Exchange 2010 moves much of the database I/O toward sequential I/O, there is still quite a bit of random I/O occurring, so it is still recommended to place the database and logs on separate spindles.

Network Note: A single NIC DAG is supported.  It is still recommended to have at least one dedicated replication network.  If using only a single NIC, it is recommended for this network to be redundant as well as gigabit.

Configuration of  Domain Controller

Processor: 4

Memory: 512MB

Network Type - External NIC

Virtual Disk Type – System Volume (C:\): 50GB Dynamic

IP Addressing Scheme (Corporate Subnet otherwise known as a MAPI Network to Exchange 2010 DAGs)

IP Address – 192.168.1.x

Subnet Mask – 255.255.255.0

Default Gateway – 192.168.1.1

DNS Server – 192.168.1.150 (IP Address of the Domain Controller/DNS Server)

IP Addressing Scheme (Heartbeat Subnet otherwise known as a Replication Network to Exchange 2010 DAGs)

IP Address – 10.10.10.x

Default Gateway – 10.10.10.x

Subnet Mask – 255.255.255.0

  • LAB Architecture


  • Some notes about this architecture:
  • Exchange 2010 DAGs remove the limitation of requiring Mailbox-only role servers that existed with Exchange 2007 clustered servers.
  • Exchange 2010 is no longer cluster aware and utilizes only a few pieces of the Failover Clustering services, such as the cluster heartbeat and cluster networks.  More on this in an upcoming part.
  • UM is supported on these two DAG nodes but is recommended to be installed on separate servers.
  • For HTTP publishing, ISA can be utilized.  For RPC Client Access Server publishing (which ISA cannot do, as it publishes HTTP traffic only) with CAS Servers on the DAG nodes, you must use a hardware load balancer due to a Windows limitation preventing you from using Windows NLB and Clustering Services on the same Windows box.  Alternatively, you can deploy two dedicated CAS Servers and utilize Windows NLB to load balance your RPC Client Access Server traffic.
  • A two-node DAG requires a witness that is not on a server within the DAG.  Unlike Exchange 2007, Exchange 2010 automatically takes care of FSW creation, though you do have to specify the location of the FSW. It is recommended to specify the FSW to be created on a Hub Transport Server.  Alternatively, you can put the witness on a non-Exchange Server after some prerequisites have been completed.  I will be deploying the FSW on a member server (which happens to be my OCS Server in my lab) and will show the prerequisite process for achieving this.

Preparation of Exchange 2010 RTM DAG Nodes

Network Interface Card (NIC) Configuration

First thing we will want to do is configure the IP Configuration of both the MAPI NIC and the Replication NIC.

We will want to rename our MAPI NIC connection to MAPI and our Replication NIC connection to Replication. To do so, go to Start > Right-Click Network > Properties.

Once in the Control Panel, Choose Change Adapter Settings.

Now you will be presented with the Network Connections window. This is where you can modify the network properties for each NIC in your server. For your internal corporate connection, which is also your MAPI network, rename the Local Area Connection to MAPI. Likewise, for your private heartbeat connection, which is also your replication network, rename the Local Area Connection to Replication. After you have done this, it will look something like the following:
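
If you prefer to script the renames, a quick sketch from an elevated prompt (the existing names "Local Area Connection" and "Local Area Connection 2" are assumptions – substitute whatever your connections are currently called):

netsh interface set interface name="Local Area Connection" newname="MAPI"
netsh interface set interface name="Local Area Connection 2" newname="Replication"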

Part of the assumptions earlier in this article is that you have a properly configured TCP/IP network with all nodes properly connected to it. Because of this, I will skip the public (MAPI) TCP/IP configuration and proceed to configuring the private heartbeat (Replication) NIC.

Important: When configuring the MAPI NIC, you can leave IPv6 enabled if you are using Server 2008 R2.  There is an issue with Server 2008 (which still exists in SP2) that prevents IPv6 from listening on port 6004, which in turn prevents Outlook Anywhere from working. You can read more about that here. Again, Server 2008 R2 does not have this issue.  So if you happen to be installing Exchange 2010 on Server 2008, disable IPv6 as discussed below.  If you are using Server 2008 R2, feel free to leave IPv6 enabled.

Note: You can, if you'd like, disable File and Printer Sharing for Microsoft Networks.  In Exchange 2007 SP1, Microsoft provided the ability for continuous replication to occur over the private network.  Because Exchange 2007 utilizes SMB for log shipping, File and Printer Sharing must be enabled for it.  Exchange 2010 no longer utilizes SMB and now utilizes TCP.  More on this in an upcoming part.

In addition to disabling IPv6 from the NIC properties, I would follow these instructions here to fully disable IPv6 on your Exchange 2010 system, as disabling it on the NIC itself doesn't fully disable IPv6.  While the article is based on Exchange 2007, it's a Windows-based modification and applies to a system running Exchange 2010 as well.
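
For reference, fully disabling IPv6 on Windows comes down to the DisabledComponents registry value. A minimal PowerShell sketch, assuming the commonly documented value of 0xFF (use whatever value the linked article specifies, and reboot afterwards):

# Disable all IPv6 components except the loopback interface; a reboot is required
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters' -Name DisabledComponents -PropertyType DWord -Value 0xFF -Force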

Double-Click or Right-Click > Properties on the Replication NIC to begin configuration.

Uncheck the following:

  • Internet Protocol Version 6 (TCP/IPv6) – Disable IPv6 in the registry as well, as noted above.

Select Internet Protocol Version 4 (TCP/IPv4) and press the Properties button. For NodeA, the only TCP/IP configuration we need is the IP address and subnet mask. NodeA's IP configuration will be 10.10.10.152/24, while NodeB's will be 10.10.10.153/24.
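
The same setting can be applied from the command line; a sketch for NodeA, assuming the connection has already been renamed to Replication (use 10.10.10.153 on NodeB):

netsh interface ipv4 set address name="Replication" static 10.10.10.152 255.255.255.0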

Go into the advanced NIC configuration settings by clicking the Advanced button. From there, navigate to the DNS tab and de-select “Register this connection’s addresses in DNS.”

Select the WINS tab and de-select “Enable LMHOSTS lookup” and configure the NetBIOS setting to “Disable NetBIOS over TCP/IP.”
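
If you would rather script the DNS registration and NetBIOS changes, a rough WMI sketch is below (an assumption-laden example keyed off the renamed Replication connection; the LMHOSTS lookup setting is machine-wide, so I leave that one to the GUI):

# Find the IP configuration bound to the connection named "Replication"
$cfg = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID='Replication'" |
    ForEach-Object { $_.GetRelated('Win32_NetworkAdapterConfiguration') }

# Do not register this connection in DNS, and disable NetBIOS over TCP/IP (2 = disable)
$cfg.SetDynamicDNSRegistration($false, $false)
$cfg.SetTcpipNetbios(2)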

Once you are done configuring the advanced settings, press OK three times and you will be back at the Network Connections screen. From here, choose Advanced and select Advanced Settings.

You will be presented with the Binding Order for your current NICs. Ensure that the MAPI NIC is on top by selecting MAPI and pressing the green up arrow key on the right-hand side of the dialog.

Exchange 2010 Operating System Prerequisites

Server 2008 SP2 and Server 2008 R2 prerequisites are quite different.  Because our servers are going to be deployed on Server 2008 R2, we will follow the guidance for deploying on Server 2008 R2.  You can see the prerequisite requirements here.

We will be doing our prerequisite installations via PowerShell.  You can open PowerShell by going to Start > Run > PowerShell.

You will first have to import the ServerManager module (Import-Module ServerManager).  Afterwards, different prerequisites are required depending on the roles to be installed.  Because we are going to be installing HUB/CAS/MBX, the command we would run is the following:

Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server,Web-ISAPI-Ext,Web-Digest-Auth,Web-Dyn-Compression,NET-HTTP-Activation,RPC-Over-HTTP-Proxy,Failover-Clustering -Restart

Note: The installation documentation does not have you include Failover-Clustering in the above command.  I add it anyway since we'll be using it for the DAG.  If you don't include it in the above command, you can add it later when you modify NetTcpPortSharing.  If you don't add it at all, Failover Clustering will be installed automatically when you add the first node to the DAG; I just like to install it beforehand.

Finally, we’ll want to modify the NetTcpPortSharing service to start automatically.
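
From the same elevated PowerShell session, that is a one-liner:

Set-Service NetTcpPortSharing -StartupType Automatic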

  

Exchange 2010 DAG using Server 2008 R2 – Step 2

 

Installation

 

With Exchange 2010, we still have the setup.com for unattended mode installations using the Command Line Interface (CLI) as well as setup.exe for attended mode installations using the Graphical User Interface (GUI). We’ll be using the GUI for purposes of this lab.

 

 

After running setup.exe, we’ll be presented with the following screen:

 

 

We can see that the first two steps are already taken care of.  If you recall from Part 1, we used PowerShell to take care of the prerequisite installations.  So, let's proceed to Step 3 and choose our language.  For me, it will be English.

 

When clicking on the language option, we get a couple of choices.

 

 

If you choose the first option, Install all languages from the language bundle, you will be provided with an option to download the language pack or use an already downloaded language pack.  For purposes of this lab, we’ll choose the second option as we’ll only be using English.

 

It’s finally time to choose Step 4 and Install Exchange!

 

 

So let’s go ahead and choose Step 4 and let’s begin installing Exchange.

 

After some initializing, we're presented with the installer GUI.  The first page, as you guessed, is an Introduction page.  Read the Introduction page and Click Next to Continue.

 

 

You are now provided with the License Agreement.  After reading the agreement, select “I accept the terms in the license agreement.” Click Next to Continue.

 

 

You are now provided with the Error Reporting page.  I like to choose Yes for this option.  The reason is that when you call into Premier Support Services (PSS), they will have some error-reporting information from your servers that may assist with troubleshooting/fixing your server.  Choose whichever option best fits your needs.  Click Next to Continue.

 

 

You are now provided with the Installation Type.  Previously, in Exchange 2007 CCR/SCC, you could only install the Mailbox Server role.  Now, with DAGs, you can have HUB/CAS/MBX/UM all on the same server.  We'll be choosing the Typical Exchange Server Installation for this lab, which includes HUB/CAS/MBX as well as the Exchange Management Tools.  A nice tip to note is that you can have both the Exchange 2007 Management Tools and the Exchange 2010 Management Tools installed on the same box.  Click Next to Continue.

 

 

You are now provided with Client Settings.  If you have Outlook 2003 or Microsoft Entourage, click Yes.  This creates a Public Folder Database and modifies some Exchange options such as OAB Distribution for Public Folders to provide support for these clients.  As a side note, there was an msexchangeteam.com blog post that stated that Entourage is getting updated to support Exchange Web Services (EWS) so in the future, you may only have to do this for Outlook 2003 clients and not Entourage.  For purposes of this lab, we will not be using Entourage or Outlook 2003.  Click Next to Continue.

 

 

Your first server is most likely going to be an Internet-facing CAS server.  Because of this, I specified our Internet-facing FQDN.  This will modify the -ExternalURL parameters for this Exchange 2010 CAS.  Pretty nifty and an installation option I very much welcome.  Click Next to Continue.

 

 

You are now provided with the Customer Experience Improvement Program.  I always like to join these things to provide information to Microsoft to help make the product better. Click Next to Continue.

 

 

Finally, it's time for some Readiness Checks.  We can see that the organization will need to be prepared with /PrepareAD, which will prepare the schema, forest, and domain.  Make sure the person running this installation is a member of the Enterprise Admins and Schema Admins groups in order to update the schema.
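
The GUI will run this preparation for you as part of the install, but if you would rather stage it ahead of time with an account that holds those rights, a sketch of the command-line equivalent from the Exchange 2010 installation media is (add /OrganizationName:<name> only if this is a brand-new Exchange organization):

setup.com /PrepareAD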

 

We also see that we need the Filter Pack.  I didn't include this in Part 1 because Microsoft updates its setup prerequisite files, and this requirement (and its links/files) may change in the future.  So go to the link here to download the Filter Pack; make sure you download and install the x64 version.  You can install the Filter Pack while the Exchange setup is still running.  Once the Filter Pack has finished installing, click Install in the Exchange setup.

 

 

So now you're presented with the installation progress.  It only took 10-15 minutes for the install to complete.  Pretty fast!  Click Finish to complete the setup.

 

 

 

Exchange 2010 DAG using Server 2008 R2 – Step 3

File Share Witness

We will be using a File Share Witness (FSW) on a non-Exchange Server.  This will go on our member server, SHUD-OCSFE01.

We will need to go onto our Member Server and add the “Exchange Trusted Subsystem” group to our Local Administrators Group.  If you do not do this, you will get an Access Denied error message. Unlike Exchange 2007, we do not have to pre-create the FSW.  We tell Exchange 2010 where the FSW will be located, and because the Exchange Trusted Subsystem is added to the non-Exchange box, it will have the permissions necessary to create, modify, and maintain the FSW.
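
A quick way to do that on SHUD-OCSFE01 is from an elevated prompt; the NetBIOS domain name SHUDNOW below is an assumption based on this lab, so substitute your own:

net localgroup Administrators "SHUDNOW\Exchange Trusted Subsystem" /add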

It is still recommended to place the FSW on a Hub Transport Server.  In fact, if you don’t specify the FSW location (Witness Server and Witness Directory), Exchange 2010 will automatically go out and look for a Hub Transport Server and choose a location on its own.  Alternatively, you can specify the Witness Directory and not the Server; in which case Exchange 2010 will automatically choose an Exchange 2010 Server (non-DAG Hub Transport Server preferred) on its own but use the Directory you specified.

Creating the DAG using the EMC and assigning a Static IP

Open the Exchange Management Console (EMC) and go into Organization Configuration > Mailbox.  Click on Database Availability Group. As you can see, it's currently empty.

Right-Click on the empty space and choose “New Database Availability Group.”

Let’s give our DAG a name.  For purposes of this lab, I used the name ShudnowDAG.  As stated, we want SHUD-OCSFE01 to be our witness server and our directory will be C:\ShudnowDAG.  Click New to Create our DAG.

The DAG is successfully created.  At this time, an empty object representing the DAG, with the name you specified and an object class of msExchMDBAvailabilityGroup, is created in Active Directory. You can see the object in either ADSIEdit or LDP.  The DN for this object is:

CN=ShudnowDAG,CN=Database Availability Groups,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=Shudnow,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=shudnow,DC=net

The completion page will show a warning informing you that the server that contains the FSW is not an Exchange server.  We already know this, as it was our intention all along.

When the GUI is used to create a DAG, it uses DHCP for the Network Name cluster resource.  If we want to specify a static IP for our DAG, we need to use the Set-DatabaseAvailabilityGroup cmdlet.  The Set-DatabaseAvailabilityGroup cmdlet has a switch called -DatabaseAvailabilityGroupIpAddresses.  This switch should never contain IP addresses for your replication network.  The -DatabaseAvailabilityGroupIpAddresses switch is ONLY for your MAPI network subnets, so the Network Name resource can come online (a dependency in the Failover Clustering services), among some other Exchange-related functions.

Among some other Exchange-related functions, eh?  I knew you would ask! Read on…

The Network Name resource is what is used to update the password for the Cluster Name Object (CNO), which is the DAG$ computer account.  The Network Name resource does not necessarily have to be online for Exchange to operate properly, but if it's not, the DAG$ computer account's password will eventually expire.  Obviously this would not be a good thing and will cause bad things to happen.

There are also parts of the code that will attempt to connect to the Network Name resource (such as DAG member modifications), but if that fails, those pieces of code will fall back to the servername once the network timeout occurs.

Exchange also utilizes the Possible Owners of the Network Name resource for moving the Primary Active Manager (PAM), which is the server that has control of the default Cluster Group and essentially monitors database status and decides which server mounts which database.  For more information about the Active Manager, click here.

So moving on… as you can see, our DAG is using DHCP which is denoted by the <> characters.

So taking a look at our first node, we see our MAPI Network is on the 192.168.1.0 subnet due to the IP Address being 192.168.1.152/24.  Our second node is on the same subnet.

We currently have 192.168.1.154 free so we will use that static IP for our DAG.  It’s not absolutely necessary to use a static IP, but if you feel the need to use a static, feel free.

Now that we have our static IP chosen, let’s run the following command:

Set-DatabaseAvailabilityGroup -Identity ShudnowDAG -DatabaseAvailabilityGroupIPAddresses 192.168.1.154

We now see that our DAG has the following static IP configured.
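
You can also confirm this from the EMS; a quick check (the -Status switch pulls live values from the cluster, and the wildcards are just to avoid typing the long property names):

Get-DatabaseAvailabilityGroup -Identity ShudnowDAG -Status | Format-List Name,Witness*,*Address*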

Creating the DAG using the EMS and assigning a Static IP

Using the EMS is much faster.  Instead of doing all the above, all you need to do is run the following command:

New-DatabaseAvailabilityGroup -Name ShudnowDAG -WitnessServer SHUD-OCSFE01 -WitnessDirectory C:\ShudnowDAG -DatabaseAvailabilityGroupIPAddresses 192.168.1.154

See? Much faster than using the EMC.  This will definitely be the method I am going to be using in the future to create a DAG when using a static IP instead of DHCP.

Adding the first Node to our DAG

Well, let's go ahead and add our first node to the DAG.  Go into the EMC > Organization Configuration > Mailbox > Database Availability Group > right-click our DAG > Manage Database Availability Group Membership.

Add the first Node.  Click Manage to Continue.

Our first node has successfully been added.
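
For reference, the EMS equivalent of that wizard is a single cmdlet; a sketch for this lab:

Add-DatabaseAvailabilityGroupServer -Identity ShudnowDAG -MailboxServer SHUD-EXC01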

But… what exactly was done behind the scenes when this first node was added to the DAG?  The following occurs (from the TechNet documentation):

  • The Windows Failover Clustering component is installed, if it is not already installed.
  • A failover cluster is created using the name of the DAG.
  • A cluster name object (CNO) is created in the default Computers container.
  • The name and IP address of the DAG are registered as a Host (A) record in DNS.
  • The server is added to the DAG object in Active Directory.
  • The cluster database is updated with information on the databases that are mounted on the added server.

First of all, we can see the DAG has been registered in DNS.

Second of all, we can see the DAG's Cluster Name Object (CNO) has been created in AD.

Third of all, we can see the cluster has been formed.  As you can see, there's no CMS/virtual server under Services and applications.  This is because Exchange 2010 is not a cluster-aware application; it only utilizes the Windows Failover Clustering services for heartbeat information and cluster networks.

Finally, we can see that the cluster is currently set to Node Majority.  When we add our second node, the cluster will be switched to Node and File Share Majority, since we'll have an even number of Exchange nodes and will need a third node/share to act as our witness.  Because of this, we won't see any FSW data inside the FSW share until our second node is added to the DAG.

 

 

Exchange 2010 DAG using Server 2008 R2 – Step 4

Adding the second Node to our DAG

Well, let's go ahead and add our second node to the DAG.  Go into the EMC > Organization Configuration > Mailbox > Database Availability Group > right-click our DAG > Manage Database Availability Group Membership.

Add the second Node.  Click Manage to Continue.

Our second node has successfully been added.

But… what exactly was done behind the scenes when this second node was added to the DAG?  The following occurs (from the TechNet documentation):

  • The server is joined to Windows Failover Cluster for the DAG.
  • The quorum model is automatically adjusted:
  • A Node Majority quorum model is used for DAGs with an odd number of members.
  • A Node and File Share Majority quorum is used for DAGs with an even number of members.
  • The witness directory and share are automatically created by Exchange when needed.
  • The server is added to the DAG object in Active Directory.
  • The cluster database is updated with info on mounted databases.

First of all, we can see the server has been joined to the Windows Failover Cluster for the DAG.

Second of all, we can see the quorum model has been adjusted to Node and File Share Majority because we have an even number of nodes.

We can also see the FSW is set to the location we specified when creating our DAG (SHUD-OCSFE01 with a path of C:\ShudnowDAG) and that there is Quorum data in this location.

Adding Database Replicas

Well, let's go ahead and create a new database and replicate it.  Go into the EMC > Organization Configuration > Mailbox > Database Management.

We can see there are currently two databases that were created during installation on our Exchange Mailbox servers: one for the first node and one for the second node.

We can’t delete these databases because they contain some arbitration mailboxes.  Arbitration mailboxes are special mailboxes that are used to manage approval workflows.  For example, moderated e-mails.  We can see these arbitration mailboxes and what mailbox databases they belong to by running the following command:

Get-Mailbox -Arbitration | FL Name,Database

Create a new database.  I will create a new mailbox database named LABDatabase01 and then mount it.  The two commands I will use to do this are:

New-MailboxDatabase -Name LABDatabase01 -Server SHUD-EXC01

Mount-Database -Identity LABDatabase01

Let's add a mailbox database copy to our second DAG node so we have redundant databases.  Go to Database Management > select the new database > right-click and choose Add Mailbox Database Copy.

Choose the second server for the server that will obtain our Database Copy.  Click Add to Continue.

We should then see a successful copy being added to our second DAG Node.
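
If you prefer the EMS, the same copy can be added with one cmdlet; a sketch using this lab's second node:

Add-MailboxDatabaseCopy -Identity LABDatabase01 -MailboxServer SHUD-EXC02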

To verify, in the EMC, click on the LABDatabase01 and we should see a Mounted copy and a Healthy copy below.

To do a switchover, you can right-click on the copied database and choose Activate Database Copy.
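
Both the health check and the switchover have EMS equivalents as well; a sketch:

Get-MailboxDatabaseCopyStatus -Identity LABDatabase01
Move-ActiveMailboxDatabase LABDatabase01 -ActivateOnServer SHUD-EXC02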

DAG Networks

Go into the EMC > Organization Configuration > Mailbox > Database Availability Group.  At the bottom, you will see the networks.  You can see both are enabled for replication.  Exchange 2010 always uses the least recently used replication network.  You can leave both enabled for replication, or you can disable replication on the MAPI network. This will force all replication to go over your dedicated replication network. Keep in mind that when you do this, your MAPI network can still do replication; it will only do so when no dedicated replication network is available.  For example, if the dedicated replication network went down due to a switch issue but your MAPI network was available, replication would begin to utilize the MAPI network.
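
If you do decide to turn replication off on the MAPI network, the cmdlet to use is Set-DatabaseAvailabilityGroupNetwork. A sketch follows – the network name DAGNetwork01 is an assumption about the auto-generated name, so list the networks first to confirm what yours are called:

Get-DatabaseAvailabilityGroupNetwork
Set-DatabaseAvailabilityGroupNetwork -Identity ShudnowDAG\DAGNetwork01 -ReplicationEnabled:$false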

If you were adding a 3rd node to the DAG in a different subnet, you would need to add an IP address for that subnet so the Network Name resource can come online there.  So let's say we added a 3rd DAG node on the 172.16.0.0/12 subnet.  Remember the Set-DatabaseAvailabilityGroup cmdlet with the -DatabaseAvailabilityGroupIpAddresses switch?  In this case, let's say 172.16.2.154 was going to be our DAG IP for that subnet.  We would have to add that IP to the switch above.  But the switch is not additive, so we would have to run the following command:

Set-DatabaseAvailabilityGroup -Identity ShudnowDAG -DatabaseAvailabilityGroupIPAddresses 192.168.1.154,172.16.2.154

As you can see, I specified both 192.168.1.154 in addition to 172.16.2.154.

If the DAG fails over to the second DAG node, it will keep the 192.168.1.154 address, but if it fails over to the 3rd node, it will use 172.16.2.154.  Again, this command has nothing to do with the replication networks, only the MAPI networks, and it is only there so the Network Name resource can come online, which is a cluster dependency.  No clients will connect to this Network Name resource; clients have other mechanisms to connect to Exchange.

 

Change Access Order of SCSI disk in ESX Virtual Machines


 

1. Right click the virtual machine and select Edit Settings.

 

2. Select Hard disk 1 from the menu on the left.

3. Using the pull down menu labeled “Virtual Device Node”, change the settings from SCSI 0:0 to SCSI 0:2.

4. Select Hard disk 2 from the menu on the left.

5. Using the pull down menu labeled “Virtual Device Node”, change the settings from SCSI 0:1 to SCSI 0:0.

6. Select Hard disk 1 again from the menu on the left.

7. Using the pull down menu labeled “Virtual Device Node”, change the settings from SCSI 0:2 to SCSI 0:1.

8. Boot the virtual machine and image the system.


