Tuesday, September 24, 2013

vCloud Director - Migration of storage to new storage profile



Scenario:
Migration from local direct attached storage on a single ESXi host to a more flexible environment with multiple ESXi hosts and a new shared storage profile.

Some considerations:
  • The migration will create full clone VMs on the new storage profile, so take the storage usage into consideration before starting the move. Look at thin provisioning on the VMs' hard disks (a quick way to check is sketched after this list).
  • Decide whether you can afford to shut down the VMs for the migration; this will affect the effort involved.
  • Not just vApps need to be moved; also remember your vApp templates and media. I would start with the vApps and media first.
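Since “Move to” creates full clones, it helps to get a quick inventory of which disks are thick before you start. Below is a minimal, unofficial sketch using pyVmomi (the Python bindings for the vSphere API); the vCenter address and credentials are placeholders, so adapt before use.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details - replace with your own vCenter.
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='password',
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        if vm.config is None:          # skip VMs with no config (e.g. inaccessible)
            continue
        for dev in vm.config.hardware.device:
            # Flat VMDK backings carry the thinProvisioned flag.
            if (isinstance(dev, vim.vm.device.VirtualDisk)
                    and isinstance(dev.backing,
                                   vim.vm.device.VirtualDisk.FlatVer2BackingInfo)
                    and not dev.backing.thinProvisioned):
                print('%s: %s is thick (%d KB)'
                      % (vm.name, dev.deviceInfo.label, dev.capacityInKB))

    Disconnect(si)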

vCloud Director 5.1.2 – Bug – “Retain IP/MAC resources” does not apply when you use the “Move to” task to move to a new storage profile.

In a previous blog post I mentioned the usefulness of this setting, but during our storage migration to a new profile for vCloud Director we ran into a bug where the setting is not applied.

I have opened a case with VMware; they verified this as a bug, and it now has an SR. Hopefully it will be fixed in the next build.

My current vCloud Director version where this applies:
vCloud Director 5.1.2.1068441

Debugging the problem:


When you have to move vApps to a new storage profile, the easiest way is to shut down the vApp and select “Move to”.

However, when you perform this task, the vApp will actually release the Org VDC NAT'd address for each VM.
If you have any NAT rules configured on the Edge Gateway, they will now be out of sync.
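A quick way to spot the drift is to dump the Edge Gateway NAT rules and compare the addresses against what the VMs now hold. This is a rough sketch against the vCloud REST API (version 5.1) using Python and requests; the cell address, credentials and edge gateway ID are placeholders.

    import requests
    import xml.etree.ElementTree as ET

    VCD = 'https://vcloud.example.com'          # placeholder cell address
    NS = '{http://www.vmware.com/vcloud/v1.5}'  # vCloud API XML namespace

    # Log in (user@org and password are placeholders); the session token
    # is returned in the x-vcloud-authorization response header.
    s = requests.Session()
    s.headers['Accept'] = 'application/*+xml;version=5.1'
    r = s.post(VCD + '/api/sessions', auth=('admin@MyOrg', 'password'),
               verify=False)
    s.headers['x-vcloud-authorization'] = r.headers['x-vcloud-authorization']

    # The edge gateway ID is a placeholder - look it up via the query service.
    edge = s.get(VCD + '/api/admin/edgeGateway/<edge-id>')
    root = ET.fromstring(edge.content)

    # Print every NAT rule so the Original/Translated IPs can be checked
    # against the Org VDC IPs currently assigned to the VMs.
    for rule in root.iter(NS + 'NatRule'):
        gw = rule.find(NS + 'GatewayNatRule')
        print(rule.findtext(NS + 'RuleType'),
              gw.findtext(NS + 'OriginalIp'), '->',
              gw.findtext(NS + 'TranslatedIp'))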

Workaround:

I will discuss this in the next blog post.

VMware Labs Flings: Lctree - Visualization of linked clone VM trees


Flings: Lctree

I was just pointed to Flings by the VMware support team.
These apps and tools, built by VMware engineers, are great, and I have already found my favorite for vCloud Director.



This tool is designed for the visualization of linked clone VM trees created by VMware vCloud Director when using fast provisioning.
I managed Lab Manager before and always found its built-in context view feature useful for showing the relationships and dependencies between virtual machines.
This helped me a lot in finding information about shadow copies in our environment, as well as visualizing the chain length and making decisions on when to consolidate.
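If you want the raw numbers without the fling, the chain is also visible through the vSphere API: each linked clone disk backing has a parent that points at the disk it deltas from. A small pyVmomi sketch (connection handling as in the earlier snippet; the helper name is my own):

    from pyVmomi import vim

    def disk_chain_length(vm):
        # Longest backing-file chain across all of the VM's disks; each
        # 'parent' hop is one more delta in the linked clone chain.
        longest = 0
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualDisk):
                continue
            depth, backing = 0, dev.backing
            while backing is not None:
                depth += 1
                backing = getattr(backing, 'parent', None)
            longest = max(longest, depth)
        return longest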


Flings are not supported, so there are no fixes; use at your own risk.

vCloud Director setting – Retain IP/MAC resources


This is a great and very useful setting, which I hope all users are aware of.

Scenario:

We make use of an internal vApp network on each vApp, which is then connected to the Org VDC network. This means that each VM has a NAT'd address to its own assigned Org VDC IP.
On the Org VDC IP we then again use NAT rules to our external networks.
The destination IP for these NAT rules in the Edge Gateway is the Org VDC IP address assigned to the VM.
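For reference, the vApp-level half of that chain can be read from the vApp's networkConfigSection via the vCloud REST API. A rough sketch reusing the authenticated session from the Edge Gateway snippet above (the vApp ID is a placeholder, and I am assuming the one-to-one NAT rule layout of the 5.1 schema):

    # Reuse the authenticated session 's', plus VCD and NS, from the earlier sketch.
    r = s.get(VCD + '/api/vApp/<vapp-id>/networkConfigSection')
    root = ET.fromstring(r.content)

    # One-to-one vApp NAT rules map a VM NIC to its assigned Org VDC IP
    # (the "external" address from the vApp network's point of view).
    for rule in root.iter(NS + 'NatRule'):
        one = rule.find(NS + 'OneToOneVmRule')
        if one is not None:
            print(one.findtext(NS + 'VAppScopedVmId'),
                  'nic', one.findtext(NS + 'VmNicId'),
                  '->', one.findtext(NS + 'ExternalIpAddress'))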

Monday, September 9, 2013

Host spanning not working for VMs that run on different hosts within a vApp using a vCDNI network.

By default, when a vApp is created, a new port group is created within the associated vDS.
From testing and learning the hard way, it seems the first uplink listed in the vDS is always assigned as the active uplink in the vApp's port group, with load balancing set to “Route based on the originating virtual port ID”.
This of course means you cannot set up teaming/EtherChannel on the physical uplink ports, and whichever uplink is assigned needs to have the same VLAN ID as is configured for vCDNI.
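You can confirm what got assigned by dumping the teaming policy of the auto-created port groups. A minimal pyVmomi sketch (the 'content' object comes from a connection as in the first snippet):

    from pyVmomi import vim

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)

    for pg in view.view:
        team = getattr(pg.config.defaultPortConfig, 'uplinkTeamingPolicy', None)
        if team is None:
            continue
        # 'loadbalance_srcid' is "Route based on the originating virtual port ID".
        print(pg.name,
              'policy:', team.policy.value,
              'active:', team.uplinkPortOrder.activeUplinkPort)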


Debugging the problem:

In my situation the vCloud environment started with a single host and direct attached storage, so there was only a single vDS, which held the port groups for management, vMotion and the external networks, and to which vCDNI was associated too.
This meant our management uplink was always selected as the active uplink for newly created vApp port groups, since it was the first listed uplink in the vDS. We, however, did not want to assign the same VLAN and have traffic flow over the physical management ports; physical separation is always best in my opinion.

Solution:

I created a separate vDS to which I migrated the management and vMotion port groups (virtual adapters), as well as another for my external networks. This can be accomplished without downtime when you have two or more uplinks associated with the vmkernel (see the September 5 post below for the procedure).
On the vDS associated with vCDNI I removed all the uplinks.
(On each uplink, the associated vmnic has to be removed first before you can delete the uplink from the vDS. This is accomplished as follows:
Select the host.
Select the Configuration tab.
Select Networking.
Select vSphere Distributed Switch.
Select Manage physical adapters.
Click Remove for the vmnic under the uplink name.)

I then set up two uplinks on the vDS associated with vCDNI and assigned the same VLAN ID on both uplinks' physical ports.
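The uplink cleanup can also be scripted. This is a hedged pyVmomi sketch that detaches every vmnic from one host's proxy switch for a given vDS ('host' and 'dvs' are assumed to be looked up beforehand); it will interrupt traffic on that switch, so treat it as a lab-only illustration:

    from pyVmomi import vim

    net_sys = host.configManager.networkSystem

    # An empty pnicSpec list detaches every vmnic from this host's
    # proxy switch for the given vDS.
    proxy_cfg = vim.host.HostProxySwitch.Config(
        changeOperation='edit',
        uuid=dvs.uuid,
        spec=vim.host.HostProxySwitch.Specification(
            backing=vim.dvs.HostMember.PnicBacking(pnicSpec=[])))

    net_sys.UpdateNetworkConfig(
        vim.host.NetworkConfig(proxySwitch=[proxy_cfg]), 'modify')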



Thursday, September 5, 2013

Migrate management and vMotion virtual adapters (vmk0, vmk1) to a new distributed virtual switch (vDS) without downtime.

  1. In vCenter Server, select Networking.
  2. Create a new vDS in the vCloud datacenter.
  3. Set the number of uplinks needed and name them appropriately. In my case we have two uplinks each for vMotion and management, so four in total. Create the same uplink names as on the original vDS.
  4. Create new management and vMotion port groups (different names; they cannot be the same) and remember to set your VLAN and balancing/teaming policies, but most importantly change the active uplinks to the newly created uplinks. (In the upcoming steps we will assign the physical adapters to the active uplinks.)
  5. Now go to Hosts and Clusters.
  6. Select the ESXi host and select Configuration -> Networking.
  7. Select vSphere Distributed Switch.
  8. Now you will see both vDSs.
  9. A simplified procedure for steps 10-13 is in the UPDATE at the end of this post; the original steps still work as well.
  10. On the original vDS, select Manage physical adapters.
  11. Now remove the physical adapters from the second management and vMotion uplinks, keeping the active primary uplinks in place.
  12. Once removed, select Manage physical adapters on the new vDS.
  13. Add the removed physical adapters to the new uplinks.
  14. On the new vDS, select Manage virtual adapters.
  15. Click Add.
  16. Select Migrate existing virtual adapters.
  17. Select the virtual adapters (vmk0, vmk1) from the old vDS and select the new port group names from the new vDS to associate them with on the move (this step can also be scripted; see the sketch after this list).
  18. Wait for the migration to complete.
  19. Now repeat steps 10 to 13 to remove the remaining physical adapters from the original vDS uplinks and add them to the new vDS uplinks.
  20. Done.
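For completeness, step 17 can also be done through the API. A minimal pyVmomi sketch, assuming 'host', the new 'dvs' and the new management port group 'pg' have been looked up already, with vmk0 as the example adapter:

    from pyVmomi import vim

    net_sys = host.configManager.networkSystem

    # Point the existing vmkernel adapter at the new distributed port group;
    # the IP configuration on the adapter is left as-is.
    nic_spec = vim.host.VirtualNic.Specification(
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid,
            portgroupKey=pg.key))

    net_sys.UpdateVirtualNic('vmk0', nic_spec)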

UPDATE: I actually found a shortcut for the process in steps 10-13:
  • On the new vDS, select Manage physical adapters.
  • Under the uplink name, select “Click to Add NIC”.
  • Select the corresponding physical adapter on the original vDS.
  • You will be prompted to confirm moving the physical adapter from the original vDS to the new one.
  • Voilà!