28 Dec 2011

HP Flex10 with VMware, our design.

A few days ago I was taking part in a Twitter conversation about Flex10 and VMware, and the differences between the Mapped and Tunneled VLAN settings of VirtualConnect.

The following is a quick review of how we did it at our datacenter.

At first we went the mapped route, since it required far fewer uplink connections, and the limit of 28 VLANs per FlexNIC seemed far away for us. Our design follows some simple rules:

* Highly Available: 2 vmnics per vSwitch

* Isolated traffic for ServiceConsole, vMotion, StorageNetwork and Virtual Machines

* Untagged frames where possible, except for Virtual Machines

On VCM we defined 4 Shared Uplink Sets (SUS): 2 for iSCSI and 2 for trunking; the Service Console and vMotion networks were included in the trunks. Each SUS has one 10 Gbps uplink. We do it this way to keep both Interconnect modules active.

Our server profile was configured according to the following image:



The FlexNICs are named LOM:{1,2}-{a-d}: LOM:1 are the internal server interfaces connected to Interconnect module number 1, and LOM:2 are those connected to Interconnect module number 2. Letters a through d identify each FlexNIC inside a physical interface.



On the VMware side we have 3 vSwitches and 1 dvSwitch. The mapping between FlexNICs and vmnics is as follows:

vmnic0 <-> LOM:1-a

vmnic1 <-> LOM:2-a

vmnic2 <-> LOM:1-b

vmnic3 <-> LOM:2-b

vmnic4 <-> LOM:1-c

vmnic5 <-> LOM:2-c

vmnic6 <-> LOM:1-d

vmnic7 <-> LOM:2-d
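A quick way to double-check this mapping and the FlexNIC speeds from the ESX console is esxcfg-nics, which lists every vmnic with its driver, MAC and link speed (the speeds should match the ones set in the server profile, listed per vSwitch below):

```
# List all vmnics with driver, link state, speed and MAC address;
# Flex10 presents each FlexNIC as a separate PCI function, so the
# configured speed (500/2500/4000/3000 Mbps) shows up here as link speed.
esxcfg-nics -l
```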

VMware configuration is as follows:

vSwitch0:

* ServiceConsole

* vmnic0, vmnic1

* Both vmnics active

* vmnic speed: 500 Mbps
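For reference, a sketch of how vSwitch0 can be built from the classic ESX service console; the IP address is hypothetical:

```
# Create the Service Console vSwitch and attach both redundant vmnics
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Untagged Service Console portgroup and its vswif interface
esxcfg-vswitch -A "ServiceConsole" vSwitch0
esxcfg-vswif -a vswif0 -p "ServiceConsole" -i 192.168.10.11 -n 255.255.255.0
```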

vSwitch1:

* vMotion

* vmnic2, vmnic3

* Both vmnics active

* vmnic speed: 2.5 Gbps
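vSwitch1 follows the same pattern plus a VMkernel interface; enabling vMotion on the vmk is normally done in the vSphere Client, although vim-cmd can do it as well (IP and vmk number are hypothetical):

```
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A vMotion vSwitch1

# Untagged VMkernel interface for vMotion
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 vMotion

# Flag the vmk interface for vMotion (assuming it came up as vmk0)
vim-cmd hostsvc/vmotion/vnic_set vmk0
```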

vSwitch2:

* iSCSI

* vmnic4, vmnic5

* 2 vmk interfaces to do iSCSI multipathing, following the vSphere guide on that subject (see the command sketch after this list)

* vmnic speed: 4 Gbps
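The iSCSI part is the one that really needs console work. The gist of the vSphere guide we followed: one portgroup per path, each portgroup overridden in its NIC teaming settings so only one vmnic is active (that override is done in the vSphere Client), and both vmk interfaces bound to the software iSCSI adapter. A sketch with hypothetical IPs, vmk numbers and vmhba name, using the ESX 4.x esxcli swiscsi namespace:

```
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2

# One portgroup per path; each gets restricted to a single active vmnic
esxcfg-vswitch -A iSCSI-A vSwitch2
esxcfg-vswitch -A iSCSI-B vSwitch2
esxcfg-vmknic -a -i 192.168.30.11 -n 255.255.255.0 iSCSI-A
esxcfg-vmknic -a -i 192.168.30.12 -n 255.255.255.0 iSCSI-B

# Bind both vmk interfaces to the software iSCSI HBA (the vmhba number varies per host)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
```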

dvSwitch1:

* 2 uplinks per host

* vmnic6, vmnic7

* One portgroup per customer VLAN

* vmnic speed: 3 Gbps
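The dvSwitch and its portgroups are created in vCenter rather than on the host, so there is no console recipe for it; still, esxcfg-vswitch -l on each host is a quick way to confirm that vmnic6 and vmnic7 ended up as the dvSwitch uplinks:

```
# Lists standard vSwitches plus DVS port-to-vmnic assignments on this host
esxcfg-vswitch -l
```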

After some months we hit the 28 VLAN limit on the Virtual Machines network, so we decided to study switching to tunneled mode. The first issue is that you can't mix the two modes, so the change must be made across the whole VC Domain.

As a design decision we use a different VC Domain in each enclosure, so we used a new C7000 as a pivot point for the change.

The connections in tunnel mode look like the following image:





The uplinks for the Service Console are 1 Gbps each; all the others are 10 Gbps.

After the mode conversion we created a new server profile, defining just "TRUNK1" and "TRUNK2" networks, mapped to uplink 1 of each Interconnect module respectively, replacing the previous "Multiple Networks" setting on vmnic6 and vmnic7 (ports 7/8, i.e. LOM:1-d and LOM:2-d).

No changes were necessary on the VMware side.

As a side note, in mapped mode you need to define every VLAN on the uplink network infrastructure, then on the VC module, then in the Server Profile, and finally on the VMware dvSwitch.

In tunnel mode you simply define the VLAN on the network infrastructure and then on VMware; no more hassle on the VC side.
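To illustrate how little is left to do: the VMware-side step is just a tagged portgroup. Ours live on the dvSwitch and are created once in vCenter, but the equivalent on a standard vSwitch would be something like this (switch name, portgroup name and VLAN ID are hypothetical):

```
# New customer VLAN: create the portgroup and tag it; nothing to change on VC
esxcfg-vswitch -A "Customer-ACME" vSwitch3
esxcfg-vswitch -v 123 -p "Customer-ACME" vSwitch3
```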

Note: a newer VC firmware release (3.30) now allows 162 VLANs and the combination of Tunnel and Mapped modes.