We are (cold) migrating our current vRanger 5.x (ESX 4.1) environment to a physically new ESXi 5.1 infrastructure. We have found that our new backups run for up to a day before timing out: throughput peaks at only about 1 MB/s. We think the problem is with how our new network is configured.
In our previous ESX 4.1 environment, the vRanger VM, the ESX service consoles, and the NAS repository all communicated on the same subnet, with acceptable performance.
With ESXi 5.1 there is no service console, so we now have a separate management LAN for our ESXi hosts, switches, and SAN. We have successfully routed this LAN into our existing production LAN so we can manage the environment from anywhere inside the company. However, it is not clear what the optimum network configuration is for the vRanger server VM or the VAs. At the moment they sit on the production LAN (as in our previous environment), which must mean all backup traffic is routed from the ESXi hosts (through the management interfaces) to an external router and then on to the VMs, which sounds terribly inefficient. The obvious remedy would seem to be to add a second NIC to the vRanger server VM and the VAs, connected to the management LAN for direct communication, but I'm not sure how, or whether, this can be done on the vSwitches without adversely affecting the ESXi hosts' VMkernel port configuration. Is there a network example somewhere that shows the correct network setup for the vRanger VMs?
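For what it's worth, my current understanding is that this would amount to adding a VM port group to the existing management vSwitch and attaching the second vNIC to it, leaving the VMkernel port group alone. Below is a minimal pyVmomi sketch of what I think that looks like; the host name, credentials, vSwitch name, and port group name are all placeholders from my environment, and I may well be wrong that this is safe alongside the VMkernel port, which is exactly what I'd like confirmed:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Sketch only: add a VM port group for backup traffic to the vSwitch that
# already carries the management VMkernel port. All names are placeholders.
ctx = ssl._create_unverified_context()  # our hosts use self-signed certs
si = SmartConnect(host='esxi01.mgmt.local', user='root',
                  pwd='***', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]  # only one host when connecting directly to ESXi

    spec = vim.host.PortGroup.Specification()
    spec.name = 'vRanger-Backup'   # new VM port group for the second vNIC
    spec.vswitchName = 'vSwitch0'  # vSwitch holding the management VMkernel port
    spec.vlanId = 0                # or the management VLAN ID, if tagged
    spec.policy = vim.host.NetworkPolicy()  # inherit the vSwitch defaults

    host.configManager.networkSystem.AddPortGroup(spec)
finally:
    Disconnect(si)

As far as I can tell, this only adds a port group and does not modify the VMkernel port itself, but I would appreciate confirmation that this is the right approach before we touch the production hosts.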
The above solution would work for a physical server attached to the iSCSI network to expose the vSphere environment directly, as outlined in the documentation, but we want to avoid setting up yet another server.
1. How do we add a VM to the management LAN (if this is actually what needs to be done)?
2. Do we need to move our NAS onto the management LAN also?
The only other solution I can think of is to leave the vRanger VMs configured as they are and deploy a virtual router to speed up traffic between the management and production subnets.
Yours in confusion.