
vRanger 5.5 and NVSD vRanger edition integration - Space/Dedupe time issues


I've recently implemented NVSD vRanger edition in our environment and I'm running into issues with how long the de-dupe process takes, along with some strange behavior in the vRanger jobs. I'm curious how other environments are configured (i.e. is ours completely misconfigured?).

 

Here's a breakdown of the environment:

The VM environment is primarily ESXi 5, with a few small clusters of ESX 4.1 and ESXi 4.1.

2 vRanger 5.5 servers (Dell R710s, dual 6-core CPUs and 24 GB RAM each, running Windows 2008 R2)

1 NVSD 1.51 vRanger Edition server (Dell R710, dual 6-core CPUs, 24 GB RAM, running Windows 2008 R2) with a 15 TB RAID 5 chunk store and two staging stores of 10.9 TB and 12.7 TB on RAID 10 volumes, all housed on a Dell MD3200 + MD1200 attached via 6 Gb/s SAS

 

We have around 400 VMs running on 32 ESX hosts, backed up primarily over SAN (FC) with a few over LAN. Both vRanger servers talk to the NVSD server over a link-aggregated 2 Gb backup network.
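
For context, the raw ceiling of that backup link works out well above our ingest rate; here's the quick math (theoretical only, ignoring protocol overhead and assuming perfect aggregation across the LAG):

```python
# Theoretical ceiling of the 2 Gb LAG between the vRanger servers and NVSD
# (ignores protocol overhead and assumes perfect link aggregation).
link_gbps = 2.0
bytes_per_sec = link_gbps * 1e9 / 8          # ~0.25 GB/s
tb_per_day = bytes_per_sec * 86400 / 1e12    # ~21.6 TB/day
print(f"~{tb_per_day:.1f} TB/day max over the LAG")
```

So on paper, at ~3 TB/day of ingest, the LAG shouldn't be the bottleneck.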

 

Initially I used the default configuration for NVSD, with garbage collection running 13:00-19:00 and de-duplication set to run 'Anytime'. However, within the first two days of backups the staging store became full (at that point only a single 10.9 TB volume was configured for staging). To allow backups to continue, the garbage collection process was turned off and the number of active de-duplicators was set to 18 (with a cap of 16 on each of the staging pools) so the de-duplication process could trim some storage down, and the additional staging volume was added.

We're now seeing the staging pool de-dupe over to the chunk store at about 2.5 TB a day, against an ingest rate of around 3 TB a day.
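
Putting rough numbers on that gap (a back-of-the-envelope sketch; the free-space figure below is a placeholder, not our actual measurement):

```python
# Net staging backlog from the rates above (assumes both rates stay
# roughly constant, which is a simplification).
INGEST_TB_PER_DAY = 3.0            # new backup data landing in staging
DEDUPE_TB_PER_DAY = 2.5            # data trimmed from staging into chunk

backlog_tb_per_day = INGEST_TB_PER_DAY - DEDUPE_TB_PER_DAY   # 0.5 TB/day net growth

free_staging_tb = 8.0              # hypothetical free space -- substitute the real value

print(f"Backlog grows {backlog_tb_per_day:.1f} TB/day; "
      f"staging fills again in ~{free_staging_tb / backlog_tb_per_day:.0f} days.")
```

In other words, as long as ingest outruns dedupe by even 0.5 TB/day, the staging pools will eventually fill no matter how large they are.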

Perfmon has been run on the system for a week: the CPU doesn't exceed 30% utilization, memory sits around 14 GB, and disk I/O doesn't appear to be constrained.
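
For anyone wanting to reproduce that check, this is roughly how the week of counters was summarized; a sketch only, and the file name, host name in the counter path, and column header are assumptions that will differ per export:

```python
import csv

# Summarize a Perfmon CSV export. The counter-path header below follows the
# usual Perfmon format, but the host name (NVSD01) and file name are
# hypothetical -- check the header row of your own export for exact names.
CPU_COL = r"\\NVSD01\Processor(_Total)\% Processor Time"

with open("nvsd_perfmon.csv", newline="") as f:
    samples = [float(row[CPU_COL]) for row in csv.DictReader(f) if row[CPU_COL].strip()]

print(f"{len(samples)} samples, avg {sum(samples)/len(samples):.1f}%, "
      f"max {max(samples):.1f}% CPU")
```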

My questions are: 1) Is this de-duplication time window normal behavior, and do we simply need another NVSD server in the environment? 2) Is there any way to increase performance so we get a better de-duplication rate from staging to chunk and keep the ingest rate from outpacing the dedupe progression?

