I'm going to mention something that I know will sound like bragging, but I bring it up more out of amazement at where price and technology have gotten, and at how lucky I am to get to use some of this stuff. Today my boss asked me to count the total memory we have in all our ESX servers because we're doing some planning for future projects.
So I totaled it all up: in raw capacity, we're at 933 GB of RAM across 19 servers in the farm (see the quick sum after the hardware list below). This is the accumulation of the past couple of years of hardware.
I've rack-mounted, installed, cabled, and currently manage the following (except for the CLARiiON; I didn't mount and install it, but I do share its management):
1 x HP c7000 chassis
10 x BL460c G1 blades (@ 32 GB each)
5 x BL460c G6 blades (@ 48 GB each)
(all blades have 2 Emulex FC HBAs + 4 NICs)
4 x Dell R710 (@ 96 GB each)
(each has 1 Emulex HBA and 6 NICs)
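Just as a sanity check, here's the back-of-the-envelope math on those per-model numbers (the spec-sheet sum comes out a touch over the 933 GB we actually counted, so a host or two is probably a little shy of those figures):

```python
# Quick sum of the raw RAM across the farm, using the per-model
# figures listed above.
hosts = [
    ("BL460c G1", 10, 32),  # (model, host count, GB of RAM each)
    ("BL460c G6", 5, 48),
    ("Dell R710", 4, 96),
]

total_gb = sum(count * ram for _, count, ram in hosts)
total_hosts = sum(count for _, count, _ in hosts)
print(f"{total_hosts} hosts, {total_gb} GB RAM by spec")  # 19 hosts, 944 GB
```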
The HP c7000 chassis has:
4 x Cisco Catalyst Blade Switch 3020 for HP
Two of the four Catalysts uplink to the corporate network, each with two ports aggregated to an upstream switch (4 Gb of connectivity total).
The other two Catalysts handle private networking so that each blade has redundant NICs; the private side carries vMotion and Fault Tolerance traffic.
Multiple VLANs give our VMs access to seven different subnets.
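To make that layout a little more concrete, here's a rough sketch of how the four Catalysts are carved up (the switch names and the per-port speed constant are mine for illustration; only the counts and roles match the real setup):

```python
# Rough model of the chassis networking described above.
GBIT_PER_UPLINK_PORT = 1  # the 3020's uplink ports are gigabit

catalysts = {
    "cat1": {"role": "corp uplink", "uplink_ports": 2},
    "cat2": {"role": "corp uplink", "uplink_ports": 2},
    "cat3": {"role": "private (vMotion/FT)", "uplink_ports": 0},
    "cat4": {"role": "private (vMotion/FT)", "uplink_ports": 0},
}

corp_bandwidth = sum(
    sw["uplink_ports"] * GBIT_PER_UPLINK_PORT
    for sw in catalysts.values()
    if sw["role"] == "corp uplink"
)
print(f"{corp_bandwidth} Gb of aggregate corp connectivity")  # 4 Gb
```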
2 x Brocade 4/24 SAN Switch Power Pack for HP c-Class BladeSystem
This gives us redundant paths to each of the CLARiiON service processors, with 4-way zoning end to end. ESX manages the multiple paths for failover.
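Here's roughly how the path fan-out works from any one host, assuming one HBA per fabric and each fabric zoned to a port on both service processors (the names are illustrative, but this is what 4-way end to end comes out to):

```python
# Enumerate the HBA x SP combinations from one blade to the array.
# Two HBAs (one per Brocade fabric) times two service processors
# gives four paths that ESX can fail over between.
from itertools import product

hbas = ["hba0 (fabric A)", "hba1 (fabric B)"]
service_processors = ["SP-A", "SP-B"]

paths = list(product(hbas, service_processors))
for hba, sp in paths:
    print(f"{hba} -> {sp}")
print(f"{len(paths)} paths per host")  # 4
```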
1 x EMC CLARiiON CX-960 (16 GB cache) with 8 bays of 15 drives each (1 TB 7200 RPM SATA II)
We get to use 4 of the 8 bays (60 drives). We run a mixture of RAID 5, RAID 6, and RAID 1+0 for different kinds of testing.
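For a sense of what those RAID choices cost in capacity, here's the usable-space math under some assumed group layouts (the actual group sizes change from test to test, so treat these as illustrative, not our real carve-up):

```python
# Usable capacity per RAID group for 1 TB drives, under assumed
# group sizes: RAID 5 as 4+1, RAID 6 as 6+2, RAID 1+0 as mirror pairs.
DRIVE_TB = 1

groups = [
    ("RAID 5 (4+1)", 4, 5),  # (label, data drives, total drives per group)
    ("RAID 6 (6+2)", 6, 8),
    ("RAID 1+0", 1, 2),      # each mirror pair yields one drive of capacity
]

for label, data, total in groups:
    usable_pct = 100 * data / total
    print(f"{label}: {usable_pct:.0f}% usable "
          f"({data * DRIVE_TB} of {total * DRIVE_TB} TB per group)")
```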
Surprising as it may sound, we don't back up any of the data on the array. The work we do isn't about storing live data; we just need the availability for testing. If we lost the array, we'd have downtime, but nothing on it would need to be restored.