

Data Center Cooling: Our Story

August 12, 2013

By Randy Gibbs

It was the middle of July, and temperatures in the San Joaquin Valley had been in excess of 100 degrees for four straight days. Here at the San Joaquin County Office of Education, keeping the computer room cooled down to 72 degrees for a legacy HP 3000 mainframe was on every computer operator’s mind. The operators began to hear those strange noises a 10-year-old AC unit makes when belts are wearing out or compressors are going bad. And everyone was waiting for that dreaded call in the middle of the night: the computer room has reached 95 degrees, and someone needs to rush down to the facility to start opening doors, plug in fans, and shut down noncritical machines (as if there are any). It was going to happen, guaranteed!

And it’s always at the most critical time, like in the middle of running payroll for 12,000 employees or running end-of-month financial statements. Operators showed up for work those four days wearing their “lucky shoes.” You know, the ones they had on the last time the AC didn’t shut down during a heat wave. The stress level rose to the point that the swing-shift operator didn’t even want to go home, for fear he’d just have to come back. This had played out far too many times for too many years, and it was time to do something about it.

The first step is to understand our Data Center environment. Our Data Center was built in 1988. The computer room is 1,120 square feet, with a 24-inch raised floor and 10-foot ceilings. Counting the raised-floor plenum, the total space to be cooled is a little over 13,400 cu. ft. (1,120 sq. ft. times 12 feet of overall height comes to 13,440 cu. ft.). The Data Center was designed to house large-scale mainframe systems. The first mainframe installed was a Unisys A9, which we later upgraded to a Unisys A12. The next mainframe was a DEC system built on the Alpha chip; no one remembers whether it was an Alpha 8 or an Alpha 10, and the programmers are too old to remember exactly, though they could dig up some old emails if needed. The last mainframe to be installed was an HP 3000, which is still plugged in and running. The HP 3000 is set to be disconnected on August 31, 2013, the end of our mainframe era, and we will surely have some sort of celebration. As we moved away from mainframe systems over the last 15 to 20 years, we deployed more and more servers, to the point that we currently have approximately 95 servers in nine racks on the computer room floor. I guess it’s becoming more of a “server room” than the “computer room” of the original design.

The AC unit for this space is an 18-ton APC unit that forces air under the raised floor. It does an adequate job, but it is taxed very hard, running at close to 90 percent of capacity at all times. Couple that with 100-degree weather, and it is no wonder the operators hate the summer months. Just the slightest hiccup and panic sets in. So, what do we do?

We started by looking at the optimal solution, with cost set aside as a factor. Our computer operations manager, who is an engineer, said to give him twice the cooling power needed. He always wants twice as much: twice-as-fast disks, twice the storage, twice the CPU, twice as many printers; you get the picture. He showed us that we really required 23 tons of cooling, based on the equipment in the Data Center, the heat it produces, and the area to be cooled. Having established the requirement, we looked at cost. Replacing the 18-ton unit with two 23-ton units was cost prohibitive at approximately $200,000.
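For anyone who wants to sanity-check that figure, the conversion from equipment heat load to tons of refrigeration is straightforward. The sketch below is illustrative only: the per-server wattage and the overhead factor are assumptions, not the numbers from our actual audit.

```python
# Rough sizing check: convert an IT heat load to tons of cooling.
# 1 W of IT load becomes ~3.412 BTU/hr of heat; 1 ton of refrigeration
# removes 12,000 BTU/hr.

WATTS_TO_BTU_PER_HR = 3.412
BTU_PER_TON = 12_000

def cooling_tons(it_load_watts: float, overhead_factor: float = 1.2) -> float:
    """Estimate required cooling tonnage for a given IT heat load.

    overhead_factor pads for lighting, people, and envelope gains;
    the 1.2 default is an assumption, not a measured value.
    """
    btu_per_hr = it_load_watts * WATTS_TO_BTU_PER_HR * overhead_factor
    return btu_per_hr / BTU_PER_TON

# Example: ~95 servers at an assumed average draw of 700 W each.
if __name__ == "__main__":
    load_watts = 95 * 700
    print(f"Estimated cooling required: {cooling_tons(load_watts):.1f} tons")
```

With those assumed inputs, the estimate lands right around 23 tons, in the same ballpark as the figure our analysis produced.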

Next, we looked into portable on-demand cooling systems such as MovinCool units. They are great for portable cooling, but they would not eliminate the need to rush to the Data Center in the middle of the night. Plus, who was going to babysit the units 24 hours a day, seven days a week, just to empty the water from the condensation collectors?

The portable option was very inexpensive, at around $30,000, and was actually the direction we were leaning toward. It was better than doing nothing, but it would only have been a temporary solution; once the 18-ton unit died, a replacement would be a must.

During the analysis of this project, it just so happened that another department within the San Joaquin County Office of Education was going to be required to buy a backup generator, so that its clients would be served no matter what happened environmentally to its server room, specifically a power outage. Installing a backup generator at their facility was going to cost them in excess of $130,000. It also just so happens that we have a 1,200-amp backup generator that could run our Data Center for months if needed, as long as we keep the fuel tank filled. So I invited them to move their server room into our Data Center, where we could split the cost of a new AC unit. It took some time to figure out the details, such as how much power they would need, where their servers would be located, and how much total cooling would be required, but none of these were show stoppers.
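As a back-of-the-envelope check on why one generator could carry both departments, here is the standard three-phase power formula applied to a 1,200-amp feed. The 480-volt service and 0.9 power factor are assumptions for illustration; the real figures belong to our electrician.

```python
import math

def three_phase_kw(volts: float, amps: float, power_factor: float = 0.9) -> float:
    """Real power (kW) from a three-phase supply: P = sqrt(3) * V * I * PF."""
    return math.sqrt(3) * volts * amps * power_factor / 1000

if __name__ == "__main__":
    # Assumed 480 V three-phase service on the 1,200-amp generator feed.
    capacity_kw = three_phase_kw(volts=480.0, amps=1200.0)
    print(f"Approximate generator capacity: {capacity_kw:.0f} kW")  # ~898 kW
```

Even under conservative assumptions, that is comfortably more power than two server rooms and their cooling would draw.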

We contacted several local HVAC vendors, because we always want to spend our money at home, and asked each to analyze our environment. They agreed that 23 tons of cooling were required. Now, our computer operations manager, who always wants twice as much, suggested that we not replace the old unit but rather add another unit to the Data Center. This made a lot of sense, so we ended up adding a 30-ton Liebert DS Precision Cooling unit to our Data Center environment. The total cost of the project, including exhaust systems, electrical work, a transfer switch, and the AC unit, was $120,000. This was split between two departments and saved each of us a considerable amount.

So, here is what our Data Center environment looks like today: a computer/server room of approximately 1,120 square feet; 13 server racks holding more than 110 servers (a virtualization project will reduce this to about 30); six network racks with LAN and WAN equipment; an HP 3000; a 40-battery UPS system with freshly replaced batteries; a 1,200-amp backup generator system (computer operators sleep well at night during the winter storms, too) with four automatic transfer switches; and three AC units: the 18-ton unit, the 30-ton unit, and a small five-ton unit for our UPS room.

Should we lose power to our facility, here is the process. As soon as the power goes off, all servers and network equipment switch over to UPS. Sixty seconds later, our generator starts up. Another 60 seconds later, our servers and network equipment switch to generator power via automatic transfer switch #1. Another 60 seconds later, the 30-ton AC unit switches to generator power via automatic transfer switch #2. A further 60 seconds later, the 18-ton and five-ton AC units switch to generator power via automatic transfer switch #3. Finally, 60 seconds after that, our computer lab (because people need someplace to work when the power goes out) switches over to generator power via automatic transfer switch #4. From outage to computer lab, the whole staged sequence takes about five minutes (see the sketch below). And computer operators now sleep happily ever after. The end.
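For the technically curious, here is that staging laid out as a cumulative timeline. This is a toy model that just prints the schedule; the stage names and 60-second intervals mirror the description above, and none of it runs on the actual transfer switches.

```python
# Toy model of the staged power-failover sequence described above.
# Each tuple is (what happens, seconds after the previous stage).
TRANSFER_STAGES = [
    ("UPS carries all servers and network equipment", 0),
    ("Backup generator starts", 60),
    ("ATS #1: servers and network equipment to generator", 60),
    ("ATS #2: 30-ton AC unit to generator", 60),
    ("ATS #3: 18-ton and 5-ton AC units to generator", 60),
    ("ATS #4: computer lab to generator", 60),
]

def print_timeline() -> None:
    elapsed = 0
    for stage, delay in TRANSFER_STAGES:
        elapsed += delay
        print(f"t+{elapsed:>3}s  {stage}")

if __name__ == "__main__":
    print_timeline()
```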

About the Author

Randy Gibbs is the Division Director, Information Technology, for the San Joaquin County Office of Education. He can be reached at rgibbs@sjcoe.edu.