Thursday, April 2, 2009

The Mainframe of the 21st century: Google offers a peek into its data centres

Nicholas Carr and Slashdot both covered Google's unveiling of its data centre technology at the company's Data Center Energy Summit.

Google showed a video of the computer-packed shipping containers that it confirmed are the building blocks of its centers (and, as Nicholas Carr remarked, proof that Robert X. Cringely was on the money after all).

Google's big surprise: each server has its own 12-volt battery to supply power if there's a problem with the main source of electricity. 'This is much cheaper than huge centralized UPS,' says Google server designer Ben Jai. 'Therefore no wasted capacity.' Efficiency is a major financial factor. Large UPSs can reach 92 to 95 percent efficiency, meaning that a large amount of power is squandered. The server-mounted batteries do better, Jai said: 'We were able to measure our actual usage to greater than 99.9 percent efficiency.' Google has patents on the built-in battery design, 'but I think we'd be willing to license them to vendors,' says Urs Hoelzle, Google's vice president of operations. Google has an obsessive focus on energy efficiency. 'Early on, there was an emphasis on the dollar per (search) query,' says Hoelzle. 'We were forced to focus. Revenue per query is very low.'
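To put those efficiency figures in perspective, here's a quick back-of-the-envelope sketch. The per-server wattage is my own illustrative guess, not a number Google disclosed; only the container size (1,160 servers) and the efficiency figures come from the coverage above.

```python
# Rough comparison of conversion losses: centralized UPS vs. per-server battery.
SERVERS_PER_CONTAINER = 1160  # from Jimmy Clidaras' talk
WATTS_PER_SERVER = 250        # assumed per-server draw, for illustration only

def wasted_watts(efficiency):
    """Power lost between the utility feed and the servers at a given efficiency."""
    load = SERVERS_PER_CONTAINER * WATTS_PER_SERVER
    return load * (1 - efficiency)

# Centralized UPS at the top of its quoted range vs. the per-server batteries.
print(wasted_watts(0.95))   # ~14.5 kW lost per container
print(wasted_watts(0.999))  # only ~290 W lost per container
```

Under these assumptions, each container sheds roughly fifty times less power as conversion loss with the per-server batteries, which is why the design pays off at Google's scale even though it looks fussy for a single machine.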

A commenter on Slashdot had a pretty insightful take:

Most people buy computers one at a time, but Google thinks on a very different scale. Jimmy Clidaras revealed that the core of the company's data centers is composed of standard 1AAA shipping containers packed with 1,160 servers each, with many containers in each data center.
Mainstream servers with x86 processors were the only option, he added. "Ten years ago it was clear the only way to make (search) work as a free product was to run on relatively cheap hardware. You can't run it on a mainframe. The margins just don't work out," he said.

I think Google may be selling themselves short. Once you start building standardized data centers in shipping containers with singular hookups between the container and the outside world, you've stopped building individual rack-mounted machines. Instead, you've begun building a much larger machine with thousands of networked components. In effect, Google is building the mainframes of the 21st century. No longer are we talking about dozens of mainboards hooked up via multi-gigabit backplanes. We're talking about complete computing elements wired up via a self-contained, high-speed network with a combined computing power that far exceeds anything currently identified as a mainframe.

The industry needs to stop thinking of these systems as portable data centers, and start recognizing them for what they are: Incredibly advanced machines with massive, distributed computing power. And since high-end computing has been headed toward multiprocessing for some time now, the market is ripe for these sorts of solutions. It's not a "cloud". It's the new mainframe.


  1. I don't know which 80's you lived through, but mainframe processing was alive and well in the 80's I lived through. Minicomputers were a joke back then, and were seen as mostly a way to play video games. (With a smattering of spreadsheet and word processing here and there.) In the 90's, PCs started to take hold. They took over the word processing and spreadsheet functionality of the mainframe helper systems. (Anybody here remember BTOS? No? Damn. I'm getting old.)

    Note that this didn't retire the mainframe despite public impressions. It only caused a number of bridge solutions to pop up. It was the rise of the World Wide Web that led to a general shift toward PC server systems over mainframes. All we're doing now is reinventing the mainframe concept in a more modern fashion that supports multimedia and interactivity.

    Welcome to Web 2.0. It's not thin-client, it's rich terminal. The mainframe is sitting in a cargo container somewhere far away and we're all communicating with it over a worldwide telecom infrastructure known as the "internet". MULTICS, eat your heart out.

  2. The in-computer onboard UPS is not a new idea. I don't see how they could have gotten any patents on it, since I used to have one of these (my dad might still). The device I saw had a gel cell mounted on a full-length 8-bit ISA card. It had +5/12v pass-through connectors for powering the drives, and it powered the computer through the main bus. There was more logic to it, as it had some monitoring capabilities too.

    What's next, patenting a hard drive on a plugin board? Been there: it was called the Hard Card, and it put a 20 MB HDD in a full-length 8-bit ISA slot, a truly neat idea for upgrading old XT computers back in the day. You could make them work with AT computers too by also putting a regular disk controller, without a drive connected, on the bus; the BIOS would see the XT controller and boot from it.
