MIT Develops Cheaper Supercomputer Clusters By Nixing Costly RAM In Favor Of Flash
Put into real-world terms, the researchers claim that even without their new network design, 40 servers with a combined 10 terabytes of RAM couldn't chew through a 10.5-terabyte computation any better than 20 servers with a combined 20 terabytes of flash memory, and the RAM-based cluster would consume far more power. The reason is that once the working set no longer fits in RAM, the servers have to fetch the overflow from much slower disks, which erases most of RAM's speed advantage.
"This is not a replacement for DRAM or anything like that," says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. "But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: Everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space."
What's involved here is moving a little computational power off the servers and onto the chips that control the flash drives. This lets the system pre-process some of the data on the flash drives before passing it back to the servers, as sketched below. And because the pre-processing algorithms are wired into those chips, they add no computational overhead on the server.
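To make the idea concrete, here is a minimal sketch of near-data pre-processing in Python. It is not MIT's actual implementation, and all the function names are hypothetical; it only illustrates why filtering on the storage side cuts down on how much data has to move to the server.

```python
# Hypothetical sketch of in-storage pre-processing vs. the conventional path.

def host_side_search(storage_blocks, predicate):
    """Conventional path: ship every block to the server, then filter there."""
    transferred = list(storage_blocks)          # all data crosses the network
    matches = [b for b in transferred if predicate(b)]
    return matches, len(transferred)            # blocks found, blocks moved

def in_storage_search(storage_blocks, predicate):
    """Near-data path: the storage controller filters first, ships only hits."""
    matches = [b for b in storage_blocks if predicate(b)]   # runs near the flash
    return matches, len(matches)                # only matching blocks are moved

if __name__ == "__main__":
    blocks = list(range(1_000_000))
    wanted = lambda b: b % 1000 == 0            # stand-in for a real query

    _, moved_host = host_side_search(blocks, wanted)
    _, moved_flash = in_storage_search(blocks, wanted)
    print("blocks moved with host-side filtering:", moved_host)
    print("blocks moved with in-storage filtering:", moved_flash)
```

In this toy example the in-storage path moves a thousand blocks instead of a million; the server never sees the data that the query would have discarded anyway.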
The researchers tested three different applications. One is an image search (trying to find matches for a sample image in a large database). The second is an implementation of Google's PageRank algorithm. And the third is Memcached, a popular system for caching frequently requested data in memory. In all three cases, the network of flash-based servers performed competitively with a network of RAM-based servers.
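Of the three workloads, PageRank is the easiest to sketch in a few lines. The following is a generic power-iteration version of the algorithm, for illustration only; it says nothing about how the researchers mapped it onto their flash hardware.

```python
# Generic power-iteration PageRank on a toy link graph (illustrative only).

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(toy_graph).items()):
        print(page, round(score, 3))
```

At web scale the link graph is far too large for RAM, which is exactly the regime where the flash-based design is meant to pay off.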