Delivering 1 Million IOPS Using Seven PCI Express Prototype SSDs

MaxBurn

Storage Is My Life
Joined
Jan 20, 2004
Messages
3,245
Location
SC
http://www.intel.com/pressroom/innovation/innovation.htm#110909a

Recently Intel showcased more than 1 million IOPS (input/output operations per second) from a single mainstream server using 7 PCIe solid-state drive (SSD) prototypes. With this proof of concept, Intel is identifying platform bottlenecks and working on engineering improvements for future storage products. To get this kind of performance from conventional hard drives, you would need many storage racks filled with ~4,000 drives - an expensive, space-consuming and power-hungry proposition. Intel used one dual-socket server with an expansion box that consumed only ~400 watts, less than 1/100th the power a hard drive configuration would draw. We used a challenging workload for a single 1U server: 4 KByte transfers with a 2:1 read/write ratio. Meanwhile, the CPUs were only about 50% utilized, leaving plenty of headroom for applications. This could enable an online retailer to host an unprecedented number of website transactions while containing costs, or let a game developer bring products to market faster. For more info, see Senior VP & GM Bob Baker's Intel Developer Forum keynote "Silicon Leadership - Delivering Innovation" and the recent press coverage.
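A quick back-of-envelope check on those claims (the per-drive IOPS and wattage figures below are my own assumptions, not from the press release):

# Sanity check on Intel's numbers
iops_total = 1_000_000
hdd_iops = 250                       # assumed random IOPS for one 15K enterprise drive
hdds_needed = iops_total / hdd_iops  # -> 4,000 drives, matching the claim
hdd_watts = hdds_needed * 15         # assumed ~15 W per drive -> ~60,000 W
print(hdds_needed, hdd_watts / 400)  # 4000.0, 150.0 (>100x the demo's ~400 W)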

No answer to the question of whether all that data disappears after a firmware update, though.
 

P5-133XL

Xmas '97
Joined
Jan 15, 2002
Messages
3,173
Location
Salem, Or
Any online retailer that ran 1M transactions per second would quickly run out of customers! Then there's also network bandwidth and CPU resources to consider, and you'd undoubtedly need more storage capacity than SSDs could provide with any current technology.
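Just for scale (hypothetical, obviously - no retailer sustains anything like this):

# 1M transactions/s sustained around the clock
per_day = 1_000_000 * 86_400   # seconds in a day
print(per_day)                 # 86,400,000,000 transactions per day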
 

Fushigi

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,890
Location
Illinois, USA
To me, the first thing that comes to mind is running reports out of our ERP systems at work. You have files whose individual indexes can be several GB in size and an overall DB size in the hundreds of GB range. Some reports absolutely hammer disk access and can take millions of I/Os to complete; they can run for 40+ minutes. I can just imagine the throughput improvement this sort of thing would bring.

Unfortunately, enterprise SSDs for the midrange are still obscenely expensive.
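For a rough sense of the difference (all numbers here are illustrative, not our actual workload):

# Hypothetical report issuing 10M random I/Os
ios = 10_000_000
array_iops = 5_000              # assumed aggregate IOPS of a midrange disk array
demo_iops = 1_000_000           # the Intel demo figure
print(ios / array_iops / 60)    # ~33 minutes on the array
print(ios / demo_iops)          # ~10 seconds at the demo's rate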
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
17,497
Location
USA
I'd take just one of the cards, but it is not likely to fit in my case.
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
If the drives are operating in parallel, and they're using 4kB blocks, each must be averaging nearly 600MB/s for a total of 4GB/s. They've opted for an unrealistic 2:1 read/write ratio. I'll bet they're relying on their 'lazy write' buffering to gather writes and keep speed close to reads. Unfortunately, their current products are not battery-backed - oops.
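Checking that arithmetic (straightforward, but worth writing down):

iops, block_bytes, drives = 1_000_000, 4096, 7
total = iops * block_bytes                 # aggregate bytes per second
print(total / 1e9, total / drives / 1e6)   # ~4.1 GB/s total, ~585 MB/s per drive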

So the technology would need at least 6 Gb/s SAS to move out of the PCIe slot - Fibre Channel would be too slow. I guess you could just dedicate the 1U server as a SAN node, using the CPU as the controller. Pretty expensive to hot-swap though ...

Which brings me to the main point: you won't see the gains because enterprises are increasingly using SANs. The latency will kill most of the advantage, and you'd need 10 Fibre HBAs to support the bandwidth.
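The HBA count falls out of the same bandwidth figure (assuming 4 Gb/s FC at roughly 400 MB/s of usable payload per link, which is my approximation):

aggregate_mb_s = 4_096           # ~4 GB/s from the demo workload above
fc_mb_s = 400                    # assumed payload of one 4Gb FC link
print(aggregate_mb_s / fc_mb_s)  # ~10 HBAs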
 

time

Storage? I am Storage!
Joined
Jan 18, 2002
Messages
4,932
Location
Brisbane, Oz
You have files whose individual indexes can be several GB in size and an overall DB size in the hundreds of GB range. Some reports absolutely hammer disk access and can take millions of I/Os to complete; they can run for 40+ minutes.

Out of interest, is that using DB2?

'Large' reports may not use indexes, but if they do, and the indexes are falling out of RAM, performance is guaranteed to suck. An index-driven scan can need several times as many I/Os as a sequential table scan.
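A toy cost model shows why (every number here is made up for illustration):

table_pages = 1_000_000        # sequential reads for a full table scan
rows_touched = 5_000_000       # rows the report fetches via the index
index_ios = rows_touched       # worst case: ~1 random page read per row
print(index_ios / table_pages) # -> 5x the I/Os of just scanning the table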

Otherwise, partitioning is your friend - unless you're using MS SQL Server, in which case you're probably f****d.
 

Fushigi

Storage Is My Life
Joined
Jan 23, 2002
Messages
2,890
Location
Illinois, USA
DB2/400. The query optimizer will recommend temporary indexes to the SQL engine, which will create them as needed and, with the latest OS release, keep the temp indexes around for a while in case they are reused. Historical data from the query optimizer can also be used to create permanent indexes to reduce the need for future temp index overhead. We have some tables with over 50 actively maintained indexes.

We can also look at the last time an index was used and, if it isn't used often (maybe just for monthly reporting), set its maintenance to "delayed"; the system will stop updating the index until it is actually used, at which point it will "catch up" on the DB changes. It's a pay-me-now-or-pay-me-later thing, but in some shops the improvement in daily runtime is worth the tradeoff of a longer report cycle.
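A minimal sketch of that delayed-maintenance idea (illustrative Python, not how DB2/400 actually implements it):

class DelayedIndex:
    def __init__(self):
        self.entries = {}   # key -> row id
        self.pending = []   # changes deferred until the index is next used

    def record_change(self, key, rowid):
        self.pending.append((key, rowid))   # pay later: just queue the change

    def lookup(self, key):
        # "Catch up" all queued changes the first time the index is used
        for k, r in self.pending:
            self.entries[k] = r
        self.pending.clear()
        return self.entries.get(key)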
 