Posted by Richard Solomon on January 15th, 2014
What’s that? Yes, actually I *am* aware that New Year’s Day was two weeks ago! What do you mean I’m late? No, no, wait – I see the problem: not THAT kind of resolution (who keeps those anyway?), but the “number of pixels I can see” kind of resolution.
This was all spawned by an e-mail I got around New Year’s Day from a major consumer PC maker inviting me to experience “4K gaming”. I gather that in the TV/video world “4K” is being replaced by “UltraHD” – and it seems we learned nothing from the “HD” experience, as there are at least two different resolutions being called UltraHD/4K. I’m not much of a video guy, but I assume the original was 4096 x 2160 (thus the 4K – as all true geeks know, K = 1024), but it’s now also applied to 3840 x 2160 (which at least is the 16:9 ratio we’re all getting used to). My trusty calculator tells me that even the “low” resolution works out to over 8 million pixels (8,294,400) – and figure that no true gamer would accept less than 24-bit color, so 3 bytes per pixel works out to 24,883,200 bytes per screen. Multiply that by 60Hz (I know, no modern gamer could tolerate a piddling 60Hz refresh rate, but humor an old timer) and you get 1,492,992,000 bytes/sec. Let me use my poetic license here and call that 1.5GB/sec – and maybe you can start to see how 4K gaming enters my admittedly PCIe-centric world.
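If you’d rather let the computer do the calculator work, the arithmetic above is easy to reproduce. Here’s a quick Python sketch using the same assumptions as the paragraph above (the “low” 3840 x 2160 UltraHD resolution, 24-bit color, 60Hz refresh):

```python
# Back-of-the-envelope 4K frame-buffer bandwidth, using the assumptions
# from the text: 3840 x 2160 pixels, 3 bytes/pixel (24-bit color), 60Hz.

def framebuffer_bandwidth(width, height, bytes_per_pixel, refresh_hz):
    """Return (bytes per frame, bytes per second)."""
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes, frame_bytes * refresh_hz

frame, per_sec = framebuffer_bandwidth(3840, 2160, 3, 60)

print(f"{3840 * 2160:,} pixels/frame")   # 8,294,400
print(f"{frame:,} bytes/frame")          # 24,883,200
print(f"{per_sec:,} bytes/sec")          # 1,492,992,000 -- call it 1.5GB/sec
```

Bump the refresh rate to a gamer-approved 120Hz and the number doubles, which only strengthens the point.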
“Who needs PCIe 4.0 anyway?”
Hmmm, I wonder if 4K gamers might want to double their PCIe bandwidth? I mean if the frame buffer is being read out at 1.5GB/sec, how much more do you need to update that and keep all of your Doom19* monsters awash in gore?
I wonder if there’s any need to get data off disk to create all those 19th generation monsters? I know I’m still channeling my former life as a storage guy, and that I’ve talked before about PCI Express in your Disks, but I may not have shared an insight I got from some of my former LSI colleagues. Back when PCIe 3.0’s 8GT/s data rate was new and seemed like “a lot”, I asked some of the flash-storage folks “how much bandwidth can you really use?” – in what was probably a bit of a sarcastic tone, as I expected an answer well under the maximum available in a x8 PCIe connection at the time. Their dead-serious response was “as much as you can give us!” and they backed it up by showing me how they really could scale flash performance up by scaling out the number of NAND chips they used. In simple English, that means PCIe storage also has the potential to suck up truly massive amounts of PCIe bandwidth.
So here’s where my resolution comes together with New Year’s – the first draft of the PCI Express 4.0 specification. Those of you who are PCI-SIG members should soon see what PCI-SIG calls the “0.3 draft” of PCIe 4.0, but it’s by now very old news that 4.0 will include the 16GT/s (sometimes called “Gen4”) signaling rate. I would bet good money** that we won’t get a final “1.0” version of PCIe 4.0 (how confusing is THAT nomenclature) this year, but we should see the earliest formal spec coming out, and maybe, just maybe, some cutting edge implementers showing off a technology demonstration or two…
Here’s to a great 2014 for PCI Express, and a (belated) “Happy New Year” to all readers of ExpressYourself! What other technologies would YOU expect to drive adoption of PCIe 4.0? Leave a comment below with your thoughts, and as ever, check off your own New Year’s resolution by clicking here to subscribe to this blog and become one of the proud few folks who kept theirs.
*Yeah, I know there’s no Doom19 – I lost track after Doom3 I think, but I get to take those kinds of liberties every now and then!
**Those that know me know that my maximum bet is $1, so keep that in mind whenever I talk about betting…