What is GDDR7 memory? It's the next generation of graphics memory for GPUs like the upcoming Nvidia Blackwell RTX 50-series. It will be used in a variety of products over the coming years, providing a generational upgrade over the existing GDDR6 and GDDR6X solutions and boosting performance in gaming and other workloads. But there's a lot more going on beneath the name.
Ever since the second generation of GDDR memory—for "Graphics Double Data Rate," if you're wondering—rolled out, the pattern has been pretty clear. GDDR (formerly DDR SGRAM) arrived way back in 1998, and every few years, a new iteration has arrived, boasting higher speeds and bandwidth.
The current generation GDDR6 arrived in 2018 and was first used in the Nvidia RTX 20-series and AMD RX 5000-series GPUs, starting at speeds of 14 GT/s (giga-transfers per second, or alternatively Gbps for gigabits per second) and eventually topping out at 20 GT/s. There was also GDDR6X memory, used solely by Nvidia on higher-tier RTX 30- and 40-series GPUs, with initial speeds of 19 GT/s that eventually reached 23 GT/s, at least in shipping products.
GDDR7 has been on the horizon for several years, ever since Samsung first discussed the tech back in 2022, with the final JEDEC specifications released on March 5, 2024. All the major memory manufacturers — Micron, Samsung, and SK hynix — have already committed to supporting the standard, and chips should be in mass production now. We anticipate seeing the first retail products using GDDR7 this fall.
GDDR7 Speeds
GDDR7 will initially start at speeds of 32 GT/s — 60% higher than the fastest GDDR6 memory, and 33% higher than the fastest GDDR6X memory (though no products ever used 24 GT/s speeds). But that's just the baseline starting point.
Micron and Samsung have publicly disclosed plans to release GDDR7 with speeds of up to 36 GT/s, while SK hynix says it will have up to 40 GT/s. That last would double the bandwidth of the top GDDR6 solutions, and we will likely see such chips shipping in 2025.
Looking forward, plans are in place for GDDR7 to reach up to 48 GT/s. We don't expect to see such memory for at least another year or two, but conceivably we could eventually see even higher clocks, depending on what happens in the intervening years.
Actual memory clocks are not as high as the above numbers might suggest. As with GDDR6, GDDR7 uses a quad data rate (QDR) interface, so technically it's more like GQDR7, but the industry has settled on keeping the DDR name. Data is also fetched from memory in larger chunks, which lets the base clocks sit far below the marketed speeds. In fact, even "QDR" is a bit of a misnomer: GDDR6X has base clocks of 1188 MHz ("19 Gbps") to 1438 MHz ("24 Gbps"), while GDDR6 has base clocks of 1375–2500 MHz (11–20 Gbps).
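As a quick sanity check on those figures, here's a back-of-the-envelope calculation. The transfers-per-clock values below are simply what the base clocks and rated speeds quoted above imply (8 for GDDR6, 16 for GDDR6X); they're illustrative arithmetic, not a formal spec reference.

```python
def data_rate_gtps(base_clock_mhz, transfers_per_clock):
    """Effective data rate in GT/s from a DRAM base clock in MHz."""
    return base_clock_mhz * transfers_per_clock / 1000

# GDDR6 at a 2500 MHz base clock, 8 transfers per clock -> 20.0 GT/s
print(data_rate_gtps(2500, 8))
# GDDR6X at a 1188 MHz base clock, 16 transfers per clock -> 19.008 GT/s,
# which is marketed as "19 Gbps"
print(data_rate_gtps(1188, 16))
```

Run either line and you can see why the "speed" on the box is many multiples of the clock the memory cells actually run at.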
Will there be GDDR7X memory?
The past two generations of GDDR have seen "X" variants, GDDR5X and GDDR6X. Both of these came from Micron, working in collaboration with Nvidia to create an even higher bandwidth variant of the base memory. GDDR5 topped out at 9 GT/s in the GTX 1060 6GB, while GDDR5X stretched to 12 GT/s. Similarly, GDDR6 maxed out at 20 GT/s, and GDDR6X pushed that to 24 GT/s. That's 33% faster with GDDR5X and 20% faster with GDDR6X.
So, could we see GDDR7X that pushes speeds beyond whatever level GDDR7 eventually reaches? We asked Micron about this at GTC 2024 and were told that nothing was officially in the works yet. Which does make sense—we didn't get GDDR6X until the second generation of graphics cards using GDDR6 came out, and GDDR7 isn't even shipping in products yet.
Micron isn't likely to discuss future plans for GDDR7X regardless of whether it's being worked on right now — just like Nvidia isn't talking about the future Rubin architecture for GPUs just yet. It wouldn't be surprising to see yet another Micron-Nvidia collaboration for GDDR7X, but we don't expect to hear about it one way or another for at least another two or three years. If it does become a thing, hopefully, it will provide at least a 20% boost in bandwidth, as with GDDR6X.
What products will use GDDR7?
At present, there is no official word on any products that will use GDDR7, but Nvidia's higher-tier Blackwell GPUs are widely rumored to be the first to use the new memory. We expect the first of those to arrive in the fall of 2024.
Earlier expectations were that AMD's RDNA 4 GPUs would also adopt the new memory type, but there are now indications that RDNA 4 will stick with GDDR6 memory. That's only a claim from a leaker, but there are also indications that AMD might be focusing on mainstream GPUs for its upcoming architecture. If that's correct, it wouldn't be too surprising to see a continued reliance on less expensive GDDR6 memory.
What about Intel's Battlemage GPUs? These are also targeting mainstream users, according to Intel representatives, and thus may go with GDDR6 as well. Or perhaps we could see a higher-end model with GDDR7 and mainstream solutions with GDDR6.
Whatever happens, current rumors suggest that AMD RDNA 4 may not arrive until 2025. There were whispers that Battlemage would ship this year, but now other rumblings say that it has also slipped into 2025. At present, there's no clear answer on when the various new graphics cards will ship.
GDDR7 could also be used on other devices, particularly with AI accelerators. The top AI accelerators have been using HBM memory types — mostly HBM3 and HBM3E now — but inference-focused designs could still benefit from the additional bandwidth that GDDR7 offers, even if it's not as dense a solution as HBM.
GDDR7 technical details
We've covered much about the speeds and bandwidth of GDDR7 as well as when and where we're likely to see it used, but what fundamental changes does the new memory bring?
One of the biggest changes will be a shrink in process node technology, from the current nodes (which range from 21nm down to "10nm-class") to somewhere in the 10nm to 15nm range. Micron currently uses the smallest node for GDDR6/GDDR6X, calling it 10nm-class, but we anticipate it will move to a refined and/or smaller node with GDDR7. The same goes for SK hynix, which currently produces GDDR6 on a 21nm node, and Samsung, which also uses 10nm-class for GDDR6.
Current GDDR6 solutions typically use 1.35V, and GDDR7 will reduce that to 1.2V, with a potential 1.1V version also in the works (for lower clocks). This should reduce power requirements at equivalent performance, though the higher speeds may negate that advantage.
The biggest fundamental change with GDDR7 is its use of PAM3 signaling, where GDDR6 uses NRZ (non-return-to-zero) signaling and GDDR6X uses PAM4 signaling. PAM3 (3-level pulse-amplitude modulation) reduces energy requirements compared to NRZ, while being less complex to implement than PAM4 (4-level PAM). That should make GDDR7 manufacturing equipment less complex and less expensive, though that doesn't mean it will be cheap.
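To see why a 3-level signal beats a 2-level one, note that two PAM3 symbols give 3 × 3 = 9 distinct states, enough to carry 2^3 = 8 bit patterns, i.e. 3 bits per 2 symbols versus NRZ's 2 bits per 2 symbols. The sketch below uses a hypothetical symbol mapping purely for illustration; the actual GDDR7 encoding is defined by the JEDEC spec.

```python
LEVELS = (-1, 0, 1)  # the three PAM3 signal levels

def encode_3bits(value):
    """Map a 3-bit value (0-7) to a pair of PAM3 symbols (illustrative mapping)."""
    return (LEVELS[value // 3], LEVELS[value % 3])

def decode_pair(pair):
    """Recover the 3-bit value from a pair of PAM3 symbols."""
    return LEVELS.index(pair[0]) * 3 + LEVELS.index(pair[1])

# Every 3-bit pattern round-trips through two ternary symbols
assert all(decode_pair(encode_3bits(v)) == v for v in range(8))
```

The 50% gain in bits per symbol is exactly where PAM3's bandwidth advantage over NRZ comes from, without needing PAM4's four tightly spaced voltage levels.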
GDDR7 will also support non-power-of-two configurations, and we expect to see 24Gb and, eventually, 48Gb memory chips. GDDR6 may also have 24Gb solutions coming, though no company has shipped a product using such a configuration at present. This means 50% more memory can be put on each 32-bit interface so that a typical 128-bit graphics card, for example, could have 12GB of VRAM instead of only 8GB.
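The capacity math above is straightforward: a graphics card puts one memory chip on each 32-bit slice of its bus, and a 24Gb (gigabit) die holds 3GB where a 16Gb die holds 2GB. A quick sketch:

```python
def card_capacity_gb(bus_width_bits, die_gigabits):
    """VRAM capacity for one chip per 32-bit channel group."""
    chips = bus_width_bits // 32
    return chips * die_gigabits / 8  # 8 bits per byte

# A 128-bit card with today's 16Gb chips -> 8.0 GB
print(card_capacity_gb(128, 16))
# The same card with 24Gb GDDR7 chips -> 12.0 GB
print(card_capacity_gb(128, 24))
```

That 50% bump per chip is why non-power-of-two densities matter so much for mainstream cards with narrow buses.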
Another change is that a 32-bit GDDR7 memory interface gets subdivided into four 8-bit channels, which helps facilitate fetching larger chunks of data. Where GDDR5 used an 8n prefetch and GDDR6 a 16n prefetch, GDDR7 will have a 32n prefetch architecture. This is a way to pull larger amounts of data from DRAM while still operating at relatively low clocks.
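Per-chip bandwidth follows directly from the transfer rate: each transfer moves one bit per bus wire, so a 32-bit chip's bandwidth in GB/s is the data rate in GT/s times 32 bits, divided by 8 bits per byte. A minimal sketch:

```python
def chip_bandwidth_gbps(data_rate_gtps, bus_bits=32):
    """Bandwidth in GB/s for one memory chip on a bus of bus_bits wires."""
    return data_rate_gtps * bus_bits / 8

# GDDR6 at 20 GT/s -> 80.0 GB/s per 32-bit chip
print(chip_bandwidth_gbps(20))
# GDDR7 at 32 GT/s -> 128.0 GB/s per 32-bit chip
print(chip_bandwidth_gbps(32))
# GDDR7 at 40 GT/s -> 160.0 GB/s per 32-bit chip
print(chip_bandwidth_gbps(40))
```

Multiply the per-chip figure by the number of 32-bit channels on a card (eight, for a 256-bit bus) to get total VRAM bandwidth.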
GDDR7 also supports ECC (Error Correcting Code), which allows chips to continue functioning correctly even if the occasional bit gets flipped. ECC detects and corrects such errors, improving reliability, a critical factor as speeds and densities increase.
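To illustrate the principle (and only the principle; GDDR7's on-die ECC scheme is defined by JEDEC and differs from this), here's the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and can correct any single flipped bit:

```python
def hamming_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(c):
    """Fix up to one flipped bit in a 7-bit codeword; return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

codeword = hamming_encode(1, 0, 1, 1)
codeword[3] ^= 1                        # flip one bit "in transit"
assert hamming_correct(codeword) == [1, 0, 1, 1]  # data recovered anyway
```

Real memory-grade ECC uses wider codewords for better overhead, but the idea is the same: redundant parity bits let the receiver locate and repair a flipped bit instead of merely noticing it.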
Looking beyond GDDR7
Just as sure as we've had GDDR2 through GDDR7 — as noted earlier, "GDDR1" was called DDR SGRAM — it's a safe bet we'll see GDDR8 in the future, probably four to seven years from now. The real question is what will happen after the seemingly inevitable GDDR9. Will we just add a digit and get GDDR10? Probably, though we could also shift from DDR to QDR or ODR (octal data rate) naming at some point instead.
GDDR2 from 2003 ran at a top speed of just 1 GT/s (Gbps), with a 32-bit chip yielding up to 4 GB/s of bandwidth. GDDR3 started seeing use the next year with a bandwidth of up to 8 GB/s per 32-bit chip. Nvidia skipped GDDR4 while AMD used it in a few GPUs from the X1000 and HD 2000 series in 2006–2007, with a top bandwidth of 9 GB/s. The jump to GDDR5 in 2009 brought a significant increase in bandwidth, with the slowest chips offering 16 GB/s, and later, GDDR5 would eventually double that to 32 GB/s.
Since GDDR5, the rate of introducing new variants has slowed down. GDDR5 stuck around for a good six years, and GDDR6 has done the same. It's likely that GDDR7 will be around for at least that long, coexisting alongside various forms of HBM (High Bandwidth Memory) used primarily in data centers and AI products. But at some point, even with 160 GB/s or more of bandwidth per 32-bit chip, GDDR7 will eventually need to be replaced, and engineers are likely already discussing ways to push bandwidth even higher for whatever comes next.
But right now, we're looking forward to seeing the first wave of GDDR7-equipped graphics cards. With more memory capacity and much higher bandwidth, GDDR7 will enable even higher levels of GPU compute. Those should arrive this fall if everything goes according to plan.