AMD Radeon HD 7970 review
Fantastic performance, although the sky-high price puts this out of reach for most
Review Date: 22 Dec 2011
Reviewed By: Mike Jennings
Price when reviewed: £360 (£432 inc VAT)
Features & Design
Value for Money
AMD might be having a tough time of it on the CPU front, but when it comes to graphics cards it's just taken a leap ahead of the competition. Its new single-GPU flagship - the Radeon HD 7970 - is the first to use a 28nm manufacturing process, and beats its major rival Nvidia to the punch.
The impact of the new process is significant: the HD 7970 die is 378mm² compared to the 389mm² size of its predecessor, the Radeon HD 6970, and AMD crams in 4.3 billion transistors - a huge bump up over the 2.6 billion included in last year's top-end single GPU. The HD 7970 compares favourably to Nvidia's GeForce GTX 580, too, which includes 3 billion transistors in a 520mm² package.
AMD has also given its Very Long Instruction Word 4 (VLIW4) architecture the boot, deeming its bottlenecked parallel performance a hindrance. While VLIW4 cores and their schedulers proved adept at handling groups of identical operations concurrently, they struggled with varied groups of tasks required by more complex applications and games. Some tasks were scheduled and processed promptly but, often, the scheduler couldn't keep up, with instructions left behind and bottlenecks caused in the GPU.
It's a big change: VLIW-based architectures, including VLIW4, have been used in AMD graphics cards since the Radeon 9700's introduction in 2002. Instead, the new card uses multiple-instruction, multiple-data (MIMD) cores. These are constructed from several single-instruction, multiple-data (SIMD) units grouped together, and can handle a more diverse range of tasks more efficiently, as well as make dynamic changes to the compute schedule - something VLIW4 couldn't do.
Each MIMD package is made from 64 SIMD cores, and each package has its own L1 cache, with L2 cache and memory controllers shared between several packages. AMD has given these cores their own name, too, with the marketing department swooping into action to dub each unit a Graphics Core Next.
The HD 7970 itself includes 2,048 SIMD cores in 32 MIMD clusters - more than the 1,536 stream processors used in the HD 6970. The core clock is 925MHz, there's 3GB of GDDR5 RAM running at 1,375MHz, and the memory bus is 384 bits wide. The latter is an improvement on the 256-bit bus of last year's cards, and on a par with the GTX 580.
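Those memory specs imply a peak bandwidth that's easy to sanity-check. A back-of-the-envelope sketch, assuming GDDR5's usual four data transfers per memory-clock cycle (the review doesn't state this):

```python
# Peak memory bandwidth from the HD 7970 specs above.
bus_width_bits = 384
memory_clock_hz = 1_375_000_000
transfers_per_clock = 4  # GDDR5 quad data rate (assumption, not in the review)

bandwidth_bytes_per_s = bus_width_bits / 8 * memory_clock_hz * transfers_per_clock
print(f"{bandwidth_bytes_per_s / 1e9:.0f} GB/s")  # 264 GB/s
```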
Welcome back AMD. Here's to 2012.
By dyagetme1 on 22 Dec 2011
Fingers crossed for a lottery win this weekend and this card will be mine, oh yes it will be mine. Actually make that four of these will be mine...
By skarlock on 22 Dec 2011
The power, the power
Are the power requirements getting just a little excessive? I mean, 800W for a dual-card setup - how many people have the monitor setups to actually need this? Hardcore gamers only.
By mikepgood on 22 Dec 2011
Actually, we use desktop computers with up to four GPUs each at work, which have pretty much replaced most of the very expensive server clusters we used to use. They're ideal for a lot of the number crunching our science requires. Most of the massive biological data sets we work on (strings millions of letters long, drawn from a four-letter alphabet, that have to be compared against millions of other such strings) are ideally suited to the massively parallel architecture of GPUs. So whereas previously we would have spent £20k+ per server, we can now spend under £2k on an 'ultimate gaming rig' that does the crunching far faster while consuming much less power. Plus it would make for great LAN gaming if only we could persuade the IT admins to allow that! ;)
By skarlock on 22 Dec 2011
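The workload described here maps neatly onto data-parallel hardware: every position-by-position comparison is independent. A toy CPU-only sketch in Python (sequence lengths and the scoring rule are illustrative assumptions, not from the comment):

```python
import random

random.seed(0)
bases = "ACGT"  # the four-letter alphabet mentioned above

# One query sequence and a small "database" of sequences to score it against.
query = "".join(random.choice(bases) for _ in range(1000))
database = ["".join(random.choice(bases) for _ in range(1000)) for _ in range(2000)]

def score(seq, ref):
    # Count positions where the two sequences agree; each position is
    # independent, which is why thousands of GPU cores can share the work.
    return sum(a == b for a, b in zip(seq, ref))

scores = [score(seq, query) for seq in database]
best = max(range(len(scores)), key=scores.__getitem__)
print(f"best match is sequence {best}, agreeing at {scores[best]} of 1,000 positions")
```

On a GPU, each of those scoring loops would run on its own stream processor rather than serially on one CPU core.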
Good point, and an argument for this item in PCPro.
"Plus it would make for great LAN gaming if only we could persuade the IT admins to allow that" - Good luck with that!
By mikepgood on 22 Dec 2011
The AMD press release stated "Engineered with support for PCI Express 3.0". Does this mean we have the first PCI Express 3.0-capable graphics card here? In which case we just need an Ivy Bridge processor to go with it.
By jknight on 22 Dec 2011
Indeed it is, and it makes a difference particularly when using the GPU for processing-intensive tasks. It boosts throughput from 8GB/s to 16GB/s, which should help with the CPU-GPU bottleneck. It's still worth noting that Llano's CPU-GPU bandwidth is even higher, at 20GB/s.
By skarlock on 22 Dec 2011
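The 8GB/s and 16GB/s figures fall out of the x16 link's signalling rate and line coding; a quick sketch of the arithmetic (the encoding overheads are from the PCIe specifications, not stated in the comment):

```python
# Usable PCIe x16 bandwidth in bytes/s.
# PCIe 2.0: 5GT/s per lane with 8b/10b encoding (80% efficient).
# PCIe 3.0: 8GT/s per lane with 128b/130b encoding (~98.5% efficient).
def pcie_bandwidth(gt_per_s, payload_bits, line_bits, lanes=16):
    return gt_per_s * 1e9 * payload_bits / line_bits / 8 * lanes

gen2 = pcie_bandwidth(5, 8, 10)      # PCIe 2.0 x16
gen3 = pcie_bandwidth(8, 128, 130)   # PCIe 3.0 x16
print(f"PCIe 2.0 x16: {gen2 / 1e9:.1f} GB/s")   # 8.0 GB/s
print(f"PCIe 3.0 x16: {gen3 / 1e9:.2f} GB/s")   # 15.75 GB/s
```

The doubling to "16GB/s" quoted above is the usual rounding of that ~15.75GB/s figure.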
Just been reading the full specs of the new GPU and it can turn out 3.79 TFLOPS, which is incredible given that a 12-core Opteron manages just 8.4 GFLOPS. Admittedly the GPU is limited to processing more rigidly structured groups of data, but even so that's phenomenal - it was only 15 years ago that the first ever 1 TFLOPS supercomputer was switched on.
By skarlock on 23 Dec 2011
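The 3.79 TFLOPS figure lines up with the specs quoted in the review, assuming each stream processor retires one fused multiply-add (counted as two FLOPs) per cycle - a conventional way of counting peak throughput, though the comment doesn't spell it out:

```python
# Peak single-precision throughput from the HD 7970 specs in the review.
cores = 2048
clock_hz = 925_000_000
flops_per_core_per_cycle = 2  # one FMA = two floating-point operations (assumption)

peak_flops = cores * clock_hz * flops_per_core_per_cycle
print(f"{peak_flops / 1e12:.2f} TFLOPS")  # 3.79 TFLOPS
```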
At 2,560 x 1,600 it's 61% faster than the 580 while drawing 35% more power. It may use more from the wall, but it's appreciably more power-efficient.
They need to address the cooler, though - how much louder is it than the 580? A decibel comparison would be useful.
By Deadtroopers on 23 Dec 2011
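The power-efficiency point is simple arithmetic on the two percentages quoted above:

```python
# 61% faster for 35% more power (figures from the comment above).
relative_performance = 1.61
relative_power = 1.35

perf_per_watt_gain = relative_performance / relative_power - 1
print(f"~{perf_per_watt_gain:.0%} more performance per watt")  # ~19%
```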
Noise in dB(A):
HD 7970 - idle: 40.2 (37.9 with ZeroCore Power); load: 55.2-57.3
GTX 580 - idle: 41.0; load: 52.7-59.3
So it's quieter at idle, and under load it sits within the 580's noise range.
By skarlock on 24 Dec 2011
This GPU has already blown the GTX 580 out of the water in the overclocking benchmarks; it makes you wonder what Nvidia have up their sleeves and whether Kepler will be a similar leap forward.
BTW, PCIe 3.0 is already possible using the SB-E chips. I'm not convinced this upgrade is that important, though; PCIe bandwidth is rarely a bottleneck in gaming.
Still, nice to see some healthy competition!
By mikes87 on 31 Dec 2011