NVIDIA GTX 580 Review
ccokeman - November 9, 2010
Category: Video Cards
Price: $499 - $589
Fourteen months ago, we got our first look at NVIDIA's latest architectural marvel, code-named "GF100," at the company's GPU Technology Conference (GTC) held in sunny San Jose. What we saw was the next top-of-the-line graphics core from NVIDIA, promising serious firepower with its 512 CUDA cores, 16 Streaming Multiprocessors (SMs) and 1.5GB of GDDR5 memory. As an enthusiast, I couldn't wait to see what this card would deliver in terms of FPS in games, and I was anxious to see how my folding farm would profit from that kind of computing power!
The problem was that we had to wait and watch while AMD filled up its product stack and store shelves (and ultimately consumers' rigs) with DirectX 11 hardware. At CES this year, we were invited to a 'Deep Dive' on the Fermi architecture and got a glimpse of what would be coming to consumers at release. What we saw was the part that ended up being called the GTX 480, delivered with 480 CUDA cores and 15 Streaming Multiprocessors packed into four Graphics Processing Clusters: a cut-down version of what was presented to us at the GTC. Seven months ago, the GTX 480 was released to the world and, in reality, the card kicked ass! The problems were that it was a power hog and it ran hot. So hot, in fact, that one website tried (unsuccessfully, I might add) to fry an egg on the heat sink.
Today, we fast forward to the here and now. I have the latest revision of NVIDIA's Fermi architecture, code-named GF110. What we have here is a GPU that has been re-engineered down to the transistor level and comes fully equipped with all 512 CUDA cores and 16 SMs. The re-engineering has, according to NVIDIA, taken care of the power and heat problems. So let's take a look and find out just how good a job they have done!
Since we don't have any packaging to go through, we can get right to the GTX 580. Measuring 10.5 inches long, the GTX 580 mirrors the GTX 480 in size. The first thing you notice is that the heat sink is not visible outside the shroud. Considering there are still three billion transistors that need cooling, the cooling solution must be something special! From the front, the only difference between this reference card and the initial retail offerings will be the manufacturer's sticker affixed to the shroud. The back side does not show anything out of the ordinary. Under the hood is where all the action takes place: NVIDIA has redesigned the core down to the transistor level for a cooler-running, more efficient product. With the GTX 580 we get the full 512 CUDA cores and increased clock speeds for an added performance boost. This card is meant to be used in a motherboard with an x16 PCIe slot.
Connectivity comes in the form of two dual-link DVI ports and a Mini HDMI port. NVIDIA's hardware has supported bitstreaming of Dolby TrueHD and DTS-HD Master Audio over HDMI since the 260 driver release, meaning that if you choose to use this card in a multipurpose rig, you can take advantage of HD sound for a full 3D Blu-ray movie experience. The back end of the card has a small vent and detent to pull fresh air into the fan assembly, keeping the card cool when two or more cards are installed side by side in an SLI setup.
Power is supplied through one eight-pin and one six-pin PCIe power connector, and the recommended power supply for the GTX 580 is 600 watts. The GTX 580 has a TDP of 244 watts, 6 watts below the GTX 480's 250-watt rating. Multi-GPU setups are a way to increase your gaming performance, and the GTX 580 supports up to a three-card Tri-SLI setup. The 580 has the same two bridge connections on the top spine, as well as several ventilation slots for removing heat from the shroud. By using two of these cards in an SLI setup, you can take advantage of NVIDIA's Surround technology, driving three monitors at resolutions up to 7680 x 1600. If that's not enough, you can add one more element from the NVIDIA ecosystem and hook up a 3D Vision system to the Surround configuration, as long as you have 120Hz monitors, for an out-of-screen experience.
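For reference, that 7680 x 1600 ceiling is simply three panels placed side by side. A quick sketch of the arithmetic (the 2560 x 1600 per-panel resolution here is inferred from the quoted maximum, not an NVIDIA-published figure):

```python
# Illustrative arithmetic: three side-by-side monitors in NVIDIA Surround.
PANEL_W, PANEL_H = 2560, 1600   # assumed per-monitor resolution (7680 / 3 = 2560)
NUM_PANELS = 3

surround_w = PANEL_W * NUM_PANELS
surround_h = PANEL_H
total_pixels = surround_w * surround_h

print(f"Surround resolution: {surround_w} x {surround_h}")
print(f"Total pixels: {total_pixels:,}")
# 7680 x 1600 works out to 12,288,000 pixels -- triple the load of one panel
```

That pixel count is why NVIDIA pairs Surround with SLI: a single card pushing three screens has three times the fill-rate work of a single-monitor setup.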
One of the big complaints about the GTX 480 was that it was power hungry and therefore ran hot, and all of that energy had to be dissipated somewhere. That meant you needed very good airflow and an efficient heat sink design. With the heatpipe-based solution on the GTX 480, you had to ramp up the fan speed to push more air, and the resulting noise became complaint number two. The cooling solution on the GTX 580 is quite different from the heatpipe design used on the GTX 480: NVIDIA addressed both problems with a liquid-based "Vapor Chamber" cooler. This solution uses a large liquid-filled chamber that functions much like a heatpipe but with a more efficient design. To cool the power regulation circuitry, there is a large aluminum plate that both stiffens the card and draws heat from these components. Below is a picture that illustrates how the Vapor Chamber works to remove heat from the GTX 580's core.
Last but not least is a look at the GTX 580's GF110 core. This is the fully loaded version of Fermi we were expecting when the architecture was originally introduced. Inside is a complete redesign, engineered to reduce power consumption and heat load all the way down to the transistor level. The GF110 core is equipped with four Graphics Processing Clusters, 16 Streaming Multiprocessors, 512 CUDA cores, 16 PolyMorph engines, 64 texture units and 48 ROP units, with higher clock speeds of 772MHz on the fixed-function units and 1544MHz on the CUDA cores. The GTX 580 uses the same 1536MB of GDDR5 memory running through a 384-bit bus, with 768KB of shared L2 cache. The 12 memory ICs used by NVIDIA are manufactured by Samsung.
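To put those numbers in perspective, peak single-precision throughput can be sketched from the core count and shader clock, assuming the usual two floating-point operations per CUDA core per cycle (a fused multiply-add). This is back-of-the-envelope arithmetic from the published specs, not a measured figure:

```python
# Back-of-the-envelope peak shader throughput from the quoted specs.
cuda_cores = 512
shader_clock_mhz = 1544            # CUDA core (shader) clock from the spec sheet
flops_per_core_per_cycle = 2       # assumed: one fused multiply-add = 2 FLOPs

peak_gflops = cuda_cores * shader_clock_mhz * flops_per_core_per_cycle / 1000
print(f"Theoretical peak: {peak_gflops:.1f} GFLOPS single precision")
# 512 cores x 1544 MHz x 2 = ~1581 GFLOPS
```

Real-world performance never hits a theoretical peak, of course, but it shows why the jump from 480 enabled cores at lower clocks to all 512 at 1544MHz matters on paper.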
The GTX 580 looks to have the firepower to meet the needs of the enthusiast and hardcore gamer. Based on specifications alone, it's a beast and exactly what we have been waiting for from NVIDIA. Let's see how it performs.