NVIDIA GTX 690 Review

ccokeman - 2012-04-27 19:24:10 in Video Cards
Reviewed on: May 3, 2012
Price: $999


Just over a month ago NVIDIA proved it was on the right track when it released the GTX 680, which featured an all-new GPU microarchitecture called Kepler. The GTX 680 proved to be the fastest single-GPU card on the block and addressed issues that seemed to dog the Fermi-based offerings, including power consumption and heat. With the drop to 28nm, the GK104-based GTX 680 delivered excellent power consumption and thermal performance to match its impressive gaming performance. It soundly beat AMD's HD 7970 in just about every test it was run through, including Surround (Eyefinity) resolutions of 5760x1080. Impressive, to say the least. Now here we are a month later and NVIDIA has just announced the successor to the GTX 680 with the introduction of the GTX 690. This all-out card features not one but two full-powered 28nm GK104 GPU cores with 3072 CUDA cores and 4GB of 6Gbps GDDR5 memory. I have to think NVIDIA learned some valuable lessons from the GTX 590, which was down-clocked to meet a power envelope, in putting a card such as this out as its flagship offering. If it is as good as GTX 680 SLI, it will prove to be a success.

NVIDIA brought new technologies to the party when it released the GTX 680, including Adaptive VSync, FXAA, and dynamic clock control called GPU Boost. To go with those we get Temporal Anti-Aliasing (TXAA) for improved visual quality with a lower hardware performance hit. Built like a tank, the GTX 690 should prove to be the ultimate gaming card for those that have to have the best multi-screen or high-resolution gaming experience. Let's see what makes it tick!


Closer Look:

About a week ago I received a package from NVIDIA with a strange item in it: a pry bar with the inscription "For use in case of Zombies or....." with the NVIDIA logo to the side. A pretty cryptic thing, to be sure. The obvious implication was that the card would arrive in some kind of special packaging, and indeed it did: a crate with the words "Caution: Weapons Grade Gaming Power" burned into the wood that needed the pry bar sent the week before, or at least a big screwdriver, to open. Having seen the industrial design of the GTX 690 during NVIDIA's announcement this past Saturday, an industrial-strength package was needed to drive the point home. That it does. Inside the crate, under several layers of foam, was the object many (including myself) have been anxious to see.









The GTX 690 uses the same 28nm Kepler SMX architecture introduced on the GK104-based GTX 680; there are just double the components. Each GK104 consists of four GPCs (Graphics Processing Clusters), each holding two SMX units of 192 cores, for a total of 1536 CUDA cores per GPU, or 3072 on the GTX 690. That is six times what is available on the GTX 580, for a point of reference. To more effectively manage power consumption, the traditional method of running the shader clock at twice the core clock was abandoned; the clocks now run at a 1:1 ratio. Each GPC has a single raster engine, and the GPCs dynamically share 512KB of L2 cache per GPU. Each GPU core features 128 texture units and 32 ROPs. A new feature with GK104 is hardware- and software-based GPU Boost technology, which dynamically raises the clock speeds of the GPU cores when there is available TDP headroom, much like the latest CPUs from Intel and AMD. The base clock speed for the cores on the GTX 690 is 915MHz, with a GPU Boost clock of around 1019MHz. The GTX 690 memory subsystem is the same as that used on the GTX 680, with four 64-bit (256-bit total) memory controllers per GPU core, each GPU handling 2GB of GDDR5 memory running at 1500MHz (6000MHz effective).
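The core counts and bandwidth quoted above all follow from a few multiplications. A quick sketch of that arithmetic, using only figures stated in this review and NVIDIA's spec sheet:

```python
# Back-of-the-envelope check of the GK104 / GTX 690 figures quoted above.
# All constants come from the review text; this just makes the arithmetic explicit.

GPCS_PER_GPU = 4        # Graphics Processing Clusters per GK104
SMX_PER_GPC = 2         # SMX units per GPC
CORES_PER_SMX = 192     # CUDA cores per SMX
GPUS = 2                # two GK104 cores on the GTX 690

cores_per_gpu = GPCS_PER_GPU * SMX_PER_GPC * CORES_PER_SMX   # 1536
total_cores = cores_per_gpu * GPUS                           # 3072

# Memory bandwidth: 256-bit bus per GPU, GDDR5 at 6008 MHz effective.
BUS_WIDTH_BITS = 256
EFFECTIVE_CLOCK_MHZ = 6008   # NVIDIA's official effective data rate

# 6008e6 transfers/s * 32 bytes per transfer, i.e. about 192.26 GB/s per GPU,
# which NVIDIA quotes as 192.2 GB/s (384.4 GB/s for the pair).
bandwidth_per_gpu_gbs = EFFECTIVE_CLOCK_MHZ * 1e6 * (BUS_WIDTH_BITS / 8) / 1e9

print(cores_per_gpu, total_cores, bandwidth_per_gpu_gbs)
```

Doubling every number for the second GPU is exactly how the spec table below is derived from the single-GK104 figures.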



The architecture has been discussed endlessly online so let's see what went into making the GTX 690 arguably the highest performing video card on the planet.

Closer Look:

NVIDIA's GTX 690 is a thing of beauty, incorporating many unique features into the design of the card, from the rugged industrial looks to the impressive performance characteristics. Most of these prove not just to be for looks but are functional elements of the card, though some of it is purely cosmetic. The GTX 690 is built using two GK104 Kepler GPUs. Measuring 11 inches in length, the GTX 690 is an inch longer than the GTX 680. The front frame of the card is made from durable trivalent chromium-plated cast aluminum with two clear polycarbonate windows over the nickel-plated fin arrays. The fan housing is made from injection-molded magnesium alloy. NVIDIA sums up the reasons for using this exotic metal like this: "Magnesium alloys are used throughout the automotive and aerospace industry (including the engines of the Bugatti Veyron and F-22 Raptor) for their light weight, heat dissipation, and acoustic dampening properties - which are the same reasons we use it in the GTX 690." A process called thixomolding is used to form the housings. The axially mounted fan was as thoroughly thought out as the rest of the design, with a new fin pitch and configuration to make this card quieter than two GTX 680s in SLI.

A 10-layer, 2oz copper-based PCB is used to help with heat dissipation and electrical signal strength, all while running cooler for enhanced longevity. The back side of the PCB shows that all of the available real estate has been utilized. One last trick to make this slick-looking card draw more attention is the green backlit, laser-cut GEFORCE GTX logo on the top of the card, which can be seen through the side window of the chassis. The GTX 690 is DirectX 11.1 and PCIe 3.0 compliant, is designed to run in a PCIe x16 slot on the motherboard, and is backwards compatible with PCIe 2.0.













Connectivity on the GTX 690 allows the end user to run up to four monitors at one time in a Surround 3+1 setup using three Dual-Link DVI ports and a single Mini DisplayPort 1.2. By connecting three 120Hz 3D Vision-ready monitors, such as the three ASUS VG236 panels used in this testing, along with a 3D Vision kit, you can enjoy 3D Surround gaming. As anyone who has seen it firsthand can attest, it is absolutely worth it if you want that added dimension; it adds another element to DiRT 3 for me. Above the DisplayPort and DL-DVI ports is the exhaust vent for the front GPU heat sink. The rear heat sink vents its thermal load out the back of the card and into the chassis. A larger flow path for the airflow helps drop the noise level of the GTX 690. The back end of the card is vented, allowing airflow out of the card so that each heat sink has a distinct airflow path, resulting in temperature parity between the GK104 cores. Because the rear GPU's thermal load is vented into the chassis, you will want to make sure there is nothing directly behind the card and improve the airflow through your chassis, or you could see an increase in the thermal load on all of the components in the case. I measured the airflow stream out of the back of the housing at 56.7 °C under load using my Kestrel 4100. The polycarbonate windows, another design element, show off the nickel-plated fin arrays on the dual vapor chamber heat sinks, at least before installation.



The GTX 690 supports Quad SLI configurations using another GTX 690 in motherboards that support this graphics configuration; the single bridge connection points to this as the only route to more than two GPUs in SLI. Two 8-pin PCIe power connections are used to supply up to 375 watts to the GTX 690 when power from the x16 PCIe slot is taken into account. NVIDIA says that power draw runs in the 276 watt range for this card in typical gaming scenarios. Incidentally, that is the load I saw during the power testing for this card in the OCC test system. Power supply recommendations are in the 650 watt range, but of course that's running on the edge.
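The 375 watt figure above is just the sum of the standard PCIe power limits. A short sketch of that budget math, using the spec-defined per-connector maximums:

```python
# Power budget for a dual 8-pin card, per the standard PCIe limits
# referenced in the review. These are rated maximums, not measured values.

PCIE_SLOT_W = 75     # a PCIe x16 slot can deliver up to 75 W
EIGHT_PIN_W = 150    # each 8-pin PCIe connector is rated for 150 W

max_deliverable = PCIE_SLOT_W + 2 * EIGHT_PIN_W   # 375 W total available
TDP = 300                                         # NVIDIA's rated TDP
typical_gaming_load = 276                         # NVIDIA's quoted typical draw

headroom_over_typical = max_deliverable - typical_gaming_load
print(max_deliverable, headroom_over_typical)
```

That 99 watt gap between typical draw and the connector limit is what gives GPU Boost and overclocking room to work within spec.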



The cooling solution employed by NVIDIA for the GTX 690 is a very robust piece, with two independent copper vapor chambers attached to two nickel-plated fin arrays. By using a separate solution for each GPU there is temperature parity, with both GPUs getting an equal share of the cool intake charge from the axially mounted fan. An aluminum baseplate is used to cool the rest of the components on the PCB and features channels that direct airflow through the fin stacks. The fan assembly uses an optimized blade design to push air through the heat sink assembly as quietly and efficiently as possible.



NVIDIA's GTX 690 is built using a pair of 28nm GK104 Kepler-based cores on a single 10-layer, 2oz copper PCB. By using such a robust PCB, temperatures are reduced and power efficiency is increased, with improved signal integrity. A 10-phase power delivery system supplies the cores and GDDR5 memory. A PLX bridge chip delivers a full 16 PCIe lanes to each core for maximum throughput. The dual GPUs on the GTX 690 feature a total of 8 GPCs (Graphics Processing Clusters), 16 SMX (Streaming Multiprocessor) units, 3072 CUDA cores, 256 texture units, 64 ROPs, and 7.08 billion transistors, all clocked at 915MHz with a boost clock of at least 1019MHz. The memory architecture uses a quartet of memory controllers for each core, resulting in a 2 x 256-bit memory bus controlling the 4GB of GDDR5 clocked at 1500MHz (6000MHz effective). As you can see, fitting all of this hardware on board the 11-inch PCB means that every bit of space is utilized.



Just the specifications and hardware list on their own point to a card that is going to be an all-out monster capable of delivering performance with the eye candy on and the ability to use the entire NVIDIA ecosystem with just a single card. All while running cooler, quieter and consuming less power than previous generation cards. Let's see if it lives up to the hype.


Graphics Processing Clusters: 8
Streaming Multiprocessors (SMX): 16
CUDA Cores: 3072
Texture Units: 256
ROP Units: 64
Base Clock: 915 MHz
Boost Clock: 1019 MHz
Memory Clock: 6008 MHz (effective)
L2 Cache Size: 2 x 512KB
Total Video Memory: 4096MB GDDR5
Memory Interface: 2 x 256-bit
Total Memory Bandwidth: 384.4 GB/s (192.2 GB/s per GPU)
Texture Filtering Rate (Bilinear): 234.2 GigaTexels/sec
Fabrication Process: 28 nm
Transistor Count: 7.08 Billion
Display Outputs: 3 x Dual-Link DVI, 1 Mini DisplayPort
Form Factor: Dual Slot
Power Connectors: 2 x 8-pin
Recommended Power Supply: 650 Watts
Thermal Design Power (TDP): 300 Watts
Thermal Threshold: 98 °C




All information courtesy of NVIDIA


Testing of the NVIDIA GTX 690 will consist of running it and comparison cards through the OverclockersClub.com suite of games and synthetic benchmarks. This will test the performance against many popular competitors. Comparisons will be made to cards of a range of capabilities to show where each card falls on the performance ladder. The games used are some of today's newest and most popular titles, which should be able to provide an idea of how the cards perform relative to each other.

The system specifications will remain the same throughout the testing. No adjustment will be made to the respective control panels during the testing, with the exception of the 3DMark Vantage testing, where PhysX will be disabled in the NVIDIA Control Panel, if applicable. I will first test the cards at stock speeds, and then overclocked to see the effects of an increase in clock speed. The cards will be placed in order from highest to lowest performance in each graph to show where they fall by comparison. The latest press release driver will be used in testing of the GTX 690 and GTX 680. Other NVIDIA comparison cards will be using the 296.10 drivers; AMD will be using Catalyst 12.3 drivers.


Comparison Video Cards:



Overclocking a Kepler-based GPU is somewhat different from what we have been used to with prior-generation NVIDIA video cards, or for that matter any video card. While you still raise the core clock, voltage, and memory clock, how they are applied is where the difference comes into play. NVIDIA uses GPU Boost to dynamically raise the clock speeds under load, increasing performance as far as possible while still staying within the thermal (98 °C) and power (300 watt) envelopes. If you have had fun overclocking a Sandy Bridge-based CPU from Intel, you get the gist of how it all works. NVIDIA uses both hardware- and software-based controls to ensure the thermal and power envelopes are not exceeded while still allowing the highest possible performance. The baseline clock speed of 915MHz is the floor when the most demanding games are played, yet in every game I tested the clock speed was usually above the 1019MHz GPU Boost clock for the majority of the test. The controls used by NVIDIA seem to allow this kind of speed at will.
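The behavior described above, boosting as far as the power budget allows and never dropping below the base clock, can be illustrated with a toy control loop. To be clear, this is a conceptual model of the idea, not NVIDIA's actual GPU Boost algorithm, and the linear watts-per-MHz power model is entirely made up for illustration:

```python
# Toy model of a GPU Boost-style controller: step the clock up in small
# increments as long as estimated board power stays under the TDP, never
# going below the base clock. Illustrative only; NOT NVIDIA's real algorithm.

BASE_MHZ = 915   # guaranteed base clock
TDP_W = 300      # rated power envelope

def boosted_clock(power_per_mhz: float, step: int = 13) -> int:
    """Highest clock reachable in `step` MHz increments above base while
    the (fake, linear) power estimate stays within the TDP."""
    clock = BASE_MHZ
    while (clock + step) * power_per_mhz <= TDP_W:
        clock += step
    return clock

# A lighter workload (fewer watts per MHz) leaves headroom to boost well past
# the official 1019 MHz boost clock, matching the behavior seen in testing;
# a heavy workload pins the card much closer to base.
light = boosted_clock(power_per_mhz=0.24)
heavy = boosted_clock(power_per_mhz=0.32)
print(light, heavy)
```

The key property the model captures is that the boost clock is opportunistic: the quoted 1019MHz is a typical figure, not a ceiling, which is why in-game clocks routinely sat above it.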

To raise the clock speed above these levels, a software-based utility such as EVGA's Precision or MSI's Afterburner can be used to adjust the memory and GPU core clock speeds. A difference from past NVIDIA cards is the lack of a third clock domain to worry about, as Kepler GPUs only use the GPU core and memory clocks. The highest clock speeds I could reach on this card were 1202MHz on the two cores and 1620MHz on the GDDR5 memory. To reach them I raised the GPU core voltage to a maximum of 1.175V, raised the core clock offset by 100MHz, and tested stability using Unigine Heaven Benchmark 3.0 at a resolution of 5760x1080 with maximum settings. I continued in 10MHz increments until the benchmark failed. Failure came pretty quickly at just +123MHz, a 13% increase in base clock speed to 1038MHz. GPU Boosted clocks ranged from 1189MHz to 1202MHz in this configuration. To get the most from the GDDR5 memory I followed the same process and ended up with a memory clock of 1620MHz, or 6480MHz effective, an 8% bump in speed. Although not massive on a percentage basis, these increases deliver measurable gains in performance across the board from an already fast card.
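The percentage figures above come straight from the offsets. A quick recap of that arithmetic, using the clocks from this testing:

```python
# Overclocking gains from the testing above, expressed as percentages.

base_core = 915          # MHz, stock base clock
max_stable_offset = 123  # MHz, highest stable core offset found
mem_stock = 1500         # MHz, stock GDDR5 clock
mem_oc = 1620            # MHz, overclocked GDDR5 clock

core_gain_pct = max_stable_offset / base_core * 100      # roughly 13.4%
mem_gain_pct = (mem_oc - mem_stock) / mem_stock * 100    # 8.0%
mem_effective = mem_oc * 4                               # GDDR5 quad data rate: 6480 MHz

print(round(core_gain_pct, 1), round(mem_gain_pct, 1), mem_effective)
```

Note the effective memory figure is simply four times the command clock, which is why a modest 120MHz bump reads as a 480MHz effective gain.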


Maximum Clock Speeds:

Testing for the maximum clock speed consists of looping Unigine Heaven 3.0 for 30 minutes to see where the clock speeds fail when pushed. If the test fails, the clock speeds are adjusted back and the test is rerun until the card passes.



  1. Metro 2033
  2. Batman: Arkham City
  3. Battlefield 3
  4. Sid Meier's Civilization V
  5. Unigine Heaven Benchmark 3.0
  6. DiRT 3
  7. Mafia II
  8. 3DMark 11
  1. Temperature
  2. Power Consumption


Part first-person shooter, part survival horror, Metro 2033 is based on the novel of the same name, written by Russian author Dmitry Glukhovsky. You play as Artyom in a post-apocalyptic Moscow, where you'll spend most of your time traversing the metro system, with occasional trips to the surface. Despite the dark atmosphere and bleak future for mankind, the visuals are anything but bleak. Powered by the 4A Engine, with support for DirectX 11, NVIDIA PhysX, and NVIDIA 3D Vision, the tunnels are extremely varied – in your travels, you'll come across human outposts, bandit settlements, and even half-eaten corpses. Ensuring you feel all the tension, there is no map and no health meter. Get lost without enough gas mask filters and adrenaline shots and you may soon wind up as one of those half-eaten corpses, chewed up by some horrifying manner of irradiated beast that hides in the shadows just waiting for some hapless soul to wander by.











Starting out with Metro 2033, it is clearly evident that at 1920x1080 the GTX 690 has double the performance of the previous GTX 590. At 5760x1080 the gap is even greater.


Batman: Arkham City is the sequel to Batman: Arkham Asylum released in 2009. This action adventure game based on DC Comics' Batman super hero was developed by Rocksteady Studios and published by Warner Bros. Interactive Entertainment. Batman: Arkham City uses the Unreal 3 engine.















In Batman: Arkham City we again see significant gains over the previous generation cards. Scaling in Surround mode is again excellent but not quite as high as seen in Metro 2033 when compared to the GTX 590.


Battlefield 3 is a first-person shooter video game developed by EA Digital Illusions CE and published by Electronic Arts. Battlefield 3 uses the Frostbite 2 game engine and is the direct successor to Battlefield 2. Released in North America on October 25, 2011, the game supports DirectX 10 and 11.
















Playing a shooter like BF3, FPS is key to taking out your enemy, along with a fair amount of skill. The GTX 690 is in no uncertain terms the fastest card in this comparison, again almost doubling the performance of the Fermi-based GTX 590 at 1920x1080.


Unigine Heaven Benchmark 3.0 is a DirectX 11 GPU benchmark based on the Unigine engine. This was the first DX11 benchmark to allow testing of DX11 features. What sets the Heaven Benchmark apart is the addition of hardware tessellation, available in three modes: Moderate, Normal, and Extreme. Although tessellation requires a video card with DirectX 11 support and Windows Vista/7, the Heaven Benchmark also supports DirectX 9, DirectX 10, and OpenGL 4.0. Visually, it features beautiful floating islands that contain a tiny village and extremely detailed architecture.














Nothing more to say here: the train continues to gain steam as the GTX 690 just blows away the cards in this comparison, including the GTX 590.


Civilization V is a turn-based strategy game. The premise is to play as one of 18 civilizations and lead your civilization from the "dawn of man" up to the space age. This latest iteration of the Civilization series uses a new game engine and makes massive changes to the way the AI is used throughout the game. Civilization V was developed by Firaxis Games, is published by 2K Games, and was released for Windows in September of 2010. Testing will be done using actual game play, with FPS measured by Fraps through a series of five turns, 150 turns into the game.















Dual GPU cards or multi GPU combinations will deliver frame rates well above what is deemed playable in this game!


DiRT 3 is the third iteration of this series. Published and developed by Codemasters, this game uses the EGO 2.0 game engine and was released in the US on PC in May of 2011.

















I have got to say, I love me some DiRT 3 in Surround with the GTX 690. Game play is smooth with no glitches in performance.


Mafia II is a third-person shooter that puts you into the shoes of a poor Sicilian immigrant, Vito Scarletta. Vito has just returned home from serving overseas in the liberation of fascist Italy, having avoided his jail sentence, only to find his family in debt. The debt must be repaid by the end of the week, and his childhood friend, Joe Barbaro, conveniently happens to have questionable connections that he assures will help Vito clear the debt in that time. As such, Vito is sucked into a world of quick cash. Released in North America for PC in August of 2010, the game was developed by 2K Czech, published by 2K, and uses the Illusion 1.3 game engine.














As expected the GTX 690 delivers great FPS numbers in this game exceeding the results of the GTX 590 in each resolution at stock and overclocked by wide margins. SLI scaling at 5760x1080 is close to 100%.


3DMark 11 is the next installment in Futuremark's 3DMark series, with Vantage as its predecessor. The name implies that this benchmark is for Microsoft DirectX 11 and, by unintended coincidence, matches the year following its release (which was the naming scheme of some prior versions of 3DMark). 3DMark 11 is designed solely for DirectX 11, so Windows Vista or 7 is required along with a DirectX 11 graphics card in order to run this test. The Basic Edition has unlimited free tests in performance mode, whereas Vantage allowed only a single test run. The Advanced Edition costs $19.95 and unlocks nearly all of the features of the benchmark, while the Professional Edition runs $995.00 and is mainly suited for corporate use. The new benchmark contains six tests: four aimed only at graphical testing, one at physics handling, and one combining graphics and physics testing. The open source Bullet Physics library is used for physics simulation and, although not as mainstream as Havok or PhysX, it still seems to be a popular choice.

With the new benchmark, comes two new demos that can be watched, both based on the tests. Unlike the tests, however, these contain basic audio. The first demo is titled "Deep Sea" and involves a few vessels exploring what looks to be a sunken U-Boat. The second demo is titled "High Temple" and presents a location similar to South American tribal ruins with statues and the occasional vehicle around. The demos are simple in that they have no story – they are really just a demonstration of what the testing will be like. The vehicles have the logos of the sponsors MSI and Antec on their sides – the sponsorships helping to make the basic edition free. The four graphics tests are slight variants of the demos. I will use the three benchmark test preset levels to test the performance of each card. The presets are used as they are comparable to what can be run with the free version, so that results can be compared across more than just a custom set of test parameters.










In 3DMark 11 the GTX 690 scales nicely, delivering not quite 100% scaling over the GTX 680 in the Extreme preset, where the GPU is taxed most heavily.
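The scaling percentages quoted throughout this review are computed as the dual-GPU gain over a single GPU, where doubling the result counts as 100%. A small sketch of that calculation; the sample numbers here are illustrative, not measured results from this review:

```python
# How "SLI scaling" percentages are derived: the dual-GPU gain over a
# single GPU, where exactly doubling the result is 100% scaling.
# Sample FPS values below are hypothetical, not measured in this review.

def sli_scaling(single_gpu_fps: float, dual_gpu_fps: float) -> float:
    """Scaling efficiency as a percentage; 100% means the dual-GPU
    result is exactly double the single-GPU result."""
    return (dual_gpu_fps / single_gpu_fps - 1.0) * 100.0

print(sli_scaling(60.0, 118.0))   # just under perfect scaling
print(sli_scaling(60.0, 120.0))   # perfect 100% scaling
```

Anything much above roughly 90% by this measure is considered excellent, which is why the near-doubling over the GTX 680 in the Extreme preset is notable.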


Temperature testing will be accomplished by loading the video card to 100% using Unigine's Heaven Benchmark Version 3.0, with EVGA's Precision overclocking utility for temperature monitoring. I will be using a resolution of 1920x1080 using 8xAA and a five-run sequence to run the test, ensuring that the maximum thermal threshold is reached. The fan speed will be left in the control of the driver package and video card's BIOS for the stock load test, with the fan moved to 100% to see the best possible cooling scenario for the overclocked load test. The idle test will involve a 20-minute cool-down, with the fan speeds left on automatic in the stock speed testing and bumped up to 100% when running overclocked.











Cooling twice the core count of the GTX 680, NVIDIA had a significant challenge to meet in delivering on the thermal characteristics of the GTX 690. In three of the four tests the GTX 690 runs cooler than the HD 7970, and it runs cooler than the GTX 590 in all four, showing how efficient the Kepler architecture is. NVIDIA used a separate copper vapor chamber-based heat sink for each GPU, eliminating the thermal load stacking seen on earlier dual-GPU cards. Even at 100% fan speed the GTX 690 is quiet compared to earlier cards like the GTX 590. The work done to maximize airflow and reduce noise is evident here.


Power consumption of the system will be measured at both idle and loaded states, taking into account the peak wattage of the entire system with each video card installed. I will use Unigine's Heaven Benchmark version 3.0 to put a load onto the GPU using the settings below. A 15-minute load test will be used to simulate maximum load, with the highest measured wattage value recorded as the result. The idle results will be measured as the lowest wattage value recorded with no activity on the system.
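Since these are wall measurements of the whole system, the card's load increment is estimated by subtracting the idle draw from the loaded draw. A minimal sketch of that delta method; the sample wattages are hypothetical, chosen only so the delta matches the roughly 276 watt figure discussed in this review:

```python
# Estimating a card's load power increment from whole-system wall readings:
# loaded draw minus idle draw. This ignores PSU efficiency and the card's
# own small idle draw, so it is a rough estimate, not a direct measurement.

def card_load_delta(system_idle_w: float, system_load_w: float) -> float:
    """Rough card power increment under load, from wall measurements."""
    return system_load_w - system_idle_w

# Hypothetical readings whose delta lands on the ~276 W typical gaming
# load NVIDIA quotes for the GTX 690.
print(card_load_delta(150.0, 426.0))
```

The delta method is why the review compares idle and load numbers for every card: the absolute wall figures include the whole test system, but the deltas isolate what each card adds.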













There is no doubt that feeding a dual GPU card is going to provide some power challenges, but a measure of success is going to be how well it fares against previous generation products. In all four scenarios the GTX 690 uses less power than the GTX 590 at idle and under load by measurable margins. Especially impressive when you look at the performance delivered by the GTX 690.


To say that I was impressed with what the GTX 690 has to offer would be an understatement. In the past NVIDIA has delivered excellent gaming performance when it dropped a dual-GPU card on the market, such as the 9800GX2, GTX 295, and GTX 590. Each offered significant gains in the all-important FPS arena but suffered from the challenges of running two GPUs on one card: increased power consumption and cooling compromises that resulted in lower clock speeds than single-GPU cards. The GTX 690 is for the most part immune to these challenges, with a robust 10-phase power circuit and 10-layer, 2oz copper PCB to manage the power needed to maximize performance and stay within the 300 watt TDP of the card; in most games it runs near the quoted 276 watts. Incidentally, that is the consumption delta between the idle and load results I saw in my testing at stock speeds.

For cooling, NVIDIA does not use a design that passes air from one heat sink into the other like earlier dual-GPU cards such as the original GTX 295. By using a split system with an axially mounted fan, each independent vapor chamber-based heat sink delivers temperature parity for increased overclocking potential, keeping the load temperatures within a degree or two at most. The baseplate and fan design help channel all that airflow through the fin arrays, delivering the cool temperatures and reducing the noise signature. Surprisingly, the GTX 690 is incredibly quiet for a card of this type. NVIDIA optimized the fan profile to increase airflow without an increase in noise, which is significant given the reduced venting on the mounting bracket. What goes unnoticed is that it has also changed the fan speed algorithm from noticeable steps to a linear profile that gradually increases fan speed, staying quiet in the process. So much so that I had to pull the side panel off to make sure the fan was running during the testing. Manually setting the fan to its maximum level results in a barely audible (outside the chassis) whirring from the fan and a rush of airflow out of the mounting bracket; anything below 73% was not audible in this scenario. Wins all the way around. The only concern with the cooling solution is that some of the rear GPU's discarded thermal load is recycled into the chassis from the vent on the tail end of the GTX 690. Anything behind this vent will get hot; I measured a constant 55+ °C temperature from the rear vent of the card during my load testing. The better the airflow in the chassis, the less of a concern this is.

You have got to give it to NVIDIA for delivering a stunning-looking card that has all the right bling factors, from a pair of polycarbonate windows over the fin arrays, to the laser-cut GEFORCE GTX logo on the spine of the card that glows green when powered on, to the use of exotic metals and coatings, for a truly badass industrial-looking video card that anyone would be proud to own. Being able to enjoy the entire spectrum of NVIDIA technologies on a single card, from 3D Vision and Surround to enabling PhysX in game, delivers a truly engaging experience. You can turn it all on. If you need more, Quad SLI is supported using two GTX 690s in a motherboard that supports it. Adaptive VSync can be used to deliver FPS improvements while reducing the screen tearing seen when it is not enabled at high FPS. TXAA and FXAA can be used over MSAA for the same or better visual quality without the hardware overhead, in the end delivering higher FPS and improved visual quality.

Overclocking, another way to improve performance, is fairly simple to accomplish once you get the hang of it and is as straightforward as in the past. You have new tools and techniques, but in a nutshell: bump the power target up, bump the clocks up, and test, with a final tweak to the voltage to get you all the way to the top. This sample overclocked to a maximum of 1202MHz on the GK104 cores and 1620MHz on the memory, both fairly decent bumps that delivered impressive performance scaling. Impressive as the stock numbers are, overclocking improves on these marks, delivering an almost 20,000 score using the Entry preset in 3DMark 11.

With a card of this nature you know it is going to carry a steep price tag. There is no way around it, as we have seen in the past with dual-GPU cards costing nearly as much as or more than the $999 entry point for the GTX 690. It's not cheap or even close, but the performance delivered speaks for itself. In the end it all comes together, with NVIDIA hitting a home run as it did with the GTX 680, delivering the ultimate gaming gear for the gamer that has to have the best card on the market to run the highest resolutions and detail levels while enjoying all the NVIDIA ecosystem has to offer. It's got great performance to go with the great looks. What more is there to say?

We will have an SLI vs. CrossFire article coming up in the next couple of weeks, so stay tuned for that. Due to driver issues from AMD, we couldn't get its cards tested in time for this review.