NVIDIA GTX 680 Review

Category: Video Cards
Reviewed by: Bosco & Airman
Reviewed on: March 22, 2012
Price: $499

Introduction:

For computer enthusiasts around the world, both AMD and NVIDIA fans, today is a very special day. Today, it is finally here. The GTX 680 has arrived! Many rumors have been circulating for months as to what NVIDIA has been up to with Kepler and what we should all expect. Countless questions have come up regarding Kepler's performance, technologies, power consumption, and general comparability to AMD's current flagship card, the HD 7970 (Tahiti). Today, those questions will be put to rest for good. AMD and NVIDIA fans will either laugh or cry, and those who simply don't care will hopefully be educated enough as to which card to buy if they have been holding out for Kepler! Though no "leaked" information has been presented by staff here on OCC, a brisk trot around Google prior to today would have landed you on plenty of websites with pictures of the card, hardware specifications, and even some benchmarks. As such, some of you reading this may already have an idea of what to expect — at least what it looks like and some other physical traits — but benchmarks prior to today don't really have any credibility. For those who have played fair and haven't peeked, I hope you are as excited as I am to see what this card can do.

Two weeks ago, NVIDIA held a press conference in San Francisco, California, where hundreds of editors gathered from around the world for the opportunity to witness the unveiling of the company's latest card. At that time, sitting down and waiting for the presentation to start, I'm sure that many of us really didn't know where to set our expectations. We only knew one thing — Kepler is coming. By the end of the presentation, I can say that I was very impressed with what the card had to offer when it came to its hardware. A clear message that NVIDIA conveyed is that it's not always about raw performance numbers and that efficiency now plays a huge role in the overall rating of a product. NVIDIA was proud to introduce the GTX 680 as the fastest, most efficient GPU ever built, stressing that performance per watt is what matters. Though it's a bold claim, NVIDIA's confidence at the event was very contagious. That being said, I can't wait to put the card on the hot seat and see if NVIDIA is truly justified in being as proud of the GTX 680 as it seems to be!

I cannot express in real words how excited I am to have the opportunity to prepare the launch review for this card. Since I can't tell you how excited I am, I'm just going to have to sum it up as "Squeee!" Okay, so maybe I'm too excited, but I'm trying to stay in my chair. I really hope that the "Kepler" GTX 680 is everything NVIDIA says it is and more! So, let's get into it.

 

Closer Look:

The card to be presented in this review is NVIDIA's reference card. As such, I have no packaging to share, so I will be starting right away with pictures of the card. Featuring a black cooler with green accents and a black PCB, the GTX 680 retains the typical NVIDIA reference card style. It is 10.5" in length, just like the GTX 580 and GTX 480. I'll be taking the cooler off soon to see what's underneath and lay eyes on the Kepler core. Bearing in mind that this card has 1536 CUDA cores, three times as many as the GTX 580, I'm interested to see what's under there.

 

 

In case you haven't heard, it had previously been confirmed that the Kepler cards would support NVIDIA Surround on a single card. NVIDIA Surround is the equivalent of AMD Eyefinity, with a few added features, such as bezel-peeking and windows that maximize to a single monitor (instead of maximizing to the entire array). To support NVIDIA Surround on one card, the NVIDIA GTX 680 has two Dual-Link DVI ports, an HDMI 1.4a port, and a full-size DisplayPort 1.2 connection. With four connections, four monitors can be operated simultaneously — such as a 3+1 NVIDIA Surround setup for up to an incredible 7680 x 1600 landscape. A small set of holes in the I/O plate allows hot air from the card to be vented out the back of the case. The opposite side of the card is sealed, unlike the GTX 580, which had a small vent whose purpose was to allow additional airflow to enter the card while in an SLI setup. If the cooling for the NVIDIA GTX 680 is half as good as NVIDIA says it is, then the lack of a little vent on the end won't be an issue.

 

 

The part that definitely dropped a lot of jaws at the conference was when we learned that the card only uses two 6-pin connectors. Not a 6-pin and an 8-pin, two 6-pin connectors. Not even my old HD 5870 or a GTX 280 used two 6-pin connections. Somehow, NVIDIA has managed to fit three times as many CUDA cores as a GTX 580 onto a board with a TDP of less than 200W, which is about 20% less than the 250W TDP of the HD 7970. GPU power delivery is handled by a 4-phase power design, with two additional power phases dedicated to the GDDR5 memory. The secret and the proof are all in the pudding. I may not know the secret, but I can provide the proof. As usual, we still find the SLI bridge connections opposite the power connections, closest to the I/O bracket. Matching the CUDA core count of just two GTX 680s in SLI would require six GTX 580s and a mind-blowing 1400W of power. Sure, a core-for-core comparison is not a proper way to assume performance metrics, but it does lend itself to a comparison of performance per watt. NVIDIA rates Kepler as providing twice the performance per watt of Fermi.
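To put rough numbers to that comparison, here's a quick back-of-the-envelope sketch. It uses board TDPs rather than measured power draw, the 195W figure from the specifications table below, and the 512 CUDA cores of a Fermi GTX 580; nothing here is measured data.

```python
# Rough core-count and TDP comparison implied above (board TDPs, not measured power).
gtx680_cores, gtx680_tdp_w = 1536, 195    # per card, from the specifications table
gtx580_cores = 512                        # per Fermi GF110 card

sli_pair_cores = 2 * gtx680_cores                 # 3072 CUDA cores from two GTX 680s
sli_pair_tdp_w = 2 * gtx680_tdp_w                 # ~390W for the Kepler pair
gtx580s_needed = sli_pair_cores // gtx580_cores   # six GTX 580s for the same core count

print(gtx580s_needed, "GTX 580s vs. a", sli_pair_tdp_w, "W pair of GTX 680s")
```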

 

 

NVIDIA has told us about power consumption being lower, but what about noise levels and temperatures? Well, we were specifically briefed on the three key features that allow the GTX 680 to deliver a quiet gaming experience. First, acoustic dampening material used in the highly-balanced GPU fan keeps noise and vibration levels low (blade passing frequency tone is minimized). A unique fin stack design is shaped for better airflow, minimizing noise levels further. Finally, a high-efficiency embedded vapor chamber at the heart of the cooler is able to provide the cooling that the Kepler core requires at lower airflow, reducing the maximum fan speed. With the shroud removed, the cooler looks similar to the GTX 580, but lacks the slanted edge of the fin stack. The cooler is constructed from a copper base with aluminum fins so that it remains lightweight and cost-effective.

 

 

What's under the cooler and at the heart of the PCB is what is truly groundbreaking. The Kepler core employs 8 SMX units. Each SMX unit contains 192 CUDA cores, totaling the 1536 that we have mentioned. Fermi has 16 SM units, but with only 32 CUDA cores per unit. The Kepler core has 8 geometry units, 4 raster units, 128 texture units, and 32 ROP units. A very exciting figure that we got to hear is that the GTX 680 has the world's first 6Gbps GDDR5 memory. What does 6Gbps compare to in MHz? Well, the math is easy — 6000 MHz. That's right, 2048MB of 6GHz (6008MHz to be exact), 256-bit, 192.26 GB/s memory. It's sickening! In a good way, of course.
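If you're curious how that 192.26 GB/s figure falls out of the bus width and data rate, the arithmetic is just the spec-sheet numbers multiplied together; here's a quick sanity check:

```python
# Memory bandwidth sanity check using the GTX 680's published specifications.
bus_width_bits = 256            # memory interface width
effective_rate_mtps = 6008      # effective GDDR5 data rate, mega-transfers per second

bytes_per_transfer = bus_width_bits / 8                        # 32 bytes per transfer
bandwidth_gb_s = bytes_per_transfer * effective_rate_mtps / 1000

print(f"{bandwidth_gb_s:.2f} GB/s")   # 192.26 GB/s, matching the spec table
```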

 

 

 

Now that the card has physically been explored, I'd like to take the time to share with you its specifications and features, as we always do. This way, numbers and other physical stuff can easily be seen! Let's keep moving.

Specifications:

Graphics Processing Clusters: 4
Streaming Multiprocessors: 8
CUDA Cores: 1536
Texture Units: 128
ROP Units: 32
Base Clock: 1006 MHz
Boost Clock: 1058 MHz
Memory Clock: 6008 MHz
L2 Cache Size: 512KB
Total Video Memory: 2048MB GDDR5
Memory Interface: 256-bit
Total Memory Bandwidth: 192.26 GB/s
Texture Filtering Rate (Bilinear): 128.8 GigaTexels/sec
Fabrication Process: 28 nm
Transistor Count: 3.54 Billion
Connectors: 2x Dual-Link DVI, 1x HDMI, 1x DisplayPort
Form Factor: Dual Slot
Power Connectors: 2x 6-pin
Recommended Power Supply: 550 Watts
Thermal Design Power (TDP): 195 Watts
Thermal Threshold: 98 °C

 

Features:

Testing:

Testing of the NVIDIA GTX 680 will consist of running it and comparison cards through the OverclockersClub.com suite of games and synthetic benchmarks. This will test the performance against many popular competitors. Comparisons will be made to cards of a range of capabilities to show where each card falls on the performance ladder. The games used are some of today's newest and most popular titles, which should be able to provide an idea of how the cards perform relative to each other.

The system specifications will remain the same throughout the testing. No adjustment will be made to the respective control panels during the testing, with the exception of the 3DMark Vantage testing, where PhysX will be disabled in the NVIDIA Control Panel, if applicable. I will first test the cards at stock speeds, and then overclocked to see the effects of an increase in clock speed. The cards will be placed in order from highest to lowest performance in each graph to show where they fall by comparison. The latest press release driver will be used in testing of the GTX 680. Other NVIDIA comparison cards will be using the 296.10 drivers; AMD will be using Catalyst 12.3 drivers.

 

Comparison Video Cards:

 

Overclocking:

 

Really getting into the overclocking of the GTX 680 is an interesting task. NVIDIA's approach to out-of-the-box overclocking on the Kepler cards is different from what we're probably all used to. Built into the GTX 680's circuitry is a power usage monitor. By default, the GTX 680 targets 100% power usage by increasing the core clock of the card whenever there is "opportunity" available. For example, a less-stressing game running the GPU at 100% usage may not draw the same amount of power (the 100% power target) as a very intense game running the GPU at 100% usage. So when the hardware circuitry recognizes that such a game is only drawing, say, 85% of the power target, it increases the clock speeds to reach the desired power target, up to the stock GPU Boost clock of 1058MHz. In some cases, clock speeds will actually go below the base clock, even when there is GPU activity (aside from the very low 2D clock). If a process does not require 100% GPU usage, clock speeds will decrease. Only at very high GPU usage levels (probably 80% or higher) will the card run at its Base Clock.
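To make that behavior a bit more concrete, here is a rough sketch of the feedback loop as I understand it from the briefing. This is purely illustrative and not NVIDIA's actual algorithm; the step size is my own assumption, and the below-base down-clocking at very light loads isn't modeled.

```python
# Illustrative model of GPU Boost as described above -- not NVIDIA's real algorithm.
BASE_CLOCK_MHZ = 1006     # GTX 680 base clock
BOOST_CLOCK_MHZ = 1058    # stock GPU Boost clock
POWER_TARGET_PCT = 100    # default power target
STEP_MHZ = 13             # assumed clock adjustment granularity

def next_core_clock(current_mhz, power_draw_pct):
    """Nudge the core clock toward the power target each polling interval."""
    if power_draw_pct < POWER_TARGET_PCT and current_mhz < BOOST_CLOCK_MHZ:
        return current_mhz + STEP_MHZ   # headroom available: boost the clock
    if power_draw_pct > POWER_TARGET_PCT and current_mhz > BASE_CLOCK_MHZ:
        return current_mhz - STEP_MHZ   # over the target: back the clock off
    return current_mhz
```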

User overclocking comes in through EVGA's Precision software. Just like any other overclocking software, it displays the current clocks, temperatures, and fan speeds, along with a monitor of all those values. There are three main adjustments that can be made for overclocking purposes: the card's power target, the GPU clock offset, and the memory clock offset. Adjusting the offset values simply raises the Base and Boost clocks. The power target slider controls how often those clock speeds are met. Since I am not concerned about power usage for my overclocked testing, I will set this slider to its maximum value. By doing so, the card will never reach the power target, meaning it will never try to decrease its clocks, resulting in steady clock speeds.

A good bit of negative feedback is present on the Internet about how the Kepler cards overclock. Even I was worried, but after realizing that the power target can be set high enough that the card will never decrease its clocks, those worries went away. As soon as I saw my set clock speed hold steady through every benchmark, I felt loads better. So, to put this to rest: your overclocking potential is not going to be limited AT ALL! Looping Unigine Heaven 3.0 while overclocking, I was able to reach 1305MHz on the core and 1628MHz (roughly 6500MHz effective) on the memory. That's nearly a 30% increase on the core and just under 10% on the memory. I was hoping for 7GHz memory here, but it couldn't quite make it. Even while pushing the card past its limit and having the overclock fail, the system never crashed. Only the graphics card driver failed, and it successfully recovered. It is certainly the most painless overclocking I've ever done; the system never locked up!
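For reference, here is how those percentages work out from the stock clocks in the specifications table (GDDR5's effective data rate is four times its actual clock, so stock memory sits at 1502MHz):

```python
# Quick arithmetic check on the overclocking margins quoted above.
stock_core_mhz, oc_core_mhz = 1006, 1305
stock_mem_mhz, oc_mem_mhz = 6008 / 4, 1628   # actual GDDR5 clocks, not effective rates

core_gain_pct = (oc_core_mhz / stock_core_mhz - 1) * 100   # ~29.7%
mem_gain_pct = (oc_mem_mhz / stock_mem_mhz - 1) * 100      # ~8.4%
print(f"core +{core_gain_pct:.1f}%, memory +{mem_gain_pct:.1f}%")
```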

 

Maximum Clock Speeds:

Testing for the maximum clock speed consists of looping Unigine Heaven 3.0 for 30 minutes per card to see where the clock speeds fail when pushed. If a clock speed adjustment fails, the clocks are backed down and the test is rerun until each card passes.

 

 

  1. Metro 2033
  2. Batman: Arkham City
  3. Battlefield 3
  4. Sid Meier's Civilization V
  5. Unigine Heaven Benchmark 3.0
  6. DiRT 3
  7. Mafia II
  8. 3DMark 11
  1. Temperature
  2. Power Consumption

Testing:

Part first-person shooter, part survival horror, Metro 2033 is based on the novel of the same name, written by Russian author Dmitry Glukhovsky. You play as Artyom in a post-apocalyptic Moscow, where you'll spend most of your time traversing the metro system, with occasional trips to the surface. Despite the dark atmosphere and bleak future for mankind, the visuals are anything but bleak. Powered by the 4A Engine, with support for DirectX 11, NVIDIA PhysX, and NVIDIA 3D Vision, the tunnels are extremely varied – in your travels, you'll come across human outposts, bandit settlements, and even half-eaten corpses. Ensuring you feel all the tension, there is no map and no health meter. Get lost without enough gas mask filters and adrenaline shots and you may soon wind up as one of those half-eaten corpses, chewed up by some horrifying manner of irradiated beast that hides in the shadows just waiting for some hapless soul to wander by.

 

Settings:

 

 

 

 

 

 

 

 

These results put the GTX 680 off to a good start! We see about a 10% performance margin over the HD 7970 and HD 7950, and it even outperforms the GTX 590.

Testing:

Batman: Arkham City is the sequel to Batman: Arkham Asylum, which was released in 2009. This action-adventure game, based on DC Comics' Batman superhero, was developed by Rocksteady Studios and published by Warner Bros. Interactive Entertainment. Batman: Arkham City uses the Unreal 3 engine.

 

Settings:

 

 

 

 

 

 

 

 

 

 

 

 

Here again, the GTX 680 is topping the charts, stopping even the GTX 590 in its tracks.

Testing:

Battlefield 3 is a first-person shooter video game developed by EA Digital Illusions CE and published by Electronic Arts. Battlefield 3 uses the Frostbite 2 game engine and is the direct successor to Battlefield 2. Released in North America on October 25, 2011, the game supports DirectX 10 and 11.

Settings

 

 

 

 

 

 

 

 

 

 

 

 

 

 

A LOT of people play Battlefield 3. Anyone who plays first-person shooters knows that FPS is key, and the GTX 680 yet again takes the trophy.

Testing:

Unigine Heaven Benchmark 3.0 is a DirectX 11 GPU benchmark based on the Unigine engine. This was the first DX 11 benchmark to allow testing of DX 11 features. What sets the Heaven Benchmark apart is the addition of hardware tessellation, available in three modes – Moderate, Normal and Extreme. Although tessellation requires a video card with DirectX 11 support and Windows Vista/7, the Heaven Benchmark also supports DirectX 9, DirectX 10, DirectX 11 and OpenGL 4.0. Visually, it features beautiful floating islands that contain a tiny village and extremely detailed architecture.

 

Settings

 

 

 

 

 

 

 

 

 

 

 

Interesting results here; the GTX 590 comes back and outperforms the GTX 680. We also get to see the HD 7970 creeping up, though it remains behind the GTX 680 and GTX 590.

Testing:

Civilization V is a turn-based strategy game. The premise is to play as one of 18 civilizations and lead your civilization from the "dawn of man" up to the space age. This latest iteration of the Civilization series uses a new game engine and makes massive changes to the way the AI is used throughout the game. Civilization V was developed by Firaxis Games, is published by 2K Games, and was released for Windows in September of 2010. Testing will be done using actual gameplay, with FPS measured by Fraps through a series of five turns, 150 turns into the game.

Settings

 

 

 

 

 

 

 

 

 

 

 

 

 

Though the GTX 680 has a clear margin over the HD 7970 in the stock testing, the HD 7970 comes right up on its heels when overclocked!

Testing:

DiRT 3 is the third iteration of this series. Published and developed by Codemasters, this game uses the EGO 2.0 game engine and was released in the US on PC in May of 2011.

Settings

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

DiRT 3 is no contest for these NVIDIA cards!

Testing:

Mafia II is a third-person shooter that puts you into the shoes of a poor Sicilian immigrant, Vito Scarletta. Vito has just returned home from serving overseas in the liberation of fascist Italy, which he did to avoid a jail sentence, only to find his family in debt. The debt must be repaid by the end of the week, and his childhood friend, Joe Barbaro, conveniently happens to have questionable connections that he assures will help Vito clear the debt in time. As such, Vito is sucked into a world of quick cash. Released in North America for PC in August of 2010, the game was developed by 2K Czech, published by 2K, and uses the Illusion 1.3 game engine.

 

Settings

 

 

 

 

 

 

 

 

 

 

 

Here's another close one between NVIDIA and AMD, with only about a 10% margin separating them.

Testing:

3DMark 11 is the next installment in Futuremark's 3DMark series, with Vantage as its predecessor. The name implies that this benchmark is for Microsoft DirectX 11 and, by an unintended coincidence, it matches the year following its release (which was the naming scheme of some prior versions of 3DMark anyway). 3DMark 11 is designed solely for DirectX 11, so Windows Vista or 7 is required along with a DirectX 11 graphics card in order to run this test. The Basic Edition allows unlimited free tests in performance mode, whereas Vantage's Basic Edition only allowed a single test run. The Advanced Edition costs $19.95 and unlocks nearly all of the features of the benchmark, while the Professional Edition runs $995.00 and is mainly suited for corporate use. The new benchmark contains six tests, four of which are aimed only at graphical testing; one tests physics handling and one combines graphics and physics testing. The open source Bullet Physics library is used for the physics simulation and, although not as mainstream as Havok or PhysX, it still seems to be a popular choice.

With the new benchmark come two new demos that can be watched, both based on the tests. Unlike the tests, however, these contain basic audio. The first demo is titled "Deep Sea" and involves a few vessels exploring what looks to be a sunken U-Boat. The second demo is titled "High Temple" and presents a location similar to South American tribal ruins, with statues and the occasional vehicle around. The demos are simple in that they have no story – they are really just a demonstration of what the testing will be like. The vehicles carry the logos of the sponsors MSI and Antec on their sides – the sponsorships help make the Basic Edition free. The four graphics tests are slight variants of the demos. I will use the three benchmark preset levels to test the performance of each card. The presets are used because they are comparable to what can be run with the free version, so results can be compared across more than just a custom set of test parameters.

 

Settings

 

 

 

 

 

 

 

 

 

This is where the monster inside the GTX 680 is unleashed. 15k points on a single card! NVIDIA absolutely dominates these tests; there's no question.

Testing:

Eyefinity & Surround:

This page will show how each card in the testing handles a resolution of 5760x1080 in either AMD Eyefinity or NVIDIA Surround mode. Both higher and lower end cards are being pushed to deliver on this type of display solution, for gaming as well as for office productivity. The reality is that a high-end GPU is required for gaming at this resolution with moderate AA and AF settings. I will be using the same settings used in the standard GPU testing to run each card with a single large surface display. For the displays, I will be using three Acer 23.5" 1080p monitors.

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

This is where the numbers get interesting. The AMD cards battle back and forth with NVIDIA here, making for some close matchups. For the Unigine Heaven 3.0 benchmarks, the GTX 590 was having some issues (most likely driver-related). We're working on getting in touch with NVIDIA to sort out the issue, so we'll hopefully have those numbers soon.

Testing:

Temperature testing will be accomplished by loading the video card to 100% using Unigine's Heaven Benchmark Version 3.0, with EVGA's Precision overclocking utility for temperature monitoring. I will be using a resolution of 1920x1080 using 8xAA and a five-run sequence to run the test, ensuring that the maximum thermal threshold is reached. The fan speed will be left in the control of the driver package and video card's BIOS for the stock load test, with the fan moved to 100% to see the best possible cooling scenario for the overclocked load test. The idle test will involve a 20-minute cool-down, with the fan speeds left on automatic in the stock speed testing and bumped up to 100% when running overclocked.

Settings

 

 

 

 

 

 

 

 

 

 

Temperature results here show what those 1536 CUDA cores do in their free time — the extra heat has to go somewhere. The GTX 680 is a couple of degrees cooler than the GTX 580 in overclocked testing.

Testing:

Power consumption of the system will be measured at both idle and loaded states, taking into account the peak wattage of the entire system with each video card installed. I will use Unigine's Heaven Benchmark version 3.0 to put a load onto the GPU using the settings below. A 15-minute load test will be used to simulate maximum load, with the highest measured wattage value recorded as the result. The idle results will be measured as the lowest wattage value recorded with no activity on the system.

Settings

 

 

 

 

 

 

 

 

 

 

 

 

This was a neat test. In all but one scenario, the GTX 680 has the lowest bottom-line power consumption of all the comparison cards. So, we've already seen "the most powerful GPU"; now we've seen "the most efficient GPU" as well.

Extras:

A new dawn is approaching:

Some of you may recognize this face. She's coming back!

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Max Payne 3 Screenshots:

Many Max Payne fans out there may appreciate a little look at what's to come!

 

 

 

NVIDIA Fur and Fracture Demos:

NVIDIA has shared with us some special insight into some upcoming technologies. Videos have already made it onto YouTube showing off NVIDIA's Fur and Fracture demos. The fracture demo is especially neat. Compared to the pre-determined fracturing in today's games, which is easy to render, "Fracture" is an interactive, real-time fracture simulation.

 

 

FXAA and TXAA:

Some of you may have already heard of FXAA, as it already exists in some games today, such as BF3 and Skyrim. It is a highly optimized anti-aliasing method that allows for superb image quality with very little overhead. TXAA is still under development, but it offers even higher image quality than multisampling (MSAA) can provide.

 

Conclusion:

Airman's (Mike's) Thoughts:

I hope by now, after seeing these results, that everyone can agree with, or at least partially accept, NVIDIA's claim that the GTX 680 is the fastest and most efficient GPU on the market. In nearly every test, the NVIDIA GTX 680 beats the AMD HD 7970. In some cases, the performance numbers of the GTX 680 actually top the dual-GPU card from NVIDIA. So not only is the NVIDIA GTX 680 the fastest single-GPU card on the market, it's bordering on being the fastest graphics card on the market, period! That's a huge statement, but the numbers don't lie. Not only does it have more juice than pretty much every other card out there, it does the work those cards can do and more, with less power consumption across the board. The Kepler architecture is clearly very impressive and I can't wait for what's next from NVIDIA.

Overclocking the NVIDIA GTX 680, as I mentioned at the beginning, is really no different from what we hardcore overclockers are used to. However, the card itself is smart about it on its own, even without the user's interaction. GPU Boost (out of the box) is really just a way for the card to vary its core clock in real time to achieve a constant power usage level. By being aware of its own power usage, it has the ability to ramp up its clock speeds to make up for the "performance opportunity" in a less power-hungry game. Many readers met this with little acceptance, due to the fear that the card might only overclock so far before its "hardware limitation" takes over and a wall is hit. I am happy to report that in all my testing, the card never down-clocked itself under load. I was able to maintain constant clock speeds throughout all my testing before the power target of 132% became a limiting factor. Rest assured that the GTX 680 is a mean overclocker. With core speeds of 1250MHz attainable by just about everyone out there, and over 1300MHz for Dave and myself, the proof is readily available. I wish I could have gotten the memory clocked higher, but I still managed to break 6.5GHz on it! Not quite as high as what many HD 7970s can boast for memory clocks, but that's not really what is important!

The Kepler architecture really is something special. It is a perfect representation of what we can continue to expect in the future: core sizes staying the same or even getting smaller, heat output and power requirements decreasing despite more and more transistors being packed into a tiny area, and architectures being optimized to really accomplish something magical. Managing to fit 1536 CUDA cores onto something smaller than 300mm² boggles my mind, and don't even get me started on the fact that there is one transistor in there for roughly every two people in the world; 3.5 billion transistors! Hell, I'm already ready for the next big thing from either side!

With the GTX 680 and the Kepler architecture in general, we get to see some cool new technologies, such as Adaptive VSync and the current and upcoming anti-aliasing methods, FXAA and TXAA. FXAA is already available out of the box in some games and is a much more efficient and even more effective AA method in comparison to MSAA (multisampling). TXAA is on the way, and it has been said to offer image quality similar to that of 8X MSAA with a performance overhead less than or equal to that of 2X MSAA. Adaptive VSync is a simple software implementation that still eliminates the tearing that occurs when frame rates climb above the refresh rate of your monitor, but it also reduces the stuttering that traditional VSync causes when it has to drop from 60fps down to 30fps, 15fps, and so on. Adaptive VSync essentially turns VSync off when frame rates drop below 60fps and back on above 60fps, resulting in a smooth transition as opposed to a hard switch from 60fps to 30fps.
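Conceptually, the adaptive part boils down to a simple per-frame decision. The sketch below is only an illustration of that idea; the actual driver logic isn't public, and the 60Hz threshold assumes a 60Hz panel.

```python
# Minimal illustration of the adaptive VSync concept described above.
REFRESH_RATE_HZ = 60   # assumes a 60Hz monitor

def vsync_enabled(current_fps):
    """Sync to the display only when the GPU can keep pace with the refresh rate,
    avoiding the hard 60 -> 30 -> 15 fps steps of traditional VSync."""
    return current_fps >= REFRESH_RATE_HZ
```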

I have been awaiting the release of the GTX 680 for months, ever since I first learned that it was on the way. I had a blast over the past couple of weeks learning more and more about the card and finally having one in my hands. Seeing its results, I really don't know how I was able to keep all this to myself. As we've experienced here at home on OverclockersClub, there has been a lot of speculation and hearsay about what the GTX 680 is going to do and what it is capable of. Strong supporters of both camps have vehemently defended their own sides even while no hard evidence had surfaced, but hopefully agreements can now be reached and everyone can be happy. To conclude, I have to say the GTX 680 is a great addition to the market and I can't wait to see what the price of $499 does to the AMD cards. Price wars: right after this!

 

Bosco's (Dave's) Thoughts:

Ok, so where does Kepler leave us? After looking at our tests, you can tell NVIDIA has the edge again in the benchmark wars. In some tests it was close, while in others you can see the GTX 680 pull ahead by decent margins. There are a ton of new features that could be talked about, but I will cover a few key things I think are worth noting:

Drivers - I will admit it's rather refreshing to have some decent drivers come launch time. Basically, what I mean is that everything works: no crashes, no stuttering, no flickering in games, nothing to complain about, unlike the last few driver revisions from AMD. I have said it a lot lately and I will say it again: if you don't have driver support for your products, raw performance is only going to get you so far.

Four Monitors - Four monitors supported on one card is a welcome sight. Looking back at the 400 and 500 series cards, you needed two cards in order to run Surround, unless you had a GTX 590. Running SLI generally gave you superior gameplay, but at an added cost. Now a single card can give you three monitors, or even four if you want to go to that extreme — so again, a welcome sight.

Price - Given the past track records of both camps, the highest performing card has usually commanded the highest price tag. Well, not in this case! NVIDIA comes out of the gate with an MSRP of $499, which will be a huge surprise to a lot of people. First you pretty much beat your competitor in performance, power, and features, and then you beat them on price too. Ouch! From a buyer's point of view, this is great news, as it may lead to the start of a price war. From a business standpoint, you could see this as a smart move by NVIDIA to grab some market share.

Overclocking - My card managed to get a little higher than Mike's did, topping out around 1355MHz on the core. I fully expected cards to differ, but only by a small margin, and for the most part that is the case here. Overall, 300+MHz overclocks on the GPU core are quite impressive, to say the least.

Features - Besides GPU Boost, three other features I am looking forward to testing are the improved FXAA, TXAA, and Adaptive VSync. In my opinion, from the few hours of testing I have been able to do with FXAA, some games do look better with it than with MSAA, so it will be interesting to spend a lot more time testing it out. Adaptive VSync also seems to be a lot smoother than normal VSync, but more testing will need to be done. Talking with NVIDIA, we learned that you can now force FXAA in the NVIDIA Control Panel for all games; I'm looking forward to testing that out.

There are a lot of rumors that this card is actually a mid-range part and that more powerful cards should be expected later this year. If that is in fact the case, I for one can't wait to see how the rest of 2012 plays out. Overall, I am glad to see the launch of the GTX 680 and a chance to upgrade from the GTX 500 series without breaking the bank.

 

Pros:

 

Cons: