UNIFIED SHADERS
Windows Vista brought with it DirectX 10, and with DirectX 10 came a completely new approach to handling shaders. Gone are the distinct pixel and vertex shaders, replaced by unified shader technology that's much more flexible.
With each GPU I'll be noting the number of unified shaders in that part, but I want to make clear that Nvidia and ATI use completely different shader designs. For example, Nvidia's top end desktop part has 240 unified shaders, while ATI's has a staggering 800. Going by the raw numbers, the ATI part should be monumentally faster, but because the shader designs are so radically different, Nvidia's top end actually outperforms ATI's. Thus, shader counts should only be used to compare same-branded parts, never ATI against Nvidia.
More is, of course, better, but more shaders will also draw more power and throw off more heat.
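If you want to see why the raw counts mislead, here's a quick back-of-the-envelope sketch in Python. The shader clocks and per-shader throughput figures are the desktop parts' published numbers as best I recall them, so treat this as illustrative rather than gospel:

    # Theoretical peak shader throughput in GFLOPS. Figures below are
    # illustrative; clocks and FLOPS-per-clock are from memory, not a spec sheet.
    def peak_gflops(shaders, shader_clock_mhz, flops_per_clock):
        return shaders * shader_clock_mhz * flops_per_clock / 1000.0

    # Nvidia's 240 shaders are scalar and run on a fast dedicated shader clock.
    print(peak_gflops(240, 1296, 3))  # ~933 GFLOPS
    # ATI's 800 shaders are grouped into 5-wide units at a lower clock.
    print(peak_gflops(800, 750, 2))   # 1200 GFLOPS -- on paper

ATI wins on paper, but real games rarely keep all five lanes of each unit busy, which is part of why Nvidia's part comes out ahead in practice.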
MEMORY BUS WIDTH AND TYPE
One thing that hasn't really changed much over the years is memory bus technology. In general, you will see three different memory bus widths on mobile parts: 64-bit, 128-bit, and 256-bit. Parts worth gaming on will generally never have a 64-bit memory bus, which is the narrowest and slowest. A 256-bit bus, on the other hand, is much more expensive to produce and so will only appear on absolute top end cards. The happy medium is often a 128-bit bus.
There are also four types of memory in circulation for mobile graphics. The first three differ generally in the top speed they can run at, while the fourth is newer and very different from its predecessors.
The first three are, in ascending order of performance, DDR2, DDR3, and GDDR3. Many manufacturers will mix up “DDR3” and “GDDR3,” and for the most part that's okay, as the two have pretty similar performance characteristics. DDR2 is the slowest by a mile, and on most parts it's going to be the second biggest performance bottleneck after memory bus width. If you're going to be gaming, you'll really want to avoid DDR2 if possible.
The fourth and still somewhat rare memory technology is GDDR5. GDDR5 runs at a quadruple data rate instead of the double data rate of the other memory technologies, and can produce mountains of bandwidth. Using GDDR5 effectively works out to a doubling of memory bus width: GDDR5 on a 128-bit bus can produce memory bandwidth comparable to GDDR3 on a 256-bit bus, and on a 256-bit bus it can produce staggering bandwidth comparable to a 512-bit bus! As someone who actually has a desktop card using GDDR5, I can say it works pretty much as advertised; when tweaking the clock speeds on my graphics card, the memory speed is almost never the bottleneck.
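To put some rough numbers on it: peak memory bandwidth is just the base clock, times the transfers per clock, times the bus width in bytes. A minimal sketch in Python, using a made-up 1000 MHz base clock purely for illustration:

    # Peak memory bandwidth in GB/s. DDR2/DDR3/GDDR3 move two transfers
    # per clock; GDDR5 moves four. The 1000 MHz clock is a placeholder.
    def bandwidth_gbps(base_clock_mhz, transfers_per_clock, bus_width_bits):
        bytes_per_second = base_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8.0)
        return bytes_per_second / 1e9

    print(bandwidth_gbps(1000, 2, 256))  # GDDR3 on a 256-bit bus: 64.0 GB/s
    print(bandwidth_gbps(1000, 4, 128))  # GDDR5 on a 128-bit bus: 64.0 GB/s

Same bandwidth from half the bus width, which is exactly why GDDR5 on a 128-bit bus can keep pace with a 256-bit GDDR3 card.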
COMPARABLE DESKTOP PART
Outside of the new G200 lineup that Nvidia has recently announced, mobile GPUs are always cherry-picked desktop GPUs: the exact same silicon with tweaked clock speeds. As a result, each mobile part has a desktop analogue it can be compared to. Since reviews of mobile graphics are so rare (I try to do my share, but it doesn't seem like enough of them ever pass through my hands), it can be helpful to search for the desktop part and at least get a ballpark figure for how the mobile part you're looking at will run.
DIRECTX 10.1 vs. PHYSX/CUDA
One of the big differences between ATI and Nvidia right now is the set of technologies they're pushing to compete with one another. Up until this point (the point of Nvidia's announced G200 parts), ATI has been the only vendor producing parts compatible with DirectX 10.1 (introduced in Windows Vista SP1). DirectX 10.1 support in games has been fairly rare, with the most notable title so far being Ubisoft's Assassin's Creed. If you're looking to buy that particular game, do not buy the Steam version. Instead, buy a retail copy and do NOT patch it. In a patch, Ubisoft removed the game's DirectX 10.1 support, claiming it was buggy (it wasn't); with that support in place, ATI cards have a massive performance advantage over the competition. Outside of this instance, DirectX 10.1 hasn't been terribly relevant.
But then again, neither has PhysX. On-chip PhysX is only usable on Nvidia's higher end parts, and can add extra detail to games that support it: realistic cloth, breaking glass, and so on. Unfortunately, like DirectX 10.1, PhysX hasn't proved remarkably compelling either, with the only notable title using PhysX hardware acceleration being Mirror's Edge.
Alongside PhysX in the Nvidia corner is CUDA, Nvidia's general purpose GPU computing platform. CUDA is seeing decent support and may be of interest to videophiles, since GPU-accelerated video encoding can produce healthy performance gains in CUDA-enabled applications. That said, I edit video on my desktop and have yet to see a need for a CUDA-enabled application. More than that, CUDA's shelf life may not be that long now that the OpenCL standard is beginning to surface. OpenCL is similar to CUDA, except that it's platform-independent. I can't imagine developers playing the vendor lock-in game and using only CUDA when OpenCL (and even Microsoft's upcoming DirectX 11 Compute Shaders) can run on either company's GPUs.
These are things to be aware of, but they shouldn't affect your decision.
MOBILE DRIVERS
This, on the other hand, probably should impact your decision. As much as I have a stated preference for ATI's hardware, they're woefully behind when it comes to providing unified mobile drivers. Nvidia users can update their video drivers (picking up new fixes and performance improvements) just by visiting Nvidia's site and downloading the latest release. ATI users aren't so fortunate; if they want to update their drivers, they have to either rely on the notebook manufacturer to provide updates (good luck with that) or use third party software to modify desktop drivers (a chore).
I don't have too much of a problem doing the latter, but it can be a real headache for the more novice users, and for that reason I would tend toward recommending Nvidia's mobile hardware for the time being until ATI can pick up the slack and make mobile drivers available on the ATI website.
CROSSFIRE AND SLI
Both ATI and Nvidia have multi-GPU solutions for notebooks that will, with two exceptions, only appear in massive desktop replacement units. ATI's is called Crossfire; Nvidia's is called SLI. Please note that these solutions typically don't bring a linear performance improvement; two GeForce GTX 280Ms aren't going to run twice as fast as one, as latency and driver optimization come into play. With this technology, the aforementioned driver situation becomes even more important, because if a game isn't properly profiled by the vendor in question, it won't reap the benefits of SLI or Crossfire.
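To make the non-linear scaling concrete, here's a minimal sketch; the 85% scaling figure is a hypothetical number I've picked purely for illustration, not a measurement:

    # Each extra GPU contributes only a fraction of a full GPU's worth of
    # performance; 0.0 models a game the driver has no profile for.
    def multi_gpu_fps(single_gpu_fps, num_gpus, scaling_efficiency):
        return single_gpu_fps * (1 + (num_gpus - 1) * scaling_efficiency)

    print(multi_gpu_fps(40, 2, 0.85))  # 74.0 fps in a well-profiled game
    print(multi_gpu_fps(40, 2, 0.0))   # 40.0 fps when there's no profile at all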
Now, those two exceptions: ATI and Nvidia both have integrated graphics parts that, when combined with a low-end discrete part, can be used in Crossfire/SLI and thus improve performance substantially. These solutions are still nowhere near as good as mid-range and higher options, but they're also economical and good for battery life. Nvidia's solution with the GeForce 9400M, in particular, can also swap from a mid or high-end discrete part to the IGP when the notebook is running on battery, resulting in substantial power savings.
A BRIEF NOTE ON INTEL
Planning to play cutting edge 3D games? Excellent! Don't buy anything using Intel graphics. Intel's integrated graphics performance is historically poor and rife with compatibility issues. When you're choosing between Intel parts, you're really only choosing between levels of unplayability.
A BRIEF NOTE ON VIDEO ACCELERATION
In addition to being miserable for gaming, Intel's parts other than the 4500MHD are also the only ones in the lineup (along with ATI's Radeon X1200 integrated graphics series) that don't support high definition video decoding and acceleration. All other parts are designed to offload high definition video playback from the main processor to the GPU.
COMPLAINT DEPARTMENT
Finally, I'd just like to thumb my nose at all three graphics vendors (ATI, Nvidia, and Intel) for their complete lack of consumer-oriented business practices. ATI's mobile graphics driver situation is a nightmare, miles behind Nvidia's driver support. ATI's marketing department isn't doing them any favors either: Nvidia routinely works with game developers to make sure games run well on its hardware, and its “The Way It's Meant to be Played” program is everywhere. Whether or not Nvidia pays developers to cripple games (see the Assassin's Creed controversy), ATI's not out there hustling.
Intel's driver situation is, if such a thing is possible, substantially worse than ATI's. I'm fairly certain their graphics driver team is either one over-caffeinated teenager in a basement somewhere, or a bunch of programmers who weren't good enough to code for Creative (at least two readers should laugh at this one). Intel has basic compatibility issues with games, and they've made promises about basic performance in their hardware that they have failed to keep. Marketing lies, but Intel's integrated graphics are still essentially broken as far as I'm concerned.
Lastly, whoever is responsible for Nvidia's mobile graphics branding needs to suffer at the hands of angry consumers ... or just be fired. As if it weren't bad enough that the market is over-saturated with mobile parts that are essentially the same but named differently, the brands of their mobile parts almost never line up with their desktop ones. The most egregious offenders are the GTX 280M and 260M, which are actually just G92 silicon -- in other words, these are not mobile versions of the desktop GTX 280 and 260, which are worlds more powerful.