Sat 16 Feb 2008 04:00 AM

It’s playtime

DirectX 10 game titles look fantastic and are truly the future of gaming. If you're interested in jumping on board, read on as Windows explains all about your mid-range GPU choices...

Every game you play is powered by a game engine that is responsible for creating the visuals you see on-screen. These engines use specific techniques, such as shading, by talking to - or interfacing with - a graphics API (Application Programming Interface) to get the job done.

Essentially, an API is the interface sitting between the app or game and the physical hardware. It talks to the hardware and tells it exactly what the program wants to do.
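To make that a little more concrete, here is a minimal sketch of the sort of commands a game engine issues to the Direct3D 10 API every frame. The device, render-target view and swap chain are assumed to have been created during start-up, and all shader and geometry set-up is omitted; the driver translates these calls into instructions the GPU understands.

```cpp
#include <d3d10.h>

// A single frame as the engine sees it: clear, draw, present.
// device, renderTarget and swapChain are assumed to exist already.
void RenderFrame(ID3D10Device* device,
                 ID3D10RenderTargetView* renderTarget,
                 IDXGISwapChain* swapChain,
                 UINT vertexCount)
{
    // 1. Ask the API to wipe the previous frame to a solid colour.
    const FLOAT clearColour[4] = { 0.0f, 0.2f, 0.4f, 1.0f };
    device->ClearRenderTargetView(renderTarget, clearColour);

    // 2. Tell the GPU to draw the geometry bound earlier
    //    (vertex buffers, shaders and render state not shown).
    device->Draw(vertexCount, 0);

    // 3. Flip the finished image onto the screen.
    swapChain->Present(0, 0);
}
```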

Currently, the best-looking games (Crysis and Gears of War, for example) make use of the newest DirectX 10 (DX 10) API, which in turn offers support for a specification known as Shader Model 4.0 (SM 4.0); older titles still use DirectX 9 (DX 9).

However, while developers have a choice of whether or not to use DX 10 at the moment, in the not-too-distant future DX 10 will likely become the standard API of choice. For your graphics (VGA) card to render these games properly - i.e. how the developers intended their games to look on your screen - the GPU (Graphics Processing Unit) powering your card needs to 'speak the language'.
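A rough way to check whether a given PC already 'speaks' DX 10 is to see whether the Direct3D 10 runtime can create a hardware device on it; that only succeeds when the GPU and its driver support DX 10 and SM 4.0. A minimal sketch (Windows Vista with the DirectX SDK assumed, error handling trimmed):

```cpp
#include <d3d10.h>
#include <cstdio>

#pragma comment(lib, "d3d10.lib")

int main()
{
    ID3D10Device* device = nullptr;

    // Ask the runtime for a hardware-accelerated Direct3D 10 device on the
    // default adapter. Success implies DX 10 / SM 4.0 capable hardware.
    HRESULT hr = D3D10CreateDevice(nullptr,
                                   D3D10_DRIVER_TYPE_HARDWARE,
                                   nullptr, 0,
                                   D3D10_SDK_VERSION,
                                   &device);

    if (SUCCEEDED(hr))
    {
        std::printf("This GPU can render DX 10 titles natively.\n");
        device->Release();
    }
    else
    {
        std::printf("No DX 10-capable GPU found; games will fall back to DX 9.\n");
    }
    return 0;
}
```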

Thankfully, both AMD and nVidia have cost-effective, mainstream GPUs that fit the bill. In most cases, the GPUs, and thus the graphics cards they're built into, that fall into this mainstream segment retail for between US $150 and $300.

It's true that mainstream GPUs can be found on more expensive cards, but these are generally specialist cards that are either pre-overclocked (set at a higher clock frequency than normal for further enhanced performance) or feature more graphics memory. These models can sell for as much as $100 more than their standard, reference-design brethren. Let's get started by examining what mid-range DirectX 10 GPUs AMD and nVidia offer.

CLOSE-UP: AMD GPUs

The firm released its latest mid-range series of GPUs (AKA graphics chipsets) last November, known as the Radeon HD 3800 series. At present two GPUs are being shipped to card manufacturers: the Radeon HD 3870 (codenamed RV670 XT) and the Radeon HD 3850 (RV670 Pro).

The launch of this GPU series involved the initiation of a new GPU naming scheme for AMD. In the past, the firm had differentiated its GPUs' various speed grades by using either two or three characters (XT/XTX/Pro) after the GPU series' model number. (Generally 'XTX' signified the highest-end component, so if you came across a Radeon X1950 XTX card, it was the top model in the X1950 series.)

Now though, AMD has done away with the character labeling and instead varies the last two digits in the model number itself, with higher numbers representing a higher-end, more expensive part. So when you look at the 3870 and 3850, the first two digits signify that both cards belong to the 3800 series, while the last two digits (70 versus 50) mark the 3870 out as the higher-end chipset.

That's not the end of the story, however, as AMD has plans to release cards featuring multiple GPUs on a single PCB (Printed Circuit Board) in the coming months.

Nine vs ten

Microsoft's DirectX 10 (DX 10) API is much more feature-rich than its predecessor, DX 9. In a nutshell, the new API makes it possible for developers to create almost photo-realistic gaming graphics. Both of the images above are from Microsoft's Flight Simulator X, though the screen at the bottom is rendered using DX 10 whilst the one at the top is created using DX 9.

The bottom image looks far better thanks to its more realistic looking clouds and water effects. All of the GPUs we talk about in this feature, as well as in the review pages thereafter, are capable of DX 10 rendering.

DX 10.1, expected by the middle of this year, will add minor updates to DX 10 but you'll still need a compatible GPU if you want your games to look their best.

These cards may adopt a slightly different naming scheme. At the time of going to press, AMD revealed that while these cards will follow the numbering scheme above, they will also feature two characters (such as X2) after the model number, to show that the card is a dual GPU model.

The only dual-GPU card that Windows has knowledge of at present is the Radeon HD 3870 X2 (R680), which will feature two 3870 GPUs.

All 3800 series GPUs boast a number of common features, but the one AMD is touting most is support for Microsoft's existing DX 10 API as well as the upcoming 10.1 update (the latter is expected to be incorporated into the forthcoming Service Pack 1 for Windows Vista). The 10.1 update also brings support for Shader Model 4.1 (SM 4.1).

To date, most games are still being coded using DX 10 and SM 4.0, so AMD is slightly ahead of the curve in this regard, but when software arrives that is written to take advantage of DX 10.1 and SM 4.1, you should be able to play it on any member of the existing 3800 series.

AMD's 3800 series of GPUs feature multi-GPU support - allowing you to combine graphics cards to boost performance. This AMD VGA feature is called CrossFire X. To run GPUs in CrossFire X mode, you'll need a motherboard with a compatible chipset such as AMD's 790FX or 790X though certain Intel chipsets such as the 975X, P965, P35, X38 and X48 are also CrossFire-compatible.

The 3850 and 3870 are again similar here in that both support four-way CrossFire. The upcoming Radeon HD 3870 X2 however is only going to support two-way CrossFire (though this will effectively result in a four-way GPU system as each card features two chipsets).

All the current members of the 3800 family have been manufactured using AMD's 55nm fabrication process. This makes the entire series more power-friendly, and thus cooler-running, than its immediate predecessors - the 2900 and 2600 series.

The 3800 series of GPUs also packs in HD (High Definition) decoding. This technology, known as Unified Video Decoder (UVD), makes it possible for the GPU to off-load certain HD decoding tasks from a system's CPU, allowing even systems with slower CPUs to process HD DVD or Blu-ray content.

The decoder is compatible with content encoded using the H.264 and VC-1 video codec standards (as is nVidia's PureVideo 2); both are very popular codecs used by most HD content creators.

The only differences between the 3870 and 3850, then, come down to their core and memory speeds: 3870 cards run at standard core and memory frequencies of 775MHz and 2250MHz respectively, while 3850 cards run at 670MHz and 1660MHz.
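Those memory clocks translate directly into peak memory bandwidth. As a rough, back-of-the-envelope illustration - and assuming the 256-bit memory bus commonly quoted for both cards, a figure not given above - bandwidth is simply the effective memory clock multiplied by the bus width in bytes:

```cpp
#include <cstdio>

int main()
{
    // Effective memory clocks as quoted in the article; the 256-bit bus
    // width is an assumption for illustration, not a figure from the text.
    const double busWidthBytes = 256.0 / 8.0;

    const double hd3870MemMHz = 2250.0;   // Radeon HD 3870
    const double hd3850MemMHz = 1660.0;   // Radeon HD 3850

    // Peak bandwidth (GB/s) = effective clock (MHz) x bus width (bytes) / 1000
    const double hd3870GBs = hd3870MemMHz * busWidthBytes / 1000.0;  // ~72 GB/s
    const double hd3850GBs = hd3850MemMHz * busWidthBytes / 1000.0;  // ~53 GB/s

    std::printf("HD 3870: ~%.0f GB/s peak memory bandwidth\n", hd3870GBs);
    std::printf("HD 3850: ~%.0f GB/s peak memory bandwidth\n", hd3850GBs);
    return 0;
}
```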

CLOSE-UP: nVidia GPUs

nVidia's GPU portfolio is larger than AMD's because, in addition to producing entry-level and mainstream GPUs - which AMD does - nVidia also offers monolithic, single high-end cards such as the GeForce 8800 GTX and 8800 Ultra.

Content friendly

If your PC features a relatively slow single- or dual-core processor (sub-2GHz, for example) and you're interested in watching 720p or 1080p HD DVD or Blu-ray content on your PC, some AMD and nVidia GPUs offer HD-content decoding capabilities.

While these don't off-load the entire decoding process from your machine's CPU, these GPUs do handle the most intensive calculations, which normally force lesser CPUs to grind to a halt.

AMD has abandoned building single, monolithic high-end GPUs and suggests instead that users opt for its CrossFire X multi-GPU solution to squeeze out the very quickest in-game framerates.

As a result, nVidia's high-end parts released in November 2006 and May 2007 haven't been changed or updated yet, as these are still, comfortably, the fastest single GPUs on the market (industry rumours suggest that nVidia might release a new series of high-end GPUs later this month, known as the GeForce 9800 GTX and 9800 GX2).

Three GPUs currently make up nVidia's mainstream lineup: the GeForce 8800 GTS (G92 revision), the 8800 GT and the 8800 GS. The GTS is the fastest and thus most expensive chipset, followed by the GT and GS. Check out the table on page 35 for details on the technical differences between these three chipsets.

The 8800 GTS is actually something of a second coming, as nVidia originally launched the GTS back when the 8800 series was first introduced in 2006. The re-launched GTS, however, was released in December 2007 and is actually based on the G92 core, which also powers the 8800 GT and 8800 GS.

(The original GTS was powered by the G80 core, which still powers the GTX and Ultra cards.) GTS-equipped cards, like nVidia's other mainstream parts, are compatible with Microsoft's DX 10 and SM 4.0 specifications (meaning, spec-wise, AMD's 3800 series has a longevity edge, as that family supports DX 10.1 and SM 4.1).

The 8800 GT was the first of nVidia's new mainstream cards, released in October of 2007. It sits smack bang in the middle of the mainstream segment and compared to the GS and GTS GPUs also offers support for nVidia's PureVideo 2 technology.

This is essentially an HD content processing engine, much like AMD's Unified Video Decoder, that offloads some of the intense video decoding tasks from a system's processor. Using this, it becomes possible to play 720p or 1080p HD DVD or Blu-ray content encoded using the H.264/AVC video codec on systems with even low- or mid-range CPUs.

The latest nVidia GPU, the 8800 GS, is positioned as the least expensive mainstream card from nVidia. While it offers DX 10 and SM 4.0 compatibility, it is designed to run these games at lower resolutions, much like AMD's Radeon HD 3850. Like its higher-priced counterparts, this GPU too is fabricated using a 65nm process, which is the most advanced process nVidia employs right now.

In terms of nVidia's ‘SLI' multi-GPU support, the GTS is identical to the GT and GS in that all three GPUs are limited to two-way SLI (GTX and Ultra cards support three-way SLI).

As with AMD's GPUs, you'll need a compatible board to run these cards in multi-GPU mode. Unlike AMD's multi-GPU technology, however, you can only run SLI with nVidia-made core-logic chipsets. Chipsets that support SLI are easily identifiable as they sport the 'SLI' lettering in their model numbers, e.g. 680i SLI or 780i SLI.

That's the background, now let's get to the testing...

Juicing up

All of the GPUs here, though efficiently built using up-to-date fabrication processes, are still power hungry: under 100% load they need over 100 watts of power to function properly. So, depending on the rest of your system's configuration, you'll need a capable power supply (PSU).

We recommend no less than a 450-watt PSU for a mid-range machine and no less than 600 watts for a high-end PC.
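To see roughly where a figure like 450 watts comes from, here is a simple, hedged tally using illustrative component draws (assumed round numbers, not measurements from our testing); the generous margin covers load spikes, PSU ageing and efficiency losses:

```cpp
#include <cstdio>

int main()
{
    // Illustrative full-load draws in watts - assumptions for the sake of
    // the example, not measured figures.
    const double gpu         = 110.0;  // one mid-range DX 10 card under load
    const double cpu         = 90.0;   // a typical dual-core CPU at 100% load
    const double boardRamEtc = 80.0;   // motherboard, memory, drives, fans

    const double systemLoad = gpu + cpu + boardRamEtc;   // roughly 280 W
    const double headroom   = 1.5;                       // safety margin
    const double suggested  = systemLoad * headroom;     // roughly 420 W

    std::printf("Estimated load: %.0f W -> suggested PSU: %.0f W or more\n",
                systemLoad, suggested);
    return 0;
}
```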
