Hardware Review: An overview of the GPU

In this second installment of our hardware review guide, I am going to talk about the "GPU", otherwise known as the "Graphics Processing Unit". This isn't so much a review of a specific product as it is a history and general overview of the component in your system that produces the graphics you see on your screen, whether you're playing your favorite game or simply working on a spreadsheet at your desktop. For more information about the components inside your computer, visit our System Performance Guide!

GPU Designers - ATI, NVIDIA, Matrox, and Intel

The history of the GPU (Graphics Processing Unit) begins in the 60s, well before modern personal computers came onto the scene. I'll skip ahead to the current era, which has its roots in the late 80s. During this time dozens of designers developed rival solutions, especially once the personal computer became a standard in the early 90s. Note that there are dozens of manufacturers of video cards, but they simply use existing GPU designs to market slightly different physical products.

Thanks to our resident forum moderator Kaolian for a few additions to the review!

To get additional terminology out of the way: a GPU is the main processor component of a video card, which is the physical piece of hardware that can be removed from a computer. There is no such thing as an "internal graphics card"; integrated solutions are simply referred to as GPUs, since the "card" portion is part of the motherboard itself and not removable as a separate piece. I will use "video card" for the physical piece of hardware when appropriate, and "GPU" for the processor itself. Note that in the early years (pre-1998), 2D and 3D graphics were handled by separate video cards, rather than integrated as they are on today's cards.

Matrox, Nvidia, and ATI have been the three main designers of the past decade. However, Matrox has recently fallen by the wayside in 3D graphics development and currently markets its cards to video editing professionals. Founded in 1976, Matrox had one huge advantage for a long time, and that was the quality of its 2D graphics, which is what you are using right now to view this article. 2D graphics is the basis for displaying your computer's boot screen, interface layout, and just about everything else that isn't a game or other 3D environment. Matrox achieved a sharpness and quality in text and graphics that neither Nvidia nor ATI could match, up until the past few years at least.

ATI, founded in 1985 and bought out by the CPU manufacturer AMD in 2006, had a head start on Nvidia and often achieved great successes. Their Rage series was extremely popular, even though it fell behind Matrox in 2D quality and Nvidia in 3D quality. ATI has almost always been on the leading edge in terms of innovation. This is not to say that Nvidia hasn't innovated, as they have produced amazing advances in graphics technology over the years. However, you will often see ATI ahead in technology and feature support. This is sometimes attributed to their balancing of features vs. performance: until recently, Nvidia was the dominant performance leader, so ATI had to find a niche market to survive.

Nvidia, founded in 1993, saw where the computer revolution was headed, and that was in advanced 3D graphics horsepower. They began with the product "NV1", a very simple video card that didn't do a whole lot compared to today's cards. From there Nvidia moved on to its RIVA-based cards, which brought the 2D component onboard, and then to the Nvidia-branded "GeForce" series you have come to recognize over the past several years. In 2000, Nvidia bought out the manufacturer 3dfx, whose Voodoo line of video cards was quite popular; the early Voodoo cards handled 3D only and required their own slot alongside a separate 2D card.

 

What is a GPU?

Just like everything else in your computer, a GPU's electronic components are built on a substrate called a printed circuit board (PCB). This is where the transistors, capacitors, wires, and processors are laid out. Without the PCB, it would be like a city without buildings: there would be no infrastructure to keep everything working and in order. GPUs have their own PCBs in two primary configurations: onboard and dedicated. As mentioned earlier, the dedicated solutions are referred to as video cards, and more often today as graphics cards.

Onboard GPUs are found right on the main circuit board itself (the motherboard), next to the chipset, memory, and other components. Dedicated GPUs are individual cards you insert into slots on the motherboard, and they can be removed and replaced with faster cards as those are introduced. Since onboard GPUs are simplified (and thus often less powerful) versions of their dedicated brethren, I'll just be talking about the dedicated versions. These video cards have several key components, which fall into two categories: Internal and External Connectivity.

 

Internal Connectivity Features:

There have been many ways over the years to connect a video card to your system's motherboard; video cards are perhaps the most diverse of components in this respect. While there are less-used slots such as MCA, VLB, and PCI-X, I will focus only on the more popular and significant ones to remember.

ISA - ISA slots (Industry Standard Architecture) were the original expansion slots found on PC motherboards, used for sound cards, video cards, network cards, modems, and many other components. They are no longer used in mainstream systems, and they were barely adequate for video cards as those grew in importance during the PCI era. ISA slots are black in color.

PCI - This is the slot many first- and second-generation video cards used to connect to the motherboard. PCI slots are similar to ISA slots, except for one important feature: the cards are inserted upside-down relative to ISA! PCI offered far more bandwidth than ISA, but it wasn't until the AGP slot arrived that video cards really started to increase in horsepower. PCI slots are white in color.

AGP - AGP slots (Accelerated Graphics Port) were the first slots developed exclusively for video cards. With so much more bandwidth available to the video card, games on the scale of what we see today became possible. Even with its potential as a dedicated graphics port, AGP's advances through speed increments of 2x, 4x, and finally 8x were ultimately limited. There needed to be something even better, and thus we finally arrive at the new universal slot standard, PCI Express. An AGP slot is gray or tan in color.

PCI-E - PCI-E (Peripheral Component Interconnect Express) offers what ISA and PCI slots used to offer: the ability to connect just about anything, including video cards. There are different versions and sizes of these slots, however. PCI-E is a huge advancement over AGP, and is powerful enough to run the latest and greatest games for several years to come. A motherboard can also carry multiple PCI-E slots, much like ISA and PCI before it, which is what allows multiple video cards to be connected.

 

External Connectivity Features:

Now that we know how video cards connect inside a computer, let's take a look at the options for connecting the video card to a monitor or other external display device. As with the internal connections, we've cut out a few of the more obscure and less-used connectors for the sake of simplicity and usefulness.

VGA - VGA (Video Graphics Array) connections began to appear in the late 80s and were the primary connection to CRT (Cathode Ray Tube) based monitors, the same technology you find in many current, non-HD televisions. VGA offered versatility in resolution and color support; its underlying caveats were limited cable length and signal-to-noise problems. VGA is an analog connection, while the superseding and superior connector, DVI, is digital.

DVI - DVI (Digital Visual Interface) is the same basic idea as VGA, but in a digital format. Introduced in 1999 and still the primary display connection today, it allows significant increases in quality, resolution, and cable distance that VGA isn't capable of. The key with DVI is that it is digital and thus less susceptible to signal degradation.

Composite/S-Video/Component - All of these connectors are what I would call "secondary connections", in that they are not typically used as the primary output to your monitor or display device. A few years ago they were used to connect a computer to a CRT television, or as inputs into the video card for incoming television and camcorder signals. They are a great way to record your home videos onto your computer!

HDMI/DisplayPort - HDMI (High-Definition Multimedia Interface) is a new standard developed in 2003 and just now beginning to supersede DVI in connecting your computer to your display, especially new HD television displays that allow connectivity to consoles as well. HDMI's key feature is its combination of video and audio connectivity. All previous technologies were video only. DisplayPort is similar to HDMI, with support for higher resolutions, encryption standards, and other miscellaneous features.

 

What are GPUs for, besides games?

GPUs have been used for just about everything, from researching DNA to designing cars in a CAD program. I'll list here the four key areas of use, why GPUs are used in those areas, and by whom. Most users will fall into more than one of these areas. Video cards are extremely versatile in order to satisfy the needs of as many users as possible, and today you will see companies like Intel, with their "Larrabee" project, driving this "universal support" goal even further.

The Gamer - The thirst for 3D graphics power is unquenchable here. The gamer must ensure liquid-smooth gameplay while maintaining beautiful graphics and sharp details in order to effectively defeat their opponent, and so has to upgrade their GPU every few years to keep up. Owning a dedicated video card is usually the way to go here, as opposed to relying on integrated GPUs. Price vs. performance is especially important in this category.

The Graphics Designer - Has no use for games, or plays them only on occasion. The graphics designer is all about the job and getting work done. Fortunately, there are video cards available to help in this role. Almost identical to the gamer's GPU, the graphics designer's video card has one key difference: specialized software drivers. Tailored to enhance CAD-like functions and the overall processing of design work, these drivers will be inefficient if you attempt to run a game with them. These types of GPUs are usually not found as integrated solutions.

The Number Cruncher - While the CPU has, until recently, remained the dominant processor for SETI, Folding@home, and other floating-point-heavy calculations, it has been found that a GPU can be designed to handle these tasks far more efficiently than a CPU ever could. Those doing research will find GPUs becoming the dominant form of processing in the future.

The General User - This type of user might not even need a video card; well, not a dedicated 3D one anyway. The general user will use the computer for office notes, writing documents, and listening to music or watching a video update from their favorite news website. The power of the video card is almost entirely wasted on these tasks, so upgrades are infrequent and rarely driven by any specific need, especially in the past few years as overall system horsepower has risen well above general usage requirements.

 

Performance of GPUs

Let's talk games... as that is what most of you reading this are likely interested in. I know I am! Speed in GPUs has naturally increased year-over-year. This has been a consistent phenomenon since the very beginning, much like how CPUs have advanced according to Moore's Law. So, we know that speed increases with each generation, but how? What do they do to these GPUs to make them faster? I won't get into nitty gritty details that will put you to sleep, so let's just go over the very basics, and that starts first with the size of the processors.

Like any processor, the GPU consists of nano-scale metallic wires crisscrossing one another, similar to the interstate highway system of the United States. Processors increase the total number of paths every generation, just like adding new highways to connect growing cities. What differs is the size of these highways: with each generation the wires are "down-sized", made smaller and smaller so that more and more of them, each devoted to specific tasks, can be packed in. This is the key to creating faster, smaller, and more power-efficient processors, and the chief reason you can run World of Warcraft today instead of just Pong.

There are many other ways to increase the speed and efficiency of a processor. Adding buffers, much like car-pool lanes, as well as adding additional memory, like adding wayside rest-stops, are just a few possibilities. These are crude and slightly inaccurate depictions of what a processor's features consist of, but it's a start that should give you a beginning understanding of how the industry always seems to be able to make things faster and faster every year, without significantly increasing costs.

 

Value of GPUs

As the industry has matured, the remaining few designers have offered an increasingly diverse range of products, in an attempt to appeal to everyone's desires and needs, which are continually growing in diversity as well. Sometimes Nvidia will come out with a new generation of GPUs that beats AMD's offerings on performance and price; at other times it is AMD beating out Nvidia. Until recently, AMD was targeting the low- to mid-range user with lower price points. With their current generation of GPUs, they have pushed further into the high-end market while keeping to lower price points. We'll likely see this shift again as new generations of GPUs are introduced.

 

The Future of the GPU

The entire electronics industry, from Intel with their CPUs to AMD and Nvidia with their GPUs, is realizing that the integration of the two is inevitable. CPUs are good at the tasks they were designed for, and the same goes for GPUs, but games and other workloads increasingly demand the power of both types of processor in a way that pushes the two to come together in one package.

Intel, a CPU manufacturer, has made a first-strike effort with their Larrabee project. Larrabee is a CPU/GPU hybrid processor. While in the short term we likely won't see games use Larrabee-based processors due to integration limitations, the hybrid processor will eventually overcome its current performance shortcomings. You can bet laptops will see Larrabee processors early on, as laptops and other portable devices have the most use for these "one-chip" solutions. On the opposite end of the battlefield, Nvidia is developing a technology called "CUDA", which is designed to let the GPU take on general-purpose work traditionally handled by the CPU. Each approach has its advantages and disadvantages, but both should be a win-win for the gamer.
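
To give a feel for what "letting the GPU take on CPU-style work" looks like in practice, here is a minimal, illustrative sketch of a CUDA program. It offloads a simple floating-point multiply-add across a million numbers to the GPU; the kernel name, sizes, and values are my own invention for illustration rather than sample code from Nvidia, but the overall pattern is the real one: copy data to the card, run thousands of lightweight threads in parallel, and copy the results back.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread handles one element: y[i] = a * x[i] + y[i].
    // Thousands of these threads run at the same time on the GPU's many cores.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;                 // one million elements
        size_t bytes = n * sizeof(float);

        // Plain host (CPU) arrays.
        float *x = (float *)malloc(bytes);
        float *y = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Device (GPU) copies of the same arrays.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

        // Copy the result back and spot-check one value (3*1 + 2 = 5).
        cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", y[0]);

        cudaFree(dx); cudaFree(dy);
        free(x); free(y);
        return 0;
    }

The same pattern scales from this toy example all the way up to the Folding@home-style number crunching mentioned earlier: the GPU wins whenever the same small calculation has to be repeated over enormous amounts of data.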

As expected, there will be continual advances in features within GPU technology itself. One of the most exciting advances we're seeing at this time is a lighting technology called "Ray Tracing". Explained simply, it allows a more accurate calculation of how light reflects off surfaces, and thus how realistically shadows and overall lighting appear. The downside to this technology is a heavier processing load on the system, at least in its current state of development. Expect ray-traced games to start appearing when DirectX 11 is released, sometime late 2009/early 2010.
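
To make that "more accurate calculation" a little more concrete, here is a heavily simplified sketch of the core math, again in CUDA since the workload is naturally massively parallel. Each pixel gets its own thread, which fires a ray into a toy scene containing a single sphere and shades the hit point by how directly the surface faces a light. The scene, names, and numbers are invented for illustration; a real ray tracer bounces rays recursively off many surfaces, which is exactly where the heavy processing load comes from.

    #include <cstdio>
    #include <cuda_runtime.h>

    struct Vec { float x, y, z; };

    __device__ Vec sub(Vec a, Vec b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    __device__ float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    __device__ Vec norm(Vec a)         { float l = sqrtf(dot(a, a)); return {a.x / l, a.y / l, a.z / l}; }

    // One thread per pixel: cast a ray from the camera, test it against a
    // single sphere, and shade the hit point by how directly it faces the light.
    __global__ void trace(unsigned char *image, int w, int h)
    {
        int px = blockIdx.x * blockDim.x + threadIdx.x;
        int py = blockIdx.y * blockDim.y + threadIdx.y;
        if (px >= w || py >= h) return;

        Vec origin = {0.0f, 0.0f, 0.0f};                      // camera at the origin
        Vec dir = norm({px / (float)w - 0.5f,
                        py / (float)h - 0.5f, 1.0f});         // ray through this pixel
        Vec center = {0.0f, 0.0f, 3.0f};                      // sphere 3 units ahead
        float radius = 1.0f;
        Vec light = norm({1.0f, 1.0f, -1.0f});                // direction toward the light

        // Ray-sphere intersection: solve |origin + t*dir - center|^2 = radius^2.
        Vec oc = sub(origin, center);
        float b = 2.0f * dot(oc, dir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - 4.0f * c;

        float shade = 0.0f;                                   // background stays black
        if (disc > 0.0f) {
            float t = (-b - sqrtf(disc)) * 0.5f;              // nearest hit along the ray
            Vec hit = {origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
            Vec normal = norm(sub(hit, center));              // surface normal at the hit
            shade = fmaxf(0.0f, dot(normal, light));          // simple diffuse lighting
        }
        image[py * w + px] = (unsigned char)(shade * 255.0f);
    }

    int main()
    {
        const int w = 512, h = 512;
        unsigned char *d_img;
        cudaMalloc(&d_img, w * h);

        dim3 threads(16, 16);
        dim3 blocks((w + 15) / 16, (h + 15) / 16);
        trace<<<blocks, threads>>>(d_img, w, h);

        unsigned char *img = (unsigned char *)malloc(w * h);
        cudaMemcpy(img, d_img, w * h, cudaMemcpyDeviceToHost);
        printf("center pixel brightness: %d\n", img[(h / 2) * w + w / 2]);

        cudaFree(d_img);
        free(img);
        return 0;
    }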

The most exciting feature gamers will be looking forward to in future GPU advances is not so much the merger of the GPU with the CPU, but of GPUs with other GPUs. You've likely heard about Nvidia's SLI (Scalable Link Interface) technology, or AMD's Crossfire. SLI and Crossfire are ways to connect separate video cards to each other, so that a game or other graphics-intensive program can take advantage of both. I mentioned the PCI-E internal connector earlier; thank that technology for making SLI and Crossfire so successful.

Another way to combine two GPUs is to simply slap two onto one video card itself! You only use one slot in the computer and get nearly the performance of two (and in the near future, more than two) GPUs. There are performance bottlenecks, driver disadvantages, and other issues with either of these technology choices, but they are slowly being resolved with each successive generation. Nvidia and ATI both see the future of the GPU as a "Many Core" platform.

In short, expect every year to bring about faster GPUs, which will spur games to develop more complex and beautiful environments and interactivity, which will of course spur the need for faster GPUs... and thus the cycle continues as it has since the race began. Enjoy your games in all their 2D or 3D glory!

 
