Monday, October 20, 2008

Graphics - A story of convergence.

I've been quite busy with a multitude of assignments this week, and before I knew it, it's already the end of my guest week on this blog. So I've decided to end off with a post on my favourite topic: graphics!

I've noticed that GPU technologies have been undergoing a rather dramatic transformation these past few years. Every new generation of GPUs introduces new features and increased programmability. More and more parts of the graphics hardware now allow developers to introduce their own custom code to generate some really cool effects. These pieces of custom code are called shaders, by the way, and if you've played some of the latest games like Crysis, you've already seen them in action.

But before we had all this programmability, graphics cards were monolithic, fixed-function devices, much like your run-of-the-mill calculator. They could only perform a limited range of tasks, and every card performed them in exactly the same way. So there wasn't much in the way of really cool graphics effects back in that era. But now, with programmable hardware, things have become really exciting.

Now, if every new generation of graphics hardware added more programmability, there would surely come a day when graphics hardware became so flexible that it could be used for non-graphics applications. And in fact, that's what's happening today. Ever since programmable hardware appeared on the market, people have been programming it for non-graphics tasks. Researchers have used it for medical tomography, computer vision, molecular dynamics, financial options pricing, and the list goes on. This kind of development is termed GPGPU (General Purpose GPU) development, and it involves hacks and workarounds in shaders and graphics APIs (DirectX, OpenGL) to coax the hardware into producing useful, non-graphics data.

The reason developers go through all that trouble to use graphics cards is that the hardware is incredibly good at solving a certain class of problems far, far faster than a regular CPU can. This class of problems is known as stream processing problems, and they share a characteristic known as data parallelism: a large dataset can be broken down into smaller, independent pieces that can be processed in parallel.
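To make data parallelism concrete, here's a minimal sketch in CUDA (the kernel name, the scaling operation and the sizes are just illustrative, not from any real application): each thread handles exactly one element of the array, and because no element depends on any other, the GPU is free to run all of the threads at once.

    // Illustrative CUDA kernel: each thread scales one array element.
    // No thread reads or writes another thread's element, which is what
    // makes this workload data-parallel.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index for this thread
        if (i < n)                                      // ignore threads past the end of the array
            data[i] = data[i] * factor;
    }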

And real-time computer graphics is precisely a stream processing problem. Pixels are drawn onto the screen one at a time, and each pixel can be computed independently of the others. (Well, that's not quite the whole picture, but for the sake of simplifying the discussion, we'll stick with that understanding.) And so, as it turns out, once we begin to think beyond the semantic confines of a pixel, programmable graphics hardware can be applied to stream processing problems in general.
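Here's the same idea from the pixel's point of view, again sketched in CUDA rather than in an actual shader language: one thread per pixel, each computing its colour purely from its own coordinates. The gradient formula is just a stand-in for whatever shading computation a real renderer would do.

    // One thread per pixel: each thread writes one RGBA value computed
    // only from its own (x, y) coordinates, so no pixel waits on any other.
    // A simple colour gradient stands in for real shading work.
    __global__ void shade(uchar4 *image, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        unsigned char r = (unsigned char)(255 * x / width);
        unsigned char g = (unsigned char)(255 * y / height);
        image[y * width + x] = make_uchar4(r, g, 128, 255);
    }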

In fact, if we consider just stream processing problems, when nVidia debuted its 8800 series GPUs, they were reportedly about 200 times faster than Intel's Core 2 on such workloads. Today's GPUs have hit 1 teraflop of computational performance. So the next time you're buying a new graphics card, in a sense it's like buying a compute cluster for about 500-700 bucks. And while in CPU-land we're talking about 4-6 CPU cores, a GPU device these days can have up to 256 streaming processors. That's a lot of computing power.

And I think it is this combination of increasing programmability and stream processing capability that has led to the convergence we're seeing today. And this convergence has taken many different forms.

GPUs are becoming increasingly versatile beyond graphics applications. There's nVidia's CUDA (Compute Unified Device Architecture), which lets developers write stream processing applications in C. Then there's nVidia's new PhysX driver, which enables games to run complex physics calculations on the GPU. There is also a research explosion in shader development, and a recurring theme in many publications is bringing traditionally non-real-time graphics techniques onto the GPU and making them real-time; in fact, there has been a lot of talk about real-time ray tracing. So we're seeing a lot of things that were traditionally CPU implementations being moved over to the GPU.
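To give a feel for what "stream processing in C" looks like in practice, here's a hedged sketch of a complete CUDA program (the array size, names and values are hypothetical): the host code allocates memory on the card, copies the data across, launches a kernel like the scale example above over a million elements, and copies the results back.

    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Same illustrative kernel as before: one thread per array element.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main(void)
    {
        const int n = 1 << 20;                     // one million floats (hypothetical size)
        size_t bytes = n * sizeof(float);

        float *h_data = (float *)malloc(bytes);    // buffer in ordinary system memory
        for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

        float *d_data;
        cudaMalloc(&d_data, bytes);                // buffer in GPU memory
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

        int threads = 256;                         // threads per block
        int blocks = (n + threads - 1) / threads;  // enough blocks to cover every element
        scale<<<blocks, threads>>>(d_data, 2.0f, n);

        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_data);
        free(h_data);
        return 0;
    }

Compiled with nvcc, the per-element work runs across thousands of GPU threads while the host CPU just orchestrates the copies and the kernel launch.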

The distinctions between GPUs and CPUs have also begun to blur. Intel has predicted a future where GPUs will become fully programmable, and it is attempting to jump ahead of the curve with its Larrabee project, a device speculated to comprise up to 32 lightweight x86 CPU cores. A well-known industry figure has even gone so far as to speculate that standard graphics APIs will eventually die out, since anyone will be able to come up with their own graphics API on fully programmable hardware. There are even rumors that AMD is planning a fusion chip that combines a CPU and a GPU on a single die.

Well, between speculation and fact, it's hard to tell what's really going to happen next. But I have a feeling this theme of convergence will continue to resonate in this field.

- Shijun
