Saturday, September 24, 2011

Video RAM - how much do you really need?

http://www.yougamers.com/articles/13801_video_ram_-_how_much_do_you_really_need-page2/

 

By: Nick Evanson Nov 02, 2007


So what's it used for then?

Since this isn't supposed to be a detailed thesis on modern rendering techniques, let's keep this simple! Video RAM is used for three things: storing the permanent data needed to make images, storing temporary data created along the way, and storing the final image. For a long time, the biggest slice of this was the permanent stuff: the materials and information used to generate the pictures you see.
The world of Oblivion minus textures and shaders: each of the dots connecting the lines is called a vertex and games use thousands of them.
This info is stored in memory as a buffer (a geek term for "chunk of memory") and modern 3D games often have three of these: one each for vertices, indices and textures. Vertices are points in 3D space that are used to build the actual scenes, but they also carry further details with them, such as what colour they should be and what texture needs to be applied to them. An index buffer contains lists of references to vertices, which might seem like a repetition of the vertex buffer, but it holds only those references, not the vertices themselves: index buffers help speed up the processing of vertices when there are lots of them. The daddy of them all, though, and especially today, is the texture buffer.
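To make the vertex/index distinction concrete, here is a minimal sketch in Python (the 32-byte vertex layout and the mesh sizes are assumptions chosen for illustration) showing why storing small indices instead of duplicated vertices saves memory:

```python
# Assumed vertex layout: position (3 floats), normal (3 floats), UV (2 floats) = 32 bytes.
VERTEX_SIZE = 8 * 4

def mesh_buffer_bytes(unique_vertices, triangle_count, indexed=True):
    """Rough memory footprint of a mesh, with or without an index buffer."""
    if indexed:
        # The vertex buffer stores each vertex once; the index buffer stores
        # three 16-bit indices per triangle, each just pointing at a vertex.
        return unique_vertices * VERTEX_SIZE + triangle_count * 3 * 2
    # Without indexing, every triangle needs its own copy of three full vertices.
    return triangle_count * 3 * VERTEX_SIZE

# Example: a 10,000-triangle mesh where most vertices are shared by several triangles.
print("indexed:   %d KB" % (mesh_buffer_bytes(5000, 10000, indexed=True) // 1024))   # ~214 KB
print("unindexed: %d KB" % (mesh_buffer_bytes(5000, 10000, indexed=False) // 1024))  # ~937 KB
```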
Whenever you look at something in a 3D game, you're actually looking at a 2D image that has been mapped onto a shape. In the early days of graphics cards, these textures were tiny (often no more than 64 x 64 pixels), but these days they can be huge; Crysis, for example, routinely uses textures up to 2048 x 1024 pixels, which can weigh in at over 1MB each. A single megabyte might not sound like very much, but NVIDIA's original Riva 128 graphics card (released ten years ago) only had 4MB of video memory!
How about some figures to put things into perspective? Take Futuremark's 3DMark series: their first one, 3DMark99, used up to 5MB of textures for a single rendered frame; fast forward to 2003 and 3DMark03 now waves around texture buffers 50MB in size, and their latest version makes that look like nothing. But wait a moment - haven't graphics cards had more RAM than this anyway? Indeed they have: NVIDIA's GeForce 256, released in 1999, had 32MB and one could easily buy cards with 256MB in 2003. So they've always had enough then, yes? Actually, yes they have... until recently.
Three big rendering technologies have become standard in the past few years and they place huge demands on the RAM: normal mapping, anti-aliasing and post-processing.

Getting greedy now!

One of the hundreds of textures used in the current Crysis demo.
We would have actually run into problems with a lack of video memory many years ago, if it wasn't for something called compression. A single 128 x 128 texture, in 24 bit colour (8 bits for each channel), is only 48kB in size but a 1024 x 1024 "alpha" texture (some of it is transparent) can be as large as 4MB! Fortunately, textures can be compressed without losing too much of their original detail and they can be squashed up to 8 times smaller. There are different types of textures, though, and not all compress without causing problems - one culprit in particular is called a normal map.
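Those figures are easy to reproduce; here is a quick back-of-the-envelope calculation in Python (treating the 8:1 compression ratio as a rough upper bound, since the exact saving depends on the texture format):

```python
def texture_bytes(width, height, bits_per_pixel):
    """Uncompressed size of a single texture, in bytes."""
    return width * height * bits_per_pixel // 8

small = texture_bytes(128, 128, 24)     # 24-bit colour: 8 bits per channel, no alpha
large = texture_bytes(1024, 1024, 32)   # 24-bit colour plus an 8-bit alpha channel

print("128 x 128, 24-bit:      %d kB" % (small // 1024))       # 48 kB
print("1024 x 1024 with alpha: %d MB" % (large // 1024**2))    # 4 MB
print("same, compressed ~8:1:  %d kB" % (large // 8 // 1024))  # 512 kB
```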
Normal maps are used alongside the "base" texture (the surface of a wall, for example) to make it look rougher or more detailed; they also tend to be pretty big. Normal (or bump, or parallax) mapping is ubiquitous in 3D games today and quite often a game has several texture maps for a single surface, especially if it's water - for example, the surface of the water in the first HDR test of 3DMark06 uses two normal maps, four wave maps, and reflection and refraction textures. So despite compression, the texture buffer has grown ever more rapidly of late, and we'll look at how much memory it can potentially use later in this article.
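To get a feel for how quickly those per-surface maps add up, here is a rough, hypothetical tally in Python; the resolutions, formats and compression ratios are assumptions chosen for illustration, not figures taken from 3DMark06:

```python
def map_bytes(width, height, bytes_per_pixel=4, compression=1):
    """Approximate VRAM footprint of one texture map."""
    return width * height * bytes_per_pixel // compression

water_surface = (
    2 * map_bytes(1024, 1024, compression=4)   # two normal maps, block-compressed
    + 4 * map_bytes(512, 512, compression=4)   # four wave maps, block-compressed
    + 2 * map_bytes(1024, 1024)                # reflection and refraction render targets
)
print("one water surface: ~%d MB" % (water_surface // 1024**2))  # ~11 MB under these assumptions
```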
Anti-aliasing has also exploded in terms of usage. Explaining how it works and what the different types are is for another time, but virtually all desktop graphics cards employ a method called multisampling. This procedure helps to smooth out the edges of objects in the 3D world by "blurring" the boundary between the object and the background: think of it like a dark charcoal line that somebody has run a thumb along. Unfortunately, the improvement in image quality doesn't come free, and one of the costs lies in using additional video memory. As a very rough guideline, one can estimate the maximum amount of RAM multisample anti-aliasing will need with this formula: 4 bytes x (X x Y) x (1 + 2A), where X and Y are the resolution values (e.g. 1024 x 768) and A is the anti-aliasing level. For example, running at 1280 x 1024 with 4xAA could use up to 45MB of memory, just for the AA. Could is the important word here, though, as different graphics cards, drivers and games will consume different amounts - some more than this, some less.
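That estimate is simple enough to check yourself; here is a minimal sketch of the same formula in Python (4 bytes per pixel, as above):

```python
def msaa_extra_bytes(width, height, aa_level):
    """Rough upper bound on the extra memory used by multisample anti-aliasing.

    Uses the estimate above: 4 bytes x (width x height) x (1 + 2 x AA level).
    Actual usage varies between graphics cards, drivers and games.
    """
    return 4 * width * height * (1 + 2 * aa_level)

print("%.0f MB" % (msaa_extra_bytes(1280, 1024, 4) / 1024**2))  # ~45 MB at 1280 x 1024, 4xAA
```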
Post-processing is also de rigueur in 3D games right now. The term means that after the final image has been rendered (or sometimes while it is being rendered), additional effects are applied to it for creative purposes. One of the most commonly known, and commonly over-used, post-processing tricks is bloom: this simulates how a lens causes overly bright objects to look brighter than they should; other effects include motion blur, depth of field and bullet trails. These increase the use of video memory because the image has to be completely rendered several times before finally being displayed, and those "intermediate" images have to be stored somewhere!
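The cost of those intermediate images can be estimated in the same way. A hedged example in Python, assuming a 16-bit-per-channel HDR scene buffer plus two downsampled buffers for a bloom blur; the buffer count and formats are illustrative, not taken from any particular game:

```python
def render_target_bytes(width, height, bytes_per_pixel=8):
    """Size of one off-screen render target (8 bytes/pixel = 16-bit float RGBA)."""
    return width * height * bytes_per_pixel

w, h = 1280, 1024
bloom_buffers = (
    render_target_bytes(w, h)              # full-resolution HDR scene
    + render_target_bytes(w // 2, h // 2)  # half-resolution buffer for the blur
    + render_target_bytes(w // 4, h // 4)  # quarter-resolution buffer for a wider blur
)
print("bloom intermediates: ~%.1f MB" % (bloom_buffers / 1024**2))  # ~13.1 MB under these assumptions
```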
Futuremark's 3DMark06 highlights just how much difference the lack of post-processing (top) makes...
...in games where they've been designed for it (bottom). Part of the payback involves more RAM usage.

What happens when you run out of RAM?

Contrary to popular belief, it's not quite the end of the world if your graphics card doesn't have enough onboard RAM to store everything. The drivers and hardware handle things quite nicely, and games can be programmed to ensure that performance doesn't crash through the floor (especially those, like Oblivion, that constantly stream data out of system memory). The most common side-effect is something called "texture thrashing": if there isn't enough onboard memory to store all of the textures needed, then some will remain in system memory. When the graphics processor wants to use them, it copies them across into its RAM, deleting other data to make room. Cue a spot of stuttering or a slowdown in the frame rate; this is because it takes quite a bit longer to swap textures around than to simply access them in the onboard (or, to give it its correct name, local) RAM.
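Texture thrashing is essentially a cache-eviction problem. Here is a toy sketch of the idea in Python (a least-recently-used policy and made-up sizes; real drivers are far more sophisticated than this):

```python
from collections import OrderedDict

class ToyVideoMemory:
    """Toy model of local VRAM that evicts the least-recently-used texture when full."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.resident = OrderedDict()  # texture id -> size in bytes
        self.uploads = 0               # copies from system RAM: the slow path

    def use_texture(self, tex_id, size):
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)  # already in local RAM: fast
            return
        # Not resident: evict old textures until there is room, then copy it across.
        while self.resident and sum(self.resident.values()) + size > self.capacity:
            self.resident.popitem(last=False)
        self.resident[tex_id] = size
        self.uploads += 1  # each upload is a potential stutter

# Twenty 16MB textures (320MB total) cycled through 256MB of VRAM: every frame re-uploads.
vram = ToyVideoMemory(capacity_bytes=256 * 1024**2)
for frame in range(3):
    for tex_id in range(20):
        vram.use_texture(tex_id, size=16 * 1024**2)
print("uploads over 3 frames:", vram.uploads)  # 60 - the textures never stop being swapped
```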
Some graphics cards, mostly low-cost budget models, only have sufficient local memory to store rendered frames and a few other bits and bobs: nearly all of the textures remain in system memory. One might think that this is a stupid idea, but budget cards are low in performance (compared to their high-end relatives) anyway, so there's no point in slapping on several hundred MBs of high-speed GDDR3 when the rest of the card will just slow things down. However, if you're an enthusiast gamer and you want the best possible performance with the best visuals, then you want to avoid running out of VRAM.
All of this nerd talk is to get to the point that the RAM on a graphics card is for more than just storing textures, and its usage is growing very quickly. With all this in mind, how much are our games actually using? To examine the amount of video memory being consumed, we used the appropriate plugin for the hardware monitor section of RivaTuner. This method is only suitable for Windows XP, because Vista doesn't differentiate video RAM from system RAM - it's all the same, as far as the operating system and games are concerned. We used a current-generation NVIDIA graphics card with 512MB of onboard memory for all of the testing; when using anti-aliasing, ATI drivers allocate RAM in a different way from the one RivaTuner's plugin expects, so the tool displays the wrong amount. This isn't a problem because however much memory is being used by a game, it's pretty much the same regardless of which brand of graphics card is being used.
