热点科技

Title: [Repost] Nvidia's G80 fully revealed

Author: chenever    Posted: 2006-11-2 23:43
Title: [Repost] Nvidia's G80 fully revealed
[Repost] Nvidia's G80 fully revealed
http://we.pcinlife.com/thread-632666-1-1.html

A while ago I already published the complete G80 specs, in particular: REALISTIC HDR LIGHTING EFFECTS WITH ANTI-ALIASING PROVIDES TWICE THE PRECISION OF PREVIOUS GENERATIONS

Now, at last, the INQ has fully revealed the G80:

http://www.theinq.com/default.aspx?article=35483
THIS ARTICLE REVEALS all of the important information regarding GeForce 8800 series, which is set to be released to the world on November 8th, 2006 in San Jose. We have learned that during traditional Editor's Day in San Francisco nVidia kept its rules, so "no porn surfing" and "no leaks to the Inquirer" banners were shown. But, we have no hard feelings about that. It is up to the companies to either respect millions of our readers, including employees of Nvidia or... not.
As you already know, Adrianne Curry, a Playboy bunny, America's Next Top Model star and an actress from My Fair Brady, is the demo chick for G80. After we posted the story, we received a growl from Graphzilla, but we are here to serve you, our dear readers. However, that was just a story about a person who posed for the G80. Now, it's time to reveal the hardware. Everything you want to know, and don't want to wait for November 8th for, lies in this article. Get your popcorn ready; this will be a messy ride.

For starters, the 8800 launch is a hard one, so expect partners to have boards in store for the big day's press conference at 11AM on the 8th. The board delivery will go in several waves, with the first two separated by days. The boards were designed by ASUSTeK, and feature a departure from the usual suspects at Micro-Star International. This is also the first ever black graphics card from nVidia. Bear in mind that every 8800GTX and 8800GTS is manufactured by ASUS. AIBs (add-in board vendors) can only change the cooling, while no overclocking is allowed on 1st gen products. Expect a very limited allocation of these boards, with the UK alone getting a meagre 200 boards.

The numbers
G80 is a 681 million transistor chip manufactured by TSMC. Since Graphzilla opted for the traditional approach, it eats up around 140 Watts of power. The rest gets eaten by Nvidia's I/O chip, video memory and the losses in power conversion on the PCB itself.

If you remember the previous marchitecture, the G70 GPU embedded in the 7800GTX 256MB, you will probably remember that the Pixel and Vertex Shader units worked at different clock speeds. G80 takes this one step further, with a massive increase in the clocks of the Shader units.

GigaThread is the name of the G80 marchitecture which supports thousands of executing threads - similar to ATI's RingBus, keeping all of the Shader units well fed. G80 comes with 128 scalar Shader units, which Nvidia calls Stream Processors.

The reason Nvidia went with the SP description is a DirectX 10 feature called Stream Output, and the fact that those Shader units will now work on Pixel, Vertex, Geometry and Physics instructions, though not all at the same time. The function, in short, enables data from vertex or geometry shaders to be sent to memory and forwarded back to the top of the GPU pipeline in order to be processed again. This enables developers to put more shiny lighting calculations, physical calculations, or just more complex geometry processing into the engine. Read: more stuff for fewer transistors.

In order to enable that, Nvidia pulled a CPU approach and stuffed L1 and L2 cache across the chip. On the other hand, you might like to know that both Geometry and Vertex Shader programs support Vertex Texturing.

And when it comes to texturing itself, G80 features 64 Texture Filtering Units, which can feed the rest of the GPU with 64 pixels in a single clock. For comparison, the GF7800GTX could manage only 24. Depending on the method of texture sampling and filtering used, G80 ranges from 18.4 to 36.8 billion texels in a single second. Pixel wise, the G80 churns out 36.8 billion finished pixels in a single second.
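A quick sanity check on those texel figures (a minimal sketch: the 575MHz core clock is taken from the GTX section below, and reading the 18.4 figure as the half-rate filtering case is our assumption):

```python
# Rough fill-rate arithmetic for the G80 (GeForce 8800GTX).
# Assumes 64 texture filtering units and the 575 MHz core clock
# quoted later in the article.
tfus = 64
core_clock_hz = 575e6

full_rate = tfus * core_clock_hz   # one filtered texel per TFU per clock
half_rate = full_rate / 2          # heavier filtering modes, presumably

print(f"{full_rate / 1e9:.1f} Gtexels/s")  # 36.8
print(f"{half_rate / 1e9:.1f} Gtexels/s")  # 18.4
```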
When it comes to RingBus vs. GigaThread, DAAMIT's X1900 has a branch granularity of 48 pixels, while the X1800 can do 16. GeForce 8800GTX can do 32-pixel threads in some cases, but mostly the chip will do 16, so you can expect Nvidia to lose out on the GPGPU front (for instance, in Folding@Home stuff).

However, Nvidia claims 100% efficiency, and we know for sure that ATI is mostly running in high 60s to high 70s in percentage points.

Author: csnet    Posted: 2006-11-2 23:44
How many pixels can G80 push?
One of the things we use to describe the traditional pixel pipeline is the number of pixels a chip can render in a single clock. With programmable units, the traditional pipeline died out, but many hacks out there still use this inaccurate description.

To cut a long story short, on the pixel-rendering side, G80 can render the same amount of pixels as G70 (7800) and G71 (7900) chips.

The G80 chip in its full configuration comes with six Raster Operation Partitions (ROP) and each can render four pixels. So, 8800GTX can churn out 24, and 8800GTS can push 20 pixels per clock. However, these are complete pixels. If you use only Z-processing, you can expect a massive 192 pixels if one sample per pixel is used. If 4x FSAA is being used, then this number drops to 48 pixels per clock.
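A sketch of how those pixel numbers relate (the eight Z-only samples per rendered pixel per clock is our inference from the 192/24 ratio, and the five ROP partitions on the GTS is implied by its 20 pixels per clock):

```python
# ROP throughput arithmetic for G80, per the article's figures.
gtx_pixels = 6 * 4        # six ROP partitions, four pixels each -> 24
gts_pixels = 5 * 4        # one partition cut on the GTS         -> 20

z_only = gtx_pixels * 8   # Z-processing only, 1 sample per pixel -> 192
z_4xaa = z_only // 4      # with 4x FSAA                          -> 48

print(gtx_pixels, gts_pixels, z_only, z_4xaa)  # 24 20 192 48
```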
For game developers, the important information is that eight MRT (Multiple Render Targets) can be utilised and the ROPs support Frame Buffer blending of FP16 and FP32 render targets and every type of Frame Buffer surface can be used with FSAA and HDR.
If you are not a game developer, the sentence above means that Nvidia now supports FP32 blending, which was not possible in the past, and that the FSAA/HDR combination will be supported by default. In fact, 16xAA and 128-bit HDR are supported at the same time.

Lumenex Engine - New FSAA and HDR explained
ROPs are also in charge of AntiAliasing, which has remained very similar to the GeForce 7 series, albeit with quality adjustments. The G80 chip supports multi-sampling (MSAA), supersampling (SSAA) and transparency adaptive anti-aliasing (TAA). The four new single-GPU modes are 8x, 8xQ, 16x and 16xQ. Of course, you can't expect to have enough horsepower to run the latest games with 16xQ enabled on a single 8800GTX, right?

Wrong. In certain games you can buy today, you can enjoy full 16xQ with the performance of regular 4xAA. The reason is exactly the difference between those 192 and 48 pixels in a single clock. But in games which aren't able to utilise 16x and 16xQ optimisations, you're far better off with lower AntiAliasing settings.

Nvidia now calls this mode "Application Enhanced", joining the two old scoundrels "Application Override" and "Application Controlled". Only "App Enhanced" is new, and the idea is probably that the application talks with Nvidia's driver to decide which piece of a scene gets the AA treatment and which does not. Can you say.... partial AA?

Now, where did we hear that one before.... ah, yes. EAA on the Rendition Verite in the late '90s and the Matrox Parhelia in the early 21st century?

On the HDR (High Dynamic Range) side, Nvidia has designed the feature around OpenEXR spec, offering 128-bit precision (32-bit FP per component, Red:Green:Blue:Alpha channel) instead of today's 64-bit version. Nvidia is calling its new feature True HDR, although you can bet your arse this isn't the latest feature that vendors will call "true". Can't wait for "True AA", "True AF" and so on...
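The bit-depth arithmetic behind those labels is a trivial check, nothing more:

```python
# HDR render-target precision: bits per component x RGBA components.
channels = 4          # Red, Green, Blue, Alpha
print(32 * channels)  # 128-bit FP32 HDR (the OpenEXR-style G80 mode)
print(16 * channels)  # 64-bit FP16 HDR (today's version)
```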

Anisotropic Filtering has been raised in quality to match ATI's X1K marchitecture, so Nvidia now offers angle-independent Aniso Filtering as well, thus killing the shimmering effect which was so annoying in numerous battles in Alterac Valley (World of WarCraft), Spywarefied (pardon, BattleField), Enemy Territory and many more. Compared to the smoothness of the GeForce 8 series, GeForce 7 looks like it was in the stone age. Expect interesting screenshots from D3D AF-Tester Ver1.1 in many of the GF8 reviews on the 8th.

Oh yeah, you can use AA in conjunction with both high-quality AF and 128-bit HDR. The external I/O chip now offers 10-bit DAC and supports over a billion colours, unlike 16.7 million in previous GeForce marchitectures.

Quantum Effects
Since PhysX failed to take off in a spectacular manner, DAAMIT's Menage-a-Trois and Nvidia's SLI-Physics used Havok to create simpler physics computation on their respective GPUs. Quantum Effects should take things to a more professional (usable) level, with hardware calculation of effects such as smoke, fire and explosions added to the mix of rigid body physics, particle effects, fluid, cloth and many more things that should make their way into the games of tomorrow.

GeForce 8800GTX
Developed under the codename P355, the 8800GTX is Nvidia's flagship implementation. It features a fully fledged G80 chip clocked at 575MHz. Inside the GPU there are 128 scalar Shader units clocked at 1.35GHz, and raw Shader power is around 520GFLOPS. So, if anyone starts to talk about teraflops on a single GPU, we can tell you that we're around a year away from that number becoming true. Before G90 and R700, such claims come from marketing alone.
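The ~520GFLOPS figure falls out of the unit count and clock (a sketch; the 3 flops per SP per clock, i.e. a MAD plus a co-issued MUL, is our assumption about how the figure was counted):

```python
# Shader throughput estimate for the 8800GTX.
# Assumes each scalar SP retires 3 flops/clock (MAD = 2, plus a MUL),
# one common way to arrive at the quoted ~520 GFLOPS.
sps = 128
shader_clock_hz = 1.35e9
flops_per_clock = 3

gflops = sps * shader_clock_hz * flops_per_clock / 1e9
print(f"{gflops:.1f} GFLOPS")  # 518.4, i.e. "around 520"
```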

768MB of Samsung memory is clocked at 900MHz DDR, or 1800 MegaTransfers (1.8GHz), yielding a commanding 86.4 GB/s of memory bandwidth.
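That bandwidth figure is just bus width times transfer rate (a minimal check; the 384-bit bus is taken from the GTS comparison later in the article):

```python
# Memory bandwidth: bus width (bits -> bytes) x transfer rate.
bus_bits = 384
transfers_per_s = 1.8e9   # 900 MHz DDR

bandwidth = bus_bits / 8 * transfers_per_s
print(f"{bandwidth / 1e9:.1f} GB/s")  # 86.4
```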

The PCB is a massive 10.3 inches, or 27 centimetres, and on top of the PCB there are a couple of new things. First of all, there are two power connectors, and secondly, the GTX features two new SLI MIO connectors. Their usage is "TBA" (To Be Announced), but we can tell you that this is not the only 8800 you will be seeing on the market. Connectors are two dual-link DVIs and one 7-pin HDTV out. HDMI 1.3 support is here from day one, but we don't think you'll be seeing too much of the 8800GTX with an HDMI connection.

Cooling is not water-based, but a more manufacturer-friendly aluminium design with a copper heat pipe. It is expected to be silent as a grave, and several AIBs are planning a more powerful version for a 2nd-gen 8800GTX, expected to be overclocked to 600MHz for the GPU and 1GHz DDR for the memory.
The board's recommended price has changed a couple of times and stands at 599 dollars/euros, or 399 pounds. However, due to the expected massive shortage, expect these prices to hit stratospheric levels.

GeForce 8800GTS
Codenamed P356, the 8800GTS is a smaller brother of the GTX. The G80 chip is the same as on the GTX, but the amount of wiring has been cut, so you have the 320-bit memory controller instead of 384-bit, 96 Shader units instead of 128 and 20 pixels per clock instead of 24.

The board itself is long and comes with a simpler layout than the GTX one. Dual-link DVI and 7-pin HDTV out come by default. "Only" one 6-pin PEG connector is used, and power-supply requirements are lighter on the wallet.

The clocks have been set at 500MHz for the GPU and 1.2GHz for the Shader units, while the 640MB of memory has been clocked down to 800MHz DDR, or 1600 MegaTransfers (1.6GHz), yielding a bandwidth of 64GB/s. Both pixel and texel fill-rate fell by a significant margin, to 24 billion pixels and 16 to 32 billion texels.
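The same arithmetic as on the GTX checks the 64GB/s claim:

```python
# GTS bandwidth: 320-bit bus at 1600 MT/s.
print(320 / 8 * 1.6e9 / 1e9)  # 64.0 GB/s
```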

Recommended price is 399 dollars/euros, but who are we kidding? Expect a price at least 100 dollars/euros higher.

Performance is CPU Bound
Yes, you've read that correctly. Both GTS and GTX max out the CPUs of today, and even Kentsfield and the upcoming 4x4 will not have enough CPU power to max out the graphics card – the G80 chip just eats up all the processing power a CPU can provide.

Having said this, expect fireworks with AMD's 4x4 platform once true quad-core FX chips become available.

Author: hufengl    Posted: 2006-11-2 23:46
99% of this everyone already knew...............
Author: jssl510    Posted: 2006-11-2 23:49
For game developers, the important information is that eight MRT (Multiple Render Targets) can be utilised and the ROPs support Frame Buffer blending of FP16 and FP32 render targets and every type of Frame Buffer surface can be used with VCAA and HDR.

This part hits the pan-A crowd hardest! FP16/32 HDR w/ VCAA!

R5xx can only do FP16 HDR w/ MSAA
Author: xxxvampire    Posted: 2006-11-2 23:52
On the 8th, NV will also release the Nforce 680i for Intel fans, a Conroe overclocking weapon
Author: ruohong    Posted: 2006-11-2 23:56
Originally posted by PCINLIFEOVER on 2006-11-2 23:52
phk, are you alright?
What is there to hit A with? They've been fantasizing about this for over a year #

Still, not needing 1GB of video memory counts as some progress; otherwise the joke would have continued.
That was FP16 HDR w/ MSAA. If that counts, then the NV40 already did FP16 HDR back in '04, when the R420 didn't even support it.
Author: zhw2007    Posted: 2006-11-3 00:00
The G80 can dominate for a while!

Not as strong as expected! Looks like NV ran into problems too! (Or are they holding back for the R600?)
Author: arethere    Posted: 2006-11-3 00:01
Notice: the author has been banned or deleted; content automatically hidden
Author: hotyes    Posted: 2006-11-3 00:01
Originally posted by phk on 2006-11-2 23:49
For game developers, the important information is that eight MRT (Multiple Render Targets) can be utilised and the ROPs support Frame Buffer blending of FP16 and FP32 render targets and every typ ...
The G80 was born to do battle with the R600 #
Isn't constantly comparing it with the DX9-class R5XX beneath its station? #
Author: bianbian1011    Posted: 2006-11-3 00:03
Originally posted by ayanamei on 2006-11-3 00:01

The G80 was born to do battle with the R600 #
Isn't constantly comparing it with the DX9-class R5XX beneath its station? #
Whose fault is it that your R600 is all hype and hot air?
Author: djno1    Posted: 2006-11-3 00:07
Originally posted by PCINLIFEOVER on 2006-11-3 00:04

FP16 HDR??? A year and a half? Jaggy HDR? What is there to fantasize about? Hilarious.
So we can't even mention N now? Must everyone pledge loyalty the way you do?
The R420 couldn't even manage "jaggy" HDR; back then a bunch of pan-A fans could only hide, not daring to face it
Author: zwd753159    Posted: 2006-11-3 00:09
HDR w/ MSAA has jaggies all the same; certain pan-A fans turn unbearably sour the moment they see HDR w/ VCAA
Author: baggio918    Posted: 2006-11-3 00:17
Originally posted by PCINLIFEOVER on 2006-11-3 00:15
Go look up the graphics card chronicles when you have the time
MSAA is merely eunuch-grade AA; VCAA is the emperor of AA
Author: x4433    Posted: 2006-11-3 00:24
Originally posted by phk on 2006-11-3 00:03

Whose fault is it that your R600 is all hype and hot air?
For now it still looks to be within the solar system #
Author: guwenqin    Posted: 2006-11-3 00:25
Can the G80 run Need for Speed 10 well?
Author: kelon2006    Posted: 2006-11-3 00:27
Originally posted by ayanamei on 2006-11-3 00:24

For now it still looks to be within the solar system #
The ATI-AMD people are still slapping a certain body part and swearing: the R600 absolutely won't let you down
Author: zizhulin    Posted: 2006-11-3 00:28
Originally posted by 自自在在 on 2006-11-3 00:25
Can the G80 run Need for Speed 10 well?
NV is busy with G80 matters; the demos aren't even finished yet
Author: l1986614    Posted: 2006-11-3 00:35
Originally posted by phk on 2006-11-3 00:28

NV is busy with G80 matters; the demos aren't even finished yet
I can't believe the great P used that emoticon. Touching.
Author: fyf780402    Posted: 2006-11-3 00:37
Originally posted by 自自在在 on 2006-11-3 00:35

I can't believe the great P used that emoticon. Touching.
Still haven't bought that EOS 400D?
Author: 522271147    Posted: 2006-11-3 00:47
Heh. He just discovered that his 7900 is garbage and took a bit of a blow, so now he's using the G80 to ******* himself a little, that's all.
Heh heh. After buying a G80, be prepared to pay an extra 200 yuan a month in electricity.
NV is just waiting to live off big spenders like him. If nobody splurged on the G80, NV would be left living on air.
Author: link68    Posted: 2006-11-3 00:55
Originally posted by yehaku01 on 2006-11-3 00:47
Heh. He just discovered that his 7900 is garbage and took a bit of a blow, so now he's using the G80 to console himself a little, that's all.
Heh heh. After buying a G80, be prepared to pay an extra 200 yuan a month in electricity.
NV is just waiting to live off big spenders like him. If nobody splurged on the G80, NV ...
Hilarious. If hardware vendors counted on flagship products to sustain their profits, they really would be left living on air.
Author: aa123bb    Posted: 2006-11-3 00:59
Originally posted by 酒后上网 on 2006-11-3 00:55

Hilarious. If hardware vendors counted on flagship products to sustain their profits, they really would be left living on air.
Don't argue with that sort; his enjoyment is spiritual, ours is material
Author: gscreate    Posted: 2006-11-3 01:12
Originally posted by 酒后上网 on 2006-11-3 00:55

Hilarious. If hardware vendors counted on flagship products to sustain their profits, they really would be left living on air.
Indeed. Those who feel they have more money than places to spend it can buy a G80 and send a little something NV's way.
NV has already earned plenty selling cut-down cards anyway; earning on the high end too just rounds things out.
Author: liliang0929    Posted: 2006-11-3 01:17
Uncle P, when are you picking up a G80?
Author: hch114    Posted: 2006-11-3 01:19
Originally posted by saintangel on 2006-11-3 01:17
Uncle P, when are you picking up a G80?
If you sponsor it, I won't object
Author: red001black    Posted: 2006-11-3 01:27
Originally posted by phk on 2006-11-2 23:52
On the 8th, NV will also release the Nforce 680i for Intel fans, a Conroe overclocking weapon
Seeing the sorry state of the NF5, I really don't dare hold out much hope.
Author: luo840215    Posted: 2006-11-3 01:35
Originally posted by phk on 2006-11-3 01:19

If you sponsor it, I won't object

Looks like you've decided to wait for the G81……
Author: FANSI1111    Posted: 2006-11-3 01:40
Originally posted by saintangel on 2006-11-3 01:35

Looks like you've decided to wait for the G81……
Someone told me the G81 is 8800nu-class
Author: muyangqwer    Posted: 2006-11-3 07:29
Originally posted by phk on 2006-11-3 01:40
Someone told me the G81 is 8800nu-class
Then wait for the G85
In my view a G80 that can't move to 65nm simply isn't eco-friendly, and there's no need to buy one
Author: chenmingfa    Posted: 2006-11-3 08:12
Originally posted by 孤石 on 2006-11-3 08:03

Even FP16 HDR without AA is still a crippled HDR, isn't it?
I like ATi's philosophy: when the hardware isn't up to it yet, do INT16 HDR AA; once conditions mature, do FP16 HDR AA; never drop AA at any point.
No need to even mention Uncle P; the 6 series at least had some swagger, after all ...
For that immature FP16 HDR+MSAA, ATi spared no expense on an entire R5xx generation. The result: costs were too high, market acceptance was poor, the products didn't sell, and in the end ATi had to sell itself
Author: 250095747    Posted: 2006-11-3 08:39
Originally posted by phk on 2006-11-2 23:49
For game developers, the important information is that eight MRT (Multiple Render Targets) can be utilised and the ROPs support Frame Buffer blending of FP16 and FP32 render targets and every typ ...
So N cards can only just now support HDR+AA?
Author: maomi0528    Posted: 2006-11-3 08:41
True, just think how big an improvement in power consumption 65nm would bring
Author: aazxbb    Posted: 2006-11-3 08:43
Originally posted by killpmp on 2006-11-3 07:29

Then wait for the G85
In my view a G80 that can't move to 65nm simply isn't eco-friendly, and there's no need to buy one
When buying a card you should buy the first generation; a second-gen refresh on an improved process simply isn't worth buying
Author: Henry25    Posted: 2006-11-3 09:15
屁兔 hasn't been showing up in phk's threads lately; I wonder why
Author: kelvinxcdu    Posted: 2006-11-3 09:39
Apart from the power draw, a card with no flaws. Think about it: one kWh is only 0.6 yuan and buys you 3 hours of fun; still far cheaper than petrol
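The implied whole-system draw roughly checks out (a sketch; the ~330W figure is our assumption, consistent with the ~140W quoted for the GPU alone in the article plus the rest of the box):

```python
# Electricity-cost arithmetic behind "1 kWh = 0.6 yuan, good for 3 hours".
system_watts = 330    # assumed whole-system draw under load
price_per_kwh = 0.6   # yuan

hours_per_kwh = 1000 / system_watts
cost_per_hour = price_per_kwh / hours_per_kwh

print(f"{hours_per_kwh:.1f} h per kWh")   # ~3.0
print(f"{cost_per_hour:.2f} yuan/hour")   # ~0.20
```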
Author: shw791110    Posted: 2006-11-3 09:49
Originally posted by 孤石 on 2006-11-3 08:03

Even FP16 HDR without AA is still a crippled HDR, isn't it?
I like ATi's philosophy: when the hardware isn't up to it yet, do INT16 HDR AA; once conditions mature, do FP16 HDR AA; never drop AA at any point.
No need to even mention Uncle P; the 6 series at least had some swagger, after all ...
Why don't you mention that ATI was a year and a half late in implementing FP16 HDR? They lost all face back then. Now that NV has implemented FP32 HDR FSAA, the pan-A fans nitpick this and that, not daring to accept reality
Author: fueasy    Posted: 2006-11-3 10:08
The PRO-A crowd really is sour
Author: city2004    Posted: 2006-11-3 11:03
Notice: the author has been banned or deleted; content automatically hidden
Author: 13009709050    Posted: 2006-11-3 11:04
Originally posted by ayanamei on 2006-11-3 00:01

The G80 was born to do battle with the R600 #
Isn't constantly comparing it with the DX9-class R5XX beneath its station? #
While the K8L isn't out yet, the C2D gets compared with the K8
It's that simple
Author: ting2513147    Posted: 2006-11-3 13:18
INQ's article quality is as poor as ever; facts and subjective speculation are still mixed together
Author: bobo568    Posted: 2006-11-3 18:34
INQ````````````

Just passing by```



