Market history
Before DirectX
NVIDIA's first 3D graphics card was the NV1, released in 1995. It was built around quadratic surfaces as its means of rendering 3D geometry. The card also integrated sound hardware (playback only, with no audio input) and ports for Sega Saturn gamepads and joysticks. Because the Sega Saturn was likewise based on forward-rendered quads, several Saturn titles were ported to the PC, such as Panzer Dragoon and Virtua Fighter. Even so, the NV1 struggled to make headway in a market already crowded with competitors.
The market then lost interest in the NV1 when Microsoft released the DirectX specification, which adopted polygons as the basis of 3D rendering. Development of the design nevertheless continued in secret as the NV2 project, funded by Sega to the tune of several million dollars. Sega hoped that a core integrating sound and graphics would lower the manufacturing cost of its next-generation console. Eventually, however, Sega came to understand the drawbacks of quadratic surfaces, and the core was never conclusively shown to have been properly debugged. The episode remains a dark chapter in NVIDIA's history.
A fresh start
After these two failed products, CEO Jen-Hsun Huang realized that the company had to change course to survive. He hired David Kirk, Ph.D. as Chief Scientist. Kirk came from the software developer Crystal Dynamics, a studio known for the high visual quality of its titles, and his familiarity with rendering, combined with NVIDIA's experience in 3D hardware, helped turn the company around.
As part of the corporate transformation, NVIDIA abandoned its proprietary interfaces in favor of full DirectX support, and dropped some multimedia features to reduce manufacturing costs. It also adopted an internal goal of a six-month product cycle: in the future, even if one product failed, the company's survival would not be threatened, because a next-generation replacement would always be close at hand.
However, because the Sega NV2 contract had been kept under wraps and staff appeared to have been left idle, many industry observers concluded that NVIDIA was no longer active in development. So when the RIVA 128 debuted in 1997, its specifications were hard to believe: performance better than that of market leader 3dfx, plus a complete triangle setup engine. The RIVA 128 sold in large volumes, its low price and fast 2D/3D acceleration making it a popular choice among OEMs.
Market leadership
After the RIVA 128's strong sales, NVIDIA's internal goal was to double the number of pixel pipelines to deliver a substantial performance increase. To this end it developed the TwiN Texel (RIVA TNT) engine, which allowed either two textures to be applied to a single pixel, or two pixels to be processed per clock, one per pixel pipeline. The former improved image quality; the latter improved performance.
New features included a 24-bit Z-buffer with 8-bit stencil support, anisotropic filtering, and MIP mapping. The TNT rivaled Intel's Pentium processors in complexity, but it was still not enough to displace the Voodoo 2, because the core ran at only 90 MHz, about 35% below the original estimate.
For Voodoo, though, this was merely a stay of execution. NVIDIA shrank the TNT's manufacturing process from 0.35 µm to 0.25 µm, and the resulting TNT2 shipped at a core clock of 125 MHz, with the Ultra version at 150 MHz. The Voodoo 3 was only marginally faster and lacked 32-bit color support. The RIVA TNT2 became a turning point for NVIDIA: at last the company had a product that could contend with the fastest on the market, offering more features and better 2D performance, all integrated on a higher-quality chip that made the raised clock speeds possible.
The GeForce era
In the second half of 1999, NVIDIA launched the GeForce 256 (NV10), most notable for bringing hardware transform and lighting (T&L) to consumer cards. Clocked at 120 MHz, it also offered advanced video playback acceleration, motion compensation, hardware sub-pixel alpha blending, and four pixel pipelines. Combined with DDR display memory, these easily made NVIDIA the performance leader.
On the strength of this success, NVIDIA won the Microsoft contract to develop the graphics hardware for the Xbox, which brought the company US$200 million in revenue. Although the project consumed a great deal of engineering time, in the short term it did not greatly affect the company. The GeForce 2 GTS eventually went on sale in the summer of 2000.
NVIDIA gained a great deal of extra experience from developing that highly integrated core and applied it to the GTS, with the result that achievable core clocks improved; the company could also bin higher-quality chips for its premium products. The GTS ultimately shipped at 200 MHz. Its pixel fill rate was roughly twice that of the GeForce 256, and its texel fill rate roughly four times, because each pixel pipeline now supported multi-texturing. The chip also added support for S3TC texture compression, FSAA, and improved MPEG-2 motion compensation.
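As a back-of-the-envelope check on those figures, assuming the commonly cited configurations of four pipelines with one texture unit each on the GeForce 256 and four pipelines with two texture units each on the GTS (these pipeline and texture-unit counts are an assumption, not stated in this article):

  pixel fill rate = pipelines × clock:
    GeForce 256:  4 × 120 MHz = 480 Mpixels/s
    GeForce2 GTS: 4 × 200 MHz = 800 Mpixels/s
  texel fill rate = pipelines × texture units × clock:
    GeForce 256:  4 × 1 × 120 MHz = 480 Mtexels/s
    GeForce2 GTS: 4 × 2 × 200 MHz = 1,600 Mtexels/s

This works out to roughly 1.7× the pixel rate and 3.3× the texel rate, which contemporary marketing rounded to the twofold and fourfold figures given above.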
NVIDIA then introduced the GeForce 2 MX, aimed at the budget and OEM markets. It had two pixel pipelines and a core clock of 175 MHz, later raised to 200 MHz. Despite the low price, its performance was respectable, and the GeForce 2 MX became the most successful graphics card in history. The mobile version, the GeForce2 Go, shipped at the end of 2000.
Meanwhile, 3dfx's long-delayed Voodoo 5 brought about one of the most spectacular bankruptcies in the history of the computer industry. NVIDIA bought up 3dfx's hotly contested technology, including its anti-aliasing techniques, and took on roughly 100 of its engineers.
Shortcomings of the GeForce FX
At this point NVIDIA's market position looked unassailable, and industry observers had begun to refer to it as the Intel of the graphics industry. However, while the next-generation FX chips were being developed, many of NVIDIA's best engineers were occupied with the Xbox contract, developing the SoundStorm audio chip and a motherboard solution.
It is also worth noting that Microsoft paid NVIDIA a fixed price for the chips themselves, and the contract did not allow for falling manufacturing costs as process technology improved. Microsoft eventually realized its mistake, but NVIDIA refused to renegotiate the terms, and relations between the two companies, previously very good, deteriorated. NVIDIA was not consulted when the DirectX 9 specification was drawn up, and apparently as a result, ATI designed the Radeon 9700 to fit that specification precisely. Rendering color support was limited to 24-bit floating point, and shader performance was emphasized throughout development, since shaders were to be the main focus of DirectX 9. The shader compiler was also built using the Radeon 9700 as the baseline card.
In contrast, NVIDIA's cards offered 16- and 32-bit floating-point modes, giving either lower visual quality than the competition or slow performance. The 32-bit support also made the chips much more expensive to manufacture, since it required a higher transistor count. Shader performance was often half or less the speed of ATI's competing products. Having built its reputation on easy-to-manufacture DirectX-compatible parts, NVIDIA had misjudged Microsoft's next standard and paid a heavy price for the error. As more and more games came to rely on DirectX 9 features, the poor shader performance of the GeForce FX series became ever more obvious; with the exception of the FX 5700 series (a late revision), the FX line trailed equivalent ATI parts.
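For reference, the precision modes at issue break down roughly as follows. The FP16 and FP32 layouts are the standard IEEE-style ones; the FP24 layout shown for the Radeon 9700 is the commonly reported one and is an assumption, not something stated in this article.

  Format   Sign   Exponent   Mantissa   Total bits
  FP16     1      5          10         16
  FP24     1      7          16         24  (Radeon 9700)
  FP32     1      8          23         32

The wider the format, the more transistors each shader unit needs, which is why full-speed FP32 support made the FX chips costlier to manufacture.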
NVIDIA grew increasingly desperate to hide the shortcomings of the GeForce FX range. A notable 'FX only' demo called Dawn was released, but its wrapper was hacked to run on a Radeon 9700, where it ran faster despite a perceived translation overhead. NVIDIA also began to include 'optimizations' in its drivers to boost performance. Some of these, which improved real-world gaming performance, were legitimate, but hardware review sites began running articles showing how NVIDIA's drivers auto-detected benchmarks and produced artificially inflated scores that bore no relation to real-world performance; often it was tips from ATI's driver development team that lay behind these articles. As the drivers filled with hacks and 'optimizations,' NVIDIA's legendary stability and compatibility began to suffer. The company did partially close the gap with instruction-reordering capabilities introduced in later drivers, but shader performance remained weak and over-sensitive to hardware-specific code compilation. NVIDIA also worked with Microsoft to release an updated DirectX compiler that generated GeForce FX-specific optimized code.
Furthermore, the GeForce FX series ran hot, drawing as much as twice the power of equivalent ATI parts. The GeForce FX 5800 Ultra became notorious for its fan noise, acquiring the nicknames 'Dustbuster' and 'leafblower.' Although it was withdrawn and replaced with quieter parts, NVIDIA had to ship large, expensive coolers on its FX boards, placing its partners at a manufacturing-cost disadvantage compared with ATI's. As a result of the FX series' weaknesses, NVIDIA quite unexpectedly lost its market leadership to ATI.
Performance leadership
NVIDIA struck back with the GeForce 6 series, the decisive remedy for the FX debacle: shader performance rose while power consumption fell. By working closely with developers, especially those enrolled in its "The way it's meant to be played" program, NVIDIA could act more decisively and refine its designs, making it far easier to build hardware that matched what the industry actually required.
With this improved corporate focus, NVIDIA went on to release the GeForce 7 series. Featuring 24 pixel pipelines, it gave NVIDIA its first undisputed performance lead since the launch of the ATI Radeon 9700. Just as important, the cards were available to consumers at reasonable prices on launch day, while ATI's products continued to suffer from launch delays.
In 2005, taking advantage of the high bandwidth of the PCI Express interface, NVIDIA introduced SLI, a technology that lets two graphics cards operate as one, theoretically doubling rendering performance (in practice the gain is about 1.8×). It re-established NVIDIA's reputation at the high end of the market. ATI's comparable offering is the CrossFire edition of its X1000 series.
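As a quick sanity check on that figure: with two cards the ideal speedup is 2.0×, so an observed gain of about 1.8× corresponds to a scaling efficiency of 1.8 / 2.0 = 90%, the remaining share of the theoretical gain generally being lost to inter-card synchronization and driver overhead.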
Dominance in the discrete graphics market
According to a Jon Peddie Research survey[5], in the second quarter of 2006 NVIDIA held a 20.30% share of the overall graphics market, placing it third, and a 51.5% share of the discrete graphics card market.
Lack of free-software support
Main article: NVIDIA and FOSS
NVIDIA does not publish technical documentation for its products, which specialists need in order to write proper and efficient open-source drivers. Instead, it supplies its own binary GeForce driver for X11, together with a limited open-source library that interfaces with the Linux, FreeBSD, or Solaris kernels and the proprietary graphics software. NVIDIA's Linux support has been adopted across the entertainment, visualization, and simulation/training industries, fields previously dominated by SGI, Evans & Sutherland, and other comparatively expensive vendors.
Because NVIDIA's drivers are proprietary, they are the subject of ongoing controversy in the Linux and FreeBSD communities. Many Linux and FreeBSD users insist on using only open-source drivers and regard binary-only drivers as unacceptable, while others are satisfied with the drivers NVIDIA provides.
Original equipment manufacturers
NVIDIA does not manufacture graphics cards itself; it produces only the graphics chips. The cards are assembled by OEM partners, including:
* AOpen
* ASUS
* BFG (also via its 3D Fuzion brand)
* BIG
* Chaintech
* Club 3D
* ELSA
* eVGA
* Gainward
* Gigabyte
* Inno3D
* Leadtek
* Micro-Star International (MSI)
* POV
* PNY
* XFX
* Zebronics
* Zogis