Amazon adds Nvidia GPU firepower to its compute cloud


Amazon’s Elastic Compute Cloud (EC2) lets businesses rent scalable servers and host applications and services remotely, rather than buy and manage that infrastructure themselves. The service, which first entered beta a little more than ten years ago, has historically focused on CPUs, but that’s changing now, courtesy of a newly unveiled partnership with Nvidia.
  According to joint blog posts from both companies, Amazon will now offer P2 instances that include Nvidia’s K80 accelerators, which are based on the older Kepler architecture. Those of you who follow the graphics market may be surprised, given that Maxwell has been available since 2014, but Maxwell was explicitly designed as a consumer and workstation product, not a big-iron HPC part. The K80 is based on GK210, not the top-end GK110 parts that formed the basis for the early Titan GPUs and the GTX 780 and GTX 780 Ti. GK210 offers a larger register file and much more shared memory per multiprocessor block, as shown below.
  

[Image: per-SM register file and shared memory, GK210 vs. GK110]

  The new P2 instances unveiled by Amazon will offer up to 8 K80 GPUs, each with 12GB of RAM and 2,496 CUDA cores. All K80s support ECC memory protection and offer up to 240GB/s of memory bandwidth per GPU.
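As a rough illustration of how those per-GPU figures surface to software, here is a minimal CUDA device-query sketch; it is my own example, not something from Amazon’s or Nvidia’s posts. The cudaGetDeviceProperties call and its fields are standard CUDA runtime API, but the 192-cores-per-SM factor is an assumption that holds only for Kepler-class parts, and the bandwidth number is estimated from the reported memory clock and bus width rather than measured.

// devquery.cu: minimal device-query sketch (illustrative; not from the AWS or
// Nvidia announcements). Build with: nvcc devquery.cu -o devquery
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, dev);
        // Assumption: 192 CUDA cores per SM, valid for Kepler parts such as
        // GK210 (13 SMs * 192 = 2,496 cores per GPU).
        const int coresPerSM = 192;
        // Rough peak-bandwidth estimate from the reported memory clock (kHz)
        // and bus width (bits); this is not a measured figure.
        double gbps = 2.0 * p.memoryClockRate * 1e3 * (p.memoryBusWidth / 8.0) / 1e9;
        printf("GPU %d: %s\n", dev, p.name);
        printf("  Global memory:       %.1f GB\n", p.totalGlobalMem / 1e9);
        printf("  SMs / est. cores:    %d / %d\n", p.multiProcessorCount,
               p.multiProcessorCount * coresPerSM);
        printf("  ECC enabled:         %s\n", p.ECCEnabled ? "yes" : "no");
        printf("  Est. peak bandwidth: %.0f GB/s\n", gbps);
    }
    return 0;
}

On a K80, a query like this should report 13 SMs per GPU, close to 12GB of global memory (slightly less when ECC reserves a portion of it), and an estimated peak bandwidth of roughly 240GB/s, in line with the figures above.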
One reason Amazon gave for its decision to offer GPU compute, as opposed to focusing on scaling out with additional CPU cores, is the so-called von Neumann bottleneck. Amazon states: “The well-known von Neumann Bottleneck imposes limits on the value of additional CPU power.”
This is a significant oversimplification of the problem. When John von Neumann wrote “First Draft of a Report on the EDVAC” in 1945, he described a computer in which program instructions and data were stored in the same pool of memory and accessed over the same bus, as shown below.
  

[Image: the von Neumann model, with program instructions and data held in a single memory and accessed over a shared bus]

  In systems that use this model, the CPU can access either program instructions or data, but not both at once, and it cannot transfer data to or from main memory nearly as quickly as it can perform work on that data once the information has been loaded. Because CPU clock speeds increased far faster than memory performance in the early decades of computing, the CPU spent an increasingly large share of its time waiting for data to be retrieved; on a modern chip, a core that can retire an instruction every cycle may stall for hundreds of cycles on a single trip to main memory. This wait state became known as the von Neumann bottleneck, and it had become a serious problem by the 1970s.
  


  An alternative design, known as the Harvard architecture, offers a solution to this problem: in a Harvard architecture chip, instructions and data have their own separate buses and physical storage. But most chips today, including CPUs built by Intel and AMD, can’t be cleanly described as either Harvard or von Neumann designs. Like CISC and RISC, terms that originally defined two distinct approaches to CPU design but have been muddled by decades of convergence and shared design principles, CPUs today are best described as modified Harvard architectures.
Modern chips from ARM, AMD, and Intel all implement a split L1 cache, with instructions and data stored in separate physical locations. They use branch prediction to determine which code paths are most likely to be executed, and they can cache both instructions and data in case that information is needed again. The seminal paper on the von Neumann bottleneck was delivered in 1977, before many of the features that define CPU cores today had even been invented. GPUs have far more memory bandwidth than CPUs do, but they also operate on far more threads at the same time and have much, much smaller caches relative to the number of threads they keep in flight (a kernel sketch at the end of this discussion illustrates this execution model). They use a very different architecture than CPUs do, but it is subject to its own bottlenecks and choke points as well. I wouldn’t call the von Neumann bottleneck solved. When John Backus described it in 1977, he railed against programming standards that enforced it, saying:
Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.
We’ve had good luck challenging the von Neumann bottleneck through hardware. But the general consensus seems to be that the changes in programming standards that Backus called for never really took root.
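To make that contrast concrete, here is a minimal CUDA kernel sketch; again, it is my own illustration rather than anything from the article, and the array size and launch configuration are arbitrary. It launches roughly 16.8 million short-lived threads, one per array element; the hardware keeps tens of thousands of them resident at a time and switches among them whenever one stalls on memory, hiding latency with parallelism rather than with large per-thread caches.

// saxpy.cu: illustrative sketch of the GPU execution model described above
// (my example, not from the article). Build with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

// y = a*x + y, computing one array element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;                    // ~16.8 million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float)); // unified memory; supported on Kepler and later
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 65,536 blocks of 256 threads each. Only a fraction of these threads are
    // resident on the GPU at any instant; the scheduler swaps among them to
    // cover memory latency.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Each thread here does almost no arithmetic relative to the data it moves, which is precisely the word-at-a-time traffic Backus was describing; the GPU simply pushes an enormous number of those words through at once.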
  I’m not sure why Amazon went down this particular rabbit hole. Incorporating GPUs as part of its EC2 service makes good sense. In the nearly ten years since Nvidia launched the G80, its first GPU designed for general-purpose programmability, GPUs have proven that they can deliver enormous performance improvements relative to CPUs on suitable workloads. Nvidia has built a significant business around the use of Tesla cards in HPC, scientific computing, and major industries, and AMD has pursued the same markets to a lesser extent. Deep learning, AI, and self-driving cars are all hot topics of late, with huge amounts of corporate funding and a number of smaller companies trying to stake out positions in the nascent market.



