[Other] Why does deep and cheap learning work so well?

請不要拒絕我 posted on 2016-10-5 13:08:21

Why does deep and cheap learning work so well? Lin & Tegmark, 2016
   Deep learning works remarkably well, and has helped dramatically improve the state-of-the-art in areas ranging from speech recognition, translation, and visual object recognition to drug discovery, genomics, and automatic game playing. However, it is still not fully understood why deep learning works so well.
   So begins a fascinating paper looking at connections between machine learning and the laws of physics – showing us how properties of the real world help to make many machine learning tasks much more tractable than they otherwise would be, and giving us insights into why depth is important in networks. It’s a paper I enjoyed reading, but my abilities stop at appreciating the form and outline of the authors’ arguments – for the proofs and finer details I refer you to the full paper.
  A paradox

  How do neural networks with comparatively small numbers of neurons manage to approximate functions drawn from a universe that is exponentially larger? Take every possible configuration of the network's parameters, and you still don't get near the number of possible functions you are trying to learn.
   Consider a mega-pixel greyscale image, where each pixel takes one of 256 values. Our task is to classify the image as a cat or a dog. There are 256^1,000,000 possible input images (the domain of the function we are trying to learn). Yet networks with just thousands or millions of parameters learn to perform this classification quite well!
   In the next section we'll see that the laws of physics are such that for many of the data sets we care about (natural images, sounds, drawings, text, and so on) we can perform a "combinatorial swindle", replacing exponentiation by multiplication. Given n inputs with v values each, instead of needing v^n parameters we only need v × n parameters.
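   To get a feel for the scale of that reduction, here is a back-of-the-envelope sketch (not from the paper; v and n are arbitrary small values chosen so the numbers stay printable):

```python
# Rough illustration of the "combinatorial swindle": the parameter count of a
# naive lookup-table model (one parameter per possible input configuration)
# versus a model whose parameter count grows only linearly with the inputs.
v = 256   # values each input can take (e.g. grey levels per pixel)
n = 20    # number of inputs -- a real mega-pixel image would have n ~ 10**6

exponential    = v ** n   # v^n: one parameter per possible input configuration
multiplicative = v * n    # v*n: the "swindled" parameter count

print(f"v^n = {exponential:.3e}")   # ~1.5e48, already astronomical at 20 pixels
print(f"v*n = {multiplicative}")    # 5120
```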
  We will show that the success of the swindle depends fundamentally on physics…
  The Hamiltonian connection between physics and machine learning

   Neural networks search for patterns in data that can be used to model probability distributions. For example, classification looks at a given input vector x and produces a probability distribution over categories y. We can express this as p(y|x). For example, y could range over animal categories, and x could be the pixels of a cat image.
   We can rewrite p(y|x) using Bayes' theorem:
   
$$p(y|x) \;=\; \frac{p(x|y)\,p(y)}{\sum_{y'} p(x|y')\,p(y')}$$
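   To make this concrete, here is a minimal sketch of the posterior computation for the two-class cat/dog case; the likelihood and prior values are invented purely for illustration:

```python
import numpy as np

# Hypothetical class-conditional likelihoods p(x|y) for one particular image x,
# and prior class probabilities p(y), for y in {cat, dog}. Numbers are made up.
likelihood = np.array([0.002, 0.0005])   # p(x|y=cat), p(x|y=dog)
prior      = np.array([0.5,   0.5])      # p(y=cat),   p(y=dog)

# Bayes' theorem: p(y|x) = p(x|y) p(y) / sum_y' p(x|y') p(y')
posterior = likelihood * prior / np.sum(likelihood * prior)
print(posterior)   # [0.8 0.2] -> this x is most likely a cat
```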

   We can then recast this equation using the Hamiltonian H_y(x) ≡ −ln p(x|y) (i.e., a simple substitution of forms). In physics, the Hamiltonian is used to quantify the energy of x given the parameter y.
  This recasting is useful because the Hamiltonian tends to have properties making it simple to evaluate.
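  Spelling out the substitution (this step is implicit in the write-up): since p(x|y) = e^{-H_y(x)}, Bayes' theorem becomes

$$p(y|x) \;=\; \frac{e^{-H_y(x)}\,p(y)}{\sum_{y'} e^{-H_{y'}(x)}\,p(y')},$$

  which looks just like a Boltzmann distribution from statistical mechanics, with H_y(x) playing the role of an energy.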
  In neural networks, the popular softmax layer normalises all vector elements such that they sum to unity. It is defined by:

$$\sigma(z)_i \;\equiv\; \frac{e^{z_i}}{\sum_j e^{z_j}}$$
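  A minimal NumPy sketch of that definition (the max-subtraction is a standard numerical-stability trick, not part of the definition):

```python
import numpy as np

def softmax(z):
    """Exponentiate each element and normalise so the outputs sum to unity."""
    e = np.exp(z - np.max(z))   # subtracting max(z) avoids overflow; it cancels in the ratio
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
print(p)         # [0.090 0.245 0.665] (approx.)
print(p.sum())   # 1.0
```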

   Using this operator, we end up with a formula for the desired classification probability vector p(x) in this form:
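   (The equation image is missing from this copy of the post; the following is a reconstruction from the definitions above, so treat the exact notation as mine rather than the authors'.) Writing H(x) for the vector with components H_y(x) and μ for the vector of negative log-priors μ_y ≡ −ln p(y), the Bayes expression above collapses to

$$p(x) \;=\; \sigma\!\left(-H(x) - \mu\right),$$

   i.e. the desired posterior is exactly a softmax applied to the negated Hamiltonians shifted by the log-priors. A quick numerical check, reusing the made-up cat/dog numbers and the softmax sketch from above:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

likelihood = np.array([0.002, 0.0005])   # hypothetical p(x|y)
prior      = np.array([0.5,   0.5])      # hypothetical p(y)

H  = -np.log(likelihood)   # Hamiltonians      H_y(x) = -ln p(x|y)
mu = -np.log(prior)        # negative log-priors mu_y = -ln p(y)

print(softmax(-H - mu))                                  # [0.8 0.2]
print(likelihood * prior / (likelihood * prior).sum())   # [0.8 0.2] -- identical
```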

阿大使的散打 posted on 2016-10-7 03:46:28
Well said, poster above!

johv4fHy posted on 2016-10-14 19:41:45
A single smile brings everlasting spring, a single cry everlasting sorrow; such a scene could only be yours, such grace belongs to no one else.

jiangyanfei posted on 2016-10-15 15:53:56
Why give up on treatment?
