[Other] Why Can’t Physics and Math Agree on Deep Neural Networks?

Posted by 久年 on 2016-10-04 22:54:25


[Image: Betelgejze / Shutterstock.com]
  The math doesn’t seem to add up on why the techniques used in Deep Learning are so effective at solving complex problems. With all of the information available and the complex calculations required, how are Deep Neural Networks so accurate and so fast? The surprisingly simple laws of physics seem to offer a better explanation than math alone.
  Complex Systems of Information

  What do the human brain, highways, an epidemic, and Facebook have in common? They are all “complex” systems made up of a multitude of entities and information linked together under a particular set of rules.
  In recent years, a new discipline in Artificial Intelligence, inspired by the biology of the brain, has emerged; it models how our own complex neural networks interact to transmit and process information. Deep Neural Networks are another example of a complex information system, and they illustrate how, more and more, scientists are realizing that the basic laws of physics govern the mathematical possibilities of human and robotic evolution.
  Math and Physics over Deep Learning: “It’s Complicated”

  Deep Learning layers information in a hierarchical structure, allowing for faster processing of more information.
  The layered structure of the networks doesn’t just demonstrate how complex the system is: it also demonstrates why.
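  To make the layering concrete, here is a minimal sketch of a deep network’s forward pass (a generic illustration of my own, not any particular published model). Each layer applies a simple transformation to the previous layer’s output, so complexity is built up by composition rather than by one giant formula:

    import numpy as np

    def relu(x):
        # Simple elementwise nonlinearity.
        return np.maximum(0.0, x)

    def forward(x, layers):
        # A deep network is a composition of simple layers:
        # each layer's output becomes the next layer's input.
        for W, b in layers:
            x = relu(W @ x + b)
        return x

    rng = np.random.default_rng(0)
    sizes = [8, 16, 16, 4]  # input -> two hidden layers -> output
    layers = [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
              for n, m in zip(sizes[:-1], sizes[1:])]

    print(forward(rng.standard_normal(8), layers).shape)  # (4,)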
  The fact that these networks are populated with almost limitless mathematical permutations and combinations gives them a wealth of information to use in drawing a conclusion. It’s like when we see a spherical object on a grassy field: our experience and memories tell us that the object is most likely a ball.
  But as complex as these layered networks may be, and as much math as they contain, mathematical equations alone cannot explain why deep neural networks work as well as they do.
  How is this technique so fast if it is constantly churning through calculations?
  Mathematicians have held the view that the infinite number of possible functions should make it impossible for a deep neural network to handle them all.
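  A toy count (my own illustration, assuming binary inputs) shows the scale of the problem. There are 2^(2^n) distinct Boolean functions of n input bits, a number that outgrows any conceivable parameter budget almost immediately:

    # Number of distinct Boolean functions of n binary inputs: 2 ** (2 ** n).
    # A truth table has 2**n rows, and each row can independently be 0 or 1.
    for n in (2, 4, 8, 16):
        count = 2 ** (2 ** n)
        print(f"n={n:2d}: 2^(2^{n}) is a {count.bit_length()}-bit number")

  Even at n = 16 the count is a number with nearly twenty thousand decimal digits, and real images have millions of inputs.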
  Physics Makes the Rules, Math Gives You the Scenario

  Henry Lin of Harvard University and Max Tegmark of MIT stand to change that view by proposing that the laws of physics, not math, govern multi-layered networks. Despite the infinite number of mathematical possibilities, the networks can operate by considering only a simple set of parameters. This effectively limits the information to the most relevant search keys. The system would then need to process only a fraction of the possible mathematical functions, not all of them simultaneously.
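  Roughly restated (the counting below is my own back-of-the-envelope illustration, not a figure from Lin and Tegmark’s paper): if data is generated by physical processes, its log-probability is typically a low-order polynomial in the inputs, so the number of parameters needed grows polynomially with input size instead of exponentially:

    from math import comb

    def generic_params(n):
        # A generic function of n binary inputs needs ~2**n parameters:
        # one value per possible input configuration.
        return 2 ** n

    def low_order_params(n, d=4):
        # A polynomial of degree <= d over n binary variables has only
        # sum_{k=0..d} C(n, k) coefficients (x*x = x for 0/1 variables,
        # so each variable appears at most once per term).
        return sum(comb(n, k) for k in range(d + 1))

    for n in (10, 20, 100):
        print(n, generic_params(n), low_order_params(n))

  For 100 binary inputs, the generic count is about 1.3 * 10^30 while the degree-4 count is about four million, a reduction of more than twenty orders of magnitude.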
  It’s like playing a game where Physics makes the rules and Math gives you the scenario.
  Composition or Division?

  To understand these complex systems, it is not enough to simply identify their individual components. Describing each neuron and how it works does not necessarily describe how the brain works.
  Therefore, just because a piece of something has a certain characteristic does not mean that the thing as a whole automatically has the same characteristic.
  Our seemingly logical understanding of Deep Neural Networks may be flawed by the “part-to-whole” logical fallacy. Physics and Math have a complicated relationship, but until this point, scientists have tried to explain deep neural networks with the math that created them.
  Lin and Tegmark, however, may have shown that just because a system is made up of complex mathematical expressions does not mean the system is governed by math.
  If Math is the map of possibilities, and offers a way for us to model the infinite possibilities of the universe, then Physics offers a way to boil down the countless combinations of information into just a few simple and mechanical principles.
  These same principles of physics are what allow deep neural networks to limit the amount of information that they process by boiling it down into simple subsets, therefore explaining why the technique is both so accurate and so fast.
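  One way to picture this boiling down (a simple coarse-graining sketch of my own, loosely in the spirit of the renormalization analogy sometimes drawn in this literature): each stage keeps only a summary of the stage below it, so later layers handle far less data than the raw input contained:

    import numpy as np

    def coarse_grain(signal):
        # Replace each pair of neighbors with its average,
        # halving the data the next stage must handle.
        return signal.reshape(-1, 2).mean(axis=1)

    x = np.random.default_rng(1).standard_normal(1024)
    while x.size > 1:
        x = coarse_grain(x)
        print(x.size)  # 512, 256, ..., 1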
  It Doesn’t Add Up, It Boils Down

  While deep neural networks are complex because of their hierarchical, layered structure, math only seems to account for the massive amounts of information the networks are processing.
  Processing, therefore, is not governed by the complex variability of math as a whole but by the simple parameters of physics in subsets.

Reply by 淚痕傾面難眠 on 2016-10-24 18:35:53:
What a do-or-die attitude!
