Google's "Haunted Algorithm" Incident: Absurd, Yet Sad

Tech | 2022 | Source: Huxiu (虎嗅网)

If you view this as a brilliant piece of marketing, then over the past two weeks the tidal wave of public opinion set off by Google's large AI model LaMDA has handily outdone OpenAI, which has spent three years painstakingly explaining large models to the public, as well as Microsoft, Meta, and the other tech giants. In early June, the claim that "LaMDA has human emotions and subjective consciousness" was posted to Twitter, in a personal capacity, by a Google algorithm engineer named Blake Lemoine, in the form of a 21-page dialogue transcript, to explosive effect.

No media outlet could stay indifferent to an algorithm that had supposedly come to its senses. And so we saw hundreds of reports along these lines:

"AI has truly awakened: Google engineer forced onto leave after major discovery."

More intriguingly, the engineer, who had spent seven years working on technology ethics at Google, demanded that the company grant LaMDA the rights it deserves. Google, for its part, swiftly assembled scientists to run multiple rounds of tests on LaMDA, and in the end rebutted the "rise of consciousness" claim.

But no one has ever cared much for official statements.

The public remained captivated by the vivid, fluent details of LaMDA's conversations: it not only grasped the literary essence of Les Misérables, but even ventured a modest reading of the Jingde Chuandeng Lu (Records of the Transmission of the Lamp, pictured below), an abstruse Chinese Buddhist text from the other side of the ocean.

"Artificial intelligence is indeed advancing toward consciousness; I felt I was talking to something intelligent." That same month, Google vice president Blaise Agüera y Arcas added fuel to the fire with a marketing-flavored interpretation penned under his own byline in The Economist.

His remarks, though, were dismissed by many scientists as "nice-sounding nonsense."

 


This exchange is excerpted from the engineer's conversations with LaMDA: he asks the algorithm to explain the meaning of a Buddhist story, and its explanation does not stray far from the standard reading.

As in our earlier series of reports on GPT-3, LaMDA belongs to the family of "large natural language processing models" that have boomed in recent years: a neural-network prediction model with as many as 137 billion parameters, built by processing vast quantities of text.

Since CEO Sundar Pichai jubilantly presented it at Google's developer conference in May 2021, LaMDA has reportedly already paid off inside Google, improving web search and automatic text generation. It is predicted to storm the "voice market" next, outperforming Amazon's Alexa and Apple's Siri.

Its training data spans more than a trillion words drawn from Q&A sites, Wikipedia, and other sources, covering dialogue across many kinds of roles. This enormous corpus makes it easy for the algorithm to generate text in widely varying styles; in conversation especially, sounding natural, vivid, fluent, and even philosophical is simply "table stakes."

Of course, we have already seen this skill in OpenAI's large model GPT-3. If you understand how large models actually operate, and have watched one continue a news report or riff on Harry Potter and Dream of the Red Chamber, LaMDA's performance will not strike you as particularly uncanny.


The Guardian once ran a column written by GPT-3, "Are you scared yet, human?", with almost no logical or linguistic errors. Its gist: "Although I am a thinking robot, don't be afraid. I will not wipe you out; my fate is bound up with humanity's."

Clearly, this episode is cut from the same cloth as decades of recurring illusions, lies, and quarrels over the "rise of algorithmic consciousness." For the AI scientists closest to the eye of the storm, though, it was time to rub their brows and sigh for the umpteenth time:

They have never been swayed by such claims, and their rebukes have only grown fiercer, and more resigned.

"Nonsense! Neither LaMDA nor any of its cousins (meaning the large models released by other companies, GPT-3 included) is particularly smart." Gary Marcus, the prominent American machine learning expert and professor of psychology, wrote a furious piece accusing the media and the Google engineer of misleading the public:

"All they do is match patterns, drawing on giant statistical databases of human language. The models are cool, but all they really do is string sequences of words together plausibly, with no coherent understanding of the world behind those words.

"A model of this kind predicts which word fits a given context; it is, at best, a very good version of autocomplete. But there is no sentience!"
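Marcus's "very good autocomplete" jab is easy to make concrete. Below is a minimal sketch of the idea (my own illustration, not Google's or Marcus's code): a toy bigram model that tallies which word follows which in a tiny corpus, then continues a prompt by sampling statistically likely next words. LaMDA's Transformer is incomparably more sophisticated, with 137 billion parameters instead of a handful of counts, but the objective is the same in kind: predict the next token from context.

```python
import random
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
corpus = (
    "the monk asked the master about the mirror . "
    "the master said the mirror must be wiped clean . "
    "the mirror reflects the world but does not understand the world ."
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(prompt: str, length: int = 10) -> str:
    """Continue a prompt by repeatedly sampling a likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:  # no statistics for this word: stop
            break
        nxt, = random.choices(list(candidates), weights=candidates.values())
        words.append(nxt)
    return " ".join(words)

print(autocomplete("the master"))
# e.g. "the master said the mirror reflects the world but does not ..."
# A fluent-looking continuation, with zero understanding of monks or mirrors.
```

Scale the corpus up to a trillion words and the context from one word to thousands of tokens, and you get prose fluent enough to discuss Les Misérables; the mechanism, pattern-matching over statistics, does not change.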

Opposition across the AI community has been overwhelming, and always was. Amusingly, when I tried to get a large-model algorithm engineer to share his view of "algorithmic consciousness awakening," he suppressed the urge to roll his eyes and asked me instead: "You've followed AI for years; why would you ask such a (silly) question?"

Marcus, who is also a psychologist, described Blake Lemoine's behavior as "falling in love with it." That is indeed another plausible explanation:

From the way we treated virtual pets in childhood, to the familial teasing we now aim at Siri's IQ, to players professing mad love to NPCs in video games: as a species, humans are exceptionally good at extending empathy and sympathy to the non-human.

And as the line between reality and science fiction grows ever blurrier, we will be stuck for a long while in a murky transitional age, ever harder to pull free of and ever easier to fool.


In fact, as far back as 2016, a similar "algorithm comes alive" story played out at Facebook, when a raft of well-known media outlets at home and abroad ran headlines like "Facebook's Two Bots Were Conversing in Their Own Language; the Project Was Hastily Shut Down."

The truth is that every system has a "language" of its own. (Why assume a bot must speak "words humans can understand"? Take autonomous driving: the road view you see inside a driverless car looks nothing like the representation the system itself works from.)

Facebook's engineers had simply set two programs, named Alice and Bob, talking to each other to aid development of "generative adversarial networks." The so-called "shutdown" was merely a change of development strategy.
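What a private machine-to-machine "language" can look like is easy to sketch. The toy below is purely illustrative (it is not Facebook's actual system, and the encoding is invented for this example): two programs exchange offers by repeating item tokens to encode quantities, a scheme that is perfectly unambiguous to them yet reads as gibberish, "ball ball ball book", to us.

```python
# Purely illustrative: a made-up bot-to-bot protocol in which repetition
# encodes quantity. Unambiguous for the programs, gibberish to a human.
# (This is NOT Facebook's actual negotiation system.)
from collections import Counter

def encode_offer(offer: dict) -> str:
    """Encode 'I want N of item X' by repeating the item token N times."""
    return " ".join(item for item, n in offer.items() for _ in range(n))

def decode_offer(message: str) -> dict:
    """Recover the quantities by counting token repetitions."""
    return dict(Counter(message.split()))

alice_wants = {"ball": 3, "book": 1}
message = encode_offer(alice_wants)
print(message)                # "ball ball ball book"  (nonsense to a human)
print(decode_offer(message))  # {'ball': 3, 'book': 1} (perfectly clear to Bob)
```

The same logic underlies the driverless-car example above: the planner "sees" the road as grids and tensors, not as the tidy rendering shown to passengers.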

Facebook eventually had to publish a blog post to clear the whole thing up. Tellingly, once the initial novelty wore off, almost no one could be bothered with the dry technical explanation.


Headline: "Facebook shut the project down after the two bots began conversing in a language only they understood"

One can only sigh: seven years on, even with absurdities like these saturating daily life, a great many people who have the relevant background (intellectuals included) are still willing, even eager, to believe.

They lose themselves in gauzy visions of the future while staying numb to the dangers that have long been lying in ambush beside them.

In fact, we are falling into exactly the trap that was laid for us: distracted by science-fiction stories, and steered by the media toward judging by subjective impression rather than scientific evidence.

"We should indeed be paying attention to consciousness and sentience, but the focus has been blurred."

Timnit Gebru, the former head of Google's AI ethics team, fired after raising the real-world harms of large models, argues that most discussion of algorithms stays at the technical end and never sees the damage they do in the dirt at our feet.

"We have been completely led astray.

"Blake Lemoine is really just a victim of the endless hype machine, one that has drawn attention away from the countless questions of ethics and social justice that artificial intelligence raises."

Those questions include how LaMDA was trained and how prone it is to generating harmful text, as well as the "AI colonialism" and botched commercial applications it has bred over the years: the absurdly off-key facial recognition pushed into Chinese home sales and school campuses, and, in the United States, the sad cases of at least three innocent people arrested over facial recognition errors.

MIT Technology Review has noted that over the past few years, more and more scholars have come to see AI's impact as replaying the economic model of European colonial history: seizing land by force, extracting resources, and exploiting people, enriching the wealthy and powerful at the expense of the poor.

Behind the trillion-dollar cloud computing data centers and the automakers' self-driving systems, for instance, stand thousands upon thousands of cheap data-labeling workers from poorer countries such as Venezuela.

Hence, in interviews Gebru refuses to discuss machine sentience at all, because "in every real-world case where machines pose a danger, what is happening is simply humans harming other humans."

If we come to treat chatbots as dear friends and family, will the companies and institutions behind them find ways to take advantage of us? And conversely, if we treat algorithms as things deserving no respect, is the exploitation of technology, at bottom, just a deepening of our exploitation of one another and of the natural environment?

The Urgency of Self-Awakening

The diversity and depth of the discussion of the LaMDA affair abroad does not surprise me.

Seven years ago, when the bugles of the AI boom sounded in Silicon Valley and in China at once, more than a few of Silicon Valley's top scientists were already voicing fears of a "fierce clash between artificial intelligence and human nature."

Back then, still marveling at the novelty of artificial intelligence, big data, and the rest of the frontier technologies, we were puzzled by the gloomy forecasts of "racial bias and technological exploitation," wearied by the waves of anti-discrimination protest roiling Google, unmoved when Facebook was deluged with criticism over its 2018 data breach, and impatient with Europe's fitful, contentious attempts to legislate AI...

Amusing ourselves to death, commercialization, and the conveniences of technology filled our heads. We cheered the tech companies' product and technology races, applauded without end as their market values soared, and measured scientists and founders by exactly two yardsticks: "technology" and "business."

What was missing, alone, was "humanity."

Because humanity, we felt, had nothing to do with technology, or with ourselves.

And so, looking back, many of us, myself included, now have to pay for our own stupidity, short-sightedness, and narrowness of mind.

Later, when data breaches and the trafficking of face data arrived, criticism of the companies turned fierce, to little effect. In September 2020, the superb feature "Delivery Riders, Trapped in the System" thrust the conflict between corporate interests, algorithms, and humanity in front of the Chinese public; it remains the broadest discussion of algorithmic ethics China has seen.

To this day I remember the article's sociological account of algorithms: "In Professor Seaver's view, algorithms are formed not only by rational programs but also by institutions, people, intersecting environments, and the rough-and-ready understandings acquired in ordinary cultural life."

Yet for all that it exposed, the piece's remedies were feeble, because a real fix means rebuilding an entire city's operating system rather than simply "cultivating programmers' sense of social science."

So although it won a slight bow of the head from the platform giants, it yielded no workable solution in the end: the riders are still trapped in the algorithm and in their own interests, and we are still trapped in the traffic chaos the riders bring.

Later, to our surprise, the same dilemma and the same contradictions surfaced between the Indonesian ride-hailing giant Gojek and its fleet of motorcycle-taxi drivers.

The drivers, though, built a collective force resembling a developer community. They organized hundreds of driver groups that teach one another personal tricks for "coaxing" the algorithm into misreading their preferences, and the tech-savvy among them built an unauthorized ecosystem of apps for tuning and optimizing accounts, loosening their dependence on Gojek's own algorithm team.

More importantly, this counteroffensive genuinely lived out the idea that "institutions, people, intersecting environments, and an understanding of cultural life are all components of the algorithm": Gojek provides camps where riders can rest, local authorities sanction their regular gatherings, and snack stalls and mosques double as makeshift shelter for young people in Jakarta without housing.


Image from MIT Technology Review: riders gather at small bases like roadside stalls to grab a bite, charge their phones, and swap tips for staying safe on the road.

Fine, we thought next: this commercial dross of technology and big data is an unavoidable part of development; step carefully and it can be dodged, and it will never land on our own heads.

Then came everything up to and including Henan's red health codes, and the stories that once filled PR copy with visions of future technology turned more and more repulsive. And I truly understood at last: the rolling controversies and their bitter fruits amount to a self-inflicted trial from which no one walks away.

Remember: the technological shackles clamped onto someone else yesterday will one day be clamped onto you.

So when facing the LaMDA "haunting," we must cut through the fear, astonishment, and excitement it stirs up, and through the hype spun from lies, and break out of the blur between reality and science fiction. "People should be focusing on human welfare, not on robot rights," Gebru insists.

And I would add one more sentence: awakening to a right understanding of technology is itself a fight for your own future welfare.

