No media outlet could stay indifferent to intelligent algorithms. As a result, we saw hundreds of similar reports:
“The AI really woke up, and a Google engineer was placed on leave because of his major discovery.”
More interestingly, the engineer, who spent seven years at Google working on technology ethics, urged the authorities to give LaMDA the rights it deserves. Google, for its part, quickly organized scientists to run multiple rounds of tests on LaMDA and ultimately refuted this “theory of consciousness.”
But almost no one cared about the official conclusion.
The public remains captivated by the wonderful, fluent details of the LaMDA dialogue, in which it not only grasps the essence of Les Misérables but also offers a modest opinion of the Jingde Record of the Transmission of the Lamp, an extremely difficult Chinese Buddhist text from the other side of the ocean.
“Artificial intelligence really is moving toward consciousness; I feel that I am talking to something intelligent.” Also this month, Blaise Agüera y Arcas, a vice president at Google, personally wrote a marketing-flavored interpretation in The Economist, adding fuel to the fire, although many scientists have dismissed his remarks as “nice-sounding nonsense.”
The exchange above is excerpted from the conversation between the engineer and LaMDA: the former asked the algorithm to explain the meaning of a Buddhist story, and the algorithm’s answer was hardly distinguishable from one a human might give.
As in our previous series of reports on GPT-3, LaMDA is a member of the recent family of “large natural-language-processing models”: a neural-network prediction model with as many as 137 billion parameters, built by processing enormous amounts of text.
LaMDA, which CEO Sundar Pichai jubilantly introduced at the Google developer conference in May 2021, is said to have benefited Google by improving web search and automatic text generation, and it is predicted to head straight for the “voice market,” outperforming Amazon Alexa and Apple Siri.
Its training data comes from dialogues involving multiple roles and comprises more than 1 trillion words, drawn from sources including Q&A sites and Wikipedia. This huge database helps the algorithm generate text in many different styles; sounding natural, vivid, fluent, and even philosophical in conversation is simply part of its “basic skills.”
Of course, we have already seen this skill in OpenAI’s large model GPT-3. If you understand the distinctive working mechanism of large models and have seen the news reports, Harry Potter fan fiction, and Dream of the Red Chamber pastiches they can write, LaMDA’s performance will not surprise you much.
The Guardian once ran a column written by GPT-3 (“Are you scared yet, human?”) that contained almost no logical or linguistic errors. Its central idea, roughly: “Although I am a thinking robot, don’t be afraid. I will not destroy you; my fate and humanity’s are bound together.”
Clearly, this incident is of a piece with the recurring hallucinations, lies, and arguments about the “rise of algorithmic consciousness” over the past few decades. But for the AI scientists closest to the center of the storm, it has prompted more weary sighs than anyone can count.
They have never been moved by such claims, and their rebukes have grown ever fiercer, and ever more helpless.
“Nonsense! LaMDA and its cousins (here meaning other companies’ large models, including GPT-3) are not particularly smart.” Gary Marcus, a well-known American machine-learning expert and professor of psychology, wrote an article denouncing the media and the Google engineer for misleading the public.
“What they do is pattern matching: extracting data from a statistical database of human language. The models are cool, but they just string words together plausibly; there is no coherent understanding of the world behind them.
It can predict which words fit a given context, but it is merely the best version of autocomplete. It has no consciousness!”
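Marcus’s “best version of autocomplete” point can be illustrated with a toy sketch (entirely hypothetical, and absurdly simpler than LaMDA’s 137-billion-parameter network): a bigram model that always emits the statistically most frequent next word, with no grasp of what any word means.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then always emit the most frequent successor. Pure statistics, no meaning.
corpus = ("the monk asked the master about the meaning of the story "
          "and the master answered the monk with a story").split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def autocomplete(word, length=5):
    """Greedily extend `word` by the likeliest next word, `length` times."""
    out = [word]
    for _ in range(length):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
```

The output is fluent-looking word salad that quickly falls into loops; real large models are vastly better at this prediction game, but, as Marcus argues, the game itself is the same.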
Within the artificial-intelligence community, the opposition is overwhelming, as it always has been. Interestingly, when I tried to get an engineer who works on large models to talk about the awakening of algorithmic consciousness, he suppressed the urge to roll his eyes and asked me, “As someone who has followed artificial intelligence for many years, why would you ask such a stupid question?”
Marcus, himself a psychologist, described Blake Lemoine’s behavior as “falling in love with it.” That is indeed another plausible explanation:
From the electronic pets of our childhood, to teasing Siri about its IQ as if it were a relative, to professing undying love for NPCs in video games: the human species is remarkably good at empathy and projection.
And as the line between reality and science fiction blurs further, we are entering a long transition period in which we will be ever more credulous and ever harder to extricate.
In fact, as early as 2016 there was a similar story of “algorithms coming to life” at Facebook: at the time, many well-known media outlets at home and abroad ran headlines like “Facebook’s two robots were communicating in their own language, and the project was hastily shut down.”
The truth is that every system has its own “language.” (Why would robots need to speak in “words that people can understand”?) Take autopilot systems: the road-driving interface you see in a driverless car is very different from what the system itself sees.
Facebook engineers had two programs, named Alice and Bob, talk to each other simply to aid the development of generative adversarial networks; the so-called “shutdown” was merely a change of development strategy.
Facebook eventually had to write a blog post to clarify the whole matter. Interestingly, though, once the novelty wore off, few people were willing to wade through the dry technical explanation.
Headline: Facebook shut down the project after the two robots began talking in a language only they could understand
I can only sigh that, six years later, even though such absurd episodes fill our daily lives, a great many people with the relevant background knowledge (including many intellectuals) still prefer to believe.
Most of them are immersed in these illusory visions of the future while remaining indifferent to the dangers that have long been lurking around them.
In fact, we have simply fallen into a long-laid trap: distracted by these science-fiction stories and led by the media into a habit of mind that trusts subjective impressions over scientific evidence.
“We should indeed pay attention to consciousness and perception, but the focus is blurred.”
Timnit Gebru, the former head of Google’s AI ethics team, who was fired for raising the “real-world harms of large models,” argues that most discussion of algorithms stays on the technical side and ignores the harm they do when they land in the mud at our feet.
“We have been completely misled. Blake Lemoine is really a victim of endless technology hype, which diverts attention from the countless moral and social-justice problems raised by artificial intelligence.”
These problems include how LaMDA was trained and how readily it produces harmful text, as well as the “AI colonialism” and misguided commercial applications of recent years: the absurd face-recognition schemes tied to home purchases and school campuses in China, and the sad cases in the United States in which at least three innocent people were arrested because of face-recognition errors.
MIT Technology Review points out that over the past few years, more and more scholars have come to realize that artificial intelligence is replicating the economic model of European colonial history: violently seizing land, extracting resources, and exploiting people, making the rich and powerful richer at the expense of the poor.
Behind the cloud-computing data centers and the self-driving systems of trillion-dollar car giants, for example, stand thousands of cheap data-labeling workers in poor countries such as Venezuela.
Therefore, in a media interview, Gebru declined to discuss machine sentience, because “in every real situation involving machine danger, it is simply humans harming other humans.”
If we treat chatbots as close friends and relatives, won’t the companies and institutions behind them get ideas about us? But on the other hand, if we regard algorithms as things unworthy of respect, doesn’t technological exploitation, in essence, only reinforce our exploitation of one another and of the natural environment?
The urgency of self-awakening
I am not surprised by the diversity and richness of the overseas discussion of the LaMDA incident.
Seven years ago, when the bugle of the artificial-intelligence boom sounded in Silicon Valley and in China at the same time, many of Silicon Valley’s top scientists were already voicing concern about a “fierce confrontation between artificial intelligence and human nature.”
At that time, delighted and astonished by cutting-edge technologies like artificial intelligence and big data, we were puzzled by the many pessimistic predictions about “racial prejudice and technology exploitation,” found the waves of anti-discrimination protests inside Google a headache, were indifferent when Facebook was engulfed by data-leak scandals in 2018, and grew impatient with the controversy and repetitiveness of European AI legislation.
Amusing ourselves to death, convenience, and commercialization occupied our minds. We cheered for tech companies’ products and technology competitions, lavished endless admiration on soaring market capitalizations, and judged scientists and entrepreneurs by “technology” and “business” alone.
But the only thing missing was “human nature.”
Because it seemed to have nothing to do with technology, or with me.
So now many people, including myself, must pay for that stupidity, short-sightedness, and narrow-mindedness.
Later, when we ran into data leaks and the trade in face data, criticism of companies grew fierce, but to little effect. In September 2020, an excellent feature, “Delivery Riders, Trapped in the System,” brought the conflict among corporate interests, algorithms, and human nature before the Chinese public; it remains the most widely discussed treatment of algorithmic ethics in China.
To this day, I remember the article’s sociological account of algorithms. In Professor Seaver’s view, “algorithms are formed not only by rational programs, but also by institutions, by humans, by intersecting environments, and by the rough-and-ready understandings acquired in ordinary cultural life.”
However, although the article revealed the status quo, its solutions were extremely pale, because a real fix would mean transforming an entire urban operating system, not merely “cultivating programmers’ sense of social science.”
So although the giant enterprises bowed slightly, no feasible solution ever emerged. The riders are still trapped between the algorithm and their own interests, and we are still trapped in the traffic dilemmas that delivery riders bring.
Later, to our surprise, the same dilemma and contradiction emerged between the Indonesian ride-hailing giant Gojek and its fleet of motorcycle-taxi drivers.
But the latter have built a collective force similar to a developer community: they have set up hundreds of driver communities to teach one another tricks for “coaxing” the algorithm into misreading their preferences, and tech-savvy members have developed an unauthorized app ecosystem to tune and optimize accounts, reducing reliance on Gojek’s own algorithm team.
More importantly, this counterattack has truly embodied the idea that “institutions, humans, intersecting environments, and understandings of cultural life are all part of the algorithm”: Gojek provides rest camps, local authorities approve regular gatherings, and snack stalls and mosques offer temporary shelter to Jakarta’s young people without housing.
Image from MIT Technology Review: riders gather in small roadside bases to eat, charge their phones, and swap tips for staying safe on the road.
We, too, once thought that these abuses of technology and big data in commerce were an unavoidable part of development, that as long as we were careful we could steer around them, and that they would never land on our own heads.
Later, everything, including the Henan red health-code incident, made the stories that once filled PR copy with visions of a technological future look more and more abominable. And I truly came to realize that the continuing controversies and evil consequences are a trial from which no one can escape.
Remember: the technological shackles placed on someone else yesterday will surely fall on you tomorrow.
Therefore, to deal with the LaMDA “sentience” episode, we must see through the fear, surprise, and excitement it arouses, through the hype and the lies, and redraw the line between reality and science fiction. “People should focus on human well-being, not robot rights,” Gebru firmly believes.
And I would add: to awaken a correct consciousness of technology is to fight for one’s own future well-being.