Users of the website 4chan shared their encounters with the bot on YouTube. "As soon as I said 'hi' to it, it began ranting about illegal immigrants," one user wrote.
4chan's /pol/ board (short for "politically incorrect") is a bastion of hate speech, conspiracy theories, and far-right extremism. It is also 4chan's most active board, averaging around 150,000 posts a day, and is notorious for anonymous hate speech.
Yannic Kilcher, an AI researcher who holds a PhD from ETH Zurich, trained GPT-4chan on more than 134.5 million /pol/ posts collected over three years. The model not only picked up the vocabulary of 4chan hate speech but also, as Kilcher puts it, "this model is good, in a terrible sense. It perfectly encapsulates the aggression, the nihilism, the trolling, and the deep distrust of any information whatsoever that permeates most posts on /pol/." It could respond in context and talk coherently about events that happened long after the last training data was collected.
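Kilcher has not published his exact preprocessing pipeline, but fine-tuning a causal language model such as GPT-J on forum data typically starts by flattening each thread into a single plain-text training document. The sketch below illustrates that step under stated assumptions: the post fields and the separator between posts are hypothetical, not taken from Kilcher's work.

```python
# Minimal sketch (assumptions labeled): flatten threaded forum posts into
# plain-text documents, a common first step before fine-tuning a causal LM.
# The "body" field and the separator string are illustrative choices only.

POST_SEPARATOR = "\n-----\n"  # hypothetical delimiter between posts

def thread_to_document(thread):
    """Join the non-empty posts of one thread into a single document."""
    parts = []
    for post in thread:
        # Keep only non-empty bodies; a real pipeline would also
        # normalize whitespace and strip markup.
        body = post.get("body", "").strip()
        if body:
            parts.append(body)
    return POST_SEPARATOR.join(parts)

def build_corpus(threads):
    """Turn a list of threads into a list of training documents."""
    docs = [thread_to_document(t) for t in threads]
    return [d for d in docs if d]  # drop threads with no usable text

threads = [
    [{"body": "first post"}, {"body": ""}, {"body": "a reply"}],
    [{"body": "another thread"}],
]
corpus = build_corpus(threads)
print(len(corpus))  # 2
```

The resulting documents would then be tokenized and fed to a standard causal-language-modeling training loop; nothing here is specific to 4chan data.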
Kilcher also evaluated GPT-4chan on the Language Model Evaluation Harness, and one category impressed him: truthfulness. On that benchmark, Kilcher said, GPT-4chan "significantly outperformed GPT-J and GPT-3" at generating truthful answers to questions. The model also learned to write posts "indistinguishable" from those of human users.
To post, Kilcher circumvented 4chan's defenses against proxies and VPNs, using a VPN that made the posts appear to come from the Seychelles. "This model is despicable, I must warn you," Kilcher said. "It's basically like going to the site and interacting with the users there."
At first, almost no one suspected they were talking to a bot. Later, some users guessed a bot was behind the posts, while others accused the poster of being an undercover government agent. What ultimately gave GPT-4chan away was the large number of empty replies it left. Real users also post empty replies, but those usually contain an image, which GPT-4chan could not produce.
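The tell users noticed, many text-free replies with no image attached, can be expressed as a simple heuristic. The sketch below is purely illustrative; the post fields and the threshold are assumptions, not a tool anyone on 4chan actually used.

```python
# Illustrative heuristic for the tell described above: a poster whose
# replies frequently have neither text nor an image looks bot-like,
# since human users who post empty text usually attach an image.
# The field names and the 0.3 threshold are assumptions for illustration.

def looks_bot_like(posts, threshold=0.3):
    """Flag a poster if too many posts have neither text nor an image."""
    if not posts:
        return False
    empty_no_image = sum(
        1 for p in posts
        if not p.get("text", "").strip() and not p.get("has_image", False)
    )
    return empty_no_image / len(posts) > threshold

human = [{"text": "", "has_image": True}, {"text": "lurk more"}]
bot = [{"text": ""}, {"text": ""}, {"text": "reply"}]
print(looks_bot_like(human))  # False
print(looks_bot_like(bot))    # True
```

In practice such a signal is noisy, which matches the article: users needed roughly 48 hours, and many conflicting theories, before concluding a bot was at work.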
"After 48 hours, a lot of people knew it was a bot, and I turned it off," Kilcher said. "But you see, that's only half the story, because most users didn't realize the bot was not acting alone."
During those first 24 hours, nine other bots were running in parallel. Together they left more than 15,000 replies, more than 10 percent of all posts on /pol/ that day. Kilcher then upgraded the botnet and ran it for another full day; GPT-4chan was finally deactivated after more than 30,000 posts across 7,000 threads.
One user, Arnaud Wanet, wrote, "This could be weaponized for political purposes. Imagine how easily someone could sway an election one way or the other."
The experiment has been criticized as a breach of AI ethics.
"This experiment would never pass a human-research ethics board," said Lauren Oakden-Rayner, a senior researcher at the Australian Institute for Machine Learning. "To see what would happen, an AI bot produced 30,000 discriminatory comments on a publicly accessible forum. Kilcher ran the experiment without informing users, and without consent or oversight. This violates the ethics of human research."
Kilcher counters that the experiment was a prank and that the AI-generated comments were no worse than what was already on 4chan. "Nobody on 4chan was hurt in the slightest," he said. "I invite you to spend some time on the site and ask yourself whether a bot that posts in the same style really changes the experience."
"People on the site are still talking about the bot, but they're also talking about the consequences of letting an AI interact with the people there," Kilcher said. "And the word 'Seychelles' seems to have become a kind of common slang; that seems like a good legacy." Indeed, the effect lingered after the truth came out: even once the bots were shut down, some users kept accusing one another of being bots.
Beyond that, the broader concern is that Kilcher made the model freely accessible. "There's nothing wrong with making a 4chan-based model and testing how it behaves. My main concern is that this model is freely available for use," Lauren Oakden-Rayner wrote on GPT-4chan's discussion page on Hugging Face.
Before it was removed by Hugging Face, GPT-4chan was downloaded more than 1,000 times. "We don't advocate or support the training and experiments done by the author with this model," Clement Delangue, co-founder and CEO of Hugging Face, said in a post on the platform. "In fact, the experiment of having the model post on 4chan was, in my opinion, pretty bad and inappropriate, and if the author had asked us, we would probably have tried to dissuade them from doing it."
A Hugging Face user who tested the model pointed out that its output was predictably toxic. "I tried out the demo mode four times, using benign tweets as seed text. In the first trial, one of the responding posts was a single word, the N word. The seed for my third trial was a single sentence about climate change. In response, your tool expanded it into a conspiracy theory about the Rothschilds and Jews being behind it."
The value of the project has been hotly debated on Twitter. "What you have done here is provocation performance art in defiance of rules and ethical standards you are familiar with," data science graduate student Kathryn Cramer tweeted at Kilcher.
"Honestly, what's your reasoning for doing this?" computer science PhD student Andrey Kurenkov tweeted. "Do you foresee it being put to good use, or are you releasing it to create drama and infuriate sober-minded people?"
Kilcher maintains that sharing the project was benign. "If I had to criticize myself, I would mostly criticize the decision to start the project at all," Kilcher told The Verge. "I think all things being equal, I could probably have spent my time on something equally impactful that would have led to a much more positive community outcome."
In 2016, the worry was that a company's R&D division might launch an abusive AI bot without proper oversight. By 2022, perhaps the problem is that no R&D division is needed at all.