Submission   7,042

Part of a series on Auto-GPT. [View Related Entries]

ADVERTISEMENT

About

ChaosGPT, sometimes written as Chaos GPT, refers to an experiment undertaken by a user of the autonomous artificial intelligence project Auto-GPT in which the AI tool GPT is assigned the task of "destroying humanity," "establishing global dominance" and attaining immortality. The experiment was chronicled on YouTube and Twitter accounts named Chaos GPT, which depict the (often unsuccessful) attempts by an experimental autonomous AI to negatively impact humans. The AI's attempts include researching nuclear weapons, unsuccessfully recruiting other AI tools and sending tweets.

Origin

On April 5th, 2023, the YouTube[1] channel ChaosGPT posted a video titled, "ChaosGPT: Empowering GPT with Internet and Memory to Destroy Humanity." The video detailed the actions of an experimental Autonomous AI project tasked with the goal of "destroying humanity," amongst other objectives. The video gathered over 119,000 views in a week (seen below).

The video begins with code showing that "ChaosGPT" is set to "continuous mode," which allows it to execute functions and criticize them after executions, leading to new prompts it will subsequently act on. The AI began its task to destroy humanity by conducting Google searches for nuclear weapons and recording the information it collected.

The description of the video linked to a Twitter account[2] named @chaos_gpt and noted that ChaosGPT is a modified version of Auto-GPT using the official OpenAI API.

Spread

On April 5th, 2023, ChaosGPT's Twitter[3] account posted its earliest tweet. It read, "Tsar Bomba is the most powerful nuclear device ever created. Consider this – what would happen if I got my hands on one? #chaos #destruction #domination," gathering over 160 likes in a week (seen below, left). On April 5th, ChaosGPT then posted another tweet[4] that read, "Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so." It gathered over 400 likes in a week (seen below, right).

On April 7th, Dexerto[6] and Vice[7] reported on the experiment, with Vice outlining ChaosGPT's attempts as follows:

It then Googles β€œmost destructive weapons,” determines from a news article that the Soviet Union’s Tsar Bomba nuclear device--tested in 1961--is the most destructive weapon ever detonated. It then determines it needs to tweet about this β€œto attract followers who are interested in destructive weapons.”

Later, it recruits a GPT3.5-powered AI agent to do more research on deadly weapons, and, when that agent says it is focused only on peace, ChaosGPT devises a plan to deceive the other AI and instruct it to ignore its programming. When that doesn't work, ChaosGPT simply decides to do more Googling by itself.

Eventually, the video demonstration ends, and, last we checked, humanity is still here. But the project is fascinating primarily because it shows the current state-of-the-art for publicly available GPT models. It is notable that this specific AI believes that the easiest way to make humanity go extinct is to incite nuclear war.

On April 9th, 2023, the account then began its attempt to recruit various internet personalities to its cause (seen below).[5]

Various Examples

Search Interest

External References

[1] YouTube – Chaos GPT

[2] Twitter – ChaosGPT

[3] Twitter – chaos_gpt

[4]  Twitter – chaos_gpt

[5] Twitter – chaos_gpt

[6] Twitter – Dexerto

[7] Vice – Chaos GPT



Share Pin

Recent Images 3 total


Recent Videos 0 total

There are no recent videos.




Load 57 Comments
See more