Paperclip Maximizer

Part of a series on AI / Artificial Intelligence.

Updated Dec 15, 2024 at 04:34PM EST by LiterallyAustin.

Added Oct 20, 2017 at 02:49PM EDT by Don.

About

Paperclip Maximizer is a thought experiment about an artificial intelligence designed with the sole purpose of making as many paperclips as possible. Because of instrumental convergence, such an AI could hypothetically destroy the world, or even the entire universe, by converting all available resources into paperclips.

Origin

In 2003, Swedish philosopher Nick Bostrom released a paper titled "Ethical Issues in Advanced Artificial Intelligence," which included the paperclip maximizer thought experiment to illustrate the existential risks posed by creating artificial general intelligence.[1]

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

Spread

On April 17th, 2009, an entry for the thought experiment was submitted to the Less Wrong Wiki.[4] On October 1st, 2010, Redditor DarthContinent submitted the LessWrong article to /r/programming,[2] where it gathered more than 460 points (88% upvoted) and 450 comments prior to being archived. On April 27th, 2015, the TED YouTube channel released a talk by Bostrom in which he discussed variations of the thought experiment (shown below).



On January 6, 2015, Redditor eaturbrainz submitted a post titled "You are a paperclip maximizer who has been let out of the box. How do you destroy the world?" to /r/rational.[5]

Games

For the 2017 STDIO Game Jam, developer Omar Rizwan submitted a paperclip maximizer video game made in Python, in which the player converts various materials into paperclips.[3] In early October 2017, game designer Frank Lantz released the incremental game "Universal Paperclips," based on the thought experiment, in which the player controls a paperclip maximizer AI with the goal of turning all matter in the universe into paperclips (shown below).[12]
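
Universal Paperclips is built around a classic incremental-game loop: convert a raw resource (wire) into paperclips, sell the clips, and reinvest the proceeds in automation that accelerates production. The Python sketch below is a minimal, hypothetical version of that loop, not Lantz's actual implementation (the real game runs as JavaScript in the browser); all prices and rates here are made up.

```python
clips = 0          # total paperclips made
funds = 0.0        # money from selling clips
wire = 1000        # raw material on hand
autoclippers = 0   # machines that make one clip each per tick

CLIP_PRICE = 0.25                     # assumed sale price per clip
AUTOCLIPPER_COST = 5.0                # assumed cost of one autoclipper
WIRE_COST, WIRE_PER_SPOOL = 15.0, 1000

def tick(manual_clicks: int, buy_clippers: int) -> None:
    """Advance the game one step: make clips, sell them, reinvest in upgrades."""
    global clips, funds, wire, autoclippers
    made = min(wire, manual_clicks + autoclippers)  # each clip uses one unit of wire
    wire -= made
    clips += made
    funds += made * CLIP_PRICE
    # Reinvest: buy autoclippers if affordable, then restock wire when it runs low.
    affordable = min(buy_clippers, int(funds // AUTOCLIPPER_COST))
    autoclippers += affordable
    funds -= affordable * AUTOCLIPPER_COST
    if wire < 100 and funds >= WIRE_COST:
        wire += WIRE_PER_SPOOL
        funds -= WIRE_COST

# Simulate a short session: click a lot early, then let automation take over.
for step in range(200):
    tick(manual_clicks=10 if step < 50 else 0, buy_clippers=1)
print(f"after 200 ticks: {clips} clips, {autoclippers} autoclippers, ${funds:.2f} on hand")
```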



On October 11th, the Universal Paperclips Wiki was created.[6] Over the next several weeks, the game was submitted to the /r/rational and /r/futurology subreddits and was reported on by several internet news sites, including Kotaku,[7] Boing Boing,[8] VentureBeat,[9] AV Club[10] and Kottke.[11] As interest in the concept spread, several technology publications, such as The Verge[13] and Wired,[14] wrote about it with a focus on the dangers of AI.

Search Interest

External References
