President Joe Biden issued an Executive Order earlier today that takes some of the first official steps toward regulating the burgeoning artificial intelligence space in the United States.

The Executive Order, which spans several dimensions of AI policy, gives federal agencies instructions on how to regulate AI and orders them to research and study various aspects of the increasingly prevalent technology.

The Biden Administration and other Democrats say they hope to see Congress pass legislation on artificial intelligence. An executive order is more limited in scope than a law.

The White House put out a fact sheet about the Executive Order, detailing exactly what it says.

First, the administration has put in place new requirements for AI companies to share product-safety data with the government and set standards for what makes an AI system safe. A specific provision orders that AI not be allowed to "engineer dangerous biological materials."

As word of the news spread throughout the day, some posters welcomed the administration's new measures, sharing their reactions on social media.

The order also addresses research priorities. The administration promises to take steps to attract top AI research talent to the U.S., to ensure that "privacy-preserving techniques" for citizens are researched alongside artificial intelligence, and to work with the Justice Department to ensure AI does not advance algorithmic discrimination or get used unfairly by local or federal law enforcement. Biden also launched a new website, AI.gov.

Perhaps most interestingly for meme-makers who use many of these tools such as DALL-E or Bing Image Creator, the executive order instructs the Commerce Department to "develop guidance for content authentication and watermarking to clearly label AI-generated content" in order to help prevent scams and misinformation.

It does not, however, mandate that every piece of AI content be watermarked.




Comments 18 total

Habalabalu

I'd say that's reasonable.

0

Gumshoe

I know we all know that Biden realistically doesn't know anything about AI or much about modern technology at all, but it is good to see the admin is taking some steps to actively rein this in a bit. Now it's just a question of whether Trump is going to decide he really likes AI because Biden said it's bad, and gets the Republicans to dig in against limits on it.

2

KozaChain

A lot of these points seem reasonable on the face of it. Problem is that it's coming from the Executive, which means that it can also be undone by the Executive. Further, how much of it is flexible weasel words that can be twisted around to effectively neuter the order's intent?

Story time: We used to have information security reviews every so often for our signal section. The primary focus was on the cybersecurity branch, but there were "standards" for everyone to follow. The problem was these "standards" didn't exist. We were supposed to follow Army regulations made from DISA standards, but no one could point us to these regulations and the bar we had to meet. "Industry best practices" and "commercial security standards" are not equal to DISA and Army regulations. In other words, they meant exactly bupkis. So we were being graded on someone's fairy tale idea of what cybersecurity was, not by the actual standard determined by the agencies. The personnel doing these inspections were also paid contractors and not federal employees.

Unsurprisingly, every time this came up, we would "pass with flying colors" and "the best we've ever seen"… Which was also apparently shared by other sections on other posts.

So color me skeptical whenever I see a gov't entity talking about "standards" and "inspections" from "agencies".

-1

JimBu

Use of AI-generated content is the future. But AI needs to be used carefully and openly. This order is a good first step. But the extent to which it assumes that use of AI is “either-or” could be a problem. Content could be (1) created entirely by a human or humans, (2) mostly human-created, but with a small amount of content coming from research done by queries to AI, (3) roughly half human-created and half AI-generated, (4) mostly AI-generated, with some human edits and/or a small amount of human content, or (5) entirely AI-generated. So, which of these need an “AI-generated” watermark???

An example… I’m an amateur singer-songwriter. I recently wrote a song “AI Writes Better Songs”. I came up with the general outline of the song (basically a humorous song with ChatGPT boasting how quickly it writes lyrics, how good its lyrics are and how AI is the future of lyric-writing, regardless of how humans feel about it) and I came up with the wording of a couple lines to be used in the chorus. With those parameters, I asked ChatGPT to write the lyrics. In seconds, it responded with three verses and a chorus repeated after each verse. I was surprised by its originality in the way it worded the verses. But the rhyming wasn’t perfect. Comment continued in a reply.

1

JimBu

I edited the verses to improve the rhyming (and improve scansion & content a bit). I edited wording of the choruses so its attitude becomes increasingly aggressive toward humans. And I entirely wrote the song’s music. So lyrics are “mostly AI, with a bit of human edits” – but the song itself (once music is considered) is “about half-and-half”. Should other content like this be watermarked?

In my write-up of the song, I said it’s a co-write by ChatGPT and myself – and I’m the composer of the music. I even quoted the prompts I gave ChatGPT and mentioned my edits. Such explicit descriptions of mixed AI/human content are better than a watermark.

My write-up of “AI Writes Better Songs” (after refreshing to scroll correctly):
https://sites.google.com/view/jimbstuff/home/songs/original-songs-0-9-a-j#h.770e92e8a26381da_3

AI’s not mentioned in my write-up for another song, “Hear Lyrics Clearly”, based on misheard song lyrics (also known as “mondegreens”). It’s mostly human-written. I used both “old-fashioned” web searches and ChatGPT requests to find many mondegreens. Comment continued in another reply…

1

JimBu

(ChatGPT couldn’t respond to my first request. I think it’d never seen the word. A few seconds later, tho’, it did give examples. Should ChatGPT acknowledge a human for teaching it the word “mondegreen” whenever it responds to anybody using the word in a query?)

My write-up of “Hear Lyrics Clearly” (after refreshing to scroll correctly):
https://sites.google.com/view/jimbstuff/home/songs/original-songs-0-9-a-j#h.5a46e58d240788e5_3

1

Imabigfish

"AI companies to share data about product safety with the government and set standards for what makes an AI safe or not."
Windows 11


3

Yobov

Then they'll circumvent this by using an AI to remove the watermark!

1

Victreebong

“It does not, however, mandate that every piece of AI content be watermarked.”

Well then that’s a problem. There should be no wiggle room on this matter; it’s one or the other. And that includes previously generated content.

1

JimBu

See my comment above. There’s plenty of wiggle room. Much content sits along a spectrum of mixed human- and AI-generated content. A single watermark is not enough to identify the various levels of mixed content.

0

katakis

AI bro response?

"Sovereign nation state" barges operating in international waters and running tens of thousands of AI processing units.

I shit you not.

Sounds like a great time to return to privateering.

3

ArcadeTwo

I like the watermark AI generated content approach to regulating AI. Hopefully, they put it smack in the middle of AI generated images so that people can't just crop it out.

0