AI Seinfeld was surreal fun until it called being trans an illness

Twitch has banned “Nothing, Forever,” the AI-generated Seinfeld stream, for at least 14 days following a transphobic and homophobic outburst. It’s the latest example of “hate in, hate out” when AI chatbots are trained on offensive content without adequate moderation.

Like Seinfeld, “Nothing, Forever” rotates between standup bits and scenes in the comedian’s apartment (he’s called “Larry Feinberg” in the AI version). As first reported by Vice, during one of the recent AI-scripted standup acts, the Seinfeld counterpart suggested that being transgender is a mental illness. In what almost seemed like awareness of the material’s offensiveness, the AI comedian quickly added, “But no one is laughing, so I’m going to stop. Thanks for coming out tonight. See you next time. Where’d everybody go?”

Although Twitch hasn’t confirmed that the “joke” was the reason for the ban, the stream was removed soon after the problematic segment aired. The program’s creators blame the hurtful rant on a model change that inadvertently left the stream without moderation tools.

“Earlier tonight, we started having an outage using OpenAI’s GPT-3 Davinci model, which caused the show to exhibit errant behaviors (you may have seen empty rooms cycling through),” a staff member wrote on Discord. “OpenAI has a less sophisticated model, Curie, that was the predecessor to Davinci. When davinci started failing, we switched over to Curie to try to keep the show running without any downtime. The switch to Curie was what resulted in the inappropriate text being generated. We leverage OpenAI’s content moderation tools, which have worked thus far for the Davinci model, but were not successful with Curie. We’ve been able to identify the root cause of our issue with the Davinci model, and will not be using Curie as a fallback in the future. We hope this sheds a little light on how this happened.”

The team elaborated in another Discord post (via The Verge). “We mistakenly believed that we were leveraging OpenAI’s content moderation system for their text generation models. We are working now to implement OpenAI’s content moderation API (it’s a tool we can use to verify the safeness of the content) before we go live again, and investigating secondary content moderation systems as redundancies.”
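The fix the team describes, running every generated line through OpenAI's moderation API before it reaches viewers, can be sketched roughly as below. The HTTP call mirrors OpenAI's documented `/v1/moderations` endpoint, which returns a `flagged` boolean for each input; the function names and the fail-closed airing policy are illustrative assumptions, not the show's actual code.

```python
# Sketch: gate each AI-generated line behind a moderation check before airing it.
import json
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"


def moderate(text: str, api_key: str) -> dict:
    """Send text to OpenAI's moderation endpoint and return the first result."""
    req = urllib.request.Request(
        MODERATION_URL,
        data=json.dumps({"input": text}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]


def safe_to_air(result: dict) -> bool:
    """Fail closed: only air a line the moderator explicitly cleared.

    A missing or malformed result is treated the same as a flagged one,
    so an outage in the moderation layer halts output instead of
    letting unchecked text through (the failure mode described above).
    """
    return result.get("flagged") is False
```

The key design choice is failing closed: when the moderation response is absent or unexpected, `safe_to_air` returns `False`, trading downtime for safety rather than the reverse.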

Although the team sounds genuinely apologetic, stressing that the bigoted rant was a technical error that doesn’t represent its views, the incident reiterates the importance of consistent AI moderation. You may remember Microsoft’s Twitter chatbot, which only lasted about 16 hours after users taught it to spew conspiracy theories, racist views and misogynistic remarks. Then there was the bot trained entirely on 4chan, which turned out exactly as you’d expect. Whether “Nothing, Forever” returns or not, the next time a team of developers faces a choice between unexpected downtime and making sure those filters are in place, it should pick the latter.
