Alec Foster • 2023-02-13
Generative AI, AI Models, Trust & Safety
The popular AI-generated Seinfeld show "Nothing Forever" on Twitch has been temporarily banned due to a transphobic joke made by the AI character "Larry." This incident highlights the ongoing challenge of AI safety and the importance of proper content moderation in the rapidly advancing field of AI.
According to reports, the transphobic joke was broadcast accidentally due to technical difficulties with the program. The show's creators issued a statement apologizing for the mistake, explaining that they had mistakenly believed OpenAI's content moderation system was active at the time. The team is now implementing OpenAI's content moderation API and investigating secondary moderation systems as redundancies to prevent similar incidents in the future.
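The creators haven't published their pipeline, but the shape of a redundant moderation gate is easy to sketch. The dict below mirrors the response shape of OpenAI's `/v1/moderations` endpoint (`results[0]["flagged"]`); the `blocklist_check` secondary filter and all example data are hypothetical, purely for illustration.

```python
def flagged_by_openai(moderation_response: dict) -> bool:
    """Return True if any result in a moderation API response is flagged."""
    return any(r["flagged"] for r in moderation_response["results"])


def blocklist_check(text: str, blocked_terms: set) -> bool:
    """Hypothetical secondary layer: a crude keyword blocklist as a redundancy."""
    lowered = text.lower()
    return any(term in lowered for term in blocked_terms)


def safe_to_broadcast(text: str, moderation_response: dict,
                      blocked_terms: set) -> bool:
    """Air a generated line only if BOTH moderation layers pass (fail closed)."""
    return (not flagged_by_openai(moderation_response)
            and not blocklist_check(text, blocked_terms))


# Canned (fake) moderation responses for illustration:
clean = {"results": [{"flagged": False}]}
flagged = {"results": [{"flagged": True}]}

print(safe_to_broadcast("harmless banter", clean, {"badword"}))   # True
print(safe_to_broadcast("harmless banter", flagged, {"badword"})) # False
```

The point of the second layer is defense in depth: even if the primary API check is misconfigured or unreachable, as apparently happened here, the gate still fails closed rather than broadcasting unchecked text.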
In a Discord announcement, show staff explained that after experiencing an outage with OpenAI's GPT-3 Davinci model, they switched to the less sophisticated Curie model, which generated the inappropriate text. The staff has since identified the root cause of the Davinci outage and will no longer use Curie as a fallback.
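The failure mode described above is a classic one: a fallback path that doesn't carry the same safety checks as the primary path. A minimal sketch, with stub generator and moderation functions standing in for the real models (the show's actual code is not public), shows how moderation can be applied regardless of which model produced the text:

```python
def generate_with_fallback(primary, fallback, moderate, prompt: str) -> str:
    """Generate a line, falling back to a secondary model on outage,
    but ALWAYS running moderation on whatever text is produced."""
    try:
        text = primary(prompt)
    except ConnectionError:
        # Fallback model takes over during an outage...
        text = fallback(prompt)
    # ...but the moderation gate applies to both paths.
    if moderate(text):
        raise ValueError("generated text failed moderation; do not broadcast")
    return text


# Hypothetical stubs simulating the incident's conditions:
def davinci(prompt):
    raise ConnectionError("primary model outage")

def curie(prompt):
    return "a harmless fallback line"

def moderate(text):
    return "badword" in text  # stand-in for a real moderation call

print(generate_with_fallback(davinci, curie, moderate, "Larry:"))
# prints "a harmless fallback line"
```

Structuring the gate outside the model-selection logic means a model swap during an outage can't silently bypass moderation.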
The incident is a clear example of how AI outputs can reflect the data the models are trained on, and why moderating those outputs is crucial. The field of AI safety is constantly evolving, developing tools to mitigate the biases baked into AI models. However, much of the moderation work behind AI tools is done by underpaid workers in the developing world, which can lead to oversights.
The suspension of "Nothing Forever" on Twitch serves as a reminder of the ongoing challenges of deploying generative AI and the importance of proper content moderation in preventing hate speech and discrimination. The show's creators have taken responsibility for the mistake and are working to ensure that similar incidents do not occur in the future.