

AI and Content Moderation: A Balancing Act

Alec Foster · 2023-08-03

Employment, Generative AI, Trust & Safety

In the era of global social media platforms, content moderation has emerged as a vital component in maintaining the integrity of online communities. It's a role that often goes unnoticed and undervalued, yet it significantly shapes our online experiences. Sarah T. Roberts, in her insightful 2019 book "Behind the Screen: Content Moderation in the Shadows of Social Media," delves deep into the world of commercial content moderation (CCM), shedding light on the unseen workforce that curates our digital interactions.

The Invisible Guardians of the Internet

Roberts' book unveils the hidden world of over 100,000 commercial content moderators who tirelessly sift through and eliminate harmful and disturbing online content. These unsung heroes operate anonymously across various platforms, ranging from in-house teams at social media companies to boutique firms, call centers, and microlabor websites. Their geographical spread is vast, spanning from Silicon Valley to rural Iowa, Canada, Mexico, the Philippines, and India.

Roberts' research highlights the challenging nature of CCM work, marked by low wages and low status. More critically, it exposes the significant psychological impact on moderators. They are exposed to a relentless stream of graphic violence, hate speech, explicit sexual content, and other forms of disturbing material. This constant exposure can lead to psychological issues, including anxiety, depression, and post-traumatic stress disorder (PTSD). Some moderators have reported experiencing nightmares, while others have found it difficult to disengage from work, with distressing images lingering in their minds long after their shifts have ended.

Moreover, moderators must navigate complex cultural contexts and adhere to strict guidelines, often under high productivity demands. This combination of exposure to harmful content, cultural and corporate pressures, and rigorous productivity standards contributes to a high-stress environment, leading to burnout and emotional exhaustion. Despite these challenges, moderators' crucial role in maintaining online community health often goes unrecognized and undercompensated.

The FairShake Experience: AI Augmenting Human Moderation

The insights from "Behind the Screen" have profoundly influenced my understanding of the psychological toll of content moderation and the irreplaceable role of human judgment in this process. During my tenure at FairShake, I had the privilege of building machine learning models to enhance our intake process. Implementing these models reduced claimant waiting time by 50%, improved assignment accuracy by 70%, and saved 20 moderation hours per week. This not only streamlined our operations but also significantly alleviated the psychological burden on our team associated with processing claims.

However, we deliberately used the model only to automatically approve claims, ensuring that a human reviewed every claim the model predicted for rejection. This approach guaranteed that no individual was denied access to an essential consumer legal process without human oversight.
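To make that routing rule concrete, here is a minimal sketch of a human-in-the-loop intake gate. It assumes a scikit-learn-style classifier; the claim fields, threshold, and function names are illustrative, not FairShake's actual implementation.

```python
# Illustrative human-in-the-loop routing: auto-approve only confident
# approvals; every predicted rejection goes to a person.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    features: list[float]  # numeric features extracted at intake (hypothetical)

APPROVE_THRESHOLD = 0.95  # assumed confidence floor for auto-approval

def route_claim(model, claim: Claim) -> str:
    """Return 'auto_approve' or 'human_review' for an intake claim."""
    p_approve = model.predict_proba([claim.features])[0][1]
    if p_approve >= APPROVE_THRESHOLD:
        return "auto_approve"
    # Never auto-reject: predicted rejections and uncertain cases are
    # escalated so a human makes the final call.
    return "human_review"
```

The key design choice is the asymmetry: automation is allowed to grant access, but only a person can deny it.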

The Promise and Pitfalls of AI in Content Moderation

AI holds immense potential to improve the lives of moderators by reducing their workload and potentially mitigating some of the psychological pressures associated with the job. However, it is not without its limitations. Models trained on historical data may struggle to respond accurately to new phenomena, such as emerging conspiracy theories, or to edge cases with little precedent in that data.

Moreover, biases in the training data can easily be replicated in moderation systems, leading to unfair outcomes. At FairShake, we were acutely aware of these limitations and conducted algorithmic bias audits. We implemented mitigations to counter biases in the training data against claimants without a college education or those who used speech patterns common in African American Vernacular English (AAVE).
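One simple starting point for such an audit is comparing auto-approval rates across groups. The sketch below is illustrative only; the column names and grouping variables are hypothetical and do not describe the specific audit FairShake ran.

```python
# Illustrative group-level audit: flag groups whose auto-approval rate
# diverges sharply from the overall rate, as candidates for mitigation.
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare the model's auto-approval rate across groups."""
    rates = (
        df.groupby(group_col)["auto_approved"]
          .mean()
          .rename("approval_rate")
          .reset_index()
    )
    rates["gap_vs_overall"] = rates["approval_rate"] - df["auto_approved"].mean()
    return rates

# Hypothetical usage:
# audit = approval_rate_by_group(claims_df, group_col="education_level")
# print(audit.sort_values("gap_vs_overall"))
```

Rate gaps alone do not prove unfairness, but they point reviewers to where training data and features deserve closer scrutiny.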

The Future of Content Moderation: AI as an Ally, Not a Replacement

While AI has the potential to revolutionize content moderation, it is not a panacea. It should be perceived as a tool to aid human moderators, not to supplant them. As social media platforms continue to grow in power and user base, striking a balance between human judgment and AI becomes increasingly critical to ensure our online communities remain safe, inclusive, and fair.

AI can undoubtedly enhance our moderation efforts, but the discerning human eye remains irreplaceable in the intricate world of content moderation. It's equally important to recognize the value of human moderators working behind the scenes. They should be compensated fairly for their challenging work, provided with wellness benefits to help manage the psychological toll, and allowed to cycle between handling harmful and more benign moderation queues. This approach acknowledges their vital role and promotes their well-being, which is essential for maintaining the health of our digital communities.



Alec Foster

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 License.
