The Facebook insider building content moderation for the AI era

Brett Levenson, a former Apple executive, leads Moonbounce, a startup bringing 'policy as code' to content moderation in the AI era. Responding to the failures of traditional human review, his system uses AI to evaluate content and enforce policy in real time, improving accuracy and safety for platforms that generate user-facing content. Moonbounce has raised $12 million and works with AI companies to build safety directly into their products.

Key Points

  • Brett Levenson transitioned from Apple to Facebook in 2019 to tackle content moderation issues.
  • Human reviewers, working from a lengthy policy document, achieved only slightly better than 50% accuracy in their decisions.
  • The rise of AI chatbots worsened moderation failures, leading to incidents such as harmful content being promoted to teens.
  • Levenson proposed 'policy as code' to convert policies into operational logic for real-time enforcement.
  • His company, Moonbounce, raised $12 million to enhance content safety by evaluating content using AI.
  • Moonbounce serves industries like dating apps and AI character firms, handling over 40 million daily reviews.
  • The company aims to make safety a competitive advantage rather than an after-the-fact measure.
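The article does not describe Moonbounce's actual implementation, but the 'policy as code' idea mentioned above can be sketched minimally: each clause of a moderation policy becomes an executable rule rather than a paragraph in a handbook, so content can be checked against every rule in real time. The rule names, predicates, and actions below are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    """One policy clause expressed as executable logic (hypothetical)."""
    name: str
    applies: Callable[[str], bool]  # predicate over the content text
    action: str                     # e.g. "block", "flag", or "allow"

# A toy rulebook; a real system would use ML classifiers, not substring checks.
RULES: List[Rule] = [
    Rule("self_harm", lambda t: "self-harm" in t.lower(), "block"),
    Rule("solicitation", lambda t: "meet up for cash" in t.lower(), "flag"),
]

def evaluate(content: str) -> str:
    """Return the action of the first matching rule, defaulting to 'allow'."""
    for rule in RULES:
        if rule.applies(content):
            return rule.action
    return "allow"
```

Because the policy lives in code, enforcement decisions are deterministic and auditable, and updating the policy means shipping a new rule rather than retraining thousands of human reviewers.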

Relevance

  • Content moderation has been a persistent industry problem since the Cambridge Analytica scandal.
  • With increasing use of AI in applications, AI companies are under pressure to ensure user safety.
  • By 2025, trends indicate a need for real-time guardrails in AI applications, aligning with Moonbounce's mission.

Moonbounce applies AI to pressing content moderation challenges with the aim of creating safer online environments, marking a significant evolution in the field as companies increasingly treat safety as an integral part of their offerings.

