
Our Content Moderation Policy for AI-Powered Features

Soumo Shekhar Nath

Founder, Vibratom Studios

Artificial Intelligence is a transformative technology that allows us to build powerful features that were once the stuff of science fiction. In our app SimplySub, for example, we use AI to automatically transcribe audio into text, saving creators countless hours of manual work.

However, with great power comes great responsibility. AI models, particularly large language models, are trained on vast amounts of data from the internet. This means they can sometimes generate content that is inaccurate, biased, or harmful.

As we continue to integrate AI into our suite of tools, we believe it is crucial to be transparent about our approach to content safety and moderation. Our policy is built on two core principles: user control and technical safeguards.

The Principle of User Control

We believe that you, the user, should always be the ultimate authority and editor of your own content. Our AI is designed to be a powerful assistant, not an unquestionable author.

  • AI as a Starting Point, Not an Endpoint: In SimplySub, the AI-generated transcript is presented as a draft. It's a huge time-saver, but we make it clear that it's the user's responsibility to review, edit, and approve the final text. The user interface is built around our interactive editor, which empowers the user to easily correct any errors or rephrase sentences.
  • No Autonomous Generation: Our current AI features are designed to process content that you provide (e.g., the audio from your video). We do not currently offer features where a language model autonomously generates creative text (like writing a blog post for you). This "human-in-the-loop" approach ensures that the user is always in control of the final output; a minimal sketch of the idea follows this list.
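
To make the draft-then-approve workflow concrete, here is a minimal sketch in Python. It is illustrative only, not our production code: the names (Transcript, Status, approve, export) are placeholders we invented for this post. The point it demonstrates is that AI output starts as a draft and only an explicit user action can finalize it.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    DRAFT = auto()      # fresh AI output, not yet reviewed by a human
    APPROVED = auto()   # the user has reviewed and signed off


@dataclass
class Transcript:
    """An AI-generated transcript that stays a draft until a human approves it."""
    segments: list[str]
    status: Status = Status.DRAFT

    def edit(self, index: int, corrected_text: str) -> None:
        # The user, not the model, has the final say over every segment.
        self.segments[index] = corrected_text

    def approve(self) -> None:
        # Only an explicit user action promotes the draft to a final transcript.
        self.status = Status.APPROVED


def export(transcript: Transcript) -> str:
    # Exporting is gated on human approval: drafts never leave the editor.
    if transcript.status is not Status.APPROVED:
        raise ValueError("Transcript must be reviewed and approved before export.")
    return "\n".join(transcript.segments)
```

The design choice worth noting is that the gate lives in the export path, not the editor: the user can always see and change the draft, but nothing the model produced can be published without a human sign-off.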

The Principle of Technical Safeguards

While we prioritize user control, we also implement technical measures to reduce the likelihood of our tools producing harmful content.

  • Choosing and Fine-Tuning Models: We carefully select our AI models from reputable providers who have their own robust safety filters. When possible, we fine-tune these models to be specialized for their specific task (like transcription), which narrows their scope and reduces the chance of them generating unrelated or inappropriate content.
  • Content Filtering APIs: For any future features that involve more generative capabilities, we will pass all AI-generated text through a separate content filtering API (see the sketch after this list). These APIs are designed to detect and block various categories of harmful content, including:
    • Hate speech
    • Harassment
    • Explicit content
    • Dangerous misinformation
  • Restricting High-Stakes Domains: We are committed to not using generative AI in high-stakes domains where inaccuracies could cause significant harm, such as providing medical, legal, or financial advice. Our tools are for creativity and productivity, and we will keep their scope focused on those areas.
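
Here is a minimal sketch of what that filtering step could look like. It assumes a generic moderation service: the classify callable and the category names are placeholders standing in for a real third-party content filtering API, not a specific vendor's interface.

```python
# Categories we would refuse to publish. The names mirror the list above;
# a real filtering API will have its own taxonomy.
BLOCKED_CATEGORIES = {
    "hate_speech",
    "harassment",
    "explicit_content",
    "dangerous_misinformation",
}


def moderate(text: str, classify) -> str:
    """Gate AI-generated text behind a content filtering step.

    `classify` is a placeholder for a content filtering API call that
    returns a dict mapping category names to booleans (True = flagged).
    """
    flags = classify(text)
    violations = BLOCKED_CATEGORIES & {
        category for category, flagged in flags.items() if flagged
    }
    if violations:
        # Block the output entirely rather than attempting to sanitize it.
        raise ValueError(
            f"Generated content blocked: {', '.join(sorted(violations))}"
        )
    return text
```

The deliberate choice here is fail-closed behavior: if any blocked category is flagged, the text is rejected outright instead of being partially scrubbed and shown to the user.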

Our Commitment to Transparency

The field of AI ethics is evolving rapidly, and so are we. We are committed to continuously learning and refining our approach. We will always be transparent about how we use AI in our products and what measures we are taking to ensure it is used responsibly.

Our goal is to harness the incredible power of AI to build tools that are helpful and empowering, while always prioritizing the safety and control of our users. As we roll out new AI-powered features, we will update our policies and continue to share our philosophy with you, our community. If you have questions or concerns about our use of AI, we encourage you to contact us.