
AI and the Future of Free Speech

Today, the Supreme Court begins consideration of cases that will impact our collective future interacting via online platforms like LinkedIn, Facebook, X, TikTok, Wikipedia, Reddit, and more. Reading the drama around this critical legal moment, I'm struck by the complete invisibility of the end user's experience - their right to free speech and their desire to enjoy an online experience free of whatever they, individually, find offensive. Where is the consumer in all of this? The individual American? As business leaders, we can NEVER forget the customer. To shift the focus back where it belongs, I place each of us, the American consumer, at the heart of my approach and paint a picture of how humans, partnering with AI, can create the online forums we desire.


Image made in partnership with AI: Dr. Lisa + Microsoft Designer

First, I quickly explain the legal situation. Then, I provide a vision for AI-enabled free speech and content moderation that also maintains profitability for social media providers.


Legal Cases Summary

The two main issues before the Supreme Court stem from social media laws passed in 2021 by Florida and Texas. The laws challenged in Moody v. NetChoice and NetChoice v. Paxton include "must-carry" provisions and content moderation transparency requirements.


Must-Carry Provisions

These provisions are about whether social media platforms can be legally required to host speech they would prefer not to, effectively compelling platforms to carry content against their wishes. The platforms argue that this infringes on their First Amendment rights by forcing them to disseminate speech they find objectionable, analogous to a newspaper being compelled to publish an op-ed it finds distasteful. The states argue that these rules simply regulate the conduct of platforms acting as public forums and do not infringe on their speech rights.


Transparency Requirement

The laws also impose requirements on platforms to be transparent about their content moderation practices, specifically about why they remove or reduce the visibility of posts. This aspect of the law demands that platforms provide users with explanations for content removal or demotion decisions, aiming to make the platforms’ operations more transparent and accountable. The concern from the platforms is that these requirements could be overly burdensome, potentially forcing them to disclose proprietary or sensitive decision-making processes or, conversely, to simplify content moderation practices in a way that harms the platform’s ecosystem.


Together, these issues frame a significant legal debate over the balance between regulating the power of major tech platforms to control public discourse and protecting those platforms’ rights to manage their services as they see fit. The Supreme Court’s decisions on these matters could profoundly impact not just the future of social media regulation but also the broader landscape of digital speech and platform governance.


Having Our Cake AND Eating It, Too

Everyone I know wants social media to both support free speech AND not assail them with offensive content. So, let’s assume that both of these desires are supported by our laws, and I’ll paint you a picture of how AI could bring this to life.


Before AI, the idea of social media companies tailoring content moderation to each user's preferences at scale was unfathomable. Now, thanks to technological advancements, this level of personalized content moderation is feasible.


A Vision for AI-Enhanced Transparency and Content Moderation

Imagine a social media platform, let’s call it “FutureNet,” that uses AI to navigate the challenges of both transparency in content moderation and the must-carry requirements. FutureNet’s AI system, “ModAI,” is designed not only to moderate content in line with community standards but also to provide clear, understandable explanations for its decisions, directly addressing the user’s right to free speech while maintaining a respectful and safe online environment.
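To make the explanation side concrete, here is a minimal sketch of what a ModAI decision record could look like - a single object that captures what was done, why, and which community guideline applies. Everything here is hypothetical: ModAI and FutureNet are imagined systems, and the field names, actions, and URL are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: ModAI and FutureNet are imagined, not real systems.

class Action(Enum):
    ALLOW = "allow"        # content passes without intervention
    TAG = "tag"            # content stays up, behind a content warning
    RESTRICT = "restrict"  # content removed or demoted (rare, clear violations)

@dataclass
class ModerationDecision:
    post_id: str
    action: Action
    rationale: str           # plain-language explanation, visible to all users
    guideline_url: str       # direct link to the community standard applied
    appealable: bool = True  # every intervention can be challenged

# Example: tagging rather than removing a borderline post.
decision = ModerationDecision(
    post_id="post-123",
    action=Action.TAG,
    rationale="Tagged with a graphic-content warning; the post does not violate "
              "FutureNet's standards, so it remains visible to users who opt in.",
    guideline_url="https://futurenet.example/guidelines#graphic-content",
)
print(decision.rationale)
```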


How It Works: FutureNet’s Free Speech-First Approach

  1. Enhanced Content Evaluation: ModAI is designed with a free speech-first philosophy, prioritizing the right to express diverse opinions. The AI evaluates content with a bias towards inclusivity, intervening only when content clearly violates universally accepted standards - for example, prohibitions on hate speech, incitement to violence, or illegal activity. The emphasis is on safeguarding against harm while maximizing freedom of expression.

  2. AI Identifying and Tagging Sensitive Content: Instead of removing content, ModAI tags potentially sensitive or controversial posts with content warnings, allowing users to decide what they want to engage with. This approach respects user autonomy and diverse sensibilities without outright censoring speech.

  3. Transparent Decision Rationale: When intervention is necessary, ModAI provides a clear, concise explanation accessible to all users, not just the content creator. These explanations detail why certain content was tagged or restricted, linking directly to FutureNet’s community guidelines to educate users about free speech boundaries on the platform.

  4. Empowering User Choices: FutureNet empowers users with customizable content filters and preferences, enabling them to shape their own experience according to their comfort levels. This user-driven approach to content curation allows for a personalized online environment that respects individual freedom and responsibility (a minimal sketch of this tag-and-filter flow follows this list).

  5. Open Feedback and Appeal Process: Recognizing the importance of dialogue in a free speech ecosystem, FutureNet offers an open feedback loop and an easy appeal process for moderation decisions. Users can challenge ModAI’s decisions, contributing to a constantly evolving understanding of community standards. Human moderators play a crucial role in the appeal process, ensuring that nuanced human judgment complements AI efficiency.

  6. Community-Driven Standards: FutureNet’s community standards are developed with broad user input, reflecting a collective agreement on the balance between free expression and safety. These standards are regularly reviewed and updated through community consultations, ensuring they remain relevant and reflective of diverse user values.

  7. Transparency Reports: FutureNet publishes detailed transparency reports, offering insights into moderation actions, the effectiveness of AI systems, and user feedback trends. These reports underscore FutureNet’s commitment to openness and accountability in promoting free speech.
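To show how points 2 and 4 could fit together, here is a minimal sketch of the tag-and-filter flow: ModAI tags posts, and each user's own preferences decide what appears in their feed. The tag names, field names, and preference model are illustrative assumptions, not a description of any real platform's system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of FutureNet's tag-and-filter flow:
# ModAI tags posts; each user's own preferences decide what they see.

@dataclass
class Post:
    post_id: str
    text: str
    tags: set[str] = field(default_factory=set)  # e.g. {"politics", "graphic"}

@dataclass
class UserPreferences:
    hidden_tags: set[str] = field(default_factory=set)  # never show to this user
    warned_tags: set[str] = field(default_factory=set)  # show behind a warning

def render_feed(posts: list[Post], prefs: UserPreferences) -> list[str]:
    """Apply one user's own filters; nothing is deleted platform-wide."""
    feed = []
    for post in posts:
        if post.tags & prefs.hidden_tags:
            continue  # hidden for this user only; still visible to others
        warned = post.tags & prefs.warned_tags
        if warned:
            feed.append(f"[content warning: {', '.join(sorted(warned))}] {post.text}")
        else:
            feed.append(post.text)
    return feed

posts = [
    Post("p1", "A heated policy debate.", {"politics"}),
    Post("p2", "A cute dog video."),
    Post("p3", "Crime-scene footage.", {"graphic"}),
]
prefs = UserPreferences(hidden_tags={"graphic"}, warned_tags={"politics"})
for line in render_feed(posts, prefs):
    print(line)
```

The key design point: the same post can be hidden for one user, warned for another, and shown plainly to a third - moderation becomes a per-user view, not a platform-wide deletion.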


Balancing Free Speech with Advertiser Confidence

Recognizing that FutureNet's revenue depends on advertisers having control over the content their brands appear alongside, the platform takes the following approach:


  1. Content Segmentation for Advertisers: Develop advanced AI tools that allow advertisers to segment content and choose the environments where their ads are displayed. By using AI to categorize content based on themes, language, and user engagement, FutureNet can ensure that ads appear alongside content that aligns with the brand’s values and target audience preferences.

  2. Brand Safety Tools: Introduce comprehensive brand safety tools and controls that allow advertisers to exclude their ads from appearing next to content they deem inappropriate for their brand. These tools can include keyword blacklists, topic exclusions, and the option for manual review of placement settings, giving advertisers peace of mind and control over their ad placement (a minimal sketch of these controls follows this list).

  3. Ad Quality Score: Implement an ad quality scoring system that rewards advertisers who engage in ethical advertising practices and align with FutureNet’s commitment to free speech and respectful discourse. This can encourage a healthier advertising ecosystem and attract brands that share the platform’s values.

  4. Transparent Advertising Policies: Clearly communicate FutureNet’s advertising policies, emphasizing the platform’s dedication to free speech while outlining the boundaries of acceptable advertising content. Transparency about how ads are moderated and the criteria used can build trust with advertisers and ensure alignment with community standards.

  5. Engagement with Advertiser Councils: Create an advertiser council comprising representatives from key industries and brands to regularly discuss concerns, feedback, and suggestions for improving the advertising experience on FutureNet. This engagement ensures that advertisers feel heard and valued, fostering a collaborative relationship.

  6. Sponsored Content and Native Advertising Opportunities: Offer innovative advertising formats like sponsored content and native advertising, which seamlessly integrate with the user experience. These formats can be more appealing to advertisers aiming to engage with users in a non-disruptive manner, aligning with the platform’s ethos.

  7. User Consent and Preferences for Ads: Empower users with the ability to control the types of ads they see, including opting into or out of certain advertising categories. This not only enhances the user experience but also increases the value for advertisers by targeting users who are more receptive to their messages.

  8. Educational Initiatives for Advertisers: Launch educational initiatives that highlight the benefits of advertising on a platform committed to free speech, including access to a diverse and engaged user base. Showcase success stories and data-driven insights that demonstrate the positive impact of advertising in a free speech-oriented environment.
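As one concrete illustration of points 1 through 3, here is a minimal sketch of how an advertiser's brand safety settings might gate ad placement against the content tags ModAI produces. The category names, the keyword blacklist, and the simple quality-score threshold are all illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an advertiser's brand safety settings gate placement
# against the content tags ModAI already produces.

@dataclass
class BrandSafetyProfile:
    advertiser: str
    excluded_topics: set[str] = field(default_factory=set)
    keyword_blacklist: set[str] = field(default_factory=set)
    min_quality_score: float = 0.0  # minimum acceptable ad-slot quality score

def eligible_placement(profile: BrandSafetyProfile,
                       content_tags: set[str],
                       content_text: str,
                       slot_quality_score: float) -> bool:
    """Return True only if the slot satisfies all of the advertiser's controls."""
    if content_tags & profile.excluded_topics:
        return False  # topic exclusion
    if set(content_text.lower().split()) & profile.keyword_blacklist:
        return False  # keyword blacklist
    return slot_quality_score >= profile.min_quality_score

profile = BrandSafetyProfile(
    advertiser="AcmeToys",
    excluded_topics={"graphic", "politics"},
    keyword_blacklist={"violence"},
    min_quality_score=0.7,
)
print(eligible_placement(profile, {"pets"}, "A cute dog video.", 0.9))   # True
print(eligible_placement(profile, {"politics"}, "Policy debate.", 0.9))  # False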


The Impact

This AI-driven approach allows FutureNet to create a dynamic, user-centric online environment that is also profitable. Users feel heard and more in control of their experience, regardless of their political leanings, and they are better informed about content moderation practices; advertisers, in turn, feel confident in their brand safety. This enables FutureNet to navigate legal requirements effectively, maintaining a balanced and respectful platform that values free expression and individual preferences.


Conclusion

As we reflect on the Supreme Court's deliberations and on how AI can shape (and already is shaping) our online experiences, we must remember that at the core of these discussions are real people - our rights, our voices, and our desires for a respectful digital environment. Essentially, we must remember the core tenet of good business - the customer comes first. Further, we must not make decisions based on the outdated mindset that free speech AND content moderation cannot coexist. It IS possible! These can - and should - co-exist in our social media platforms. Leaning into what your customers value is always good business. By keeping people at the center of every decision, we can create a digital public square that is profitable because it mirrors our personal and collective values.
