
EU Code of Conduct: Microsoft and Meta Clash Over Digital Standards

by Omar El Sayed - World Editor

Meta Rejects EU AI Voluntary Code, Citing Legal Uncertainties and Overreach


Meta Platforms, the parent company of Facebook, has officially rejected a voluntary code of conduct for artificial intelligence developers, citing concerns that it introduces “legal uncertainties” and extends beyond the scope of the EU’s AI Act. The decision marks a notable divergence from other major AI players such as OpenAI and Mistral, which have already signed the “Code of Good Voluntary Practices.”

The code, developed by 13 independent experts, aims to provide legal clarity to signatories while requiring them to publish summaries of the training data used for their general-purpose models and to establish policies for complying with EU copyright law. It is intended to complement the EU’s AI Act, which entered into force in August 2024, and applies to a wide range of companies, including Alphabet (Google), OpenAI, Anthropic, and Mistral.

Microsoft President Brad Smith expressed cautious optimism about the code’s potential, stating, “I think it’s likely we will sign. We need to read the documents.” He stressed the importance of industry collaboration with the EU’s AI Office, adding that one of the things Microsoft particularly welcomed was “the direct engagement of the AI Office with industry.”

However, Meta’s Chief Global Affairs Officer, Joel Kaplan, articulated a different viewpoint in a LinkedIn post, stating, “Meta will not sign it. This code introduces a series of legal uncertainties for model developers, as well as measures that go far beyond the scope of the AI Act.” Kaplan echoed the sentiments of 45 European companies, warning that such measures could hinder the development and deployment of advanced AI models in Europe and stifle innovation and business creation.

Evergreen Insights:

Meta’s rejection of the EU’s voluntary AI code highlights a growing tension between regulatory aspirations and the rapid pace of AI development. This situation offers several long-term insights into the evolving landscape of AI governance:

The Challenge of Voluntary Codes: While intended to foster cooperation and provide flexibility, voluntary codes often struggle to achieve broad adoption, especially when they are perceived as creating additional burdens or legal ambiguities for industry leaders. The success of such initiatives often hinges on their ability to strike a balance between promoting responsible innovation and maintaining a clear, predictable regulatory environment.
Balancing Innovation and Regulation: The EU’s AI Act represents a significant attempt to regulate AI, but the interpretation and implementation of its principles, especially in voluntary frameworks, remain a point of contention. The debate between Meta and the EU underscores the ongoing challenge of creating regulations that keep pace with technological advances without stifling groundbreaking research and development. Companies worldwide are grappling with how to foster innovation while ensuring ethical and legal compliance.
Geopolitical Implications of AI Governance: The differing approaches to AI regulation between major global players like the EU and the US (where many of these tech giants are headquartered) have significant geopolitical implications. The EU’s comprehensive regulatory framework, if widely adopted or influential, could set a global precedent. Conversely, a fractured approach, with key companies opting out of certain initiatives, could lead to divergent AI development and deployment strategies across regions.
The Importance of Legal Certainty: Meta’s primary concern, “legal uncertainties,” is a crucial factor for any business operating in a highly regulated or rapidly evolving field. Companies require clear guidelines to invest, develop, and deploy new technologies confidently. When voluntary codes introduce ambiguity, they can inadvertently create barriers to adoption rather than facilitate it. This underscores the need for regulatory frameworks, whether mandatory or voluntary, to be precise and actionable.
The Role of Industry in Shaping Regulation: The fact that the code was prepared by independent experts and that companies can choose whether to sign highlights the ongoing dialogue between regulators and the industry. Even in rejection, Meta’s detailed reasoning and alignment with other companies provide valuable feedback that could influence future iterations of AI governance. This collaborative, albeit sometimes contentious, process is essential for creating effective and adaptable AI policies.

This ongoing dialogue between regulators and AI developers will continue to shape how artificial intelligence is integrated into society, with profound implications for innovation, competition, and public trust.

What are the primary goals of the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA)?

EU Code of Conduct: Microsoft and Meta Clash Over Digital Standards

The Digital Services Act and the New Rules of Engagement

The European Union’s Digital Services Act (DSA) and Digital Markets Act (DMA) are reshaping the digital landscape, and the resulting EU Code of Conduct is proving to be a significant point of contention between tech giants like Microsoft and Meta. These regulations aim to create a fairer, more transparent, and safer digital environment for users across the EU. The core of the conflict revolves around how these companies are implementing the required changes and the level of scrutiny they’re facing. Key areas of focus include platform accountability, content moderation, and interoperability.

Core Disagreements: Microsoft vs. Meta

The clash isn’t a simple head-to-head battle, but rather a divergence in approach to compliance.

Microsoft’s Focus: Microsoft has largely positioned itself as a cooperative partner, emphasizing interoperability and open standards. Their approach with platforms like LinkedIn and Teams centers on facilitating data portability and allowing users greater control over their information. They’ve actively engaged with EU regulators, seeking clarification and offering solutions.

Meta’s Resistance (and Adjustments): Meta, encompassing Facebook, Instagram, and WhatsApp, has faced considerably more pushback. Initial concerns centered on their data usage practices, targeted advertising, and the effectiveness of their content moderation systems. While Meta has made adjustments – including offering users more control over ad personalization and increasing transparency around content ranking algorithms – concerns remain about the scale and impact of these changes. The EU has specifically questioned Meta’s approach to protecting minors online and combating illegal content.

Key Areas of Conflict & Regulatory Scrutiny

Several specific areas are driving the friction between the tech companies and EU regulators.

1. Interoperability of Messaging Services

The DMA mandates interoperability between major messaging apps. This means users should be able to communicate seamlessly across platforms like WhatsApp, Messenger, and iMessage (though Apple remains a separate, significant player in this debate).

The Challenge: Achieving true interoperability is technically complex and raises concerns about data privacy and security. Meta has argued that forcing interoperability could compromise end-to-end encryption, a key feature of WhatsApp.

EU Stance: The EU insists that interoperability can be achieved without sacrificing security, and that users deserve the freedom to choose their preferred messaging app without being locked into a single ecosystem.

2. Content Moderation and Illegal Content

The DSA places a heavy emphasis on platform accountability for illegal content hosted on their services. This includes hate speech, terrorist propaganda, and counterfeit goods.

Meta’s Struggles: Meta has consistently struggled with effectively moderating content at scale. Despite investing heavily in AI-powered moderation tools and human reviewers, illegal and harmful content continues to circulate on their platforms. The EU has expressed concerns about the speed and effectiveness of Meta’s response to flagged content.

Microsoft’s Approach: Microsoft, while not immune to content moderation challenges, has generally been seen as more proactive in addressing illegal content, particularly on platforms like LinkedIn where professional standards are expected.

3. Data Usage and Targeted Advertising

The DMA aims to curb the power of gatekeeper platforms to leverage user data for targeted advertising.

The Debate: The core issue is whether users have sufficient control over how their data is collected and used. The EU wants to ensure that users provide explicit consent for data collection and that they have the right to opt out of targeted advertising.

Meta’s Business Model: Meta’s business model is heavily reliant on targeted advertising, making changes to data usage practices particularly challenging. They’ve introduced features allowing users to limit ad personalization, but critics argue these options are not prominent enough or easy to understand.

Impact on Users: What Does This Mean for You?

These regulatory changes, and the clashes surrounding them, are ultimately intended to benefit EU citizens. Here’s how:

Increased Control Over Your Data: You’ll have more say in how your personal data is collected and used by online platforms.

Greater Choice and Competition: Interoperability will break down walled gardens and allow you to connect with friends and family across different messaging apps.

Safer Online Environment: Enhanced content moderation will help to reduce the spread of illegal and harmful content.

More Transparent Algorithms: You’ll gain a better understanding of how algorithms are shaping your online experience.

Real-World Examples & Case Studies

The EU’s Investigation into TikTok (2023-2024): The European Commission launched an investigation into TikTok’s compliance with the DSA, focusing on concerns about its content moderation practices and the protection of minors. This investigation highlights the EU’s willingness to hold platforms accountable.

Apple’s Concessions on App Store Policies (2024): Under pressure from the DMA, Apple made significant changes to its App Store policies, allowing developers more flexibility and reducing its control over the app ecosystem. This demonstrates the impact of the DMA on established tech giants.

Ongoing Scrutiny of Meta’s Ad Practices: The EU continues to scrutinize Meta’s targeted advertising and data consent practices under the DMA, and further enforcement action remains possible.
