Facebook parent Meta’s decision to withhold its latest multimodal artificial intelligence (AI) model from the European Union highlights the growing chasm between Silicon Valley innovation and European regulation.
Citing an “unpredictable” regulatory environment, Meta joins Apple in pulling back AI offerings in the region, according to a report from The Verge.
The move comes as Brussels prepares to enforce new AI legislation, raising concerns about potential impacts on innovation and competitiveness in the EU’s digital economy.
Meta’s retreat stems from uncertainties surrounding compliance with the General Data Protection Regulation (GDPR), particularly regarding AI model training using user data from Facebook and Instagram.
“Under GDPR, an individual essentially has the right to challenge any automated decision. But as AI has grown exponentially, human knowledge and understanding has not kept pace,” David McInerney, commercial manager at Cassie, a consent and preference management platform, told PYMNTS.
A critical issue facing companies like Meta is whether they can adequately explain how their AI models reach decisions.
“Businesses can say they trained their AI, and it made an automated decision. But if companies aren’t able to properly explain how that decision was made, they cannot fulfill their legal obligation in the GDPR,” McInerney said.
Some experts say the retreat of major tech companies like Meta and Apple from offering advanced AI services in the EU could significantly impact commerce by limiting businesses’ access to cutting-edge tools in the region. This regulatory-induced technology gap may hinder EU companies’ ability to compete globally, potentially slowing innovation in areas such as personalized marketing, customer service automation and AI-driven business analytics that are crucial for modern commerce.
On July 12, EU lawmakers published the EU Artificial Intelligence Act (AI Act), a pioneering regulation aimed at harmonizing rules on AI models and systems across the EU. The act prohibits certain AI practices and sets out regulations on “high-risk” AI systems, AI systems posing transparency risks and general-purpose AI (GPAI) models.
The AI Act’s implementation will be phased, with rules on prohibited practices taking effect from February 2, 2025, obligations on GPAI models from August 2, 2025, and transparency obligations and rules on high-risk AI systems from August 2, 2026. Notably, exceptions apply to high-risk AI systems and GPAI models already on the market, which face extended compliance deadlines.
This regulatory uncertainty could have far-reaching implications for the EU’s tech landscape. Despite these challenges, the situation also presents an opportunity for tech industry leadership.
“Meta has the opportunity to change the narrative and set the tone for Big Tech by prioritizing consumer privacy in a way that hasn’t been done by many large tech companies,” McInerney noted.
The tech industry is watching closely as the EU continues to grapple with balancing innovation and regulation. The outcome of this regulatory tug-of-war could shape the future of AI development and deployment in Europe, with potential ripple effects across the global tech ecosystem.
EU officials assert that the AI legislation is designed to foster technological innovation through clear regulations. They highlight the dangers of human-AI interactions, including risks to safety and security, as well as potential job losses. The drive to regulate also stems from concerns that public mistrust in AI could hinder technological progress in Europe, leaving the bloc behind superpowers like the U.S. and China.
In a related development, European Commission President Ursula von der Leyen has called for a new approach to competition policy, emphasizing the need for EU companies to scale up in global markets.
This shift aims to create a more favorable environment for European companies to compete globally, potentially easing some of the regulatory pressures on tech firms. However, it remains to be seen how this will balance with the stringent AI regulations already in motion.
As implementation of the AI Act approaches, the Commission is tasked with developing guidelines and secondary legislation on various aspects of the Act. The tech industry awaits these guidelines, particularly those on the definition of AI systems and on prohibited practices, which are expected within the next six months.