Lawmakers on both sides of the Atlantic are racing to establish artificial intelligence (AI) regulations, with California poised to vote on strict AI oversight as the U.S. Congress considers a “regulatory sandbox” for financial services.
Meanwhile, the European Union’s AI Act is set to transform the healthcare technology landscape, highlighting the complex balance between fostering AI innovation and ensuring public safety.
California lawmakers are set to vote Thursday (Aug. 15) on a landmark bill that could reshape the AI industry, as tech giants and startups alike grapple with its potential consequences.
The legislation, SB 1047, must pass the Assembly Appropriations Committee before advancing to a full Assembly vote. It would be the first of its kind in the nation to impose sweeping restrictions on AI development and deployment.
At the heart of the bill are provisions requiring companies to conduct safety tests on AI systems before public release. It would also grant California’s attorney general the authority to sue firms whose technologies cause severe harm, such as mass casualties or extensive property damage.
The proposed law has ignited fierce debate in Silicon Valley and beyond. Supporters argue it’s a necessary safeguard against unchecked AI proliferation, while critics warn it could hamper innovation in a field seen as critical to future economic growth.
“Senate Bill 1047 is fundamentally flawed as it targets AI technologies rather than their applications, posing a significant threat to the competitiveness of US AI companies, particularly smaller ones and open-source projects,” Vipul Ved Prakash, CEO and founder of Together AI, told PYMNTS. “We believe this bill will stifle innovation and unfairly burden startups. Open-source AI, crucial for responsible, sustainable and safe AI advancements, would suffer greatly.”
Tech companies, venture capitalists and AI researchers are scrambling to understand the bill’s implications, with some predicting that, if enacted, it could drive AI development out of California.
Gov. Gavin Newsom has not yet indicated whether he would sign the bill if it reaches his desk, adding another layer of uncertainty.
As Thursday’s vote looms, California’s lawmakers could set a precedent for AI regulation that reverberates beyond the state’s borders.
A new bill introduced in the U.S. Senate aims to spur AI innovation in the financial sector by creating “regulatory sandboxes” that would allow firms to experiment with AI technologies under relaxed regulatory oversight.
The “Unleashing AI Innovation in Financial Services Act” would require federal financial regulators to establish programs allowing regulated entities to test AI-powered financial products and services without fear of enforcement actions, provided certain conditions are met.
Under the proposed legislation, financial institutions could apply to conduct “AI test projects” for products that substantially use AI and may be subject to federal regulations. Applicants would need to demonstrate that their projects serve the public interest, enhance efficiency or innovation, and do not pose systemic risks or national security concerns.
If approved, companies would be granted temporary relief from specific regulations for up to one year, with the possibility of extensions. Regulators would have 90 days to review applications, with automatic approval if no decision is reached within that timeframe.
The bill mandates that regulators coordinate on joint applications and establish procedures for modifying approved projects, handling confidentiality, and addressing non-compliance. Annual reports to Congress on project outcomes would also be required.
Proponents argue the measure will help the U.S. maintain its competitive edge in financial technology. Critics, however, raise concerns about consumer protection and financial stability.
The legislation reflects growing interest in balancing innovation with regulation as AI rapidly advances. It remains to be seen how the bill will fare in Congress and what amendments may be proposed as it moves through the legislative process.
The European Union’s landmark AI Act, which came into force Aug. 1, is set to impact the medical AI sector, according to a new Nature paper. This first-of-its-kind legislation aims to foster “human-centred and trustworthy AI” while safeguarding public health and safety.
The act introduces a tiered, risk-based approach, banning practices deemed “unacceptable” while imposing strict requirements on high-risk systems. For the healthcare industry, this means most AI solutions will face heightened scrutiny.
“Most current solutions will be classified as high risk,” the paper noted, signaling a sea change for medical device manufacturers. The authors predict a surge in “regulatory complexity and costs” that could disproportionately impact smaller players.
Critics fear the act might stifle innovation, particularly among startups and SMEs. “Small and medium-sized enterprises with fewer resources are expected to suffer from the regulatory burden,” the researchers noted.
However, proponents argue the regulations are necessary to ensure patient safety in an era of rapid technological advancement. The paper emphasizes the need for “ongoing reassessment and refinement” of AI regulations to keep pace with innovation.
As the EU positions itself as a global leader in AI governance, the healthcare tech industry is bracing for significant upheaval. With the clock ticking on implementation, companies are rushing to adapt to the new regulatory landscape.