A new international artificial intelligence treaty signed by the United States, the United Kingdom and the European Union has rekindled a debate over the future of tech innovation.
The AI Convention aims to protect the human rights of those affected by AI systems. It marks a step in global efforts to regulate a rapidly advancing technology that experts say has far-reaching implications for commerce. The move also pits advocates of responsible AI development against those who fear that over-regulation could stifle progress.
“While the new international AI treaty is designed to protect human rights and ensure responsible AI use, it risks further slowing innovation in businesses that depend on rapid AI development,” Jacob Laurvigen, CEO and co-founder of AI company Neuralogics, told PYMNTS. “By rushing through complex compliance and accountability frameworks, businesses might slow down in deploying new AI solutions due to added layers of regulation. While some oversight is necessary, there’s a fine line between responsible regulation and unnecessary bureaucracy. Companies that rely on AI for innovation and productivity should be free to experiment and iterate without being overly constrained by regulations meant to prevent hypothetical risks.”
Experts say the treaty’s focus on human rights will shape future business practices. Key provisions require impact assessments for high-risk AI systems, transparency in AI-driven decisions, and strict data collection and use guidelines.
The AI Convention and the EU AI Act are separate legal instruments addressing AI. While the AI Convention focuses primarily on protecting the human rights of people affected by AI systems and has been signed but has not yet entered into force, the EU AI Act, which entered into force in August, establishes comprehensive regulations for AI technologies across all EU member states.
AI technologies increasingly permeate daily life, from smartphone assistants to autonomous vehicles and advanced medical diagnostics. The accord addresses growing concerns about AI’s potential negative impacts, including privacy infringement, algorithmic bias and job displacement.
“This sustained guidance, binding laws and ongoing scrutiny will compel all companies to be aware of the laws, regulations and implications,” Kamal Ahluwalia, president of AI company Ikigai Labs, told PYMNTS.
Ahluwalia said the treaty will foster a more ethical approach to AI development.
The industry’s response centers on the tension between innovation and regulation.
“AI thrives on unshackled creativity and data, and regulation often wants to straitjacket both,” Lars Nyman, chief marketing officer of CUDO Compute, told PYMNTS.
Vectra AI founder and CEO Hitesh Sheth offered another view.
“Artificial intelligence has opened numerous doors across industries,” Sheth said. “However, these very same doors have also allowed for expanded threats and incidents to invade.”
Global companies face challenges in complying with AI regulations across different countries. Implementation of the AI treaty will likely vary between regions, as it reflects diverse cultural, legal and ethical views on AI.
“Companies may soon find themselves spinning plates, balancing European data privacy rules against more lax U.S. AI oversight — all while dealing with the evolving legal Wild West that is China,” Nyman explained.
He said this complexity is particularly daunting for multinational corporations that operate across various regulatory environments.
Overlapping old and new regulations may force companies to focus on legal compliance instead of innovation, Laurvigen said. Smaller companies and startups could be hit hardest, as they may lack the resources to navigate complex regulations.
“The fear of non-compliance, lawsuits or reputational damage could deter companies from exploring AI’s full potential,” he added.
Nyman predicted that the treaty could lead to the rise of “ethical AI” as a business model, drawing parallels to the organic and fair-trade movements in agriculture. The shift could lead to the development of AI solutions that are not only technologically advanced but also socially responsible and ethically sound.
The treaty’s impact will likely extend beyond the tech sector, influencing healthcare, finance, education and public administration. Governments are expected to invest heavily in AI literacy programs for policymakers and the general public, aiming to foster informed discussions about the role of technology in society.
Compliance in this new landscape might become “a relentless push to stay ahead but with a lot more paperwork,” Nyman concluded.