Amazon’s AI Alignment Call Sparks Industry Debate

After Amazon called for global alignment on responsible artificial intelligence (AI) measures, industry experts are weighing in on the potential implications for international commerce and tech regulation.

The eCommerce giant’s statement, delivered by David Zapolsky, senior vice president of global public policy and general counsel, emphasized the need for global alignment to protect U.S. economic prosperity and security. This call comes a year after leading American Big Tech companies, including Amazon, agreed to the Biden administration’s voluntary commitments to manage AI risks.

“It’s like the fox suggesting a security upgrade for the henhouse,” Lars Nyman, chief marketing officer at CUDO Compute, a decentralized cloud computing platform, told PYMNTS. “While it’s commendable that they’re advocating for responsibility, one can’t help but notice the irony.”

Nyman suggested that to truly influence industry practices, Big Tech companies must lead by example, implementing transparency in AI algorithms, robust ethical guidelines, and third-party audits to ensure compliance. This sentiment reflects growing concerns about the vast amounts of consumer data held by tech giants and its use in AI development.

Challenges for Global Standards

Zapolsky emphasized the importance of transparency in AI development and deployment, citing Amazon’s AI Service Cards as an example of informing customers about AI limitations and best practices. He also stressed the need for collaboration between companies and governments, highlighting the U.S. Artificial Intelligence Safety Institute Consortium as a model for establishing AI guardrails that align with democratic values and promote responsible innovation.

The push for global alignment faces hurdles, including differing national regulations, varied ethical standards, and rapid technological advancement. The global AI market is projected to reach $190.61 billion by 2025, according to a report by MarketsandMarkets, underscoring the economic stakes involved.

“The biggest challenge for a company the scale of Amazon is that different countries may end up with very different requirements for AI,” Andrew Gamino-Cheong, chief technology officer and co-founder at Trustible, a company specializing in AI governance, told PYMNTS.

Gamino-Cheong pointed to China’s strict requirements for foundational models as an example of how regional differences could create complications. He noted that an AI system compliant in China might be nonviable for use in the U.S. or EU, creating challenges for multinational corporations.

This regulatory fragmentation could have far-reaching implications for global tech firms. Experts note that even slight differences in regulations between major markets like the U.S. and EU could create complications, potentially affecting everything from product development to market access.

Collaboration and Competition

Tech giants are urged to collaborate on AI safety standards, but such cooperation comes with complications. The tech industry’s competitive nature often conflicts with the need for shared standards and practices.

“Collaboration among big tech companies on AI safety standards is essential, yet it often feels like asking rival pirates to share a treasure map,” Nyman said. He suggested establishing industrywide consortiums focused on ethical AI practices, sharing best practices, and developing common safety protocols.

Some progress is already being made in this direction. Nicholas Rioux, chief technology officer of Labviva, an AI procurement technology company for life sciences, pointed to existing standards in eCommerce as a potential model. “Consumer trust in eCommerce is already pretty high as efforts like PCI DSS [Payment Card Industry Data Security Standard] and other standards have driven a universal level of general compliance,” he told PYMNTS.

Rioux also emphasized the importance of security in AI development, noting that companies need to invest in cybersecurity and external testing to ensure their AI systems are not vulnerable to exploitation.

Hilary Wandall, chief ethics and compliance officer at Dun & Bradstreet, told PYMNTS that companies should build upon existing standards, implementing “clearly defined policies on where and how their data is being sourced, the reliability of the source, procedures to monitor and correct outdated data, quality checks to make sure proper data stewardship capabilities are being practiced, and evaluation checks to determine whether the AI is operating as expected.”

Regarding collaboration, Wandall suggested that Big Tech companies work with various stakeholders to establish AI safety standards. She recommended they “collaborate with third-party application providers, payment system providers, and data and analytics providers to align on standards for data provenance, governance, transparency, security and resilience.”

As the global AI race intensifies, the push for responsible development is viewed as critical for shaping the future of international trade and commerce. The coming year is expected to be crucial in determining whether voluntary measures and emerging standards can effectively address the complex challenges of global AI alignment while fostering innovation and maintaining competitive advantage.