The U.S. will host a multinational summit in November to discuss safe AI development.
U.S. Commerce Secretary Gina Raimondo told The Associated Press (AP) Wednesday (Sept. 18) that this will be the “first get-down-to-work meeting” following gatherings in the U.K. and South Korea to discuss the possible dangers posed by artificial intelligence (AI).
As the AP report notes, among the topics likely to come up at the two-day meeting, planned for Nov. 20 and 21 in San Francisco, are the rise of AI-generated fakery and the question of when an AI system is capable enough, or dangerous enough, to require protective measures.
“We’re going to think about how we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors,” Raimondo said. “Because if we keep a lid on the risks, it’s incredible to think about what we could achieve.”
The meeting is expected to include representatives from national AI safety institutes in the U.S. and U.K., along with Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union, the report said.
The AP also points out that the meeting will take place after the 2024 presidential election between Vice President Kamala Harris — who helped develop the U.S. position on AI risks — and former President Donald Trump, who has pledged to overturn the White House AI policy.
In other AI news, PYMNTS wrote earlier this week about new research that suggests tech giants could gain the upper hand in generative AI, raising questions about the competitive future of the industry.
One recent paper found that massive computational requirements and network effects naturally drive market concentration, which could in turn leave a few key players with outsized influence over pricing, data control and AI capabilities. That prospect has many in the industry concerned.
“We are likely to see decreasing prices for smaller models and continued differentiation across large models,” Alex Mashrabov, CEO of Higgsfield AI, told PYMNTS, citing OpenAI’s GPT-4 for prosumer use cases and models like Flux and Llama for easy fine-tuning as examples of this differentiation.
Observers say a lack of competition in generative AI could mean higher prices and fewer choices for businesses hoping to integrate AI tools into their operations, as well as slower innovation that could hamper the development of new AI applications.
As PYMNTS has reported, Big Tech companies have in recent months been rapidly rolling out iterations of large language models (LLMs) that power chatbots.
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.