A new Commerce Department proposal requiring detailed reporting from artificial intelligence (AI) firms threatens to saddle the industry with hefty compliance costs and potentially drive companies to relocate, experts warn, as the U.S. government seeks tighter oversight of the rapidly evolving technology sector.
The Bureau of Industry and Security (BIS) released a Notice of Proposed Rulemaking on Monday (Sept. 9), aiming to ensure AI technologies are safe, reliable and protected against potential misuse by foreign adversaries or non-state actors. Under the proposed rule, major AI firms would need to provide detailed reports to the federal government on their developmental activities, cybersecurity measures and results from security testing efforts known as “red-teaming.”
“As AI is progressing rapidly, it holds both tremendous promise and risk,” Secretary of Commerce Gina M. Raimondo said in a Monday news release. “This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”
The initiative has garnered support from some industry experts.
“For advanced technologies that have huge potential like AI, we should think about misuse or potential security risks right from the start,” Crystal Morin, a former intelligence analyst in the U.S. Air Force and current cybersecurity strategist at Sysdig, told PYMNTS. “This legislation will encourage companies to be upfront and honest about their security practices and promote a secure-first approach to software design life cycles.”
However, concerns have been raised about the impact on smaller companies. Efrain Ruh, CTO and AIOps Lead at Digitate, told PYMNTS: “Having to comply with detailed reporting puts an additional burden, especially on small and mid-size companies with limited resources and personnel, that cannot afford having a dedicated team for putting this information together.”
Peko Wan, Co-CEO at Pundi X, outlined three major impacts on startups and small businesses.
“First, it could increase the regulatory burden, requiring resources to ensure compliance with new reporting requirements,” Wan said. “Second, the focus on cybersecurity and safety could drive innovation in secure AI systems. Lastly, it can boost competitiveness by fostering trust in AI technologies.”
The economic implications could be significant. Wan estimated that companies might face human resource costs between $570,000 and $815,500 annually to comply with the new mandates.
The global AI landscape might also shift due to these regulations. “We could see a similar effect with respect to other regulations, where companies decide to ‘relocate’ because there is a much more profitable ‘business case’ by just changing their hosting provider or moving their business to a geography location with less regulatory control,” Ruh said.
However, Houbing Herbert Song, an IEEE fellow, offered a different perspective. “When AI companies make decisions on where to operate or host their services, many factors are taken into account. Regulations are only one of these factors, but not the most important factor,” Song said. “In the long run, the global AI market landscape will not be affected by this regulation.”
According to Morin, lawmakers’ challenge is “to make sure the laws prioritize security without impeding business innovation. After all, their goal isn’t to drive companies away or encumber them needlessly so they fall behind their international competition.”
Ruh pointed to the industry’s fast-changing nature as a complicating factor.
“The whole industry is still evolving, and many government entities don’t yet fully understand the dynamics of these systems,” he said. “Therefore, we expect a period of uncertainty until the requirements are clearly defined and aligned with what the industry is able to provide.”
The Commerce Department is now seeking public input on the proposed rule. The AI industry’s response to these potential new requirements remains to be seen.