Voice authentication and security firm Pindrop has raised $100 million in debt financing.
The funding, announced Wednesday (July 17), comes from Hercules Capital and allows Pindrop to further develop its audio, voice, and artificial intelligence (AI) technologies for customers in industries including banking, finance, contact centers, insurance, utilities, healthcare and retail.
“Contact centers, which play a crucial role in these sectors, are becoming increasingly vulnerable as cyberattacks grow more sophisticated,” the company said in a news release, noting a recent surge in contact center fraud.
Voice authentication solutions, the company said, can help protect against this fraud. Pindrop said it has analyzed 5.3 billion calls, prevented $2 billion in fraud losses and detected 104 million spoof calls, helping guard against deepfakes.
“This funding will fuel our ongoing growth and innovation in voice and AI technologies,” said Vijay Balasubramaniyan, Pindrop’s founder and CEO.
“As cyber threats continue to evolve, our mission to stay ahead of fraudsters and protect our customers is more critical than ever. We’re excited about the future as we remain committed to driving advancements that safeguard major institutions and deliver unparalleled security in the digital age.”
Pindrop’s new funding comes as regulators look to rein in deepfake use as part of a broader effort to tame the “wild west” of AI, as PYMNTS wrote Wednesday.
For example, FCC Chair Jessica Rosenworcel this week proposed new rules requiring the disclosure of AI use in robocalls to protect against scams and misinformation.
This move is part of a larger effort by the Commission to address the challenges presented by rapidly evolving AI technologies in the communications field, including actions against deepfake voice calls used for election misinformation and fines for carriers involved in such practices.
“Bad actors are already using AI technology in robocalls to mislead consumers and misinform the public,” Rosenworcel said in a news release. “That’s why we want to put in place rules that empower consumers to avoid this junk and make informed decisions.”
At the same time, PYMNTS noted last week, AI is also used to help companies detect deepfakes.
“This technology could be used to protect company reputations or employees by verifying media content, detecting impersonation attempts and safeguarding employee privacy by preventing the spread of deepfake content that might violate employees’ privacy or be used for harassment,” Zendata CEO Narayana Pappu told PYMNTS.