A group of nine current and former OpenAI employees has signed an open letter calling out tech firms over major concerns about the risks of artificial intelligence.

In their letter, the tech workers called for more transparency in AI companies and better protections for whistleblowers who wish to raise concerns about the power of AI.

A total of 13 people signed the letter, all of whom come from some of the top players in AI — including OpenAI, Anthropic, and Google DeepMind. It was also endorsed by two men known as the “Godfathers of AI,” Yoshua Bengio and Geoffrey Hinton.

“I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” former OpenAI employee Daniel Kokotajlo said in a statement.

“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” he added.

The employees outlined a list of four demands that they said would help mitigate existing problems of inequality and misinformation in the AI space.

Here’s a look at the four principles the 13 employees said they want OpenAI and other AI companies to adopt, according to the letter.

  1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.

Business Insider has reached out to OpenAI, Anthropic, and Google DeepMind for comment on the letter.

OpenAI spokesperson Lindsey Held told The New York Times that the company is “proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk.”

“We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world,” the statement continued.
