• Google was fined on Wednesday, partly over how it trained its AI.
• French regulators leveled a €250 million (roughly $270 million) fine against the tech giant.
• The watchdog said Google broke a pledge after using news outlets’ content to train Bard, now called Gemini.

Google was hit with a roughly $270 million fine on Wednesday, in part over how it trained its AI.

French regulators say Google broke commitments it made while negotiating deals with news outlets in France to pay for their content. The watchdog alleged Google used the journalists’ content, without telling them, to train its AI chatbot Bard — since rebranded as Gemini.

Google had promised in a previous settlement to “negotiate in good faith based on transparent, objective and non-discriminatory criteria,” which the regulators referred to as “Commitment 1.”

The regulators said there are still legal questions related to the use of news content to train AI models, but “at the very least, the Autorité considers that Google breached Commitment 1 by failing to inform publishers of the use of their content for their Bard software.”

The regulators also said that Google failed to cooperate with a monitoring trustee installed as part of a previous settlement, didn’t negotiate in good faith, and didn’t provide complete revenue information to negotiating parties.

The California-based company was fined €250 million over the listed violations and did not dispute the facts, the French regulators said.

In a statement released Wednesday, Google said the fine was “not proportionate” to the allegations.

Google said it agreed to pay because it was “time to move on.”

In its statement, Google said it was focused on “the larger goal of sustainable approaches to connecting people with quality content and on working constructively with French publishers.”

“Throughout the last few years, we have been willing to discuss concerns from publishers or the FCA and that is still the case today,” Google wrote. “But it is now time for greater clarity on who and how we should be paying so that all parties can plan a course towards a more sustainable business environment.”

How tech companies train their chatbots remains a hot topic — and one that’s already been brought up in court.

In 2022, a UK regulator fined AI company Clearview roughly $9 million in connection with how it scraped biometric data for facial recognition. But that fine was overturned on appeal by a tribunal of the UK General Regulatory Chamber a year later.

The New York Times sued OpenAI late last year over its ChatGPT bot, alleging the AI firm broke the law by using the newspaper’s content to train the large language model. OpenAI has asked a judge to dismiss at least parts of the suit, claiming that the Times hired someone to “hack” its platforms — a claim the Times has denied.

Meanwhile, some publishers (including Axel Springer, Business Insider’s parent company) have reached deals with companies like OpenAI to license their content.

Correction March 20, 2024: An earlier version of this story incorrectly stated this was the first time a company had been fined in connection with its AI training. In 2022, Clearview was fined by UK regulators for scraping biometric data. That fine was later overturned on appeal.
