Apple has officially entered the AI wars, announcing a series of upcoming artificial intelligence features, including an anticipated collaboration with OpenAI that could magnify mounting privacy concerns within the tech sector.
At the Worldwide Developers Conference on Monday, the company introduced Apple Intelligence, an in-house suite of AI services coming to devices this fall. While Apple’s branded AI was the focal point of Monday’s keynote, the company also announced a highly anticipated partnership with OpenAI, albeit somewhat unceremoniously.
Starting later this year, Apple users will have free access to OpenAI’s ChatGPT model without having to create an account. ChatGPT will be integrated into Apple’s revamped Siri, allowing the AI model to scour the internet and quickly answer user questions.
Apple noted that the ChatGPT integration will be optional. Customers can opt out of OpenAI’s presence on their devices, which could go a long way toward assuaging the privacy concerns of AI-wary users.
“I think it’s brilliant,” said Maribel Lopez, an AI analyst and founder of research and strategy consulting firm Lopez Research. “Because I do think we’re going to get to a point where people are not willing to make that tradeoff.”
Apple’s assurances come amid growing concerns over OpenAI’s commitment to safety. A group of current and former employees went public earlier this month in a New York Times report with worries over the company’s financial motivations and approach to creating responsible AI.
OpenAI trains ChatGPT on user interactions and information. Generative AI trained in this way could eventually correctly guess sensitive information about a person based on what they type online, Business Insider previously reported.
“Some people are OK with that, and some people aren’t,” Lopez said. “But if you provide a platform and say there’s no way to opt out of it, that could be difficult.”
The opt-out ability on Apple devices offers customers a modicum of control amid the coming freight train of AI, Lopez added.
Apple is notably taking a safety-first approach to adopting AI. The company said Monday that it did not use customers’ private or personal data to train the foundation models that will power Apple Intelligence. Instead, the Apple models were trained on licensed data and publicly available information. The system will also run on Private Cloud Compute, an infrastructure designed to handle large AI requests privately.
Meanwhile, much of the marketing for Apple Intelligence already appears to be focused on safety and privacy protection, with advertisements boasting a “brand-new standard for privacy in AI.”
Apple’s apparent commitment to privacy, as well as its need to protect its strong brand, could explain why the company was late to the AI game, Lopez said.
“Everybody says they’re behind, but I think they took a lot of time trying to work through these things,” she said. “Because maybe Sam Altman doesn’t get sued. But Apple sure as heck does.”