The larger message of Apple’s AI strategy, unveiled at WWDC this week, is a focus on tools that solve actual user problems. Apple, which has so far been quiet on AI, has used this first draft of its vision to try to make artificial intelligence itself less annoying.

“I think when we consider what it means for artificial intelligence to be truly useful, it has to be centered on you. We believe AI’s role is not to replace our users but to empower them,” Craig Federighi, Apple’s senior vice-president of software engineering, said at a briefing after the WWDC keynote presentation. “If you think about the kind of things that make that possible, it needs to be integrated into the experience you are using all the time; it needs to be intuitive, but it also needs to be informed by your personal context. If it’s going to do that, there’s a lot of responsibility involved.”

Federighi acknowledged that AI technology comes with constraints and that Apple had to make a choice; the company chose to go its own way. It looked at how to bring GenAI capabilities to the Apple lineup through “a personal intelligence model” that runs on the device and “draws on personal context” while doing it responsibly, setting aside the traditional approach the rest of the industry has taken.

Craig Federighi, Apple’s senior vice president of Software Engineering, speaking about Apple Intelligence and privacy. (Image: Anuj Bhatia/The Indian Express)

The point Cupertino is making with “Apple Intelligence” is less about the dramatized way AI has been projected over the years and more about how artificial intelligence can actually be helpful. The company showed Apple Intelligence integrated at the system level and across its applications, using large language models to enable things like one-tap summaries of long emails, audio recording and transcription, and AI-generated text summaries. For example, one of the best uses of AI can be seen on the iPad in the brand-new Math Notes feature, part of the Calculator app, where users can write out math equations with an Apple Pencil and see them solved instantly.

In a way, Apple’s AI strategy also hinges on being different from what its peers are doing with the technology. For example, throughout the keynote presentation, Apple stayed away from explicitly discussing how powerful its AI models are. For Apple, the model size doesn’t matter. “We don’t think users care about model size. We think they care about the functionality, working, and having enough performance and enough quality that it’s useful for people every day,” said John Giannandrea, senior vice president of machine learning and AI strategy, at the same briefing.

“The right approach to this is to have a series of different models of different sizes for different use cases,” he said, adding that a 3-billion-parameter model running on an iPhone is almost certainly the most capable model running on a phone today.

At WWDC 2024 in Apple Park, Cupertino, Apple introduced Apple Intelligence, emphasizing an AI that is powerful, intuitive, integrated, personal, and private. (Image: Anuj Bhatia/The Indian Express)

Perhaps the deepest, most difficult philosophical question is how Apple navigates the AI landscape and whether or not it chooses the same path as others. But Apple made it clear that it is taking a different approach to preserving users’ privacy: much of the processing will be done on the device, unlike many GenAI services, including ChatGPT, which run in the cloud. On top of that, it has developed Private Cloud Compute as part of its personal intelligence offering to keep personal data secure.

“Cloud computing typically comes with some real compromises when it comes to privacy assurances, because if you are going to be making a request to the cloud, the cloud traditionally can receive that request and any data included in it and write it to a log file, save it to a database, or perhaps put it in a profile about you. You don’t know, and even if a company makes a promise and says, ‘Well, hey, look, we’re not going to do anything with your data,’ you have no way to verify that. The code that is running on that server could change at any time,” explained Federighi.

“We have a really clever two-part solution. One is that you don’t have to send all your data—your email, messages, photos, and documents—and store them in someone else’s cloud, where that server model could potentially probe them when needed. Instead, the intelligence on your device figures out what small bits of that information are relevant to answer this question,” Federighi continued, explaining how Private Cloud Compute works. Apple Intelligence will still have “little pieces of information” about you to improve its AI models and understand a user’s interests, habits, and so on.
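To make that two-part flow concrete, here is a minimal Swift sketch of the split Federighi describes: the device decides which small pieces of personal context are relevant to a request, and only those pieces are sent to the cloud. All type and function names below (such as `PersonalContextIndex` and `PrivateCloudSession`) are hypothetical illustrations, not Apple APIs, and the relevance scores are hard-coded where a real system would compute them with an on-device model.

```swift
import Foundation

// Illustrative sketch only: these are not Apple APIs.

struct ContextSnippet {
    let source: String    // e.g. "Mail", "Calendar"
    let text: String
    let relevance: Double // 0.0–1.0; a real system would score this on-device
}

struct PersonalContextIndex {
    let snippets: [ContextSnippet]

    // On-device step: keep only the few snippets relevant to this request.
    // A real system would score relevance against `request` with an on-device
    // model; here the scores are precomputed for illustration.
    func relevantSnippets(for request: String, threshold: Double = 0.7) -> [ContextSnippet] {
        snippets.filter { $0.relevance >= threshold }
    }
}

struct PrivateCloudSession {
    // Cloud step: receives only the minimal context and, per Apple's claim,
    // retains nothing once the response is returned.
    func respond(to request: String, context: [ContextSnippet]) -> String {
        "Answer to \"\(request)\" using \(context.count) snippet(s) of personal context"
    }
}

let index = PersonalContextIndex(snippets: [
    ContextSnippet(source: "Mail", text: "Mom's flight lands at 6:40 pm on Friday", relevance: 0.9),
    ContextSnippet(source: "Photos", text: "Vacation album from 2019", relevance: 0.1),
])

let request = "When should I leave to pick up Mom from the airport?"
let minimalContext = index.relevantSnippets(for: request) // only the flight email survives
print(PrivateCloudSession().respond(to: request, context: minimalContext))
```

The point of the sketch is the asymmetry: the full personal index never leaves the device; only the handful of snippets that clear the relevance bar accompany the request to the private cloud.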

“As we move forward with AI and rely more on increasingly personal requests, we believe it’s essential for you to know that neither Apple nor anyone else would have access to any of the information used to process your request,” Federighi said.

At WWDC 2024, Apple showcased new AI features on MacBook, iPad, and iPhone, highlighting seamless integration and user-focused innovation. (Image: Anuj Bhatia/The Indian Express)

Giannandrea said that Apple had focused on reducing hallucinations in its models, partly by using curated data. “We have very carefully selected the highest quality data that we can from the public web. We do that in a way where publishers can opt out of having their web data used in training models. In addition, we license a large amount of data, including news archives, textbooks, and stock photography, and all of that goes into these models. For our diffusion model, we have done a lot of work in-house to produce the styles,” he said. “We have put considerable energy into training these models very carefully,” he added.

If Apple’s approach to training its AI models works as promised, Apple Intelligence should be less prone to generating the kind of odd, inaccurate results seen from Google’s AI Overviews tool, which uses artificial intelligence to respond to search queries.

Apple is using its own proprietary AI models to power Apple Intelligence through and through. Its partnership with OpenAI brings ChatGPT into Apple’s offerings, including the Siri voice assistant and writing tools. The tie-up with OpenAI seems like a small part of how Apple Intelligence has been deployed, although some have read the deal as a sign of Apple’s weakness in AI.

For Federighi, the point of bringing in other AI models is their domain expertise, which adds value for Apple users and complements Apple Intelligence. “We see the capabilities of those models as really complementary to what we’re doing, as an optional thing that users can draw on. We wanted to start with the best, and we think ChatGPT from OpenAI, with the new 4o model, represents the best choice for our users today,” he said. Federighi added that Apple may bring Google’s Gemini model to Apple Intelligence at a future date, without elaborating further.

While some have backed the move, others, like Elon Musk, have criticized Apple’s OpenAI tie-up, calling it a potential security breach and threatening to ban Apple devices from his companies. Musk’s main concern centers on user data potentially being transferred to a third-party AI system operated by OpenAI. Apple, however, has prioritized user privacy and insists that no user data will be stored by OpenAI. “We’ll ask you before you go to ChatGPT,” Federighi said. “From a privacy point of view, you are always in control and have total transparency with that experience when you leave Apple’s privacy realm and go out and use that other model.”

(The writer is attending WWDC 2024 in Cupertino, California, at the invitation of Apple)