
Apple Intelligence: What Mac Admins Need to Know

Kandji Team
12 min read

One of Apple’s biggest announcements at this year’s WWDC was the upcoming release of what the company calls Apple Intelligence. But, this being Apple, it wasn’t just a jumping-on-the-bandwagon announcement about AI. Rather, it was about the distinctly Apple approach the company is taking to artificial intelligence, one that puts user benefits and protections first.

(Apple obviously intended Apple Intelligence’s acronym to match that of the more general "artificial intelligence." For the purposes of this post, we’ll use “AI” to refer to artificial intelligence in general, not to Apple Intelligence.)

Apple’s communications have focused on ways that Apple Intelligence will help everyday users do everyday things better—writing, communicating, using Siri—while fiercely protecting their privacy and security. That, in turn, should make life a bit easier for IT and security teams, too.

As an IT professional, you’ll no doubt hear from users about Apple Intelligence when it starts appearing in beta form this fall. (It's already started to appear in developer betas.) But you’ll also likely hear from your security, compliance, and governance teams, who will understandably have some questions and concerns about it. 

To help you start thinking about how you’ll respond to those various constituencies, here’s what we know so far about Apple Intelligence, based on the company’s communications at WWDC and since. TL;DR: Apple Intelligence is very different from what other vendors are doing with AI, enough so that it might be the first such tool you can confidently deploy in your organization.

What Will Apple Intelligence Do?

As we said, Apple Intelligence will focus—at least initially—on making everyday tasks easier for the average user.

Apple’s leading example: systemwide writing tools that can proofread text you’ve written or, if you wish, do the writing for you. (None of that will surprise anyone who’s used one of the many AI tools from other vendors already available today.) Importantly, these writing aids won’t be limited to Apple apps; third-party vendors will be able to leverage Apple Intelligence, too.

Apple Intelligence will make other end-user tools more effective. Among the possibilities that Apple has mentioned:

  • A list of notifications on iPhone could intelligently put the most important ones at the top. Ditto for your Mail inbox. 
  • Mail could also show you summaries of emails in your inbox; the Phone app could do the same for calls.
  • As with writing tools, Mail could draft quick email replies for you.
  • A new Focus—Reduce Interruptions—could intelligently show you just those notifications that require attention immediately.
  • Siri stands to benefit greatly. The digital assistant promises to respond better to verbal requests, in part by developing an understanding of your personal context: your most important contacts and places, as well as the particular capabilities of your device.

In other words, Apple is taking a very practical, brass-tacks approach to AI. It isn’t going to save (or destroy) the world. But it could make life a bit easier for everyday users.

Apple Intelligence: How It Will Do That

Like any AI system, Apple Intelligence will require some serious computing resources. Apple says it’s addressing this by creating a three-layer processing system.

On-Device Processing

The key to providing AI's benefits while minimizing its risks to privacy and security is that Apple Intelligence does as much of the processing as possible locally, on the device itself.

This is in part a consequence of Apple’s work building its Neural Engine into its chips, most recently and most dramatically with the M4: Over time, the company has been devoting an increasing share of on-chip real estate to these engines, which are designed to accelerate AI workloads.

To be fair, that’s not unique to Apple. We’ve seen a wave of Windows-based Copilot+ PCs that pair Microsoft’s Copilot AI features with Qualcomm’s Snapdragon X Elite processors, and Google has been doing similar work with the neural processors in its Pixel phones for several years.
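That on-device emphasis is already familiar to developers: Core ML lets third-party apps schedule inference onto the Neural Engine today. As a minimal sketch (the model file here is hypothetical; Apple Intelligence’s own models aren’t exposed through any public API):

```swift
import CoreML

// A minimal sketch: load a compiled Core ML model and let Core ML
// dispatch its layers to the Neural Engine where possible.
let config = MLModelConfiguration()
config.computeUnits = .all  // CPU, GPU, and Neural Engine are all fair game

// "SummarizerModel.mlmodelc" is a hypothetical compiled model bundle.
let modelURL = URL(fileURLWithPath: "SummarizerModel.mlmodelc")
let model = try MLModel(contentsOf: modelURL, configuration: config)
print(model.modelDescription)
```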

It should be noted that Apple Intelligence’s minimum hardware requirements are fairly demanding:

  • iPhone 15 Pro (A17 Pro)
  • iPhone 15 Pro Max (A17 Pro)
  • iPad Air (M1 and later)
  • iPad Pro (M1 and later)
  • MacBook Air (M1 and later)
  • MacBook Pro (M1 and later)
  • iMac (M1 and later)
  • Mac mini (M1 and later)
  • Mac Studio (M1 Max and later)
  • Mac Pro (M2 Ultra)

We’ll have to wait to see how Apple Intelligence performs on those different hardware platforms. But it seems unlikely that Apple would let it be a truly bad experience for users on older ones.
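For Mac admins sizing up a fleet, the practical upshot of that list is simple: any Apple silicon Mac qualifies, and no Intel Mac does. As a rough, unofficial sketch of an eligibility check in Swift (Apple hasn’t published an API for this):

```swift
import Foundation

// Rough fleet-audit sketch: read the CPU brand string via sysctl. On Apple
// silicon Macs it looks like "Apple M1" or "Apple M2 Max"; on Intel Macs it
// names an Intel part, which fails the Apple Intelligence baseline.
func cpuBrandString() -> String? {
    var size = 0
    guard sysctlbyname("machdep.cpu.brand_string", nil, &size, nil, 0) == 0 else { return nil }
    var buffer = [CChar](repeating: 0, count: size)
    guard sysctlbyname("machdep.cpu.brand_string", &buffer, &size, nil, 0) == 0 else { return nil }
    return String(cString: buffer)
}

let brand = cpuBrandString() ?? "unknown"
let eligible = brand.hasPrefix("Apple")  // any M-series Mac clears the bar
print("\(brand): \(eligible ? "meets" : "does not meet") the Apple Intelligence baseline")
```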

This on-device architecture is a particularly good fit with Apple’s priority on protecting user privacy: Because the processing happens locally, the data that Apple Intelligence relies on for those tasks never leaves the device, minimizing its potential exposure to bad actors.

Private Cloud Compute

All that said, Apple Intelligence will sometimes require more computing resources than the device itself can handle. That’s why Apple built a second level of processing power into the Apple Intelligence framework: Private Cloud Compute (PCC). As the name implies, it’s a cloud-based processing system that Apple says will be called on to process “more sophisticated requests” requiring “help from larger, more complex models in the cloud.”

Apple says that PCC is built around “compute nodes”: server hardware that relies on Apple silicon (and the built-in security that brings), along with a “hardened subset of the foundations of iOS and macOS tailored to support Large Language Model (LLM) inference workloads while presenting an extremely narrow attack surface.”

Requests from devices to PCC will, Apple says, be encrypted end-to-end:

When Apple Intelligence needs to draw on Private Cloud Compute, it constructs a request—consisting of the prompt, plus the desired model and inferencing parameters — that will serve as input to the cloud model. The PCC client on the user’s device then encrypts this request directly to the public keys of the PCC nodes that it has first confirmed are valid and cryptographically certified. This provides end-to-end encryption from the user’s device to the validated PCC nodes, ensuring the request cannot be accessed in transit by anything outside those highly protected PCC nodes. 
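Reading between the lines, the client-side pattern is “verify the node, then encrypt to its public key.” Here’s a conceptual Swift sketch of that pattern using CryptoKit’s HPKE API; the request shape, info string, and framing are our own assumptions, not Apple’s actual client code or wire format:

```swift
import CryptoKit
import Foundation

// Conceptual sketch of "encrypt a request to a validated node's public key."
// The request shape and framing are illustrative assumptions; Apple hasn't
// published PCC's actual client code or wire format.
struct InferenceRequest: Codable {
    let prompt: String
    let model: String        // the desired model, per Apple's description
    let temperature: Double  // stand-in for "inferencing parameters"
}

func encryptRequest(_ request: InferenceRequest,
                    to nodeKey: P256.KeyAgreement.PublicKey) throws -> Data {
    // In the real system, the client first confirms the node is valid and
    // cryptographically certified before trusting its key.
    let plaintext = try JSONEncoder().encode(request)
    var sender = try HPKE.Sender(recipientKey: nodeKey,
                                 ciphersuite: .P256_SHA256_AES_GCM_256,
                                 info: Data("pcc-request".utf8))
    let ciphertext = try sender.seal(plaintext)
    // The encapsulated key travels with the ciphertext, so only the target
    // node can decrypt; nothing in transit can read the request.
    return sender.encapsulatedKey + ciphertext
}
```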

As for the “Private” part of PCC, Apple says it’s focused on protecting users in five ways:

  1. Private data that’s uploaded to Apple’s cloud will be used only for the immediate processing task at hand and never retained after that task is complete. 
  2. It must be possible to confirm that any tools Apple uses to maintain PCC meet its security and privacy guarantees; Apple won’t use unverifiable tools for maintaining or monitoring the system.
  3. PCC must not include privileged interfaces that would let Apple staff bypass its privacy guarantees, even when working to resolve an outage.
  4. The PCC architecture means that attackers can’t target a specific user’s data; they can only target PCC as a whole.
  5. Security researchers must be able to verify that PCC’s privacy and security guarantees are being met.

Those researchers will have to wait until after PCC becomes available in beta this fall before they can start to assess whether Apple has indeed met these goals. But the key point is that Apple is saying that it’ll make such assessments possible.

ChatGPT

Finally, Apple will offer an optional integration between Apple Intelligence and ChatGPT, surfacing specifically in Siri and the writing tools. For example, Siri might turn to ChatGPT for questions about photos. And when it comes to composing text from scratch in a writing tool, that too might be a job for ChatGPT.

But it will always be just an option: Users will control when and if ChatGPT is invoked, and they’ll have to confirm whether or not they want their information shared with the service. (IT teams will have a say over this as well, which we’ll get to in a minute.) Using ChatGPT as part of Apple Intelligence won’t require a separate account with OpenAI, but if you do have one, you’ll be able to access the extra features that come with it from within Apple Intelligence.
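In other words, the hand-off is gated on a per-request user decision. A hypothetical sketch of that policy (none of these types exist in any public Apple SDK):

```swift
// Hypothetical sketch of the per-request consent gate Apple describes.
// These types are invented for illustration; they only model the policy:
// nothing goes to ChatGPT without explicit user approval.
enum ConsentDecision { case approved, denied }

func handle(_ prompt: String,
            askUser: (String) -> ConsentDecision,
            sendToChatGPT: (String) -> String) -> String? {
    guard case .approved = askUser("Share this request with ChatGPT?") else {
        return nil  // user declined; the request never leaves Apple's stack
    }
    return sendToChatGPT(prompt)
}
```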

What Apple Intelligence Means to IT

Apple Intelligence is expected to ship sometime this fall, at least in beta form, though when exactly that happens is still TBD. That means there are a few months to address the concerns your organization’s security and compliance teams will likely have about it. 

Their top questions are likely to be: What could Apple Intelligence do with proprietary company data? Where is that data going? Who will have access to it? How do we know it’s safe? 

Other AI tools have faced the same questions. Fortunately, Apple seems to be prepared to address them.

As noted above, Apple will deliberately limit the data that’s sent off-device. In the particular case of PCC, it has built in some seemingly rigorous guardrails to protect the privacy of end-user data.

Also, Apple isn’t just asking organizations to trust it; it’s building verifiability into the architecture. Each organization will be able to decide for itself whether it wants to trust that system. Apple has also promised to deliver a third-party audit and share the results publicly.

The partnership with OpenAI may have raised the most questions. But it’s important to remember that using ChatGPT will require user permission. OpenAI seems to be following at least some of Apple’s guidelines when it comes to data governance for Apple Intelligence: Apple says that the IP addresses of users who opt to use ChatGPT will be obfuscated. As with PCC, user requests to ChatGPT won’t be stored anywhere in the cloud, and that data will not be used to train OpenAI’s models.

The best answer to many of these concerns will likely be user education: making sure users understand your organization’s policies on generative AI in general, as well as the specific policies around opting into ChatGPT as part of Apple Intelligence.

Apple Intelligence and MDM

Apple will likely give MDM solutions like Kandji the ability to restrict access to Apple Intelligence; we’re still waiting for full details on that.

One question is whether the company will empower IT to block access to Apple Intelligence altogether. A more granular approach would make sense: Rather than giving IT and security teams a simple kill switch, Apple would be encouraging customers to assess the value of individual features in Apple Intelligence and weigh that against their potential risks.
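If Apple does go granular, the result might resemble a Restrictions payload with per-feature keys. To be clear, every Apple Intelligence key in this sketch is invented for illustration; only the payload type is real:

```swift
import Foundation

// Purely hypothetical sketch: Apple hadn't published Apple Intelligence MDM
// keys at the time of writing. "PayloadType" is the real Restrictions
// payload type; every "allow*" key below is invented for illustration.
let hypotheticalRestrictions: [String: Any] = [
    "PayloadType": "com.apple.applicationaccess",
    "allowWritingTools": true,           // hypothetical: systemwide writing aids
    "allowMailSummaries": true,          // hypothetical: inbox summarization
    "allowExternalIntelligence": false,  // hypothetical: the ChatGPT integration
]

// Serialize to the XML plist format MDM profiles actually use.
let plistData = try PropertyListSerialization.data(
    fromPropertyList: hypotheticalRestrictions,
    format: .xml,
    options: 0
)
print(String(data: plistData, encoding: .utf8) ?? "")
```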

Either way, Apple clearly wants to preemptively allay the concerns of IT and security teams by designing an AI system that truly puts privacy and security first. Given the extensive guardrails Apple has built around the technology, particularly Private Cloud Compute, Apple Intelligence looks likely to be safer for end users and organizations alike than the SaaS products they’re already using, with privacy guarantees that go well beyond those of other cloud-hosted LLM services. Just as important, Apple is building Apple Intelligence so that compliance and security teams can assess that safety for themselves.

That may be the main takeaway: Make sure your organization’s security and compliance teams are aware of Apple Intelligence now, so they’ll be ready to start examining it when the beta arrives this fall. We’ll be doing the same thing here.

About Kandji

Kandji is the Apple device management and security platform that empowers secure and productive global work. With Kandji, Apple devices transform themselves into enterprise-ready endpoints, with all the right apps, settings, and security systems in place. Through advanced automation and thoughtful experiences, we’re bringing much-needed harmony to the way IT, InfoSec, and Apple device users work today and tomorrow.