Every major tech event is now centered on artificial intelligence, as we've seen from Microsoft, OpenAI, and most recently, Apple. WWDC 2024 has to be one of the most anticipated tech events this year, and it was well worth the wait. I mean, between Apple's new partnership with OpenAI to bring ChatGPT to the ecosystem and Apple Intelligence, I can't tell which I'm more excited about.
Apple's event followed a similar script to what we've seen in the past from companies like Microsoft and OpenAI, but it was quite a spectacle as the company made its debut in the AI landscape. If you've been following the iPhone maker's journey for a minute, you'd know that it prides its services and products on being privacy- and security-centered, which is perhaps a plausible explanation for why Apple has been laid-back and reluctant to invest in and adopt the technology. Well, this is all about to change!
It's "Apple Intelligence" now, not just AI?
Apple Intelligence? I guess we'll now have to be a bit more specific when talking about AI (artificial intelligence). It's a classic Apple touch, branding its new AI system to align with the rest of its lineup. Apple defines the system as a personalized intelligence platform that combines generative AI with personal context to enhance the user experience across its devices.
According to Apple:
"It harnesses the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks."
But where does this leave security and privacy? They remain among users' biggest concerns about advances in the category. However, Apple promises to remain a household name for privacy and security with Private Cloud Compute.
Is Private Cloud Compute secure enough?
With Apple's depressed iPhone sales (especially in the Chinese market) and the short-lived Vision Pro hype, gambling with the privacy and security of its products and services could be a shot in the foot if the system doesn't work as presented on paper.
Microsoft and NVIDIA are currently reaping huge rewards from their investments in AI, with market analysts projecting that they are on the verge of hitting their iPhone moment with the technology. Both companies are riding high on the AI wave, having hit $3 trillion in market valuation and ultimately pushing Apple down to third place among the world's most valuable companies.
Apple is trying to reclaim its spot as the world's most valuable company with its recent AI efforts, all while promising to maintain users' privacy and security. The company boasts that its new Private Cloud Compute system is the key to keeping AI private, saying it can "flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers."
Sounds like Windows Recall with extra cloud
In May, Microsoft held a special Surface and Windows event unveiling the first Copilot+ PC alongside next-gen AI features, including Windows Recall, Live Captions, and more. Windows Recall has been received with mixed feelings, with users openly raising privacy and security concerns across social media.
Regulators are looking into the tool's safety, prompting Microsoft to make Windows Hello enrollment mandatory for enabling Windows Recall on Copilot+ PCs. This is in addition to making it an opt-in experience. Microsoft touts Windows Recall as a privacy-focused experience that runs on-device with its data stored locally. But we've quickly learned that even a 'mere' 171 lines of code can bypass the stringent security measures.
Unlike Windows Recall, which depends entirely on an NPU, the new Apple Intelligence features will run on-device where possible, while requests that demand more computing power will be handed off to Private Cloud Compute. Even then, the data won't be stored where third parties could access it, an approach Apple refers to as stateless computation.
Apple combines Apple silicon, the Secure Enclave, and Secure Boot to harden Private Cloud Compute against attacks, positioning it as a 'secure' platform for running LLMs.
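To make the on-device versus cloud split a bit more concrete, here's a minimal Swift sketch of how that kind of routing could look, assuming a simple compute threshold decides when a request leaves the device. The names here (AIRequest, OnDeviceModel, PrivateCloudClient, IntelligenceRouter) and the token limit are hypothetical illustrations, not Apple's actual APIs or behavior.

```swift
import Foundation

// A hypothetical request, with a rough proxy for how much compute it needs.
struct AIRequest {
    let prompt: String
    let estimatedTokens: Int
}

protocol ModelBackend {
    func respond(to request: AIRequest) async throws -> String
}

// Small model running locally on Apple silicon; nothing leaves the device.
struct OnDeviceModel: ModelBackend {
    func respond(to request: AIRequest) async throws -> String {
        return "on-device response to: \(request.prompt)"
    }
}

// Larger model on dedicated servers, modeled as "stateless": the request is
// processed and the result returned, with nothing persisted server-side.
struct PrivateCloudClient: ModelBackend {
    func respond(to request: AIRequest) async throws -> String {
        return "cloud response to: \(request.prompt)"
    }
}

// Router mirroring the "flex and scale" idea: prefer on-device processing,
// fall back to the cloud only when the task demands more power.
struct IntelligenceRouter {
    let local: any ModelBackend = OnDeviceModel()
    let cloud: any ModelBackend = PrivateCloudClient()
    let onDeviceTokenLimit = 1_500 // assumed threshold, for illustration only

    func handle(_ request: AIRequest) async throws -> String {
        if request.estimatedTokens <= onDeviceTokenLimit {
            return try await local.respond(to: request)
        }
        return try await cloud.respond(to: request)
    }
}
```

The point of this design is that the cloud path is only a fallback: lightweight requests never leave the device, and anything that does go to the server is treated as a one-off computation rather than stored state.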
Not everyone's joining the AI fray
Now more than ever, technology is rolling down the AI path, and there are seemingly no roadblocks to stop its progression. Still, it's riddled with multiple issues, including a lack of proper regulation, privacy and security concerns, and a general lack of interest from some of the user base.
Whether AI is a fad remains debatable, depending on which side of the coin you pick. However, security and privacy concerns still hold water amid reports of AI becoming smarter than humans, taking over jobs, and ending humanity. An AI researcher's findings suggest that there's a 99.9% probability the technology will end humanity, and that the only way to avert this looming danger is to stop building AI in the first place.
It'll be interesting to see how Apple takes on the AI landscape amid the security and privacy concerns deterring its progress, and whether it can reclaim its title as the world's most valuable company.