Six years ago I published a, shall we say, under-the-radar book on digital rights, called Control Shift. It was mainly concerned with how issues such as online privacy had already developed, but towards the end I wrote a speculative bit about virtual assistants. If you’ll excuse the self-quotation, I noted that “these proto-A.I.s don’t really work for us” because of their reliance on centralized, cloud-based systems—but future advances in A.I. could allow for on-device, open-source assistants that act as our trustworthy and discreet agents, rather than mere front-ends for Big Tech surveillance. This could help “fix the future,” I wrote hopefully.
Which is why I was extremely intrigued to see Qualcomm and Meta’s announcement yesterday about getting the Facebook firm’s newly open-sourced LLaMa 2 model to play nicely on Qualcomm-powered mobile devices, PCs, VR/AR headsets, and cars. People have already been playing with the first (leaked) LLaMa on Android phones, but this kind of official support should make for much better performance. Sure, Big Tech doesn’t get much bigger than Meta—a company that obviously looms large in any recap of digital rights abuses—but there are seeds of hope to be found here.
Qualcomm and Meta’s stated aim is to ensure that LLaMa 2-based A.I.s, including “intelligent virtual assistants,” can run in optimized form on devices “without relying on the sole use of cloud services.” User privacy is the first listed benefit, along with things like the ability for the A.I. to function without connectivity. Cloud independence should also work out cheaper for developers. The capabilities should be in flagship handsets next year.
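To make that concrete: neither company has published developer details yet, but the open-source community already runs LLaMa-family models entirely locally. The sketch below is only an illustration, assuming the open-source llama-cpp-python bindings and a placeholder path to a quantized Llama 2 file; Qualcomm’s own toolchain for its chips will presumably look different. The point it shows is that the prompt and the answer never leave the machine.

```python
# A minimal sketch of on-device inference, assuming the open-source
# llama-cpp-python bindings and a locally stored, quantized Llama 2 file.
# The model path is a placeholder; Qualcomm's own tooling will differ.
from llama_cpp import Llama

# Load the weights from local storage; nothing is sent to a server.
llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

# A "sensitive" request that works offline and stays on the device.
result = llm("Summarize my calendar conflicts for tomorrow:", max_tokens=128)
print(result["choices"][0]["text"])
```

Quantized weights are what make this feasible on consumer hardware at all; the same idea, tuned for Qualcomm’s silicon, is presumably what the partnership is about.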
Again, this is Meta we’re talking about. However, the Qualcomm arrangement—and, more broadly, the open-sourcing of LLaMa 2—fits into a strategic shift that may be taking shape.
Over the years, Facebook/Meta has tended to limit its own options in ways that have caused it real headaches. For example, until the EU agreed to a new data-sharing deal with the U.S. last week, there was a strong possibility that Meta would have had to stop exporting Europeans’ personal data to the U.S. Meta repeatedly warned that this would mean pulling Facebook and Instagram out of the EU, which some interpreted as a threat, but it was really a statement of fact: those networks simply don’t work when forced into regional silos.
But this month we’ve seen Meta launch Threads with the promise that it will soon become interoperable with rival networks that also use the ActivityPub protocol—a handy option for Threads itself to have, if the new U.S.-EU deal collapses like its predecessors and Meta has to split up its systems. And now the company has also taken major steps to enable more privacy-friendly virtual assistants that don’t need to send every sensitive request off to a data center, let alone one across the world. Privacy watchdogs will be all over virtual-assistant technology, so again, having a privacy-friendly option could prove handy down the line.
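For the curious, ActivityPub interoperability is not hand-waving; it is a W3C standard that services like Mastodon already speak. The sketch below, which assumes the Python requests library and uses a placeholder account handle, shows the two basic lookups a federated service performs to find and read a user on another network. Some servers require signed requests, so treat this as the happy path only.

```python
# A rough illustration of ActivityPub federation: resolve an account on
# another server via WebFinger, then fetch its public actor document.
# The handle below is a placeholder; some servers require signed requests.
import requests

handle = "someone@mastodon.social"  # placeholder account on a rival network
user, domain = handle.split("@")

# Step 1: WebFinger lookup maps the handle to an ActivityPub actor URL.
webfinger = requests.get(
    f"https://{domain}/.well-known/webfinger",
    params={"resource": f"acct:{handle}"},
    timeout=10,
).json()
actor_url = next(
    link["href"]
    for link in webfinger["links"]
    if link.get("type") == "application/activity+json"
)

# Step 2: Fetch the actor document, which lists the inbox and outbox
# that other servers use to exchange posts and follows.
actor = requests.get(
    actor_url,
    headers={"Accept": "application/activity+json"},
    timeout=10,
).json()
print(actor["preferredUsername"], actor["inbox"], actor["outbox"])
```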
In the end, it will all come down to viable business models. Of course Meta and other A.I. providers will want to maintain as clear a view as possible of how people are using their virtual assistants, but market and regulatory pushback may limit that, so it makes sense to build options into the technology. And who knows? Maybe one day I’ll get what I’ve been hoping for—a virtual assistant that really works for me.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer