Those who wish to avoid using generative A.I. for whatever reason are soon going to find that increasingly difficult, judging from the latest news about Google and Meta.
According to a Financial Times article, Meta will start sticking A.I. “personas” into Facebook and Instagram, maybe as soon as next month. The aim is apparently for these chatbots, which will have different styles of speaking, to provide search functionality and generally increase engagement on Meta’s platforms. The company would, of course, also get the benefit of learning a great deal more about its users via their interactions with the bots.
Given that European courts and regulators have shut down Meta’s ability to legally track people’s activities across multiple services and websites without express user consent, any new method of targeting advertising would prove very welcome at 1 Hacker Way.
Meanwhile, Axios reports that Google has begun work on a revamped version of Assistant, to smarten it up with generative A.I.
“As a team, we need to focus on delivering high-quality, critical product experiences for our users,” VP Peeyush Ranjan and product chief Duke Dukellis wrote in an internal email. “We’ve also seen the profound potential of generative A.I. to transform people’s lives and see a huge opportunity to explore what a supercharged Assistant, powered by the latest [large language model] technology, would look like. (A portion of the team has already started working on this, beginning with mobile.)”
This involves a reorganization, with some jobs going away as Google combines its Services and Surfaces teams (the latter of which deals with various interfaces such as Google Home speakers and Android Auto). Amazon is also working on A.I.-izing its Alexa virtual assistant, and Apple is quietly trying to figure out how to do the same with Siri.
Soon, there will be no escape. The big question is how trustworthy these assistants will be, in terms of both the quality of the information they will provide—generative A.I. tends to “hallucinate” misleading nonsense some of the time—and their discretion.
I’ve said it before and I’ll say it again: Big Tech has a terrible privacy record and no way of pushing back against U.S. intelligence demands, so the ideal virtual assistant is one that’s auditable and runs on-device rather than depending on centralized cloud services. Only that kind of assistant can be a trustworthy confidant.
Separately, check out my colleague Kylie Robison’s article on Bluesky’s first big test. Some users were signing up with racial slurs in their usernames, and a couple of the Twitter clone’s investors were unhappy with CEO Jay Graber for not speaking up. There were two big problems here: moderation is hard as a platform scales up, and Graber initially failed to properly apologize for the racist-handle issue. After a lot of pressure, she apologized to the community both for Bluesky’s moderation failures and for the team’s extended silence about them.
As Kylie writes:
Part of the challenge facing Graber and Bluesky is that they are working on two tech projects simultaneously: the user-facing app as well as an underlying protocol designed to establish an open-source framework for creating social apps by other developers. Graber and the team have not been shy to reiterate that the protocol is their vision for solving the issues that plague centralized social media companies.
On which note, I’ve just joined Bluesky myself. I’m frankly feeling quite fatigued by social media at the moment—thanks to the deeply unproductive Twitter/X drama and the fact that my Facebook feed is seemingly 80% sponsored content these days—but I’ll try to maintain a presence on Bluesky, at least for a while. My profile is here if you care to follow me.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer