Meta on Thursday revealed a sweeping set of policies and explainers about the use of artificial intelligence in its products and algorithms.
Why it matters: The tech giant has spent years trying to convince critics that it takes issues like data privacy and misinformation seriously, even when it was slow to act. Now, Meta appears focused on addressing concerns about AI sooner rather than later.
- "This is part of a wider ethos of openness, transparency and accountability," said Meta's president of global affairs Nick Clegg.
- "With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks. We believe that the best way to respond to those concerns is with openness."
Driving the news: Meta on Thursday released two sets of documents.
- The first is a detailed set of explainers showing everyday users how its algorithmic recommendation systems work.
- The second is an overview of how it uses artificial intelligence to power its content recommendations.
The company lays out which signals and predictive models it uses to help recommend personalized content to users (a simplified, hypothetical sketch of how such signals can feed a ranking model follows the list below).
- Examples of signals include basic account information, like location-related information, device information and preferred language, as well as data about a user's friends, Pages they follow and groups they are in.
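For readers who want a concrete sense of what "signals feeding predictive models" can look like, here is a deliberately simplified Python sketch based on the signal types Meta describes above (language, followed Pages, group membership). The feature names, weights and scoring logic are invented for this illustration and do not reflect Meta's actual systems.

```python
# Purely illustrative sketch of signal-based content ranking.
# The signal names, weights and scoring logic are invented for this
# example; they do not reflect Meta's actual models or features.
from dataclasses import dataclass, field


@dataclass
class UserSignals:
    country: str
    device: str
    language: str
    followed_pages: set = field(default_factory=set)
    groups: set = field(default_factory=set)


@dataclass
class Candidate:
    post_id: str
    source_page: str
    language: str
    topic_group: str | None = None


def score(user: UserSignals, post: Candidate) -> float:
    """Combine simple signal matches into one relevance score.

    A real recommendation system would use learned models over far more
    signals; this hand-weighted sum only shows the general shape.
    """
    s = 0.0
    if post.language == user.language:
        s += 1.0  # language match
    if post.source_page in user.followed_pages:
        s += 2.0  # user already follows the source Page
    if post.topic_group and post.topic_group in user.groups:
        s += 1.5  # post relates to a group the user joined
    return s


def rank(user: UserSignals, candidates: list[Candidate]) -> list[Candidate]:
    """Order candidate posts from most to least relevant for this user."""
    return sorted(candidates, key=lambda p: score(user, p), reverse=True)


if __name__ == "__main__":
    user = UserSignals(country="US", device="mobile", language="en",
                       followed_pages={"axios"}, groups={"ai-news"})
    posts = [
        Candidate("p1", source_page="axios", language="en", topic_group="ai-news"),
        Candidate("p2", source_page="other", language="fr"),
    ]
    print([p.post_id for p in rank(user, posts)])  # -> ['p1', 'p2']
```

In practice the predictive models Meta describes are machine-learned rather than hand-weighted, but the overall flow of collecting signals, scoring candidates and ranking them follows the same general pattern.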
Between the lines: In an effort to double down on transparency, Meta is also providing better tools for researchers, including more than 1,000 AI models, libraries and data sets from the past decade, "so they can benefit from our computing power and pursue research openly and safely," Clegg said.
- Meta has in the past been criticized for not giving researchers enough access to its data.
- "It is our ambition to continue to be transparent as we make more AI models openly available in future," Clegg added.
Zoom out: Meta has promoted an open-source approach to AI as a means to widen access to the technology.
- In February it released its LLaMA large language model, publishing the code openly and granting researchers access to the model weights under a noncommercial license.
- The company is betting that openness will help it compete against rivals like Google and Microsoft, but some experts fear the approach could also pose security risks.
The big picture: After a brutal 2022 sell-off, Meta has experienced a strong rally in 2023, in part because Wall Street is bullish on its investments in AI.
- But regulators globally are still skeptical about whether Big Tech can handle AI responsibly.
The bottom line: For now, Meta seems to be dodging regulatory scrutiny around artificial intelligence, as higher-profile AI rivals like OpenAI, Microsoft and Google come under a bigger spotlight. But if history has taught the company anything, it's that it's never too early to start preaching transparency.