The Street
Ian Krietzberg

Tech News Now: OpenAI's newest model, Facebook's newest lawsuit, and more

Good morning and welcome to Tech News Now, TheStreet's daily tech rundown.

It seems I missed a lot while I was away in California (which was almost as chilly as it is here in New Jersey): new artificial intelligence models, new regulatory pushes and a positive mountain of tweets from Elon Musk. 

In today's edition, we're covering OpenAI's latest model, Facebook's latest lawsuit, the University of Michigan's alleged sale of student data to third parties, new proposals out of the FTC and the latest news out of Xbox. 

Related: How Truecaller is combatting the rise of deepfake fraud

Tickers we're watching today: Meta (META) and Microsoft (MSFT)

Let's get into it. 

OpenAI introduces Sora, its text-to-video model

OpenAI, which is in the midst of a number of lawsuits alleging copyright infringement in both the training and output of its flagship ChatGPT model, on Thursday announced a new text-to-video model called Sora.

The model is currently capable of generating up to one minute of video. 

OpenAI did not say when the model would become available, saying that it's currently working with red teamers who are testing the model for safety before a potential launch. 

As with ChatGPT, OpenAI has not divulged the scope and type of content used to train this latest model. The company did not immediately respond to a request for comment on the issue. 

Sora researcher Bill Peebles told Wired that the "training data is from content we’ve licensed and also publicly available content."

"When I started working on AI four decades ago, it simply didn’t occur to me that one of the biggest use cases would be derivative mimicry, transferring value from artists and other creators to megacorporations, using massive amounts of energy," AI researcher Gary Marcus said in response to the model. "This is not the AI I dreamed of."

OpenAI CEO Sam Altman spent the day posting clips of Sora's work on X. 

Rachel Tobac, the CEO of cybersecurity firm Social Proof Security, said that such video-generation models pose many risks to the public. She says they can be used to manipulate, trick, phish and confuse people at scale. 

"This tool is going to be massively challenging to test and control under many, let alone most, adversarial conditions," she said. 

Related: Cybersecurity expert says the next generation of identity theft is here: 'Identity hijacking'

Meta faces a $3.77 billion lawsuit 

A London tribunal ruled Thursday that a $3.77 billion lawsuit against Meta should be allowed to move toward a trial. 

The suit, brought on behalf of 45 million Facebook users in the U.K. by the London-based law professor Liza Lovdahl Gormsen, alleges that Meta abused its position to monetize users' data, and that users were never compensated for the personal data they provided. 

Meta has called the lawsuit "entirely without merit."


The Competition Appeal Tribunal initially rejected the lawsuit last year.  

Judge Marcus Smith said in his ruling that a final hearing in the case ought to be held "in the first half of 2026 at the latest."

A Meta spokesperson told Reuters that the company will "vigorously defend" itself in the case. 

Related: Deepfake porn: It's not just about Taylor Swift

University of Michigan and student data

An organization called Catalyst Research Alliance was reportedly selling a collection of data gathered from the University of Michigan that it described as "ideal data to fine-tune large language models." 

One dataset included 85 hours of audio recordings, including study groups, office hours and lectures, while the other included academic papers. The company was selling the two datasets combined for $25,000. 

The pages describing the datasets have since been removed from Catalyst's website. Catalyst did not immediately respond to a request for comment. 

AI researcher Susan Zhang tweeted a screenshot of a LinkedIn message showing a user offering the dataset up for sale. 

The university told Gizmodo in response that the message was sent by a "new third-party vendor that shared inaccurate information and has since been asked to halt their work."

"No transactions or sharing of content occurred by the vendor. Student data was not and has never been for sale by the University of Michigan," Colleen Mastony, a University of Michigan spokesperson, said. 

Mastony added that the papers and recordings in question were submitted by student volunteers and "have long been available for free to academics." 

Related: OpenAI CEO Sam Altman's latest venture under fire for privacy concerns

FTC proposes rules against AI impersonation

The Federal Trade Commission on Thursday proposed new rules that would extend protections against AI impersonation to individuals in addition to corporations. 

This comes amid a recent surge in AI-generated deepfake fraud, including a $25 million theft that came on the heels of the dissemination of fake, explicit images of Taylor Swift. 

The FTC is currently seeking public comment on this extension of its rule prohibiting the impersonation of individuals. The agency is also seeking comment on whether the rule should "declare it unlawful" for an AI company to provide services that it knows could cause harm to consumers by enabling impersonation. 

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” FTC Chair Lina M. Khan said in a statement. 

Related: Apple Vision Pro review: I spent two weeks with a computer strapped to my face

Xbox teases ‘largest technical leap’ ever

The latest episode of the Official Xbox Podcast dropped Thursday, revealing one of the most significant updates to Microsoft’s gaming arm in recent years. 

Microsoft Gaming CEO Phil Spencer, President of Xbox Sarah Bond, and President of Game Content and Studios Matt Booty discussed the new updates.

Bond affirmed Xbox’s commitment to hardware and noted that the next generation will deliver “the largest technical leap” ever. 


“There’s some exciting stuff coming out in hardware that we’re going to share this holiday,” Bond said. Precisely what that entails remains to be seen; it could be an update to the current Series S or X, or even new accessories.

Alongside this commitment to hardware, Spencer confirmed that four Xbox titles will be arriving on other consoles, likely the PS5 and Nintendo Switch. He didn’t confirm the exact titles, though it’s not “Starfield” or “Indiana Jones,” but stated: “We’ve made the decision that we’re going to take four games to the other consoles.” 

The three executives also said that Game Pass is a crucial part of the strategy and will remain the day-one home for first-party, exclusive Xbox titles, which are expected to run well on Xbox consoles as ever. 

Bond also noted that Activision and Blizzard games are coming to the service, starting with Diablo 4 on March 28, 2024. The company said that 34 million gamers are on the platform. 

Related: Stock Market Today: Stocks turn lower as producer prices leap revives inflation concern

The AI Corner: Must-read research

Researchers from Google DeepMind, Carnegie Mellon University, Stanford University and the University of California, San Diego, released a new paper Wednesday titled: "The Illusion of Artificial Inclusion."

The paper featured a study of the replacement of human participants and researchers with large language models in research. Such efforts, the researchers found, seem focused largely on increasing the speed of research while reducing its cost.

The issue, according to William Agnew, one of the paper's authors, is manifold: Practically, LLMs cannot simulate humans or analyze interviews at the quality that humans can, and they are also known to carry biases from their training data. 

"Replacing human research participants with LLMs undermines every value of participatory or community-based research," he said. 

"Participation as research subjects is one of the few levers of power impacted communities have over research; we should seek to expand, rather than eliminate this far too weak voice. We must reject artificial inclusion and build research practices that center and serve the most marginalized communities."

Read the full paper here. 

Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: Human creativity persists in the era of generative AI
