Good morning, and welcome to Tech News Now, TheStreet's daily tech rundown.
In today's edition, we're covering a fine sampling of Elon Musk-related news, from his loss of the World's Richest Man title to a new lawsuit. We're also covering the fallout from Apple's recent antitrust fine and the issues researchers are noticing with Claude 3, the new family of models launched by Anthropic.
Tickers we're watching today: (TSLA) and (AAPL).
Don't Miss: Jacob Krol wrote yesterday about the new additions to the MacBook Air family.
Let's get into it.
Related: AI tax fraud: Why it's so dangerous and how to protect yourself from it
The Elon Musk news du jour: No longer the richest man
Shares of Tesla (TSLA) slid about 7% Monday, slimming Musk's personal fortune by about $17.6 billion. The dip was enough to push Musk down to the number two slot on the Bloomberg Billionaires Index, now with a measly $198 billion to his name.
Amazon (AMZN) founder Jeff Bezos has reclaimed the top spot in Musk's stead, with a net worth of $200 billion.
According to the index, Bezos has added about $23.4 billion to his fortune since the year began. Musk has lost about $31.3 billion during the same time period. Shares of Tesla are down around 24% for the year; the stock fell another 2% in pre-market trading Tuesday morning.
Related: Jeff Bezos just took something important away from Elon Musk
The X factor
At the same time, four former Twitter executives filed a lawsuit against Musk on Monday over unpaid severance. The suit argues that Musk fired the plaintiffs without cause and then denied them the severance pay that had been baked into their contracts long before he acquired the platform.
Each plaintiff claims that they are owed one year's salary in addition to hundreds of thousands of stock options.
"As he was closing the acquisition, Musk told his official biographer, Walter Isaacson, that he would 'hunt every single one of' Twitter’s executives and directors 'till the day they die.' These statements were not the mere rantings of a self-centered billionaire surrounded by enablers unwilling to confront him with the legal consequences of his own choices," the suit reads. "Musk bragged to Isaacson specifically how he planned to cheat Twitter’s executives out of their severance benefits in order to save himself $200 million."
X did not respond to a request for comment.
Related: In a blow to Tesla's Optimus, Microsoft, Jeff Bezos and Nvidia make a major new investment
Analyst: The game between Apple and regulators
Apple was hit Monday with a nearly $2 billion antitrust fine from the European Commission, which found that the company's App Store rules unfairly restricted music-streaming developers from telling users about cheaper subscription options available outside their apps.
Deepwater's Gene Munster expects changes to come to the App Store as a result of the fine, but he doesn't expect the App Store to suddenly start struggling.
"They'll eventually pay the fine," he said, "but Apple is really smart in terms of how they navigate this. They will go and make changes to accommodate the courts, but then they find ways to essentially charge the developers anyway."
At the end of the day, Munster said, developers are building "huge businesses" off of Apple's "backbone, and I think despite their frustrations they will continue to pay and Apple will maintain better economics over their App Store than investors are expecting."
He added that the fine likely won't have a big impact on Apple's big picture, pointing out that on the company's recent earnings call, Apple highlighted that European App Store revenue represented a 7% slice of total App Store revenue, coming out to around "half a percent of Apple's overall revenue." (Taken together, those figures imply that the App Store as a whole accounts for roughly 7% of Apple's total revenue.)
Related: Apple CEO Tim Cook sends clear message about AI to Wall Street
The problem with Claude
Anthropic unveiled a new group of AI models Monday: the Claude 3 model family, which includes Haiku, Sonnet and Opus.
According to Anthropic, the most powerful model in the set — Opus — outperforms OpenAI's GPT-4.
"Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate-level expert knowledge, graduate-level expert reasoning, basic mathematics and more," Anthropic said in a statement. "It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence."
Anthropic, despite its focus on research and safety, has not released any information concerning Claude's training data.
From my post on X documenting the experiment: "... until, on the penultimate attempt, 5/10 ended with the word 'apple.' The next try, 0/10 ended with the word 'Apple.' I asked why. I got an anthropomorphized answer. These last few screenshots seem pretty interesting. pic.twitter.com/dTPRhG6eiw"
— Ian Krietzberg (@IKrietzberg) March 4, 2024
I spent a half-hour Monday messing around with Claude 3 Sonnet, the middle child in this new family, and noted many instances where the model applied human attributes to itself in responding to prompts, something known as anthropomorphization.
When I asked why its success rate dropped after I provided feedback on its lackluster responses, the model listed a few possible reasons, among them "overconfidence" and "carelessness."
"I grew careless and rushed through providing new sentences without carefully verifying they met the criteria," Claude said.
When I pointed out that the model is not human and therefore cannot rush through a task too quickly, let alone feel careless or overconfident, the model acknowledged that it was anthropomorphizing its training process.
When I asked the model why it did this in the first place, the model said: "Perhaps anthropomorphizing Machine Learning systems in certain ways is still quite common among the humans involved in AI research and development. So patterns like that may have emerged in my training data."
Check out my post on X to read the full screenshots of the model's responses.
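If you want to run a similar probe yourself, here's a rough sketch using Anthropic's Python SDK. Treat it as illustrative: it assumes the anthropic package is installed and an API key is set, and the prompt below is a reconstruction of the kind of constraint I was testing, not the exact wording from my session.

# Rough sketch of the constraint-following probe described above,
# using Anthropic's Python SDK. The prompt is illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "Write 10 sentences that each end with the word 'apple'.",
    }],
)

text = response.content[0].text
# Count how many generated lines actually satisfy the constraint.
# (A real harness would filter out any preamble the model adds.)
lines = [line.strip() for line in text.split("\n") if line.strip()]
passing = sum(1 for line in lines if line.rstrip(".!?\"'").lower().endswith("apple"))
print(f"{passing}/{len(lines)} lines end with 'apple'")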
AI researcher Margaret Mitchell wrote in a post on X that "the level of self-referential language I'm seeing from the Claude examples are not good. Even through a 'safety' lens: Minimally, I think we can agree that systems that can manipulate shouldn't be designed to present themselves as having feelings, goals, dreams, aspirations."
Anthropic did not immediately respond to a lengthy request for comment regarding training data, safety issues and anthropomorphization.
Related: Deepfake program shows scary and destructive side of AI technology
The AI Corner: Prejudice in LLMs
Algorithmic bias has been an area of concern for AI researchers for years.
Recently, Google came under fire for historical and racial inaccuracies in images generated by its Gemini model, but such issues are not new.
A new paper examines algorithmic bias from a different angle: rather than studying how LLMs respond when overtly prompted about race, the researchers set out to measure covert forms of bias.
"Here, we demonstrate that language models embody covert racism in the form of dialect prejudice: we extend research showing that Americans hold raciolinguistic stereotypes about speakers of African American English and find that language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement," the study says.
The danger, according to the study, arises when language models are asked to make decisions about people, even hypothetical ones, based on how they speak.
"Language models are more likely to suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and be sentenced to death," the study says.
Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
Related: Here are all the copyright lawsuits against ChatGPT-maker OpenAI