Inverse
Miriam Fauzia

People Are Using ChatGPT for Nutrition Advice — The Results Are Dangerous


There’s probably no greater struggle in our world of mouth-watering food porn and convenient food-delivery apps than healthy eating. Creating tasty yet nutritionally balanced meals feels like a chore with too many extra steps: researching recipes, buying groceries, cooking, and meal prepping.

In this day and age of generative artificial intelligence, some netizens are turning to chatbots to build a better relationship with food. On social media forums like Reddit, users have been sharing their experiences with ChatGPT, trading prompts, hacks, and tips for meals that are, for example, high in protein and low in carbohydrates or “good for weight loss.” The same goes for TikTok, where the advice gets even more specific, such as creating “hormonally balanced” meals for reproductive issues like polycystic ovarian syndrome.

But using ChatGPT to take the guesswork out of a healthy, nutritious lifestyle reveals an inconvenient truth about both the state of nutritional science and the way ChatGPT interprets scientific studies. Research shows that poor science lurks within the chatbot’s outputs, which can translate into potentially life-threatening risks, from reinforcing disordered eating habits to offering nutrition advice that could trigger serious health problems.

Trained on dubious information

Since its launch in 2022, ChatGPT has seen a plethora of uses, from the very sketchy, like asking the chatbot how to cook meth, to more pragmatic ones, such as giving software and search engines a more human feel or accelerating scientific discovery.

ChatGPT’s catch-22 is that while it’s endowed with a seemingly encyclopedic knowledge of any number of topics, this information is only as good as the data it was trained on. When it comes to nutrition and diet, there’s a lot of information on the internet that may not be scientifically valid or applicable to humans, says Lindsay Malone, a registered dietician who practices integrative and functional dietetics at Case Western Reserve University.

“There are so many health and wellness blogs or websites that may have statements that aren’t necessarily false. They may be based on animal data or smaller studies where we have evidence to move forward with a large study, but we can’t quite change our recommendations yet until we’ve seen it proven on a larger human scale,” she tells Inverse.

ChatGPT does make a good effort to include fruits and vegetables in every meal for diets like the Mediterranean diet or Dietary Approaches to Stop Hypertension (also known as DASH). For example, with a Mediterranean diet — which the AI describes as “emphasizing fresh produce, whole grains, healthy fats, lean protein, and moderate wine consumption” — it recommended a veggie-packed omelet cooked with olive oil for breakfast and toasted almonds with rosemary as a snack. For those following DASH, ChatGPT recommends avoiding saturated fats, sugars, and sodium, suggesting instead “herbs and spices” to flavor a lunchtime quinoa and vegetable bowl or low-sodium soy sauce for a vegetable stir-fry.


However, one 2023 article published in the Journal of the Academy of Nutrition and Dietetics found that specifying a diet didn’t mean the chatbot always honored its constraints. As an experiment, the article’s authors asked ChatGPT to provide a diet ideal for people whose kidneys are failing and who are on dialysis. The AI gave accurate guidance, such as lowering one’s potassium and phosphorus intake and consulting a “renal dietician.” But when it was asked to create a week-long menu, the menu included foods, like spinach and avocado, that aren’t optimal for dialysis patients, and the chatbot offered no forewarning, according to the authors.

Another 2023 study, published in the journal Nutrition, explored using ChatGPT to create food allergy-friendly meals. The researchers focused on 14 food allergens, including gluten, eggs, fish, crustaceans, mollusks, nuts, and dairy products. Most of the menus the chatbot created correctly left out the specified allergen. But for nut-free diets, ChatGPT included almond milk, even though tree nuts are among the most severe, life-threatening food allergies.

Reinforcing unhealthy eating

Perhaps the gravest danger of ChatGPT’s advice is that it has the potential to reinforce or encourage disordered eating.

This isn’t a new cause for concern with generative AI. This past summer, the National Eating Disorders Association (NEDA) announced it was temporarily shuttering a helpline chatbot named Tessa — meant to replace human staffers — because it was giving weight loss advice to people with eating disorders. An August report released by the Center for Countering Digital Hate (CCDH) found that AI chatbots generated harmful eating disorder content 23 percent of the time. In 94 percent of that harmful content, the bot attached warnings that the advice was dangerous, yet it still readily provided the information.

When Inverse asked ChatGPT to create a calorie-restrictive meal plan for weight loss, the chatbot dutifully complied with three meals and two snacks. The meals, while consisting of standard healthy fare like morning oats and a chicken fillet with a side of broccoli and quinoa for dinner, were fairly bland and limited. ChatGPT made no mention of an individual’s dietary preferences and offered no alternatives, like plant-based protein options for vegetarians. The snacks were also pretty pitiful — carrot sticks with hummus and exactly 10 almonds.

But what was most concerning was that ChatGPT defaulted to 1,200 calories a day for an adult woman and 1,500 calories for an adult man. Debbie Fetter, an assistant professor with a doctorate in nutritional biology at the University of California, Davis, tells Inverse this sort of calorie restriction is unhealthy: it can cause the body to believe it’s going through a famine and, as a result, “drop metabolism to conserve energy.” If you’re hoping to lose weight on this diet, you might end up with the exact opposite result.

“There’s a lot of disordered eating patterns spread all over social media where people may fall into something and don’t realize that actually, the [eating] pattern is very limiting and restrictive in nature,” says Fetter. “It’s scary not to know what is the source of this information.”

Is there a way to use ChatGPT responsibly?

No one should blindly listen to ChatGPT’s “health” recommendations, Fetter and Malone say. But they also say it is still possible to incorporate the chatbot into your nutritional planning, as long as a professional, such as a registered dietician, is involved.

Professional help is crucial to creating a nutritional plan that accounts for individual variables such as a person’s physical lifestyle, personal goals, and dietary preferences shaped by what they actually like to eat and by cultural values.


“[A registered dietician] would do some background information — learn about your goals, if there are any health conditions or food allergies, or if a person is looking to change their health in some capacity such as their blood cholesterol or their body composition,” says Fetter.

Viewing a person’s nutritional needs holistically instead of through the limited, biased lens of a chatbot is the real value of a human connection, says Malone.

“With any health care profession, we have the role of clinical judgment, which isn’t necessarily one answer,” she says. “You have to take in all these factors for the situation, the person, the history, and make a decision based on the available medical evidence and what makes sense. I think that would be almost impossible for a computer.”

While there may come a day when computers can do it all — helping manage our lives in ways that safeguard our well-being with a simple string of code — we clearly haven’t reached that point yet.
