Your Fitbit already tracks everything from your steps to your sleep patterns, giving you access to a positive deluge of information about your every waking (and sleeping) moment.
Google (GOOG) has decided that users need more, however. During its live-streamed event, The Check Up, Google revealed that a new feature is coming to future Fitbit models: artificial intelligence.
Related: Why one doctor turned to AI to 'dysolve' dyslexia
Google and Fitbit partner on large language model
Google and Fitbit said that they are partnering to build a Personal Health Large Language Model (LLM) that will help users make more sense of all the data that Fitbit gathers. It will run using Gemini, Google's AI model, and be trained using "anonymous data from real, human case studies gathered from accredited coaches and wellness experts."
As an example, Google said that the model could be used to break down how exercise impacts the quality of a person's sleep, or vice versa.
Google said that the model will become available to a limited number of premium Android users on Fitbit Labs later this year. It is unclear when the tech will roll out more broadly.
Google completed its acquisition of Fitbit in 2021.
It is also unclear whether the model will run in the cloud or on device; on-device processing is generally seen as a more secure way to handle private information. Google said the model will "power future AI features across our portfolio."
The announcement, the latest in a long line of efforts by Google and its Big Tech peers to integrate AI into everything they can, comes just a few weeks after Google's Gemini model faced scrutiny for generating historically inaccurate images.
Scientist has concerns about LLMs
That issue, according to several AI experts, including cognitive scientist Gary Marcus, was emblematic of a broader problem with LLMs, a form of AI that ChatGPT popularized last year: Such algorithms are largely (and intrinsically) unreliable, hallucinatory and biased.
Marcus has often argued for the importance of a new AI paradigm, one that is transparent, trustworthy and reliably usable. Though Google explained that this health model was trained on anonymized human data, many of the details of the model (its training process, the carbon footprint of its training process, the specific types and amounts of data used in training, the size of the model, etc.) remain unknown.
"I am negative about LLMs. We need a better approach, one that is reliable, safe, fair and trustworthy. LLMs will never be that, so it’s time to move on," Marcus said in a post on X, adding in a separate post: "We should worried about LLMs being used in high-stakes applications where they simply aren’t reliable enough."
Google, according to CNET, addressed some of these concerns in a press briefing, explaining that the launch of this model as an experimental labs feature will enable the company to collect and respond to user feedback before the tech is rolled out to the public.
Google did not respond to a request for comment regarding hallucinations, safety and the training data of the model.
These ongoing pushes to further integrate AI across the internet come as regulation in the U.S. continues to lag and ethical concerns — regarding bias, transparency, misinformation, fraud, cybercrime, copyright infringement, economic inequity and sustainability — continue to proliferate.
Related: Deepfake program shows scary and destructive side of AI technology