Last week, two big names in the artificial intelligence (AI) and wellness industries announced a collaboration to develop a “customized, hyper-personalized AI health coach that will be available as a mobile app” to “reverse the trend lines on chronic diseases.”
Sam Altman (head of OpenAI, maker of ChatGPT) and Arianna Huffington (a former media executive who runs a high-tech wellness company called Thrive Global) announced their new company, Thrive AI Health, in a Time magazine advertorial.
Health is an appealing direction for an AI industry that has promised to transform civilization, but whose explosive growth over the past couple of years is showing signs of stalling. Companies and investors have pumped billions into the technology, yet it often remains a solution in search of a problem.
Meanwhile, the venture capital firm Sequoia and the investment bank Goldman Sachs are wondering out loud whether enough revenue and consumer demand will ever emerge to make this bubble feel more solid.
Enter the next big thing: AI that will change our behavior, for our own good.
Personalized nudges and real-time recommendations
Altman and Huffington say Thrive AI Health will use the “best peer-reviewed science” and users’ “personal biometric, lab and other medical data” to “learn your preferences and patterns across the five behaviors” that are key to improving health and treating chronic diseases: sleep, food, movement, stress management and social connection.
Whether you are “a busy professional with diabetes” or somebody without “access to trainers, chefs and life coaches”—the only two user profiles the pair mention—the Thrive AI Health coach aims to use behavioral data to create “personalized nudges and real-time recommendations” to change your daily habits.
Soon, supposedly, everybody will have access to the “life-saving benefits” of a mobile app that tells you—in a precisely targeted way—to sleep more, eat better, exercise regularly, be less stressed and go touch grass with friends. These “superhuman” technologies, combined with the “superpowers” of incentives, will change the world by changing our “tiny daily acts.”
Despite claims that AI has unlocked yet another innovation, when I read Altman and Huffington’s announcement I was struck by a sense of déjà vu.
Insurance that manages your life
Why did Thrive AI Health and the logic behind it sound so familiar? Because it’s a kind of thinking we are seeing more and more in the insurance industry.
In fact, in an article published last year I suggested we might soon see “total life insurance” bundled with “a personalized AI life coach,” which would combine data from various sources in our daily lives to target us with prompts for how to behave in healthier, less risky ways. It would of course take notes and report back to our insurers and doctors when we do not follow these recommendations.
In a related article, my colleagues Kelly Lewis and Zofia Bednarz and I took a close look at the theories of behavioral risk that might power such products. A model of insurance based on managing people’s lives via digital technology is on the rise.
We examined a company called Vitality, which makes behavioral change platforms for health and life insurance. Vitality frames itself as an “active life partner with […] customers,” using targeted interventions to improve customer well-being and its own bottom line.
Similar projects in the past have had questionable results, as a 2019 World Health Organization report on digital health interventions made clear.
Hyper-personalization
Altman and Huffington say AI-enabled “hyper-personalization” means this time will be different.
Are they right? I don’t think so.
The first problem is that there is no guarantee the AI will work as promised. There is no reason to think it won’t be plagued by the bias, hallucinations and errors we see in cutting-edge AI models like ChatGPT.
However, even if it does, it will still miss the mark because the idea of hyper-personalization is based on a flawed theory of how change happens.
An individualized “AI health coach” is a way to address widespread chronic health problems only if you envision a world in which there is no society—just individuals making choices. Those choices turn into habits. Those habits, over time, create problems. Those problems can be rooted out by individuals making better choices. Those better choices come from an AI guardian nudging you in the right direction.
And why do people make bad choices, in this vision? Perhaps, like middle-class professionals, they are too busy. They need reminders to eat a salad and stretch in the sunshine during their 12-hour workday.
Or—again from the AI health coach perspective—perhaps, like disadvantaged people, they make bad choices out of ignorance. They need to be informed that eating fast food is wrong, and they should instead cook a healthy meal at home.
The social determinants of health care apps
But individual lifestyle choices aren’t everything. In fact, the “social determinants of health” can be far more important. These are the social conditions that determine a person’s access to health care, quality food, free time and all the things needed to have a good life.
Technologies like Thrive AI Health are not interested in these fundamental social conditions. Their “personalization” reflects a short-sighted view that stops at the individual.
The only place society enters Altman and Huffington’s vision is as something that must help their product succeed.
And if we don’t bend society to fit the AI models? Presumably we will only have ourselves to blame.
This article is republished from The Conversation under a Creative Commons license. Read the original article.