Digital Exhaust #213
Stendhal syndrome, the right to try, and a big, beautiful digital health review
This week I drilled down on a couple of big developments that caught my attention — Montana's new legislation for novel therapies and the Icahn School of Medicine's rollout of ChatGPT Edu. Both offer interesting signals of what's ahead. My summaries should get you up to speed, along with some basic analysis.
And, of course, some of my favorite things are down under Digital Exhaust. Thanks again for being a subscriber. If you can pass this along to anyone who might be interested, it would be really appreciated.
Two big things that caught my eye
Montana — The try it state
This was big news that didn't see a lot of coverage. Montana has passed a bill that allows clinics to sell unproven, early-stage treatments. Under the legislation, doctors can open an experimental treatment clinic and dispense therapies not approved by the FDA. The products must 1) have gone through Phase I trials and 2) be produced in Montana. Once it's signed by the governor, this will be the most expansive law of its kind in the country. Versions of the Right to Try law have been passed in 41 U.S. states.
While originally intended to allow terminally ill people access to novel drugs, Montana’s new bill has been softly hijacked by influencers, longevity opportunists, and the like. This bill will make it easy for people who are not ill to try things.
This is a dicey subject. Individuals should have the right to try things. At the same time, vulnerable patients should be protected from sham treatments delivered by hucksters. Last year, for example, Utah passed a bill that allowed podiatrists, chiropractors, midwives, and naturopaths to dispense placental stem cell therapies. This is pretty concerning.
Bottom line
Montana could become a hub for innovation or it could all go sideways with adverse patient outcomes, legal challenges, and tighter federal regulation that stifles momentum in other states.
OpenAI in Medical Training
This week, the Icahn School of Medicine at Mount Sinai became the first U.S. medical school to provide all medical and graduate students with access to OpenAI’s ChatGPT Edu platform. Built on the GPT-4o model and launched in May 2024, ChatGPT Edu is designed to support responsible AI use in higher education—enhancing clinical reasoning, accelerating research, and offering personalized learning experiences.
Meanwhile, a recent study from the Keck School of Medicine at USC found that a majority of U.S. medical students are already using ChatGPT for academic purposes. Not surprising, but good to know.
Still, the integration of ChatGPT into medical education raises familiar concerns like academic integrity. This all comes on the heels of a provocative New York Magazine exposé about the explosion of LLMs on college campuses — repeatedly labeled as "cheating." At Northeastern University in Boston, professors are using AI to teach and students are suing. And this week, Nature published a study surveying more than 5,000 medical academics on AI disclosure in research. The most striking finding? No consensus. Attitudes are all over the map.
If we can't agree on how to use AI as teachers, what are we supposed to teach the students?
“Don’t mistake your ChatGPT search for my medical degree”
We’ve been here before. In the early days of the empowered patient and the rise of medical “Googling,” physicians had to recalibrate their mindset. Then came the emergence of social media when some medical schools banned it outright and others, like Baylor College of Medicine, pioneered curricula to teach responsible engagement. Today, these platforms are routine tools of our professional lives.
Medical education now faces a similar moment. Just as undergrad institutions are wrestling with LLMs, we’ll have our own growing pains. But like with search and social, we carry the responsibility to help the next generation navigate what’s coming. And the scale and implications of generative AI far exceed anything we’ve seen.
Bottom line
Burying our heads in the sand doesn’t help students—and it won’t help patients who are also trying to make sense of this new landscape. We need to talk about this—openly, constructively, and honestly. Even if we ultimately decide there are moments where these tools have no place.
The outcome is never predetermined. We never really know how something is going to be used. That's why we need an ongoing dialogue — back and forth, student and teacher — so we can figure this out in real time.
Digital Exhaust
Stendhal syndrome. This was my fav serendipitous find of the week (and a free frozen turkey to anyone who has heard of this). Apparently there is a phenomenon where people, overcome with ecstasy in the presence of art, collapse. Every year, a few dozen tourists to Florence are rushed to the local hospitals, literally overcome by the city's array of paintings, sculptures, frescoes, and architecture. And it has a name: Stendhal syndrome. There you go. You're welcome. Link
First blood test for Alzheimer's. This is big. Yesterday the FDA approved the first blood test for diagnosing Alzheimer's disease. This opens up a quick way to identify the disease. Until yesterday, we were dependent on PET scans and spinal taps for diagnosis. The challenge here is that this needs to be used judiciously. Using the test in asymptomatic individuals could lead to misinterpretation of results — amyloid plaques don't always correlate with the development of Alzheimer's. So the test is most useful alongside clinical evaluation when there is suspicion of disease. I expect this to be wildly overutilized. Link
Sleeping with the CCP. TikTok is launching in-app guided meditation exercises. It will be automatically activated for teens. Instead of prompting them to stop using TikTok in the middle of the night, they'll apparently be redirected to the company's after-hours edition that 'helps them sleep.' It's remarkable that big tech creates the problem, then delivers the solution with more technology. Link
Big beautiful digital mental health review. This is authored by Dr. John Torous (et al), who knows more about this stuff than anyone on the planet. It's "designed to catch you up on all things apps, VR, AI, chatbots, implementation, equity, and lived experience voices." And it's open access. Link.
Where are the AI radiologists? Experts predicted that radiologists would be the first to fall to AI. But this story showcases how all of this is likely to work — complementary human/AI collaboration and a redefinition of the doctor's work. Link
You can't make this up. Elizabeth Holmes is doin' hard time for fraud around her blood testing company, Theranos. Her spouse has started ...(wait for it)... a blood testing company. One Theranos whistleblower has a theory. Link
GLP-1s for sale. Interesting: the FDA has halted the sale of Ozempic and Zepbound copycats. So online clinics have begun offering liraglutide, an older GLP-1 medication that's injected daily instead of weekly. Anything to make a buck. Link
You could be the public face of health for Apple. Posted on careers at Apple: Apple is looking for an individual who will "partner with Apple Clinicians, Product, Regulatory, Study and Legal team members to spread awareness of our digital health products and scientific approach with the global medical and research communities." This is the kind of thing I would do. h/t Mario Aguilar, Stat's Health Tech Newsletter
Exciting and disturbing. Sam Altman's goal for ChatGPT is to remember 'your whole life.' We learned this lesson from Google. I love my GPT, but I wouldn't trust 'em. Link