MCulster Visionary Leaders

Noblesse oblige | European lag | Inspirational leadership | Partner voices


News and views you ought to know about: 

  • One of the AI haves feels the pain of the AI have-nots. His discomfort is especially pronounced when he thinks about how hard it must be for financial strugglers to keep up with regulations and rumors of regulations to come. “Large healthcare organizations have the resources, data assets and expertise to responsibly deploy and monitor AI tools and the infrastructure to meet complex requirements,” notes the noblesse oblige champion, Daniel Yang, MD, vice president of AI and emerging technologies at 39-hospital Kaiser Permanente (2024 operating revenues: $115.2B). “But smaller hospitals, rural clinicians and community clinics often struggle to maintain basic IT infrastructure, let alone manage complex, new regulations.”
     
    • If regulators aren’t sensitive to the plight of the little guy, healthcare AI regulations intended to protect patients may only make things worse for millions of Americans—and thus for U.S. healthcare as a whole, Yang warns. “The choices we make now will determine whether rules leave smaller providers and their patients behind or lift the entire field to a higher standard of care,” he writes. “Effective regulations should help and apply to every organization, not just to those already ahead.”
       
    • Speaking on behalf of Kaiser Permanente, Yang urges policymakers to do three things: 1) create consistent standards and processes for ensuring the responsible use of AI in healthcare; 2) provide technical assistance and financial support for healthcare organizations to responsibly use and monitor AI tools; and 3) support the launch of large-scale clinical trials to show AI’s safety and effectiveness in healthcare. 
       
      • The brief post could be dismissed by the cynical as a PR stratagem for Kaiser Permanente, but that doesn’t mean it’s not sincere and worthwhile. (In fact, it’s both.) Read it here.
         
  • Only four European countries have a national strategy for adopting and regulating healthcare AI. Another seven are working on it, but that still leaves a big chunk of the continent fiddling while AI burns. Part of the holdup is that there, as here in the States, regulation badly lags behind innovation. Some 43 countries, or 86% of the sample, report legal uncertainty as their top barrier to AI adoption. Another 39 lands (78% of the field) cite financial affordability as a major barrier. What’s more, fewer than 10% have liability standards, which would be needed to assign responsibility should an AI system be implicated in a medical error that harms a patient. The U.N. is spotlighting the survey report, prepared by its World Health Organization, in hopes of spurring action among European healthcare leaders. The U.N.’s coverage quotes Dr. Hans Kluge, WHO regional director for Europe. “[W]ithout clear strategies, data privacy, legal guardrails and investment in AI literacy, we risk deepening inequities rather than reducing them,” Kluge says, echoing ominous observations made Stateside by Kaiser Permanente (see item above) and others. “The choices we make now will determine whether AI empowers patients and health workers or leaves them behind.”
     
  • In the U.K., more than 90% of healthcare professionals believe well-designed AI will help them deliver better care. Large majorities also see relieving healthcare workers of nonclinical tasks as AI’s most valuable contribution (86%) and recognize that the technology must be integrated with electronic patient records (89%). Download the report here. 
     
  • Armed with AI, patients can fight back against denials of coverage. Yep, there’s an app for that. In fact, there are several. Of course, like all AI, these tools require human oversight. For example, an AI tool might draft an appeal letter that greatly impresses the patient who prompted it. But a letter that impresses its user can obscure the real aim of seeking algorithmic assistance in the first place: persuading the payer. And that’s a much taller order. The quandary is spelled out at Stateline by Carmel Shachar, JD, MPH, director of the Health Law and Policy Clinic at Harvard Law School. “It can be difficult for a layperson to understand when AI is doing good work and when it is hallucinating or giving something that isn’t quite accurate,” Shachar tells the outlet. “The challenge is, if the patient is the one driving the process, are they going to be able to properly supervise the AI?” Get the rest.
     
  • In the 1990s, Gary May was an electrical engineer investigating early neural networks. He dreamed of a future in which AI helped improve the health status of entire populations. Today he’s chancellor of the University of California, Davis. And he’s not done dreaming. “We stand at a critical moment for health and education,” May writes in an open letter addressed to all UC-Davis stakeholders. “Responsible AI offers vast potential to expand student opportunities and advance research that improves lives at home and in hospitals.” It’s a rah-rah piece for the school, of course it is, but it’s geared toward promoting innovative yet responsible healthcare AI. And aren’t we all stakeholders in that global endeavor? Allow yourself to feel a little inspired here. 
     
  • Also worth your while:
     
  • Research news of note: 
     
  • From AIin.Healthcare’s sibling outlets:
     

  • Nabla launches Nabla Connect: The fastest way for any EHR to deliver world-class ambient AI

    Nabla Connect is a new plug-and-play module that lets EHR vendors embed Nabla’s trusted AI assistant directly into their platform, making integration fast, compliant, and effortless.
    In just a few days, EHRs can:

    • Integrate ambient AI without heavy engineering
    • Enhance clinician experience with seamless, accurate note generation
    • Stay future-proof through continuous updates

    Learn more about Nabla Connect
     

  • Don’t Let AI Touch Patient Data

    Shadow AI is sneaking into hospitals and clinics faster than IT teams can track. Download the free Security Checklist from Fellow to uncover where AI notetakers may be recording or storing sensitive conversations. Use it to flag compliance risks early and keep your organization on the safe side of privacy laws. Before shadow AI spreads inside your team, download the checklist to reduce your risk here.
