Why AI Guardrails are Crucial for Ethical Generative AI in Healthcare

Generative AI (GenAI) is poised to revolutionize healthcare, offering improved diagnostics, personalized treatment plans, and enhanced operational efficiency. However, its implementation necessitates careful consideration of both its potential and pitfalls.

A recent HIMSS24 panel discussion highlighted the excitement surrounding GenAI, while acknowledging the importance of responsible integration. Experts emphasized the need for robust data security protocols, given the vast amount of sensitive information GenAI relies on.

For Humberto Quintanar, vice president and chief technology officer at Memorial Healthcare System, the concern is not so much about the technology itself, but about the data at the core of it.

“What worries me and keeps me up at night is the security of that data,” said Quintanar. “Today we are looking at that very carefully, because it’s so easy to get that information and just share it internally, but it’s also becoming easier to share it externally.”
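The data-sharing risk Quintanar describes is often addressed with technical guardrails that strip identifiers from clinical text before it can leave the organization. As a minimal illustration (not a description of Memorial Healthcare System’s actual tooling), the Python sketch below masks a few common identifier patterns; the `redact_phi` function and its regexes are hypothetical, and real de-identification relies on vetted tools and policy review.

```python
import re

# Hypothetical, minimal PHI-redaction guardrail: mask obvious identifiers
# before clinical text is sent to any external generative AI service.
PHI_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace matches of each identifier pattern with a labeled placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Pt MRN: 00456789, seen 03/12/2024, callback 555-867-5309."
    print(redact_phi(note))
    # Pt [MRN REDACTED], seen [DATE REDACTED], callback [PHONE REDACTED].
```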

Despite these concerns, promising use cases are already emerging. One example is AI-powered note generation, which frees up physicians’ time for more patient interaction. Radiology is another area ripe for GenAI application, with technology that can generate reports from X-rays and identify similar patient cases.

Ethical Strategies are Paramount for Healthcare Generative AI

While GenAI offers undeniable benefits, ethical considerations remain paramount. Experts warn against increased workloads disguised as efficiency gains. To ensure responsible adoption, healthcare leaders must establish guardrails not just for the technology itself, but also for how it’s managed.

“In order for these promising results to continue, expand and grow in ethically responsible ways, there need to be guardrails,” said Brian Spisak, independent consultant and research associate at the National Preparedness Leadership Initiative at Harvard University.

“This tech might save us a bunch of time – say, 15% of the time – but what’s going to happen with that gap? Is leadership just going to fill it with something else? You need to build guardrails for not just the technology but the leadership.”

Hackensack Meridian Health, New Jersey’s largest health system, is working with Google Cloud to deploy generative AI solutions that automate manual, repetitive tasks and analyze large patient data sets to identify patterns and trends that aid clinical decision-making.

Strategic Approach to Responsible AI

“We’ve taken a strategic approach. We have various generative AI-enabled capabilities in production as pilots and under development,” Robert Garrett, CEO of Hackensack Meridian Health, said during Tuesday’s keynote speech at the HIMSS conference. “AI-driven chatbots are helping enhance the patient experience. We’re using responsible AI with humans always in the loop.”
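The “humans always in the loop” principle Garrett mentions is commonly implemented by routing every model draft through an explicit clinician-approval step before anything reaches a patient. The Python sketch below shows that gating pattern in schematic form; the class and function names are invented for illustration and do not describe Hackensack Meridian Health’s or Google Cloud’s actual systems.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ClinicianReview:
    """Record of a human reviewer's decision on an AI-generated draft."""
    reviewer: str
    approved: bool
    edits: str = ""  # optional corrected text supplied by the reviewer

@dataclass
class HumanInTheLoopChatbot:
    """Hypothetical wrapper that blocks AI output until a clinician signs off."""
    generate_draft: Callable[[str], str]  # any GenAI backend
    audit_log: List[ClinicianReview] = field(default_factory=list)

    def draft(self, patient_message: str) -> str:
        # Step 1: the model produces a draft that is never sent directly.
        return self.generate_draft(patient_message)

    def release(self, draft: str, review: ClinicianReview) -> str:
        # Step 2: only an approved (and possibly edited) draft goes out.
        self.audit_log.append(review)
        if not review.approved:
            raise PermissionError("Draft rejected; nothing is sent to the patient.")
        return review.edits or draft

# Usage with a stand-in backend that just echoes a canned reply.
bot = HumanInTheLoopChatbot(generate_draft=lambda msg: f"Draft reply to: {msg}")
draft = bot.draft("Can I take ibuprofen with my prescription?")
review = ClinicianReview(reviewer="Dr. Lee", approved=True)
print(bot.release(draft, review))
```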

The potential for bias in GenAI algorithms also necessitates careful attention. Healthcare data can reflect existing societal biases, and AI systems trained on such data can perpetuate these inequalities. Mitigating bias requires building diverse datasets and implementing fairness checks throughout the development and deployment of GenAI tools.
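One concrete form those fairness checks can take is comparing a model’s error rate across demographic groups during evaluation and holding deployment when the gap is too wide. The sketch below is a simplified illustration in plain Python; the group labels, threshold, and toy data are all assumptions made for the example rather than a prescribed methodology.

```python
from collections import defaultdict

def group_error_rates(records):
    """Compute per-group error rate from (group, prediction, ground_truth) triples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += int(predicted != actual)
    return {g: errors[g] / totals[g] for g in totals}

def fairness_gap_exceeded(records, max_gap=0.05):
    """Flag the model when error rates across groups differ by more than max_gap."""
    rates = group_error_rates(records)
    return max(rates.values()) - min(rates.values()) > max_gap, rates

# Toy evaluation data: (demographic group, model prediction, ground truth).
evaluation = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

flagged, rates = fairness_gap_exceeded(evaluation)
print(rates)    # e.g. {'group_a': 0.25, 'group_b': 0.5}
print(flagged)  # True -> hold deployment pending a bias review
```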

Ultimately, the successful integration of GenAI in healthcare hinges on a collaborative effort. Healthcare leaders, technologists, and ethicists must work together to ensure that this powerful technology is used responsibly, prioritizing patient well-being and fostering a more efficient and equitable healthcare system.


This content was initially generated with the assistance of AI tools. However, it has undergone thorough human review, editing, and approval to ensure its accuracy, coherence, and quality. While AI technology played a role in its creation, the final version reflects the expertise and judgment of our human editors.

