What Makes Healthcare Risk So Difficult to Manage?
In every safety-critical field—from aviation to nuclear energy—managing risk is paramount. But one domain stands out in both its complexity and its challenge: healthcare. Despite decades of investment, training, and research, adverse outcomes in medicine remain alarmingly common. Why is this? What makes healthcare risk so difficult to manage compared to risk in other safety-critical industries?
At the BRITE Institute, we focus on helping organizations understand and manage risk, especially in complex socio-technical systems. Healthcare provides a revealing case study of why traditional risk models often fall short. In this post, we explore four major reasons why healthcare risk is so unique: its human-centered design, inherent uncertainty, socio-technical complexity, and the impossibility of eliminating risk altogether.
1. A Human-Centered System
Unlike many other high-risk domains, healthcare is fundamentally a human system. Human workers are not just operators; they are also the central agents of diagnosis, judgment, empathy, and care. As Lyons et al. (2004) put it:
“Healthcare is also much less predictable than many other kinds of work... More than other industries, the healthcare system relies on human-human interaction.”
Remove human operators from an aircraft, and autopilot can carry on. Remove the crew from a nuclear plant, and many systems will continue running. Remove clinicians from healthcare, however, and the entire system collapses.
Furthermore, humans are not only the providers in healthcare—they are also the subjects. Unlike mechanical systems, the bodies and minds being treated are not uniform or fully understood. Human variability on both sides of the interaction introduces a level of unpredictability that few other domains face.
2. Uncertainty in Outcomes
Medicine is not engineering. Though it is rooted in science, it involves frequent decision-making under deep uncertainty. Every patient responds differently to treatments. Symptoms can be misleading. Diagnoses may be incomplete. And even the most evidence-based guidelines may not apply to the person sitting in front of the clinician.
In The Laws of Medicine, oncologist Siddhartha Mukherjee captures this reality:
"To the outsider, medicine seems firmly based in rigorous science. To the insider, it often feels like a blend of fact, instinct, probability, and hope" (Mukherjee, 2015).
This uncertainty complicates risk management. While engineers may know the tolerance of a metal beam or the pressure capacity of a pipe, physicians must work with probabilities, partial data, and individual variability. As a result, healthcare often involves choosing between two or more risky options rather than between a risky and a safe one.
3. Socio-Technical Complexity
Healthcare is not just biological and clinical; it is also deeply organizational and social. It involves intricate systems composed of people, technologies, workflows, protocols, and physical environments. As a socio-technical system, healthcare must be analyzed not only for its technical flaws but also for how social dynamics shape outcomes.
Every patient interaction may involve multiple professionals, departments, tools, and decisions. These moving parts do not always interact seamlessly. Handoffs can fail. Equipment may be unfamiliar. Staff may be fatigued. Even well-designed safety protocols can break down when real-world complexity takes over (Leveson, 2011; Hollnagel et al., 2015).
Moreover, these systems drift. Over time, practices and routines evolve—sometimes away from their intended safe states. Dekker (2011) describes this as drift into failure: the gradual movement of real-world behavior away from the assumptions of designers and regulators.
"Systems will gradually shift, adapt, and evolve; their behavior may therefore drift further away from that which is desired and into the realms of unsafe performance" (Dekker, 2011).
4. Risk Cannot Be Eliminated
Perhaps the most defining feature of healthcare risk is its inescapability. In aviation or nuclear energy, we speak of a goal of zero accidents. But in healthcare, zero harm is rarely achievable. Every intervention, even the most routine, carries risk. Even inaction carries risk.
Leveson (2011) explains that risk can arise both from action and inaction. A doctor may face the decision of whether to administer a powerful drug: give it, and risk side effects; withhold it, and risk disease progression. There is no zero-risk path.
As Mukherjee (2015) notes, uncertainty and risk live in “the spaces between facts.” And because every body presents a slightly different combination of biology, psychology, and behavior, every risk assessment is case-specific.
Worse still, what was once considered acceptable risk may become intolerable as knowledge and expectations evolve. For instance, surgeries once thought dangerous are now routine. Patients who would have been turned away decades ago are now viable candidates for transplants, chemotherapy, or advanced imaging. As standards rise, our tolerance for error and complications drops (Rao et al., 2007; Friedrich et al., 2009).
Conclusion: Managing the Unmanageable?
Healthcare poses challenges that are fundamentally different from those in other high-risk industries. Its dependence on human behavior, the irreducibility of uncertainty, the complexity of its systems, and the impossibility of eliminating risk all contribute to making healthcare risk unique.
These challenges require equally unique solutions. Borrowing models from aviation or engineering can help, but they must be adapted to account for the human, uncertain, and ever-changing nature of healthcare work.
Understanding what makes healthcare different is the first step toward making it safer. At BRITE Institute, we aim to equip professionals with the knowledge, tools, and mindsets needed to manage risk in systems that can't simply be engineered into perfection.
References
Dekker, S. (2011). Drift into failure: From hunting broken components to understanding complex systems. CRC Press.
Friedrich, K., Nussbaumer, K., Pöhlmann, C., & Hauenstein, K. H. (2009). Kidney transplantation in the elderly. Nephrology Dialysis Transplantation, 24(4), 1076–1080.
Hollnagel, E., Wears, R. L., & Braithwaite, J. (2015). From Safety-I to Safety-II: A white paper. The Resilient Health Care Net.
Leveson, N. (2011). Engineering a safer world: Systems thinking applied to safety. MIT Press.
Lyons, M., Adams, S., Woloshynowych, M., & Vincent, C. (2004). Human reliability analysis in healthcare: A review of techniques. International Journal of Risk & Safety in Medicine, 16(4), 223–237.
Mukherjee, S. (2015). The laws of medicine: Field notes from an uncertain science. Scribner.
Rao, P. S., Schaubel, D. E., & Saran, R. (2007). Risk-adjusted mortality following kidney transplantation in the United States. American Journal of Transplantation, 7(5), 1098–1106.