
Between the monitor and the bedside: a metamodern reflection 

Takeaway

As clinical systems, assisted by AI, grow more accurate and efficient, the risk isn't the replacement of healthcare professionals. It's the displacement of the clinician judgment, responsibility, and physical presence that patients and families need most when outcomes are uncertain.

Lifelong Learning in Clinical Excellence | February 12, 2026 | 6 min read

By Adam Schiavi, MD, PhD, MS, Johns Hopkins Medicine 

 

Patient vignette, Part 1 

The following encounter is a composite, shared not as a case study but as a reflection on how contemporary systems shape clinical meaning. 

 

Mr. L was a 41-year-old man admitted to the intensive care unit after being found unresponsive at home by his daughter, with an unknown downtime. He was intubated during transport, received one cycle of cardiopulmonary resuscitation with return of spontaneous circulation after three minutes, and required vasopressor support on arrival to the neuro-intensive care unit. Neuroimaging demonstrated diffuse hypoxic-ischemic injury without findings sufficient to establish a definitive prognosis. He underwent targeted temperature management at 36°C for 24 hours. Continuous electroencephalography showed no epileptiform activity after rewarming. Over several days, cardiopulmonary parameters stabilized, and bedside monitors appeared reassuring, but neurologic examination showed no meaningful improvement. Prognostic assessment incorporated serial examinations, imaging, validated clinical scores, and integrated decision-support outputs within the electronic health record and neuroprognostication dashboard. AI-assisted modeling consistently estimated a low probability of meaningful neurologic recovery.

 

There’s a particular tension that becomes visible only when one works close enough to power that its consequences cannot be ignored. It’s not the familiar tension between ignorance and knowledge, or between tradition and innovation. It lies instead in the gap between what the systems we create are capable of doing and what those actions mean for the people who must live with their consequences. 

 

Modernism, postmodernism, and metamodernism 

Modernism taught us to build. Postmodernism taught us to doubt what we had built. Metamodernism lives in the oscillation between these impulses: a sincere desire to improve the world paired with an awareness that improvement can erase what it intends to save. This essay is written from within that oscillation. 

 

Within clinical practice 

I’m an anesthesiologist and intensive care physician working in a clinical system that’s constantly evolving and perpetually exposed to risk. At the bedside and in the classroom, my work centers on preventing catastrophes and making decisions under uncertainty, time pressure, and moral weight. These aren’t neutral domains. Lives are altered by the decisions we make—and increasingly by the architecture of decision-making itself: the tools, structures, and systems that shape how choices are framed, constrained, and ultimately owned. 

 

Over time, a question has surfaced: How do we build systems powerful enough to change outcomes without hollowing out the human meaning of those outcomes? 

 

This question doesn't arise from abstraction alone. It appears at the bedside when patient monitors display confidence while families live with uncertainty. It appears when production pressure conflicts with clinical judgment, and decision-making responsibility becomes decoupled from outcome accountability. It also arises from the reality that artificial intelligence systems can recognize clinical patterns and make decisions faster than any human, yet cannot perceive the suffering those patterns represent.

 

As systems grow more capable, there’s a temptation to treat uncertainty as a defect to be eliminated. In medicine, however, uncertainty isn’t an epistemic failure. It’s a defining feature of caring for vulnerable human beings facing mortality. When systems erase uncertainty entirely, they do more than reduce error; they risk displacing agency by narrowing where judgment, responsibility, and moral choice are allowed to reside. 

 

Metamodern ethics doesn’t reject technology, nor does it romanticize human fallibility. Instead, it asks a harder question: What must remain irreducibly human even as everything else becomes scalable? 

 

The answer, I suspect, isn’t judgment alone. Judgment can be trained, assisted, and perhaps one day surpassed or replaced. What must remain is responsibility: a human standing behind a decision when its consequences unfold unpredictably, when outcomes are tragic despite being justified, when harm occurs without error, and when no system, protocol, model, or metric can absorb the moral weight of what has occurred. In an age of increasingly intelligent systems, this stance may be the most important thing medicine cannot afford to automate. 

 

Fears that machines will replace clinicians are, in this light, somewhat misplaced. The more immediate danger is that clinicians will become curators of outputs rather than authors of decisions. Harm will not disappear; it will simply become harder to locate. Decisions will still be made, outcomes will still follow, and explanations will still be available—but ownership will quietly evaporate. 

 

The response cannot be to slow innovation or to defend inefficiency for its own sake. Neither position is tenable. The work ahead is to design systems that are efficient but incomplete: systems that preserve moral friction and leave room for pause. We must resist treating forward motion as its own justification. Narrative must be protected without being treated as infallible, and efficiency must be pursued without being mistaken for moral sufficiency.

 

If there’s hope in metamodernism, it lies in the willingness to let these tensions remain unresolved. Progress can proceed without allowing success to erase responsibility. We can continue to build while remaining uneasy about what building costs. We can optimize without pretending optimization is synonymous with goodness. Most importantly, we can calculate accurately while still allowing space for grief, hesitation, and moral repair. Not every tension deserves resolution, and not every discomfort is a problem to be engineered away. 

 

Our systems are already powerful. They increasingly shape how decisions are framed, how risks are tolerated, and how responsibility is distributed before any individual clinician acts. The harder question is what remains visible when those systems work as intended—when outcomes improve, errors decline, and no obvious failure demands explanation. In those moments of apparent success, will there still be room to ask who bears responsibility, whose values are being carried forward, and what forms of human presence were quietly made unnecessary? Or will success itself become the final justification, leaving us with better outcomes and an uneasy sense that we no longer recognize ourselves in what we have made? 

 

Patient vignette, Part 2 

During family meetings, physiologic stability and probabilistic projections were readily available, but there was less space to address questions about who the patient was, what outcomes he would have considered acceptable, and how to reconcile normal-appearing monitors with the possibility that continued treatment no longer served his values. The clinical issue was not whether withdrawal of life-sustaining treatment could be justified, but how responsibility was distributed when decisions appeared to arise from aggregated data rather than an identifiable moment of judgment. In the absence of error or system failure, the moral weight of the outcome remained, even as its authorship became increasingly difficult to locate. Family concerns increasingly reflected this dissonance, including questions about nonclinical influences such as bed availability or organ donation, illustrating how system-level clarity can unintentionally complicate trust and meaning at the bedside. 

 

Things to consider: 

1. When decision-support tools and predictive models perform well, notice how they shape not only recommendations, but also the timing, tone, and authority of conversations at the bedside.

 

2. Remaining the “human in the loop” means more than reviewing outputs; it requires explicitly claiming authorship of decisions that carry moral and emotional consequences. 

 

3. Be alert to the dissonance families experience when physiologic stability and reassuring monitors coexist with discussions of poor prognosis, and address the optics of care directly rather than assuming the data will reassure.

 

4. Remember that patients and families are often less interested in probabilities than in how a trusted clinician interprets those probabilities in light of values, identity, and what it means to live or die well.

 

This piece expresses the views solely of the author. It does not necessarily represent the views of any organization, including Johns Hopkins Medicine.