ANCHOR · The People Layer

Clinical M&M, Operational M&M.

The morbidity-and-mortality review is universal. Every department in every teaching hospital runs one every month. The equivalent review for operational failures exists almost nowhere — and the asymmetry tells you more about what the profession takes seriously than any strategic document ever will.

FIGURE — Two reviews: one has a room, the other has no room.

Clinical M&M: monthly, scheduled, structured. Case presentation (15 min); decision points reviewed (15 min); clinical learning identified; documented, actioned, followed up. Every department. Every month. Without exception.

Operational M&M: rarely scheduled, rarely structured. Discharge delay analysis — no venue. Coding miss review — no venue. Handover failure review — no venue. Readmission pattern review — no venue. The events happen. The review does not.

The asymmetry is not neutral. It reveals what the profession has decided is serious and what it has decided to tolerate.
Illustrative. The example topics for operational review (discharge delay, coding miss, handover failure, readmission pattern) are representative of the events that most commonly lack any structured review venue.

Every teaching hospital in Germany runs a clinical morbidity-and-mortality conference. Every department. Every month. Without exception. The ritual is deeply embedded in medical professional culture: a case is presented, decision points are examined, the clinical reasoning is interrogated, learning is identified, and the findings are documented and fed back into departmental practice. The conference is not primarily punitive — it is a structured venue for the profession to examine its own errors, systemically and collegially, with the shared understanding that good clinicians make mistakes and the purpose of the review is to catch the patterns before they compound. The clinical M&M is one of the most valuable cultural inheritances the hospital system has.

The equivalent review for operational failures does not exist. In almost no hospital I have worked with has there been a scheduled, structured, monthly venue in which the operational errors of the preceding month are systematically examined. Discharge delays that compounded into bed shortages. Coding misses that cost the department six-figure sums. Handover failures that propagated through night-shift decisions. Readmissions traceable to premature discharge. These events happen every month in every department. The clinical M&M does not cover them; its scope is the patient’s clinical trajectory, not the operational trajectory of the decisions around them. No other venue covers them either. They are noted informally, if at all, and disappear into institutional memory without being examined.

This asymmetry is one of the most revealing features of German hospital culture. What a profession examines, it takes seriously. What it refuses to examine, it has decided to tolerate. The absence of an operational M&M is not a scheduling oversight; it is a statement. The ANCHOR layer of CuraOS treats the installation of this missing review as one of the highest-leverage operational moves a department can make, not because the review itself is difficult to run but because its existence changes the profession’s relationship with its own operational failures.

The observation: a department that runs a clinical M&M and does not run an operational M&M has decided, implicitly, that clinical errors deserve collegial examination and operational errors do not. The decision is almost never articulated. The consequences shape every ANCHOR-layer variable the department has.

Why the clinical M&M works.

The clinical M&M is effective for specific structural reasons that are worth examining, because the same structural reasons explain why its operational counterpart would be effective if it existed.

It is scheduled. The monthly slot is in the calendar. Attendance is expected. The event happens whether anyone champions it or not. Contrast with informal operational debriefs, which happen only when someone specifically calls them — and almost never after the immediate crisis has passed.

It is structured. The format is known. The case is presented in a specific order. The decision points are examined with a specific set of questions. The discussion has a defined endpoint. Contrast with operational post-mortems, which often ramble because nobody has specified what the format is.

It is collegial. The expectation is shared learning, not individual blame. Clinicians present their own cases without fear that attending them will end careers. The culture has absorbed, over decades, the principle that good clinicians make errors and that examining those errors together is how the profession improves. Contrast with operational review, which often carries an implicit blame charge because no parallel culture has been built.

It produces actioned output. Findings are documented, fed back into clinical guidelines, incorporated into teaching. The review is a learning instrument with institutional memory, not a one-off conversation. Contrast with operational issues, which are often noted once in a crisis email and then never revisited.

Each of these structural features can be replicated for operational review. None of them is difficult to install. What is difficult is the initial cultural shift — the acceptance that operational failures deserve the same institutional seriousness as clinical failures. Once the shift is made, the mechanics are almost trivial.

What an operational M&M reviews.

The operational M&M does not replace the clinical one; it sits alongside it, covering the events the clinical conference cannot and does not. The scope is operational failures that affected patient care, departmental performance, or both — with the understanding that operational and clinical failures are not cleanly separable in practice, and that many patient outcomes have operational determinants that the clinical M&M is not structurally equipped to surface.

A useful classification of the events that belong in operational review uses two dimensions: whether the event was avoidable, and whether it was noticed at the time. The four-quadrant taxonomy clarifies which events the review primarily exists to surface and which are in scope as secondary material.

FIGURE — The four quadrants: what operational M&M actually reviews.

Avoidable + noticed (top left): the cases the team already knows went wrong. Primary material for the review. Learnings are systemic.

Unavoidable + noticed (top right): events the team processed as “just how it is.” Often partly avoidable on closer structural examination.

Avoidable + unnoticed (bottom left): the most expensive quadrant. Structural errors hiding in unremarkable outcomes. Only surfaced by review.

Unavoidable + unnoticed (bottom right): the baseline noise of hospital operations. Not the target of M&M, but useful for calibration.

The review exists to migrate events leftward and upward — from unnoticed to noticed, from unavoidable to avoidable.
The taxonomy is a practice framework rather than a formal classification. Its value is in distinguishing the quadrant where most operational learning lives (avoidable + unnoticed) from the quadrant that is most salient but least instructive (avoidable + noticed, which the department often already knows).

The avoidable + noticed quadrant (top left) is the starting material for any operational M&M. These are the events the department already knows went wrong — the discharge that took an extra three days because of pharmacy timing, the coding miss that was caught at audit, the handover that led to a delayed escalation. The team knows these cases; the review’s function is to examine them systemically rather than letting them disappear into institutional memory as isolated incidents. The learning is usually about the structural conditions that allowed the event to occur, not about the specific individual decisions that produced it.

The avoidable + unnoticed quadrant (bottom left) is where most of the review’s value lives. These are the events that the department did not experience as failures at the time — the operational errors hidden inside outcomes that looked unremarkable. A patient discharged three days later than necessary, without anyone having noticed that the pathway would have supported earlier discharge. A DRG miscoded in a way that reduced revenue but nobody caught because the case looked clinically ordinary. A handover that missed a key detail but no adverse event resulted, so the handover quality was never flagged. These events do not surface through incident reporting because no incident occurred. They surface only through structured retrospective review of the kind the operational M&M exists to provide.

The unavoidable + noticed quadrant (top right) is the material the team has already processed as “just how it is” — weekend admission delays, seasonal bed pressure, the consequences of external vendor timing. On closer structural examination, many of these events turn out to be partly avoidable when reframed as design problems rather than facts of operational life. The capital filter of Post 19 applies here: is the demand profile actually uniform? Is the process serial where it could be parallel? Is the bottleneck the asset or the handoffs? The M&M is the venue in which these reframings happen.

The unavoidable + unnoticed quadrant (bottom right) is the baseline operational noise of hospital life. It is not the target of the review, but it is useful for calibration — the team benefits from being able to distinguish genuinely unavoidable operational variability from apparent unavoidability that is actually accumulated neglect. The calibration comes from the discipline of having the other three quadrants examined regularly.
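If a department keeps even an informal log of operational events, the two-axis taxonomy can be sketched as a small classifier. This is an illustrative sketch in Python, not part of any hospital's tooling; the type name, fields, and example events are assumptions, not drawn from any case record.

```python
from dataclasses import dataclass

@dataclass
class OperationalEvent:
    description: str
    avoidable: bool  # could a structural change have prevented the event?
    noticed: bool    # was the event experienced as a failure at the time?

def quadrant(event: OperationalEvent) -> str:
    """Map an event onto the avoidable/noticed grid."""
    a = "avoidable" if event.avoidable else "unavoidable"
    n = "noticed" if event.noticed else "unnoticed"
    return f"{a} + {n}"

# Illustrative events only.
events = [
    OperationalEvent("discharge took 3 extra days; team flagged it",
                     avoidable=True, noticed=True),
    OperationalEvent("DRG miscoded; case looked clinically ordinary",
                     avoidable=True, noticed=False),
    OperationalEvent("weekend admission delay",
                     avoidable=False, noticed=True),
]

for e in events:
    print(f"{quadrant(e):26} {e.description}")
```

The unnoticed quadrants cannot be filled from incident reports alone, since by definition no incident was logged; the classification happens retrospectively, in the review itself.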

“The events that do not reach incident reporting because no incident occurred are the events that the operational M&M exists to surface. The rest of the hospital’s feedback systems are built around events that announced themselves. The M&M is for the events that did not.”

How to run it.

The operational M&M is structurally simple. Ninety minutes per month. Two to three cases per session, selected by the deputy chief or a rotating senior from the preceding month’s operational events. The structure mirrors the clinical M&M closely: case presentation, structural decision points, contributing factors, learning identified, action items documented. The cultural principle is the same: shared learning, not individual blame.

The selection of cases is the part that most determines whether the review is useful. A useful month’s selection typically includes one case from the avoidable + noticed quadrant (to maintain continuity with what the team already knows went wrong) and one from the avoidable + unnoticed quadrant (to introduce the category of operational failures the team has not previously surfaced). A third case, drawn from the unavoidable + noticed quadrant, provides the structural-reframing exercise. Selecting three cases per session from the avoidable + noticed quadrant alone — which is the default tendency, because those are the cases everyone remembers — quickly exhausts the material and makes the review feel repetitive.
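The selection heuristic above (one case for continuity, one to surface the unnoticed category, one for structural reframing) can be written down directly. A sketch under assumed event tagging; nothing here is prescribed by the methodology beyond the one-per-quadrant rule.

```python
import random

# Events tagged with their taxonomy quadrant; tags and names are illustrative.
TARGET_QUADRANTS = [
    "avoidable + noticed",    # continuity: the team already knows it went wrong
    "avoidable + unnoticed",  # the category the review exists to surface
    "unavoidable + noticed",  # the structural-reframing exercise
]

def select_cases(month_events, rng=random):
    """Pick one case from each target quadrant for the monthly session.

    month_events: list of (description, quadrant) pairs.
    Returns up to three cases; an empty quadrant yields a gap, which is
    itself worth noting in the session summary.
    """
    selection = []
    for q in TARGET_QUADRANTS:
        pool = [e for e in month_events if e[1] == q]
        if pool:
            selection.append(rng.choice(pool))
    return selection
```

Deliberately absent is any second draw from the avoidable + noticed quadrant: over-selecting from it is the default tendency, and it quickly exhausts the material.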

Documentation is essential. Each session produces a one-page summary: the cases reviewed, the structural contributing factors identified, the action items, and the named owner for each action item. The summary is circulated to the department and archived. Over time, the archive becomes one of the most valuable operational learning resources the department has — a written record of the operational patterns the team has examined and the structural changes it has committed to as a result.
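The one-page summary has a fixed shape (cases reviewed, structural factors, action items with named owners), which can be captured as a minimal template. The field names and the rendering are assumptions for illustration, not the programme's actual format.

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    description: str
    owner: str  # every action item carries a named owner

@dataclass
class SessionSummary:
    """One-page summary produced at the end of each monthly session."""
    month: str
    cases_reviewed: list[str]
    structural_factors: list[str]
    action_items: list[ActionItem]

    def as_text(self) -> str:
        lines = [
            f"Operational M&M -- {self.month}",
            "Cases reviewed: " + "; ".join(self.cases_reviewed),
            "Structural contributing factors: " + "; ".join(self.structural_factors),
            "Action items:",
        ]
        lines += [f"  - {a.description} (owner: {a.owner})"
                  for a in self.action_items]
        return "\n".join(lines)
```

Archived month after month, these one-pagers are what turns the review into institutional memory rather than a sequence of conversations.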

The senior clinician who chairs the review matters. The chair signals, through behaviour, whether the review is genuinely a learning venue or whether it has drifted toward performance assessment. The chair who asks “what was the structural condition that made this likely” creates a different review than the chair who asks “why did you do it this way.” Both questions have their place; one is the operational M&M question, the other is the performance conversation, and conflating them destroys both.

Why the review does not exist despite being obvious.

Three structural reasons keep hospitals from installing the operational M&M.

Nobody is named responsible for operational learning. The clinical M&M has its natural owner: the chief of service or the chief’s deputy, embedded in a century-old professional culture that expects it. No equivalent role exists for operational learning. The function is orphaned across medical leadership, nursing leadership, controlling, and quality management, with no single owner who treats it as a core responsibility. Without the named owner, the review does not get scheduled, and if scheduled, it does not get defended against the ordinary pull of competing priorities.

Operational failures feel less serious than clinical failures. A discharge delayed by three days does not feel, viscerally, like a clinical error that harmed a patient. The intuition is misleading. Operational failures aggregate into clinical consequences — delayed discharges into bed pressure into emergency-department boarding into delirium rates into adverse events — but the linkage is distributed and statistical rather than individual and causal. The review venue that makes the linkage visible does not exist, so the linkage is not made, so the operational failures continue to feel less serious than they are.

The review threatens to make visible the discomfort the department has been tolerating. Operational failures are often the result of structural conditions that the department knows about and has learned to work around. Bringing those conditions into a structured review means acknowledging the workarounds, which means acknowledging that leadership has been tolerating them. The implicit acknowledgement is uncomfortable. The discomfort is exactly the operational signal the review would produce. Avoiding the discomfort means avoiding the signal.

What the Schlüchtern ANCHOR work showed on operational review.

At Main-Kinzig-Kliniken Schlüchtern, the geriatric department installed a monthly operational M&M in late 2020 under the broader operational programme with Prof. Dr. Rainer Sibbel at Frankfurt School [1,2]. The review ran alongside the existing clinical M&M on the last Wednesday of each month. The chair rotated among senior staff. The initial case selection was heavily weighted toward the avoidable + noticed quadrant — the team was comfortable examining cases that had obviously gone wrong. Over the first year, the selection gradually shifted toward the avoidable + unnoticed quadrant, as the team developed the analytical skills to surface cases that did not announce themselves as failures.

The operational value of the review compounded over years. The accumulated documentation became a library of structural patterns the department had worked to address: pathway misalignments, handover gaps, coding-capture failures, discharge-timing constraints. Each pattern, once identified in the review, produced structural changes that prevented its recurrence. The rate at which new cases entered the avoidable + noticed quadrant declined as earlier cases in the avoidable + unnoticed quadrant were systematically addressed. The review produced a specific kind of institutional memory that no other forum was producing.

The secondary effect was cultural. Junior staff in a department that runs an operational M&M internalise a different relationship with operational failure than junior staff in a department that does not. Operational failures are seen as learning material rather than as embarrassments to be suppressed. The suppression, when it happens in departments without the review, is itself an operational cost — the failures recur because they are never examined — but the cost is invisible from outside the department. The review makes it visible.

The operational reading.

A department that has installed an operational M&M alongside its clinical M&M operates with different institutional memory and a different culture around operational failure than a department that has not. The difference is not in the individual cases reviewed in any single session; it is in the cumulative effect of running the review year after year, which reshapes what the department takes seriously and what it examines. The review is modest to install. Its effects compound across the whole ANCHOR layer.

What to do on Monday.

Schedule the first operational M&M for the last Wednesday of next month. Ninety minutes. Invite the senior clinical staff, the nursing leadership, and the departmental coder. Do not invite external controlling or quality-management staff for the first six sessions; their presence changes the cultural character of the review before the culture has stabilised.

Select three cases for the first session. One from the preceding month that the team already knows went wrong operationally (avoidable + noticed). One that on reflection looked routine but turns out to have had an operational misstep hidden in it (avoidable + unnoticed — this case will be harder to identify on the first pass and will get easier with practice). One that everyone considers unavoidable but that rewards structural reframing (unavoidable + noticed, with a challenge to whether the unavoidability is really structural or just conventional).

Chair the first session yourself. Set the cultural tone: learning, not blame. Ask the three operational questions in order: what was the structural condition that made this likely; what did the team do in response; what change would make this less likely next time. Document the findings in a one-page summary by the end of the following day.

Run the review monthly for six months before evaluating it. The first three sessions will feel slightly awkward — nobody in the room has run this before, the case selection will be imperfect, the questions will be refined with practice. By the sixth session, the format will be stable and the cultural norms will be established. At that point, the review continues running largely on its own, requiring only the discipline of scheduling it and selecting the cases.

Do not try to run both M&Ms in the same session. The clinical M&M has its own culture, its own participants, and its own pacing. Combining them produces a worse version of each. Run them separately, same week if convenient but different days.

The absence of the operational M&M is the single most revealing operational feature most German hospital departments have. The presence of the review, sustained for years, is one of the most operationally consequential changes a department can make. The installation costs ninety minutes a month. The return compounds across the whole ANCHOR layer and across every layer downstream of it.

Clinical failures already have their venue. Operational failures deserve one too. The profession has not yet built that venue. Your department can.

References

Sources cited in this post.

  1. Main-Kinzig-Kliniken Schlüchtern. Operational data of the geriatric department, 2019–2025. Internal records, available on request.
  2. Matoski N, Sibbel R. The FLOW methodology: operational transformation of a geriatric department — quantitative evidence from a 7-year programme. Manuscripts in preparation. Frankfurt School of Finance & Management; 2026.

A note on methodology.

The four-quadrant taxonomy (avoidable/unavoidable × noticed/unnoticed) is a practice framework developed across operational engagements and is not derived from a formal classification literature. The Schlüchtern operational M&M implementation (late 2020 onward, monthly, last Wednesday) is from the geriatric department’s internal records under the formal research programme with Prof. Dr. Rainer Sibbel at Frankfurt School; the causal reading that the review produces a distinct kind of institutional memory reflects the author’s judgment of the programme’s dynamics rather than a controlled comparison. The specific cultural claims (that departments with an operational M&M develop different relationships with operational failure) are observational patterns across engagements.

Phase A · Operational Scoping

Ten consultation slots per quarter.

Phase A is a focused operational scoping engagement. It runs four weeks, produces a structural diagnosis across the five layers, and ends with a specific recommendation. Ten engagements per quarter — currently booking Q3 2026.