A hospital CEO told me last year that she had lost count of the improvement initiatives launched during her tenure. Not because there were too many — because most of them had quietly ended without ever being declared over. The programmes had launched. Some had even produced results. Then, somewhere in the fog between initial success and institutionalised change, they had dissolved. Replaced by newer programmes. Forgotten by the next quarterly review. Indistinguishable, a year later, from the baseline the hospital had been operating at before the programme started.
This pattern is not a feature of her hospital. It is a feature of hospital operations generally, and of organisational change more broadly. The widely cited figure is that around 70% of organisational change initiatives fail to deliver their intended outcomes [1]. That number has a complicated history — it was a throwaway line in a 2000 Harvard Business Review article [2], later repeated as an estimate by John Kotter in 2008 [1], then cited back and forth until it acquired the status of empirical fact despite never having been empirically established. A 2011 academic critique of the figure called it a myth and argued that the variable criteria used across studies make any single failure rate indefensible [3]. The 70% is not a proven number. But the phenomenon it points at — that most organisational improvement efforts do not produce sustained change — is real. It is easy to observe. And it is easier to observe in hospitals than in most other sectors, because hospitals collect operational data that make the reversion visible.
The FLOW layer of the CuraOS framework exists specifically to address this pattern. Not as a change-management consultancy offering, and not as another methodology to install alongside the dozen that have come before. As a diagnostic framework for identifying the specific phase in which hospital improvements typically die, and the specific operational rhythm required to survive that phase.
This is the observation at the foundation of every FLOW-layer conversation: hospitals are rarely missing the improvement. They are missing phase 3. And phase 3 has a name, a cadence, and a specific owner.
The five phases, and which one kills the programme.
Across the operational programmes I have led or observed, hospital improvement initiatives move through a consistent sequence of five phases. The first two are well-understood and routinely executed. The last two are rare and are described openly as exceptional. The third phase is the one almost nobody names, and almost nobody resources.
Phase 1 — Launch. A problem is identified, a programme is initiated, a working group is formed, a timeline is set. This phase works because it rewards the organisation politically — announcing an improvement initiative signals to the board and the wider workforce that leadership is taking the problem seriously. The activation energy is high but the political cost is low. Every hospital I have worked with does this phase competently.
Phase 2 — Early wins. The low-hanging fruit gets picked. A few visible process changes happen. The working group produces a first report. The metric the programme was targeting moves in the right direction, or at minimum stops getting worse. Energy remains high; people are excited; the programme is talked about in corridor conversations and quarterly updates. This is the phase when the leadership team takes photos and when internal newsletters run the case study.
Phase 3 — Institutionalisation. The working group’s original members rotate off. The programme leader’s attention shifts to the next problem. The early-wins metric plateaus. A decision point arrives, and it is invisible: either the hospital converts the programme’s changes into the permanent operating rhythm of the affected departments, or the programme remains a project that continues to require someone actively pushing it. If no one is pushing, the gravitational pull of the prior rhythm drags behaviour back to baseline. This is the phase in which perhaps 90% of programmes die — not dramatically, but through the quiet dissolution of the routines that had been holding the improvement in place.
The pattern has direct empirical support in the healthcare improvement literature. Panella and colleagues’ 2018 cluster-randomised controlled trial across 26 hospitals (N=514 geriatric hip-fracture patients) found that formal clinical-pathway implementation improved process indicators in 18 of 24 measured dimensions but produced no significant effect on any of 13 patient outcomes [8]. The Panella finding is the published version of the pattern described in this post: programmes that reach phase 2 — compliance with process changes — frequently fail to convert that compliance into changed operational reality at the bedside. The gap between process-indicator improvement and patient-outcome improvement is the phase-3 cliff. The hospitals in Panella’s trial reached phase 2 at scale; most did not reach phase 3. The result looks, from outside, like a failed improvement programme; the more accurate diagnosis is that phase 2 succeeded and phase 3 was never attempted.
Phase 4 — Sustain. A hospital that has crossed the institutionalisation cliff now operates with the improvement as part of standard practice. The original working group is gone; the new rhythm is what the department actually does. Metrics remain at the improved level and are monitored through the regular operational cadence rather than through project reports. Roughly one in four programmes that launch reach this phase in any recognisable form.
Phase 5 — Compound. The sustained improvement generates second-order effects. Staff are attracted to a better-running department; clinical outcomes improve because the operational friction is lower; the department becomes a destination for referrals or for internal rotations; junior leaders who worked on the programme carry the rhythm into other departments. Only the small minority of programmes — perhaps one in ten — generate compound effects of this kind, and they take at least two or three years to become visible.
“The difference between phase 2 and phase 4 is not the improvement itself. It is whether someone made the deliberate, unglamorous, politically costly decision to convert a project into the way the department operates.”
Why phase 3 is the graveyard.
Three structural features make institutionalisation the phase where hospital improvements typically die.
It has no natural owner. Launch has a project sponsor. Early wins have a working-group lead. Sustain has the department manager who runs the new rhythm. Compound has, by definition, its own momentum. But institutionalisation sits in the uncomfortable space between project ownership (which is ending) and operational ownership (which has not yet begun). If a named individual does not explicitly own the transition, nobody does, and the transition does not happen. Most hospitals have never assigned this owner, because it requires naming a phase most leadership teams have never named.
It has no visible reward. Launching a programme is visible; early wins are visible; sustain, if it holds for two or three years, is eventually noticed. Institutionalisation is invisible by design — the sign that it has succeeded is that the improvement stops being talked about, because it has become the normal way of operating. Leadership culture in most hospitals rewards visible activity. Phase 3 is the antithesis of visible activity. It requires leaders to spend time on things nobody is looking at, and to not move on to the next announceable initiative when their political calendar says it is time to do so.
It requires operational rhythm discipline. The actual work of institutionalisation is unglamorous. It is the monthly department meeting agenda that keeps the improvement’s metric as a standing agenda item. It is the quarterly audit that asks whether the new routine is still being executed or has quietly dropped out. It is the onboarding document for new staff that describes the improved rhythm as simply “how we do it here” rather than as a programme that was run. It is the boring, structural work of converting a project into a default. Most hospital leadership teams do not have a mechanism for doing this work, because the work is not part of the improvement programme itself — it is part of how the department operates after the programme ends.
The organisational-change literature [4] has long made this point in the language of “refreezing” versus “adopting a new steady state.” The specific language matters less than the underlying mechanism. Whatever you call it, the work happens or it does not. In hospitals, the work overwhelmingly does not.
What the one phase that stops it actually looks like.
The figure below summarises, in one diagram, the specific operational rhythm that distinguishes hospitals that cross the institutionalisation cliff from hospitals that do not. It is not a methodology. It is not a framework to license. It is the set of four operational elements that, when all four are installed together, turn a project into a permanent rhythm.
Four operational elements. All four, or none.
The one phase in practice — what Schlüchtern did.
At Main-Kinzig-Kliniken Schlüchtern, the geriatric-department operational restructuring (2019–2025) worked because the four institutionalisation elements were installed deliberately alongside the operational changes, not after them as an afterthought. The programme had a named owner from the launch of phase 2 through the close of phase 4. The key operational metrics — length-of-stay, weekend discharge rate, Langlieger rate, bed utilisation — appeared as standing items in the departmental operations meeting from the first month and have never been removed. A quarterly audit rhythm, separate from the project controls, checks whether the routines (morning board round, Thursday-Friday discharge huddle [5], coder touchpoint) are still being executed in practice. And the onboarding documentation for new geriatric-department staff describes these routines as the department’s operating rhythm rather than as special programmes.
The operational outcomes, sustained over three consecutive fiscal years under the formal research programme with Prof. Dr. Rainer Sibbel at Frankfurt School [6,7], are in the figures below. Specifically relevant to the FLOW layer argument: the improvements held. At year three after the initial launch, the metrics had not reverted. At year five, with original working-group members long rotated, the rhythm was still in place.
Figure: Schlüchtern key operational metrics, 2022→2025 (reduction and increase panels).
The cross-specialty pattern is the same. An ICU that improves team-composition documentation will revert within eighteen months unless the four institutionalisation elements are installed. A surgical department that implements a handover protocol will see the protocol degrade quietly unless the routine is owned, tracked, audited, and onboarded into. A cardiology unit that redesigns its heart-failure pathway will see the pathway erode within a year unless someone is specifically responsible for the rhythm holding.
The operational reading.

When I walk into a hospital and ask a Chefarzt or a department manager to describe their most recent successful improvement programme, the answer is usually a phase-2 story: what was done, what changed, what the initial results were. When I then ask what has happened in the eighteen months since, the answer is usually either silence or a quiet acknowledgement that the routine has degraded. This is not a failure of the original improvement. It is a failure to install the institutionalisation rhythm. The two are different problems, and they have different solutions.
What to do on Monday.
If you are a Geschäftsführer or department lead reading this and you suspect your hospital’s improvement programmes are dying in phase 3, there is a specific first move that takes about an hour and costs nothing.
Pick the three most recent improvement programmes your hospital has launched. For each, answer four specific questions. Who is currently, by name, responsible for the rhythm holding? At what operational meeting is the programme’s primary metric a standing agenda item, and when was the last time it appeared there? What quarterly audit mechanism, if any, checks whether the original routine is still being executed? And how do new staff learn about the routine when they join the affected department — as a project they have to read about, or as the default way the department operates?
For most hospitals, most programmes, most of the time, the answers to these four questions are: no one specific, no standing agenda, no audit mechanism, and no mention in new-staff onboarding. That is the phase-3 graveyard in concrete operational terms. The programmes that launched may still be labelled as “in progress” on internal dashboards. They are not in progress. They have died.
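The four-question audit above can also be kept as a small checklist artefact. The sketch below is purely illustrative: the class name, field names, and example programme are hypothetical, not part of any FLOW tooling or hospital system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProgrammeAudit:
    """One improvement programme, scored against the four
    institutionalisation elements (all names illustrative)."""
    name: str
    owner: Optional[str]                    # who, by name, owns the rhythm
    standing_agenda_meeting: Optional[str]  # meeting where the metric is a standing item
    audit_cadence: Optional[str]            # e.g. "quarterly"
    in_onboarding: bool                     # routine described as "how we do it here"?

    def missing_elements(self) -> list[str]:
        """List which of the four elements are not in place."""
        missing = []
        if not self.owner:
            missing.append("named owner")
        if not self.standing_agenda_meeting:
            missing.append("standing agenda item")
        if not self.audit_cadence:
            missing.append("audit mechanism")
        if not self.in_onboarding:
            missing.append("onboarding text")
        return missing

    def phase3_installed(self) -> bool:
        # All four elements, or none: partial installation does not count.
        return not self.missing_elements()

# The typical phase-3 graveyard answer set: no one, nowhere, never, unmentioned.
typical = ProgrammeAudit("discharge programme", None, None, None, False)
print(typical.phase3_installed())   # False
print(typical.missing_elements())
```

Running the audit for three programmes takes minutes; the point of the exercise is that any non-empty `missing_elements()` list marks a programme that is still a project, not a rhythm.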
The second move, once you have the answers for three programmes, is to pick the one that matters most clinically or financially and assign the four elements to named owners by the end of the week. The named-owner assignment is the single highest-leverage decision available in the FLOW layer. Everything downstream flows from it: the agenda item becomes the owner’s responsibility to maintain, the audit mechanism becomes the owner’s responsibility to commission, the onboarding text becomes the owner’s responsibility to write and maintain. Without the owner, nothing else gets done. With the owner, the other three elements follow structurally within a few weeks.
The third move, only after the first programme has been stabilised, is to install the institutionalisation rhythm as a standard practice: every new improvement programme launched in the hospital must, at the point of launch, specify the named owner for phase 3, the standing agenda location, the audit cadence, and the onboarding text. Programmes that cannot specify these four elements at launch are not authorised to proceed. This rule, applied consistently, converts the hospital from a launcher of improvements into a sustainer of improvements, which is a categorically different operational state.
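The launch rule can be expressed as a simple gate: a proposal is authorised only if it names all four institutionalisation elements up front. The sketch below is an illustration under assumed names; the field keys and the function are hypothetical, not an existing system.

```python
# Illustrative launch gate for new improvement programmes.
# A proposal must specify all four institutionalisation elements at launch.
REQUIRED_ELEMENTS = (
    "phase3_owner",     # named individual, not a role
    "standing_agenda",  # which operational meeting carries the metric
    "audit_cadence",    # e.g. "quarterly"
    "onboarding_text",  # draft wording for new-staff onboarding
)

def authorise_launch(proposal: dict) -> tuple[bool, list[str]]:
    """Return (authorised, missing_elements) for a launch proposal."""
    missing = [e for e in REQUIRED_ELEMENTS if not proposal.get(e)]
    return (not missing, missing)

# A proposal that names only two of the four elements is not authorised.
ok, missing = authorise_launch({
    "phase3_owner": "Dr. M. Example",
    "standing_agenda": "monthly ops meeting",
})
print(ok, missing)   # False ['audit_cadence', 'onboarding_text']
```

Applied consistently, the gate moves the four-element conversation from the end of a programme, where it is usually too late, to the point of launch.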
The 70% figure will remain an estimate, debated in academic papers, cited and recited without ever being empirically settled [2,3]. The underlying phenomenon will not change. Most improvements, most hospitals, most of the time, will continue to die in phase 3. The hospitals whose improvements hold will be the ones that named phase 3, owned phase 3, and built the operational rhythm to survive it.
The one phase that stops the dying has a name. It takes a week to begin installing. And it is the entire game.
Sources cited in this post.
- Kotter JP. A Sense of Urgency. Boston (MA): Harvard Business Press; 2008.
- Beer M, Nohria N. Cracking the code of change. Harv Bus Rev. 2000 May–Jun;78(3):133–141.
- Hughes M. Do 70 per cent of all organizational change initiatives really fail? J Organ Change Manag. 2011;24(4):451–464. DOI: 10.1108/09534811111158903
- Litvak E, Long MC. Cost and quality under managed care: irreconcilable differences? Am J Manag Care. 2000 Mar;6(3):305–312.
- Bechir G. Reducing weekend hospital discharge delays without seven-day coverage by leveraging Thursday and Friday planning. Cureus. 2025 Jul 8;17(7):e87526. DOI: 10.7759/cureus.87526
- Main-Kinzig-Kliniken Schlüchtern. Operational data of the geriatric department, 2019–2025. Internal records, available on request.
- Matoski N, Sibbel R. The FLOW methodology: operational transformation of a geriatric department — quantitative evidence from a 7-year programme. Manuscript in preparation. Frankfurt School of Finance & Management; 2026.
- Panella M, Seys D, Sermeus W, Bruyneel L, Lodewijckx C, Deneckere S, et al. Minimal impact of a care pathway for geriatric hip fracture patients. Injury. 2018 Aug;49(8):1581–1586. DOI: 10.1016/j.injury.2018.06.005
A note on methodology.

The “70% fail” statistic is explicitly treated in this post as an estimate rather than a proven number, following the academic critique by Hughes (2011) [3]. The specific phase-by-phase percentages shown in the hero figure (85% reach early wins, ~25% sustain, ~10% compound) are illustrative of the pattern observed across the author’s practice and across the change-management literature cited; they should not be read as precise empirical frequencies. The Schlüchtern operational data (LOS reduction, throughput, case volume, weekend discharge rate, bed utilisation) are verified from operational records across 2019–2025 and used in the formal research programme with Prof. Dr. Rainer Sibbel. Claims about cross-specialty applicability (ICU handover, surgical protocols, cardiology pathways) reflect practice observation rather than formal research; the Schlüchtern research programme specifically covers the geriatric case mix.