Blueprint Digital NHS Wales

Psychological Safety

Every credible body of research — Amy Edmondson's twenty-five years of work, Google's Project Aristotle, the DORA State of DevOps reports, Westrum's organisational typology — reaches the same conclusion: high-performing technology delivery requires teams that feel safe to surface bad news, challenge senior decisions, and admit uncertainty. DHCW's documented culture is the structural inverse: bullying, harassment, suppression of dissent, retaliation against whistleblowers, exhaustion, undertrained staff, leadership unable to receive challenge, and the deliberate use of vacancy and overtime as financial instruments. The Compassionate Leadership Pledge, approved in fifteen seconds at the same meeting where 65% staff burnout was reported, is one exhibit in a wider inversion. No Blueprint intervention can land at scale under these conditions. Psychological safety is the prerequisite the rest of the analysis assumes.

Why a Blueprint about digital health needs to talk about psychological safety

Every credible body of evidence on high-performing technology delivery reaches the same conclusion: it depends on teams that feel safe enough to surface bad news, challenge senior decisions, admit uncertainty, raise risks before they become incidents, and disagree with their leaders without career penalty. Where that condition exists, organisations ship. Where it does not, they perform theatre.

This finding is not a soft-skills aside. It is the structural precondition on which the Blueprint’s other interventions depend. Competent leadership, radical transparency, portfolio ruthlessness, embedded delivery teams, multi-year funding — none of these mechanisms land under a culture that systematically punishes the people who would make them work.

This page sets out the evidence base, documents how DHCW’s actual culture stands as its inverse, and identifies psychological safety as the prerequisite that the rest of the Blueprint assumes.

The evidence base

The research on psychological safety and technology delivery is unusually convergent.

Amy Edmondson, Harvard Business School, has spent twenty-five years documenting the same finding across hospitals, manufacturing, software firms, and government departments: teams whose members feel safe to speak up consistently outperform teams whose members do not. Her 1999 paper Psychological Safety and Learning Behavior in Work Teams established the construct; her 2018 book The Fearless Organization synthesises the evidence. The finding has been replicated across hundreds of studies and is treated as foundational in modern organisational research.

Google’s Project Aristotle (2012-2015) studied 180 internal teams to identify what made the highest-performing ones effective. The headline finding — widely reported at the time and summarised in the Wikipedia article on psychological safety — surprised the research team: the single strongest predictor of team performance was psychological safety, not seniority, tenure, technical skill, or even the specific people on the team. Teams that felt safe to take risks and admit failure shipped more, learned faster, and stayed together.

The DORA State of DevOps research (2014-present), now run by Google Cloud, has consistently identified Westrum’s typology of organisational culture as a top predictor of software-delivery performance. Westrum (2004) categorised organisations as pathological (information is hidden, messengers shot, responsibility shirked), bureaucratic (information is ignored, messengers tolerated, responsibility narrowed), or generative (information is sought, messengers trained, responsibility shared). Generative cultures correlate with elite delivery performance — faster lead time, more frequent deployment, lower change-failure rate. Pathological cultures correlate with the opposite.

The pattern across these three independent research traditions is the same: high-performing technology organisations are not just better at recruiting; they are structurally different in how they treat the people who deliver. The structural difference is psychological safety.

What this looks like in practice

Psychological safety is not the absence of expectation. It is a culture of trust and responsibility at every level in the organisation, starting with the leadership — and that is a demanding standard, not a permissive one. The point is not that staff should be asked less. It is that the conditions under which more is asked of them are made coherent.

That coherence has five operating components, and each is testable against the published record:

  1. Trust runs both ways. Leaders trust staff to surface bad news, raise risk, disagree with senior decisions, and act on judgement; staff in turn trust that the published record will reflect what they said in the room and that disclosures will not produce career penalty. Where either half of that trust fails, the organisation degrades to the manufactured-narrative pattern documented at L6: The Manufactured Narrative.
  2. Responsibility starts with the leadership. The cultural conditions that staff experience are produced by the choices leaders make. A workforce cannot generate its own psychological safety from below. It can only respond to whether leaders accept challenge, publish accurate metrics, admit mistakes, and act on the implications.
  3. It is legitimate to expect more from staff — but only when they are equipped. Higher expectations require, as their precondition: realistic resourcing (not vacancy savings designed to fail), the right tools and training, a clear vision of what is being built and why, sufficient autonomy to make decisions inside their role without escalation theatre, and recognition (financial and otherwise) tied to delivery rather than to credential accumulation or loyalty.
  4. Autonomy is paired with accountability, not with control. Teams given decision rights inside published technical standards consistently outperform teams whose decisions are routed through approval chains designed for procurement risk rather than delivery speed. This is the structural mechanism that the Target Architecture — federation rather than monopoly — is designed to realise inside NHS Wales.
  5. Reward tracks delivery, not narrative. Promotion, recognition, fellowships, and credentialing should flow toward the people whose work has produced measurable outcomes — not toward the people who present well at boards or accumulate professional titles during the period their programmes are failing.

These components describe the structural condition the Blueprint relies on. The next section sets out, point by point, the documented inverse of each.

What the inverse condition looks like at DHCW

Psychological safety has an opposite. What follows is the documented condition at DHCW, mapped against the ten markers of an unsafe organisational culture.

1. Absence of trust. The published board record is curated to remove signals of failure — the knowledge graph documents 107 distinct sanitisation instances across DHCW’s board and committee minutes, plus a further 237 passages identified as showing hiding intent. Curation ratios in some published transcripts fall below 11%. Where the gap between what is said in the room and what is recorded is this wide, no one inside the building can trust the published account, and no one outside can trust the organisation. The detail sits at L6: The Manufactured Narrative.

2. Bullying. Glassdoor employee reviews describe a “horrendous culture of bullying with management sweeping any issues under the carpet.” The pattern is documented across reviews and corroborated by witness testimony.

3. Harassment and retaliation. Multiple Employment Tribunal cases have arisen from DHCW; the pattern documented at L9: The Whistleblower Suppression Loop shows individuals who raised concerns about technical capability, programme realism, or the integrity of the published record being subjected to pretextual disciplinary processes that did not follow the organisation’s own policies. Departed roles were backfilled with downgraded versions on lower bands: the post survived, but the oversight function did not.

4. Suppression of dissent. External information sources have been blocked on NHS Wales network devices — including carenhs.org, a site composed entirely of publicly sourced material from Senedd proceedings and Audit Wales reports. Whistleblowing statistics, disciplinary data, and staff-leaver analyses are published nowhere. Where employees cannot read external criticism and cannot see how concerns are handled, dissent is structurally disabled before it can form. The dedicated treatment is at L10: The Information Fortress.

5. Targeted action against named individuals. Specific individuals who raised concerns have been managed out via pretextual processes; specific roles have been downgraded after the postholder departed; specific protected disclosures have been met with disciplinary investigation. The legal threshold described at L11: Captured Governance — wilful misconduct serious enough to amount to an abuse of public trust — applies to a series of named actions, not to a generalised culture failure. The structural mechanism that protects the executives carrying out these actions from internal accountability — the pre-credentialled patronage pipeline running from ABUHB through Goodall and Paget — sits at L8: The Loyalty Selection Loop.

6. Lack of safety to speak up. The Speak Up Guardian function exists on paper. In practice, four senior voices at the 31 July 2025 board — the Director of Finance, an Executive Director, and two independent members — independently named the burnout-workload-vacancy causal chain in the room. None of their challenges survived in the published minutes. When the most senior people in the organisation cannot raise a concern that is recorded, the structural conditions for ordinary staff to raise concerns do not exist.

7. Sustained burnout. The 2024 staff survey recorded 65% of staff “frustrated and burnt out” or close to it; 58% reported significant workload pressure. The 65% figure was stripped from the published minutes. Twelve months later, the figure was 68.9% — the year-on-year increase was also stripped. The Director of Finance’s same-meeting admission — “to finish at five, where we’re contractually finishing at five and log out… I think a lot of us are finding that more challenging… that says something about the way we work” — does not appear in the published record.

8. Undertrained workforce. Annual training compliance has been used as a financial-management lever rather than a capability investment. The leadership cohort exemplifies the gap: the CEO accumulated four professional credentials in the eighteen months around DHCW’s founding while presiding over an organisation whose technical staff warned for eight years about WCCG running on unsupported technology. Pay-mapping was identified as “two years in arrears” by March 2025. The Head of Software Engineering role was advertised at Band 8c (£71-82k) — well below market rate for the responsibility. The competence void at the top is documented at L7: The Competence Void; the underinvestment in staff below the executive layer is the same story at a different altitude.

9. Toxic leadership. The Performance and Delivery Committee generated zero corrective actions across eighteen consecutive months from May 2024 to May 2025. The Chair admitted approving a £20M Kainos framework without scrutiny: “I should have looked. I don’t know how these appear on our website as contracts.” A £226M Microsoft Enterprise Agreement passed at the March 2026 board meeting in a single sentence with no questions and no vote. The CEO’s “never event” characterisation of a recurring data centre infrastructure failure (at the July 2025 board) and the Executive Director of Operations’ prior-incident admission — “we did have another incident like this last year” — were both erased from the published minutes. Leadership that cannot receive a senior internal challenge, cannot scrutinise its own procurement, and cannot tolerate its own most accurate language being in the public record is by definition unsafe to work for.

10. Exploitation of headcount. Vacancy savings have been built into DHCW’s financial plans every year since founding — the financial mechanism is documented in full at L1: The Hiring Trap, where the year-by-year arc of vacancy-as-savings becoming structural is set out. At the 29 September 2022 board, the Chair, Simon Jones, warned in plain terms: “making recurrent savings through non-recurrent vacancies… is something I’ve got the scars on my back about… you just heap misery on misery every year when you do that.” The warning was erased entirely from the published minutes. The strategy continued. By Q1 of Year 4, 84% of the in-year savings target had been delivered through unfilled posts. Working days lost to sickness rose from 8,684 in 2021-22 to 15,846 in 2024-25 — an 82% increase across three years, against headcount growth of approximately 30%. Long-term sickness rose 59%. The Annual Report 2024-25 names stress and anxiety as the leading cause. Annual-leave buyback schemes were introduced because remaining staff could not take the leave they were owed. CEO praise for “many, many weeks, months of out of hours weekend working” sits in the same record as the survey item showing that the praised directorate was the worst affected by burnout.
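The percentage claims in the marker above follow directly from the day counts cited from the published record; a minimal arithmetic check (an illustrative sketch, not part of the source analysis) reproduces the headline figure:

```python
# Working days lost to sickness, as cited from the DHCW published record.
days_lost_2021_22 = 8_684
days_lost_2024_25 = 15_846

# Percentage increase across the three years between the two figures.
increase = (days_lost_2024_25 - days_lost_2021_22) / days_lost_2021_22
print(f"Rise in working days lost: {increase:.0%}")  # -> Rise in working days lost: 82%
```

The 82% figure in the text is therefore consistent with the underlying day counts, and the contrast with roughly 30% headcount growth is what makes the rise structural rather than proportional.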

Each of these ten markers has its own documentary trail in DHCW’s published record. Taken together, they describe the structural inverse of the condition the research literature identifies as the prerequisite for technology delivery.

Compassionate Leadership as the cover language

A culture this far from the research evidence cannot describe itself accurately and continue. The label is the bridge.

On 25 July 2024, three substantive items came before the DHCW board in a single meeting:

  1. The Compassionate Leadership Pledge was tabled for approval. The Chair, Simon Jones, invited questions. There were none. The Pledge was approved in under fifteen seconds with no substantive discussion.
  2. In the same CEO report, the Chief Executive thanked the operations directorate for “many, many weeks, months of out of hours weekend working” delivering the data centre move.
  3. Later in the same meeting, the staff survey was presented: 65% of staff were frustrated and burnt out; the operations directorate — the one just thanked for unpaid overtime — carried the worst burnout indicators in the organisation.

No board member connected the three facts. The 65% figure did not appear in the published minutes. The connection between overtime praise and directorate-level burnout was not made.

Twelve months later (31 July 2025), the next staff survey returned. Burnout had risen by 3.9 percentage points, to 68.9%. That figure was stripped from the published minutes. In the same survey, “flexible working / compassionate leadership / collaboration” was recorded as a “strength.”

This is the function of the Pledge. It is not a culture programme that failed. It is the affirmative half of the same machine documented at L6: The Manufactured Narrative: when every internal warning and every measurement of harm is stripped from the published minutes, the label is what remains in the public domain. “Compassionate” permits the gap between rhetoric and outcome — between the language of safety and the 68.9% burnout rate, between “putting people first” and an 82% rise in stress-driven sickness — to be sustained without contradiction inside the organisation.

The same mechanism is documented at L11: Captured Governance — across the Performance and Delivery Committee’s eighteen-month zero-corrective-actions window, the data centre cooling failover recurred near-identically without any published assurance output flagging it. The matching evidence sits at Drift to Low Performance: each year’s harm becomes next year’s baseline, measured in days lost and reported as a survey “strength.”

What the Blueprint requires

Psychological safety is not a Blueprint intervention. It is the prerequisite the interventions assume.

  • Intervention 1: Competent Leadership replaces the executive and non-executive cohort that produced the inverse condition. A leadership that approves a culture pledge in fifteen seconds at the same meeting as a 65% burnout report cannot itself produce psychological safety; new leadership, recruited externally against published technical criteria, is the mechanism.
  • Intervention 2: Radical Transparency addresses the information machine that sustains the inverse condition. Statutory publication of staff-survey results, sickness rates, leaver analysis, whistleblowing data, and disciplinary statistics — on a fixed cadence — closes the gap between what staff experience and what the record shows.
  • Intervention 3: Portfolio Ruthlessness reduces demand on a workforce that can then be honestly resourced rather than chronically overloaded.
  • Intervention 5: Break the Annual Trap and Intervention 6: Reform the Funder address the funding cycles that currently require recurrent savings to be designed from unfilled posts — the structural mechanism behind the 82% rise in sickness.

The Compassionate Leadership Pledge does not need to be revoked. It needs to be made testable against published outcomes — and a leadership in place that accepts the test.

Psychological safety is what every credible body of research identifies as the precondition for the digital delivery the Blueprint describes. Restoring it is the precondition the Blueprint itself sits on.