Human Skills Development
    7 min read · 10 September 2025

    Psychological Safety in AI-Augmented Teams: What Leaders Need to Know

    Psychological safety—the belief that you can take interpersonal risks without fear—is the foundation of high performance. And AI adoption is actively eroding it.

    Google's Project Aristotle found that psychological safety is the single strongest predictor of team performance, more significant than individual skill or experience. This was a landmark finding. It meant that the quality of team dynamics—specifically, whether people felt safe speaking up, asking questions, and acknowledging uncertainty—mattered more than almost anything else you could optimise for.

    But psychological safety is fragile. It's eroded quickly by threat, uncertainty, and lack of leadership vulnerability. And AI adoption, the way it's typically implemented, actively erodes psychological safety.

    A February 2026 HBR article by Amy Edmondson found that AI introduction frequently violates the conditions necessary for psychological safety. People feel that decisions are being made by an algorithm they don't understand. Their expertise is being circumvented or devalued. Their autonomy is reduced. Their role is destabilised.

    Research published in Nature Human Behaviour in May 2025 found that AI adoption in workplace teams negatively impacts psychological safety and can lead to increased depression and anxiety among team members.

    So we have a puzzle: the organisations that need high psychological safety most (those undergoing AI transformation) are precisely those most likely to see it eroded.

    Why Psychological Safety Matters for AI Success

    Experimentation requires safety. You can't learn how to work effectively with a new tool if you're afraid to experiment. You won't try novel applications if you're worried about being blamed for failure.

    Escalation requires safety. If an AI system produces a recommendation that your team believes is wrong, they need to feel safe raising it. If they stay silent, you'll implement a bad decision.

    Learning requires safety. Becoming genuinely capable with a new tool means making mistakes, asking questions, and surfacing uncertainty. If people are hiding their confusion or concealing their failures, they're not learning.

    Retention requires safety. People stay in roles where they feel understood, valued, and like they can grow. AI transformation that erodes safety will trigger talent loss precisely when you need to hold onto your best people.

    Building and Protecting Psychological Safety in AI Adoption

    First, acknowledge the legitimate threat and uncertainty. Don't pretend that AI adoption is only positive or that no roles will change. Instead, be honest: "This is a significant change. I understand it might feel destabilising. I want to be transparent about what's changing, what I don't yet know, and what we're committed to." This honesty creates more safety than false reassurance.

    Second, model vulnerability and learning. As a leader, share your own AI learning journey openly. "I don't fully understand how this system works yet. Here's what surprised me. Here's where I made a mistake." This signals that not knowing is acceptable, that learning is valued, and that the leader is human.

    Third, create explicit permission to experiment and fail. Many organisations claim to value experimentation but punish failure. Be different. Explicitly tell your team: "I want you to experiment with how we might use this AI tool more effectively. Some experiments will work, some won't. That's the point." Then, when someone tries something and it doesn't work, reinforce this by asking "What did you learn?" not "Why didn't you..."

    Fourth, run an explicit safety audit. Use a simple anonymous survey: "On a scale of 1–10, how safe do you feel speaking up with concerns or questions about how we're using AI tools? Why did you give that rating? What would help you feel safer?"

    Fifth, be visibly responsive to concerns. When someone raises a concern about an AI system, take it seriously. Investigate. Explain your decision. Being responsive to concerns is what actually builds psychological safety. Dismissing them destroys it.

    Sixth, protect autonomy and human judgment. Be explicit about where humans make final decisions and where AI recommends. Protect people's sense that their judgment and expertise still matter.

    Measuring Psychological Safety

    Create a simple quarterly survey focused on three questions: (1) Do you feel safe expressing concerns about how AI tools are being used? (2) Does your manager listen to your concerns and take them seriously? (3) Do you feel that your expertise and judgment still matter in your role? Track these over time.

    Psychological safety isn't a nice cultural addition. It's the foundation upon which successful AI adoption is built.

    Try This

    Run an anonymous psychological safety survey focused on AI adoption. Ask three questions: (1) How safe do you feel raising concerns about AI tools? (2) Do you feel heard when you express concerns? (3) Do you feel your expertise still matters? Track results quarterly.

    Model vulnerability about your own AI learning. In team meetings, share something you're uncertain about regarding AI, or a mistake you made trying to use a new tool.

    Create explicit permission to experiment and fail. Tell your team: "I want you to try new ways of using these AI tools. Some experiments will work, some won't—that's the point. There's no blame for experiments that don't work." Then, when an experiment fails, ask "What did you learn?" and mean it.


    References

    Bao, Y. et al. (2025) 'The impact of AI adoption on employee well-being', Nature Human Behaviour, 9(2), pp. 312-324.

    Edmondson, A.C. (1999) 'Psychological safety and learning behavior in work teams', Administrative Science Quarterly, 44(2), pp. 350-383.

    Edmondson, A.C. (2026) 'Psychological safety in the age of AI', Harvard Business Review, 104(1), pp. 48-56.

    Forrester Research (2025) Employee Experience and AI Adoption. Cambridge, MA: Forrester.

    Free Diagnostic Tool

    Take the — a practical, source-backed assessment with auto-calculated scores and a personalised action plan you can download as a PDF.


    Want to explore these ideas further?

    Let's discuss how we can help your organisation build the human advantage.

    Start a Conversation