Transcript
Gosai: In 1977, two Boeing 747s were diverted to Tenerife airport in thick fog. Due to the weather conditions and a miscommunication, the two aircraft collided on the runway during takeoff, killing 583 people. It is considered one of the deadliest aviation accidents in history. When the black box of the aircraft that attempted to take off was retrieved, it revealed that both the first officer and the flight engineer recognized something wasn't quite right, yet the captain continued the takeoff, with neither officer challenging his authority. It's believed that the captain's authority, alongside his irritation at being diverted and delayed, was the key reason why neither officer challenged his decision to take off. See, the problem with challenging authority is that the benefits are often unclear and delayed: avoiding a possible collision that might never happen. The costs, meanwhile, are tangible and immediate: the captain's anger at being questioned. We have a tendency to underweight the benefits and overweight the costs. Sometimes it can seem easier not to say anything, especially when you have to make that call in a split second.
Google's Project Aristotle
In 2013, Google ran an internal project called Aristotle. They set out to answer the question: what makes a team effective? They looked at 180 teams across the organization, and at lots of different characteristics of team members. They found that no mix of personality, skill, or background explained why some teams performed better than others. What they did find was that what really mattered was less about who was on the team, and more about how that team worked together. What teams needed was for members to feel safe to take risks, and to feel confident that no one on the team would embarrass or punish anyone else for admitting a mistake, asking a question, or offering new ideas.
NASA Redstone Rocket Crash
Around 1955, NASA was building a prototype Redstone rocket, which went off course and crashed during testing. They ran their version of a post-incident review to try and figure out what happened, but the head of the project couldn't find any clues. This meant they were going to have to redesign the mission from scratch, costing millions of dollars in person-hours. Then an engineer said they thought they knew what happened. Not only that, they thought they knew who had caused it to crash. Who was it? It was them. When they were working on the rocket, they had touched a circuit board with a screwdriver and caused a spark. They checked and everything seemed fine, but maybe that's what caused the crash. Some more analysis was done, and it turned out that was the problem. What happened to that engineer? A few years later, he received a package containing a bottle of champagne for his candor and honesty. What do you think would have happened if an incident like this occurred at your organization? Would the engineer be punished and taken off the project, or worse, fired? Or would they be rewarded with a bottle of champagne? At least the NASA engineer was. The interesting thing is that the engineer could have just stayed quiet, and no one would have known they caused that crash. Instead, unprompted, they admitted their potential mistake. What was it at NASA that made it possible for that engineer to take such a huge risk and admit that mistake? What was the special ingredient at Google that enabled some teams to perform better than others? What was missing from the cockpit of that Boeing 747 back in 1977? Let's go and look at that.
Background Information
My name is Jit Gosai. I work for the BBC. We're a public service media broadcaster, which basically means we create content for TV, radio, and the web, with the aim of informing, educating, and entertaining the British public. I work on the tech side of the org, known as the product group, which houses all of its main digital services. More specifically, I work on the on-demand side: iPlayer, its video on-demand service, and Sounds, its audio on-demand service, which are available on lots of different platforms such as web, smart TVs, mobile phones, and smart speakers. All of these platforms are supported by dedicated multidisciplinary teams, as well as backend teams providing the APIs that power them. I work as a principal tester across the department, working with multiple teams to see how we can do things better. In particular, I'm very interested in establishing a culture of quality: one where we don't just inspect quality by testing things at the end of the development lifecycle, but one that looks to build quality in at every level we work at, from the products we build, to the processes we use to build them, to the very interactions between the people on our teams. One of the ways I do this is by spending time with teams and the people who work in them.
Communication Between Team Members
Over the last three years, I've been talking to lots of different teams and their team members about how they work and the problems they face. I've talked to testers, developers, delivery managers, team leads, and product owners, and during this time, a key theme kept coming up. They would describe scenarios where a problem occurred that could have been easily resolved if the people involved had simply spoken to each other. I'd often hear, why can't people just talk to each other? When I delved deeper into it, I found that people were reluctant to speak up in certain situations. They were unsure how others would react if they said they didn't know or understand something, or what would happen if they admitted to a mistake that had resulted in an issue. It was generally simpler to keep quiet until you were more certain about the details, at which point, why bother admitting it? This was resulting in problems that could easily have been resolved earlier, if the team had known there was an issue, but things were being left until the problem got big enough that no one could ignore it. Or, as happened more often than not, the problem would slow the team down without anyone understanding why.
At the time, I thought the problem was feedback. They just didn't know how to tell each other what was and wasn't working. My thinking was, if we better understood how to give and get feedback, then we could get people to talk to each other. This was around the time I came across Kim Scott's work, "Radical Candor." She referenced the work of Amy Edmondson, author of "The Fearless Organization." That's when I recognized what Edmondson described as psychological safety to be what I'd seen, or rather the lack of it, in our teams. Amy Edmondson defines psychological safety as the belief that the work environment is safe for interpersonal risk taking. What does this mean? Interpersonal risks are anything that can make you look or feel ignorant, incompetent, negative, or disruptive. Belief is all about what you think about taking these risks. The work environment can be pretty much any group you work in, for instance, your team.
Why Should You Care?
Why should you care? The Tenerife airport disaster illustrates that had there been a high level of psychological safety in that flight cabin, the first officer and the flight engineer would have been more willing to challenge the captain and check that they were actually clear to take off, and may well have averted the disaster. The NASA Redstone rocket showed that a high level of psychological safety leads people to focus on achieving their goals over self-preservation. Google's Project Aristotle found that it was less about who was on the team, and more about how the team worked together. It was specifically psychological safety that allowed team members to feel safe to take risks, and to feel confident that no one on the team would embarrass or punish anyone else for admitting a mistake, asking a question, or offering up a partially thought-through idea. What these three case studies show is that psychological safety, the belief that someone can take interpersonal risks, is fundamental for successful teams, can help us learn from our mistakes, and just goes to show how important speaking up can be.
Why Psychological Safety Matters for Software Teams
Does this matter for software teams, though? The examples come from very different industries, aviation and space; only Google really relates to the software industry, and even then, the case study never mentioned whether they were just software teams or other types of teams across Google. Why does psychological safety matter for software teams? I think we can all agree that our organizations want high performing teams. Why? The simplest reason is that it helps them deliver value to their customers. The higher the performance, the more value they're likely to be able to deliver. They don't simply want high performing teams, they want them to be highly satisfied, too. Why? Happier people tend to be more productive. Not only that, they're more likely to stick around, more likely to advocate for the company, and more likely to make other people happier too. Research also suggests that these two things are highly correlated. When you see an increase in one, you tend to see the other go up too, and vice versa: when one is down, the other tends to go down with it.
VUCA (Volatility, Uncertainty, Complexity, Ambiguity)
If we could control everything in our work environments, we probably would, and teams would be quite productive. That's the thing: our work environments can be quite complex. One way to describe that complexity is to borrow a term from the U.S. military known as VUCA, which stands for Volatility, Uncertainty, Complexity, and Ambiguity. What does this have to do with software and our work environments? Our work environments can often be volatile, meaning we don't know how change will affect the system. Our work environments can often be uncertain, meaning we don't always know how to do something to get a specific result or outcome. Our work environments can often be complex, meaning it's impossible for one person to know how all the systems fit together. Our work environments can often be ambiguous, meaning we can all interpret what we see differently, which can lead to misunderstandings, but also to multiple ways of solving the same problems.
Multidisciplinary Teams
There are things we can do to try and limit VUCA in our work environments, but it's probably impossible to remove it altogether. VUCA causes two main issues. The first is failure: the chances of things going wrong are that much more likely, due to all that volatility, uncertainty, and complexity. The other is interdependence, both within teams and between teams, due to no one person knowing everything and no one person being able to handle everything. What's the best way to handle all this failure and interdependence? Multidisciplinary teams, much like what we have now, with developers, testers, delivery managers, product managers, UX people, and numerous different types of developers all working together, with the odd architect and principal thrown in for good measure. Whatever risks can't be mitigated can be tackled as they occur. All these different skills, abilities, and experiences can collaborate to figure out the best way to proceed forward.
For multidisciplinary teams to work effectively, they need to be learning as they deliver and continuously improving their work environment. Learning comes in many forms, from classroom settings, to reading books, blog posts, and articles, to attending conferences, lunch and learns, and meetups. Continuous improvement is all about improving the team's capability to do the work with less, while simultaneously doing it better and faster than before, rather than trying to do more work by just trying to do it faster. Why are learning and continuous improvement crucial for high performing teams? You need both to decrease the impact of complexity, of VUCA, which has the knock-on effect of lowering the impact of failure too, which typically results in an increase in the team's performance. There's a good chance it may also have a positive knock-on effect on satisfaction. All this needs to be happening as you deliver, not as a separate step in the process.
The key to making this work is learning from others. Don't get me wrong, training sessions are great and we should still have them, but we learn the most when we apply the knowledge we've gained, or better yet, things we have learned from others. One of the best ways to learn from others is while we're working. To effectively learn from others, we need to share: what we know, things we don't agree with, things we don't understand, what we don't know, and mistakes we've made. Some things are easy to share. Why? Because it feels safe to, because you can better predict how others will react when you do so. Other things are hard to share. Why? Because it feels risky, because you don't know how other people will react if you do. Essentially, you need to show vulnerability in a group setting. That's not an easy thing for people to do. What might they think of you if you said these things? This is where the problems start. If we only share what we feel safe sharing, for example, only what we know, it impairs our team's ability to learn from others, which limits our ability to continuously improve, which increases the impact of complexity in our work environment, which increases the likelihood and impact of failure. By not taking interpersonal risks with our teams, our ability to learn from failure is also limited, which just leads to a cycle of not effectively learning from our team members. This can lead to a decrease in performance, which can have a knock-on effect on team member satisfaction too.
Psychological Safety and Safe Spaces
What is it that helps team members share mistakes, what they don't know, things they don't understand, things they don't agree with? Psychological safety. Another way to look at this: to get people talking to each other about what is and isn't working, we need them to speak up and take interpersonal risks in our teams. When I was working with teams, I'd spot when people seemed reluctant to take interpersonal risks. Whenever I mentioned psychological safety, everyone would nod and say yes, they knew what it was about. Most people's understanding seemed to stem from the fact that they knew what psychological meant, something to do with the mind, and what safety meant, the idea of being protected. So psychological safety must mean protection of the mind: it's all about creating safe spaces, confident people don't really need it, and ultimately, it's just about people trusting one another.
Look at the definition of a safe space, which according to Wikipedia is a place intended to be free of bias, conflict, criticism, or potentially threatening actions, ideas, or conversations, and compare it with psychological safety, the belief that the work environment is safe for interpersonal risk taking. Framing psychological safety as a safe space is going to lead people down the wrong path. Don't get me wrong, creating safe spaces is important, and is something we need in our teams, but psychological safety and safe spaces are quite different. A safe space, to me, is almost a refuge. It's somewhere you can go to recharge, not somewhere you're pushed out of your comfort zone. It's almost the very notion of being in your comfort zone, feeling safe and protected. You can't take interpersonal risks in an environment that is a safe space. Psychological safety and safe spaces are not the same thing.
The problem with framing psychological safety as a safe space is that it can lead people to think it's all about being super nice: telling people they've done a good job even when they haven't, because telling them otherwise might hurt their feelings, or delivering bad news in an anonymous form that doesn't allow for any follow-up. That creates an environment where anything goes and it doesn't matter whether you do poorly or well, leading to apathy. We need to be careful, as people can go in the opposite direction too. It can lead people to think that psychological safety is all about being upfront and telling people how it is, which can lead to people being blunt with their feedback, not caring how others react when they hear they've made a mistake or that something hasn't quite gone to plan, and expecting people to just develop a thicker skin. This can lead to overly confrontational environments, where people never know how others will react when things don't go well and are likely to start feeling anxious, leaving them more risk averse. Or it creates environments where only a certain type of person can survive, pushing teams into a less diverse and inclusive environment.
What we need are environments that lead away from apathy, so that people know what is expected of them and know they'll be told when things don't quite hit the mark, without the fear of being embarrassed or judged, which allows them to be intellectually vulnerable, and without worrying that they'll be seen as overly negative or disruptive for pointing out others' issues and mistakes. We also need environments that lead away from fear, away from high levels of uncertainty about how people will react if things don't quite go to plan: the worry that they'll be judged by their teammates as incompetent or ignorant if they admit to a mistake or share that they don't know something, or worse, be punished for doing so. What we need is for people to feel safe enough to be vulnerable in a group setting, so they can come out of their comfort zones and take risks. They also need to know where the boundaries are, to understand what is and isn't acceptable within the team, how they'll know if they've crossed that line, and what the outcome will be if they do. What we need are environments that allow people to do their best work, but also to understand how they can get even better. In some ways, psychological safety is about giving people the courage to say those hard-to-say things. People need to come out of their comfort zones and start getting uncomfortable.
Creating Environments with High Psychological Safety
My previous approach was talking to people one-on-one about what psychological safety is and isn't, and how it helps, which is fine for a couple of teams here and there, but it wasn't really going to scale across a whole department, and hoping that people would just figure this stuff out didn't seem very realistic. See, the thing about psychological safety and taking interpersonal risks is that it's not a natural act. How do you go about creating environments that are considered high in psychological safety? There are two core areas to making it happen. One is team members who enable interpersonal risk taking, which is about helping people push out of their comfort zones and take interpersonal risks. The second is leadership that fosters environments high in psychological safety, as people have a natural tendency to look up the hierarchy for what is and isn't acceptable.
Over the last year, I've developed a bottom-up and a top-down approach to creating environments that would be considered high in psychological safety. The bottom-up approach focuses on team members, not by trying to get them to take interpersonal risks directly, but by upskilling them in their communication skills. Over the last couple of years, I've noticed that communication skills are not evenly spread across teams. You have some people who are very skilled and know all about how to actively listen, the art of asking a good question, and how to give and get feedback. Others have never heard of active listening, haven't thought of questioning as a skill, and think feedback is something your line manager does. Good communication skills are essential for effective collaboration. As a result, I developed a series of workshops to better balance out these skills within a team. The thinking being, if team members are better skilled at communicating, then they may be more willing to take interpersonal risks when encouraged to do so. This has proven very successful at upskilling team members, but what is less certain is: does it help people take interpersonal risks? This is where the top-down approach comes in. The top-down approach is all about educating leaders in what is and isn't psychological safety, and how they can foster environments that are considered high in it, much like this talk. In fact, all the content I've spoken about so far is taken directly from the talks and discussions I facilitate with leadership groups. The insight I've gained from this is that a lot of leaders know the right behaviors, but they've not made the connection as to how those behaviors foster psychological safety, which often results in inconsistent approaches that never really create an environment high in psychological safety.
Principals and Staff-Plus Engineers Bridging the Gap
Principals and staff-plus engineers typically sit in the gray areas of teams. You can almost think of us as the middle between teams and leaders. While not official leaders with line management responsibilities, we have very senior titles that put us in unofficial leadership positions. And while not official members of teams, we're individual contributors, still getting hands-on with them. This allows us to bridge the gap between team leaders and team members in ways that others just can't. We have an opportunity not only to create environments high in psychological safety, but also to demonstrate interpersonal risk taking and show others how to do it. How do we do this? By adopting certain mindsets and behaviors. We need to develop our curiosity, recognizing that there's always more to learn; humility, as we don't have all the answers; and empathy, as speaking up is hard and needs to be encouraged.
Core Mindsets and Modeling Behavior as Staff-Plus Engineers
What I'd like to do is look at how to demonstrate these mindsets, and at some of the behaviors we can model as staff-plus engineers. I'd like to start with framing the work, which is a two-part process. First, you need to set expectations about our work being interdependent and complex. This lets people know that they are responsible for understanding how their work interacts with other people's in the team. We need to set the expectation that we'll learn from failure, as it's going to happen; otherwise, people will try to avoid it, making it less likely that they speak up. And we need to set the expectation that team members are important contributors with valued knowledge and insight, not just support workers there to make the leadership team look good. Setting expectations is really important, as it helps people default towards collaboration, and away from self-protection and not speaking up. The other part of framing is setting the purpose, which is all about: what are we doing? Why are we doing it? Who does it matter to? It's about helping people connect their work back to the organizational goals, again reinforcing that default towards collaboration and away from self-protection. This is important even if you don't directly manage the people in the teams, because we shouldn't just assume it has happened. It needs to happen continuously, even when it seems obvious. Why? Because people forget, especially when things are tough. It's in these tough moments that we need people to speak up and share what they know.
We then need to invite participation. Firstly, by acknowledging our fallibility: as leaders and staff-plus engineers, we don't hold all the answers, and we need to work collaboratively to share our different experiences and perspectives, as well as what we do and don't know. We need to provide each other with candid but respectful feedback on how things go. Secondly, we need to proactively invite input by demonstrating active listening and asking open, thought-provoking questions. Once we're inviting people to participate, we then need to respond productively. Firstly, by expressing appreciation. Before anything else, we need to acknowledge that it takes courage to speak up, and thank people when they do, because we want them and others to do it more. We then reward people's performance by emphasizing the effort and strategy they've put into the work, not just its outcomes. Why? Because even when people have tried their best and come up with a great strategy, we still might not succeed, so only rewarding successes indirectly punishes failure, and failure is highly likely in the complex work that we do. Finally, you need to destigmatize failure, because people usually believe that failure is not acceptable and that high performers don't fail, which leads to people trying to avoid, deny, distort, or ignore failures. We need to reframe failure as a natural byproduct of experimentation: high performers produce, learn from, and share their failures. Failure is part of learning, which can lead to open discussions, faster learning, and greater innovation.
What do these three areas actually do? Framing the work creates shared expectations and meaning, and helps people see that they have valuable things to contribute and that we want to hear what they know: good or bad. Inviting participation increases confidence that their contributions are welcome: giving them the space, asking them to share, and providing structure to do so builds confidence that what they have to share will be heard. Finally, responding productively directs teams towards continuous learning: showing appreciation for what people have contributed encourages them and others to do it more, and reframing failure helps people experiment, which inevitably produces more failures that we share openly so we can all learn. How do these things help you create psychological safety? The mindsets are all about helping you focus your thinking and behavior. The reason there are three is that it keeps the mindsets in tension, preventing people from over-indexing on any one of them, and helps guide our behaviors. Framing the work, inviting participation, and responding productively are the behaviors we need to be modeling in our teams. Creating shared expectations, welcoming contributions, and moving towards continuous learning are the intentions, and help us understand what outcomes we're looking to create with our behaviors.
Getting Started with Core Mindsets
Working like this can be a bit tricky. What can you do right now to get started? How can you frame the work? Firstly, when working with teams, name the goal of speaking up. Clearly state the need for people to speak candidly but respectfully, and clearly map it back to the team's goals. What are they trying to accomplish? Why does speaking up help that happen? Then, state the purpose. For the teams you're working with, learn to tell a compelling story about what the team does, why it matters to the department, and how it can help the organization be better. This is really important for work that doesn't clearly link to customers, for instance, work done by backend teams or infrastructure teams, but also day-to-day work such as build infrastructure, testing, and moving towards continuous delivery. When can you do this? A couple of good places to start: kickoff meetings for new work are a great opportunity to name the goal of speaking up. If you coach or mentor other staff members, catchups can be another good place to understand what they think the team's purpose is. How does it differ from your compelling story? It gives you a chance to course correct. Sharing the compelling story outside the team can be really helpful too, especially during interviews or recruitment drives, or during presentations about what the team does.
Then, we want to invite participation. What can you do now? We can show humility during meetings with team members by making sure they know we don't have all the answers, and by emphasizing that we can always learn more. We can demonstrate proactive inquiry by asking open questions rather than rhetorical ones, and by asking others what they think rather than just sharing our own view. We want to help create collaborative environments and structures that look to gain others' views, perspectives, and concerns, no matter their rank or experience with the subject matter. Finally, responding productively. When failure occurs, what can you do now? We need to express appreciation by thanking people when they bring ideas, issues, or concerns to us. We want to create structures for experimentation that encourage intelligent failures, the ones that help us learn what to do next, not just failing for the sake of it, and proactively destigmatize failure by seeing what we can do to celebrate those intelligent failures. When someone comes to you with bad news, how do you make that a positive experience for them? Do you offer any support or guidance to help them explore their next steps? This isn't a step-by-step plan or a one-off activity where you frame the work, invite people to participate, then follow up by responding productively. It's something that needs to be happening continuously, whenever the opportunity arises. A good rule of thumb here is little and often.
Recap
I began with why I started looking at psychological safety: because I was puzzled by why people couldn't speak up to each other. I then defined what psychological safety means, which in its simplest form is about people being able to take interpersonal risks, or the belief that they can. We then looked at some case studies, where a lack of psychological safety resulted in tragic consequences, but also where high levels resulted in accelerated problem solving and performance. I then took us through why psychological safety is critical for software teams: our work is complex, which leads to failure and interdependence, and to handle that complexity and interdependence, we need to be able to share both the easy and the hard things in our teams. I then went on to detail some of the common misunderstandings of psychological safety: it's not about safe spaces, it's not about being too nice or too blunt, and confident people need it too. Finally, we looked at how we can foster psychological safety in our work environments by thinking through three core mindsets and adopting specific behaviors, with the intention of creating shared meaning, building confidence that people's contributions are welcome, and directing teams towards continuous learning, along with some initial steps on how to get started.
Some of you may be wondering, will this solve psychological safety in our teams? By itself, with just one person behaving this way? Probably not. But if we as staff-plus engineers start to model these behaviors, we can show others a different way, one that leads to true collaboration, not people just handing work off to one another and coordinating their actions. We don't have to do this. No one will hold it against you. Most teams will continue as they are, with only a minority of people willing to take the risks, while the rest wait to get a better sense of what the outcome will be if they speak up. At which point, it can often be too late: the team slows down, and people wonder why others just can't talk to each other. If you do decide to model these behaviors, you not only create environments that are more inclusive, but ones that encourage people to speak up and share what they do and don't know. You create the environment for others to step up, unlock the performance of our teams and organizations, and increase the impact of your work in ways that we can't even anticipate.
Conclusion
As staff-plus engineers, we have an incredible opportunity to shape the cultures of our teams and organizations. We can leave it to chance and hope these things just work out, or we can be deliberate and set the direction we want to go in. Psychological safety is foundational to high performing teams. If people won't, or can't, speak up, then we risk not hearing valuable information until it's too late. One of the best ways to increase interpersonal risk taking in our teams is for us to model these behaviors regularly and consistently. My question to you is: when will you get started?