“Nobody has ever been fired at Valve for making a mistake.”
I love the sentiment. I hate firing people. The rest of their section on Risk reads really well and, in my experience, works really well.
I’m just not sure that is a good thing.
I suppose the issue is a literal definition of the word mistake. It’s one thing to get a game feature sufficiently wrong (defined as more people hating it than liking it) that it could be considered a mistake. There’s so much subjectivity in game design AND the consequences of failure are not life-threatening, so risk should be embraced, encouraged, and rewarded.
The problem is the word AND, along with the definition of the word mistake. Valve has had two security incidents which would fall into the category of life-threatening. First, the disclosure of the HL2 source could be perceived as such, at least in terms of the survival of Valve. The leak of Steam credit card data, however, jeopardized the survival of Valve AND (potentially) caused serious damage to their customers.
For the opening sentence to be Truth, the following must be true:
First, they’re using a much more expansive definition of the word mistake than I do. Mistakes are a spectrum which also includes fraud, negligence, and incompetence, not just “getting it wrong.”
Second, they might not even know who’s directly responsible for the conditions which led to the breach in the first place. (Disclaimer: I don’t work at Valve, so I clearly don’t have firsthand knowledge.) Every security breach scenario I’ve investigated had, as one of its root causes, people making errors which individually wouldn’t have caused the breach, but in aggregate made one possible.
Wouldn’t a shifting self-selecting workforce create an environment absent ownership and stewardship?
How could that not create exactly the sort of seams through which hackers penetrate systems?
What happens when you interject actual human beings into this system, with all of their messy characteristics?
Would you want to develop Flight software this way? Put another way, would you want to trust your life to this software?
For me the answer is no. I just don’t like it. I suppose the answer is some sort of hybrid for the smallest amount of critical functionality possible. Inside of that critical-functionality bubble, something more conventional could exist, applying the well-understood principles of mission-critical software development, with a semi-permeable membrane through which people and functionality could pass as needed. The difference here is that someone would be responsible for ensuring only “safe” things transited that barrier via APIs, code sharing, data passing, and system architecture. Outside of the bubble, the more amorphous and squishy team model could operate just fine.
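To make the membrane idea concrete, here’s a minimal sketch of what that gatekeeping might look like in code. All of the names (`CriticalCore`, `Membrane`, `charge_card`) are hypothetical, purely for illustration: the critical bubble exposes only an explicitly reviewed allowlist of operations, and everything outside the bubble must call through the gate.

```python
class CriticalCore:
    """Hypothetical mission-critical functionality, developed conventionally."""

    def charge_card(self, account: str, cents: int) -> str:
        # Imagine carefully reviewed, audited payment logic here.
        return f"charged {cents} to {account}"

    def _rotate_keys(self) -> None:
        # Internal-only operation: never exposed through the membrane.
        pass


class Membrane:
    """Gatekeeper: only operations someone has explicitly vetted pass through."""

    ALLOWED = {"charge_card"}  # the reviewed API surface

    def __init__(self, core: CriticalCore):
        self._core = core

    def call(self, op: str, *args, **kwargs):
        if op not in self.ALLOWED:
            raise PermissionError(f"{op!r} has not been approved to cross the membrane")
        return getattr(self._core, op)(*args, **kwargs)


gate = Membrane(CriticalCore())
gate.call("charge_card", "acct-1", 500)  # allowed: part of the reviewed surface
# gate.call("_rotate_keys")              # would raise PermissionError
```

The point isn’t the mechanism (an allowlist is the crudest possible version); it’s that a named someone owns the `ALLOWED` set, which is exactly the accountability the flat model lacks.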
“If you find yourself in a group or project that you feel isn’t meeting these goals, be an agent of change.”
Best sentence in the book. However, the sentence before it:
“This handbook describes the goals we believe in.”
…closes the door on the hybrid-ish solution. They make it abundantly clear that they’re not willing to consider anything like what I proposed above.
I love the idea of taking risks. I just don’t understand how you get to the level of correctness required, when you never fire anyone and have amorphous accountability. My gut says that the peer review mechanism is supposed to weed the bad apples out.
“Peer reviews are done in order to give each other useful feedback on how to best grow as individual contributors.”
Nice thoughts. (I would have reworded those paragraphs to emphasize the positive nature of feedback, not just criticism.) Here’s the thing: if this is the only way you cause people to shift off of projects they’re not succeeding at, or out of Valve entirely, then there’s a Political problem. Political as in not Aristotle politics, but the messy partisan kind. Because after all, aren’t peer reviews provided by your peers? And in just about any group of people, how do you avoid the popularity-contest issue?
They address some of this in the hiring section:
“It’d be tough for us to capture because we feel like we’re constantly learning really important things about how we hire people. In the meantime, here are some questions we always ask ourselves when evaluating candidates:
• Would I want this person to be my boss?
• Would I learn a significant amount from him or her?
• What if this person went to work for our competition?
Across the board, we value highly collaborative people.”
The precise questions used in their feedback process aren’t disclosed, but I get the feeling that they’re asking them again on a periodic basis. That’s key to ensuring that you haven’t made a hiring mistake. The use of the word “boss” is highly ironic, seeing as their definition doesn’t match the dictionary definition…
“Hiring is fundamentally the same across all disciplines.”
Awesome. Buttressed by what I’ve recently read about Pixar.
“We’re looking for people stronger than ourselves.”
Awesome. Made even more so by the fact that they justify and defend that statement.