If you are a good developer and you’ve worked in bad organizations, you often have ideas to improve the process. The famous Joel Test is a collection of 12 such ideas. Some of these ideas have universal acceptance within the software industry (say, using source control), while others might be slightly more controversial (TDD). But for any particular methodology, whether it is universally accepted or only “mostly” accepted, there is a multitude of organizations that don’t employ it. There are many, many shops that do big bang testing, that do Big Design Up Front, that use e-mail for source control, and much worse. What’s going on here? Shouldn’t those companies be out of business?
You may be content with the abstraction that management is simply incompetent. This is certainly a common sentiment. But how incompetent can the pointy-haired boss really be if he employs dozens of people? Shouldn’t a little startup using XP be able to wipe the floor with bad companies in an hour? If all it takes is better engineering practice, shouldn’t the better engineers always win?
Many developers who endorse good software engineering practices work on, and draw conclusions from, maybe one to three large-scale projects per year. But as a result of my contracting business, I’ve had the opportunity to study many large organizations and do postmortems of many dysfunctional software projects. I’ve analyzed the codebases of hundreds of failed projects. You can look at any one of these projects and say that it failed because of bad software engineering practices–bad or no tests, bad source control, no incremental delivery, no spec, etc. But why do people keep doing projects like this? Why are these software practices still in use? Looking at three codebases will not answer this question. Looking at three codebases tells you “There are no tests, because none of Jeff, Bill, or Tom wrote them.” But that’s not the sort of answer I’m looking for.
I don’t have any hard answers, but I have some theories. The first one is that our industry’s best practices don’t make sense. I don’t mean, of course, that they aren’t objectively better–of course they are. But they are nonintuitive. If you tell a person that adding more resources will slow a project down–Brooks’s Law–they will tell you that you’re crazy. It goes against all intuition. Even with the longitudinal studies, charts, graphs, and more, it is a difficult sell. It’s the same story for Agile, TDD, incremental delivery, product backlogs–the whole stack of best practices. Every single one is nonintuitive. Otherwise, everyone would already be doing them.
Unless you are having a meeting with the one person who is going to use the software that you’re writing, you’re not meeting with the real customer. You’re meeting with a person who has to explain to someone, who can explain to someone, who can explain what you’re saying to the real customer. It’s not enough to convince the person you’re sitting in the room with that Agile is a good idea. He has to convince his boss. That person has to convince his boss. That person has to convince the sales team. The sales team has to convince the customer. If the sale is b2b, your contact at the customer organization has to convince his boss. Who convinces his boss. Who convinces the real customer. Maybe. Unless that sale is also b2b. This is a very long game of telephone. If the guy you’re talking to is thinking “This sounds like a really good idea, but I’m concerned I can’t sell this upstairs,” you are dead in the water. At any point in the chain, if somebody thinks that, you are dead in the water. You can’t just say “It’s objectively better”; you have to show how he can turn around and sell the idea to someone else.
Put yourself in the middle manager’s shoes. If the project goes bad, he has to “look busy”: put more developers on the project, call a meeting and yell at people, and take other arbitrary, bad measures. Not because he thinks those will solve the problem–managers often do this knowing full well that it won’t. Because that’s what will convince upper management that they’re doing their best.
My second theory is that business objectives can change faster than anyone can react. Real software engineering takes time and discipline, but you are under tremendous pressure to ship. A narrow window of opportunity has opened and we have to get something out the door in three months. The pressure can be internal–some salesperson has committed to an insane schedule–or it can be external–the customer needs a solution in two months. Often the problem is interest on the technical debt: we made poor choice A and shipped it, the customer is mad, and we need to fix it ASAP by making poor choices B, C, and D. Analysts feel the need to change the strategy every six months, because if we picked one strategy and stuck with it, what would we need an analyst for?
Turning around a software project is like turning around a tugboat: it takes a lot of time and energy to overcome tremendous momentum. When the boat turns slowly, the slowness is blamed on the software developers failing to react to changing business objectives, instead of on management failing to foresee and adequately communicate a sufficiently lengthy product roadmap in the first place. In this environment, people are unlikely to consider software management practices to be the problem, because that would involve management admitting that they have no clue what they are doing–not only from a technical point of view, but also from a business point of view, because they fail to anticipate the direction in which they want to go this week.
This is all very interesting, but it doesn’t really answer the original question. Yes, businesses are under pressure to gravitate toward bad engineering practices, but shouldn’t they be under equal market pressure to compete against companies that are using actually good software engineering practices? Shouldn’t, at some point, bad companies simply implode under their own weight?
Why sure, in the long run. But as Keynes succinctly put it, “In the long run we are all dead.” Eventually is a long time. It’s months, years, or decades. A project can be failing a long time before management is clued in. And even longer before management’s management is clued in. And it can be ages before it hits the user. It took just a year or so for Mint to start peeling customers away from Quicken. But Intuit was busy killing themselves for almost a decade before Mint was even on the map. And when Intuit was threatened, they had more than enough cash to up and buy Mint, which they promptly left to rot. Intuit could survive another ten upsets like Mint. At one per decade, that’s 100 years of doing it wrong, or almost one million man-years of runway. And that’s a long time.
Consider Apple. Jobs was fired in 1985 and not re-hired until 1997. That is twelve long years before they admitted their mistake. And even after Jobs was re-hired and hit the ground running to turn the ship around, it was 2006 before it was clear to anyone that they were on an upward trajectory. It took Jobs nine years to turn Apple around. At any point in this 21-year period, investors could have pulled the plug on what is now the largest and most successful company in the world.
Reality is slow. It lags a decade behind competent engineering practices. And most people don’t want to wait a decade. They want results today. They want a PowerPoint slide to present at the meeting upstairs in ten minutes that explains how the project is back on track. They don’t want a real solution, they want smoke.
I am reminded of a very powerful scene from The Wire. After the police department has been forcibly shifted, almost monthly, from focusing on street crime to organized crime to quality-of-life arrests to being completely shut down for lack of funding–all at the whim of city hall–the mayor’s office wants to know why crime hasn’t gone down.
I’ve sat in on meetings that are pretty much line-for-line this scene. Implementation has neither the resources, the budget, nor the time necessary to deliver. There’s never time to fix the project; there’s only time to make it worse. Yet the bug count will go down. By sheer will.
A lot of developers, present company included, have looked at the state of affairs in most software shops today and said “I can do a lot better than this,” and on that basis have gotten into contracting. The theory goes like this: if I use best practices and write great software, the clients will line up. I can find troubled projects, swoop in, and save the day through sheer competence. This is probably the single biggest lie a developer can tell himself.
Here’s the thing about best practices: they’re not a secret. Just google “software project management”. Or check Amazon. Heck, even my brick-and-mortar B&N carries over a dozen books on Agile, TDD, Scrum, Spolsky, and who knows what else. If somebody in charge really wants to solve the problem, the answer is right in front of them every place they would look. Best practices are not some “scrappy startup” idea. Microsoft Press alone has dozens of books on Agile, TDD, Scrum, and everything else, complete with enterprisey acronyms. If they want to do better, they can. Easily.
Here’s a thought experiment. If you were assigned to manage a mechanical engineering project, would you google “engineering project management”? Grab some books from Amazon? Visit the library? Sign up for a class at the local university? Ask someone with experience for advice? So would I. Before the project began. And I would avoid a lot of headaches.
In the vast majority of failed projects I’ve been called in to look at, the managers have not read one book on software engineering. They haven’t taken one class, read one article, or been to one workshop. At best, they’ve managed other failing software projects. And if doing the legwork didn’t seem like a good idea before the project began, it won’t seem any more of a good idea now that the project is in trouble.
To put things another way: bad project managers don’t iterate on the software process. They iterate on assigning blame. We will put a “zero bugs” clause into the contract–that will really hold their feet to the fire! We will dock them for every day it is late! This produces endless rounds of meetings, charts, and stat games, but no improvement in the product. Bad managers do not want to do iterative development, because then they would be in the loop (and thus more responsible). They do not want to do testing, because if a bug falls through the cracks it reflects poorly on them. Actually writing better software doesn’t matter. What matters is the stats.
If they haven’t gone to B&N for a book, if they haven’t listened to Spolsky, Atwood, Fowler, Brooks, and hundreds and hundreds of others, if they haven’t taken one hour to study how any major software development company operates, they are not going to be convinced. Period. If Steve Jobs, reality distortion field in tow, returned from beyond the grave to personally tell them to use best practices, they would not convert. Some anti-social developer trying to learn sales isn’t going to convince them. There is zero chance of that happening. They will go with whichever developer agrees to their silly zero-bugs clause, a.k.a. an incompetent developer. And after that fails, they will show up on your radar again with an even more ridiculous timeline, set of requirements, and contract, owing to the time lost in the previous two failures. Round and round it goes.
This sounds like a “duh” moment by now, but don’t sell to people who won’t be receptive. If you are talking to a prospect about contracting on a project and this is the first software project they’ve ever done in their life, politely back out, even before you know what the project is or how much they want to spend. You’re not going to sell them on best practices.
If you’re talking to a prospect who got “burned bad by a previous developer,” politely back out. They picked a bad one last time, they’ll pick a bad one this time. (Unless they show a serious change of heart and start asking questions about how you do source control, because man, we really dropped the ball on that last time. Then it’s a worthwhile conversation.)
If a prospect objects to being involved in the testing, starts editing your contract line-by-line in pencil, or starts talking about penalties for missed final delivery deadlines and bugs, run. They are iterating on assigning blame for the mistakes they made last time.
If you keep getting prospects who want to outsource the project, resist the urge to write articles about why outsourcing X is bad. Write them, and you will start ranking for “X outsourcing” in search results: you will not convince anyone, and you will just get more people looking for outsourcing instead of real prospects.
Instead, talk about happy things: TDD, Scrum, Agile, XP, CI–whatever practices you’ve had success with. You will start attracting like-minded people who already know you are right, people who already know how important it is to work hard to keep projects from failing. These are your customers. These are the people who will listen to you.