Category: Software Development

  • Software Problems: the Role of Incentives

    When lots of human beings work at something for a long time, they tend to figure out how to do it. Building software appears to be a huge exception to that rule. With decades of experience under our belt, why is it that we still can't build good software?

    One of the reasons software projects so often fail and improved methods aren't used appears to be that the people involved have perverse incentives.

    Incentives

    Everyone knows about incentives. They work. Even when we know someone is using incentives to get us to do something, we're more likely to do the thing with incentives than without them.

    Perverse Incentives

    Whether an incentive is perverse or not is in the eye of the beholder. From the incented person's point of view, an incentive is an incentive, and as we know, incentives work. But we normally call incentives "perverse" when they incent people to do something that most other people would agree is a bad thing.

    Perverse Incentives: Mortgages

    The housing boom leading up to the financial crash of 2007 was clearly driven by perverse incentives on multiple fronts. Borrowers were tempted to take what seemed to be easy money. Mortgage companies could make piles of money in fees by packaging up risky mortgages and passing them on. Rating agencies could collect loads of fees by not looking too closely. And the bankers at the top of the food chain made themselves lots of money by creating and selling fancy instruments that ignored the underlying realities and the ultimate consequences of their actions. Then it all came crashing down. Many were hurt, the big guys who made the most money least of all.

    Perverse Incentives: The VA System

    It has recently come out that more than 120,000 veterans are experiencing long waits for care at VA hospitals, even while official reports showed minimal wait times, enabling managers to collect incentive payments. If there ever was a case of perverse incentives leading to bad behavior, this is it.

    VA incentives

    Perverse Incentives in Software

    Software is so rational, so organized, the people involved are so smart and well-educated — surely perverse incentives aren't driving behavior in software, are they?

    Sorry, sweetie, perverse incentives are a human issue. Humans respond to incentives, perverse or otherwise. And as it turns out, there is a rogue's gallery of perverse incentives operating in software — I will only scratch the surface here!

    Estimates

    Estimates are perverse all by themselves.

    They are also a GIANT BILLBOARD incenting EVERYONE involved in the process to make any estimate as long as they can possibly get away with. And since very few people (often including the programmer involved!) have any idea how long something *should* take, the estimates are typically accepted as is; but then, managers often double the estimates before passing them on. Why is this perverse?

    The organization probably would like to get something done in the shortest reasonable time. But the programmers and project people are measured on whether they beat or miss the estimate. The longer the estimate, the better the chances of avoiding failure. It's that simple. It just makes it all the more maddening that, even with inflated estimates, things still go wrong!
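    To see how this plays out, here is a tiny sketch of how padding compounds as an estimate moves up the chain. Everything here is hypothetical: the function name, the layers, and the padding factors are invented for the illustration, not taken from any real project.

    ```python
    # Hypothetical illustration: how estimate padding compounds as it
    # moves up the chain. All numbers are made up for the sketch.
    def padded_estimate(honest_days, padding_factors):
        """Apply each layer's padding factor in turn."""
        estimate = honest_days
        for factor in padding_factors:
            estimate *= factor
        return estimate

    # A task that should take 10 days, padded by the programmer (1.5x),
    # the team lead (1.25x), and a manager who doubles it (2x):
    final = padded_estimate(10, [1.5, 1.25, 2.0])
    print(final)  # 37.5 days quoted for 10 days of work
    ```

    With just three modest layers of self-protection, the quoted number is nearly four times the honest one, and everyone involved gets to "beat the estimate."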

    Requirements

    The whole modern software development process starts from requirements. Gamesmanship around requirements is therefore front and center. Estimates are based on requirements, and therefore controlling and fixing the requirements is central to the effort of creating "success." The system may fail, the users may hate it, but if it meets the "requirements," the people running the project get to declare "success." What you'd like is for the project to succeed when the needs of the business are met. The perverse incentive is for the people delivering the system to define "meeting the requirements" and then control the requirements to assure that they're met, regardless of what disasters happen to the business.

    False reporting

    Just like at the VA, project managers are highly incented to avoid reporting problems — typically using big fancy reports that are chock full of meaningful-seeming stuff but are in fact just garbage. Just like in the mortgage-driven financial crisis, everyone involved is incented to declare success, take their rewards, and kick the can down the road for the next guy. Eventually, with shocking speed, it all comes crashing down, just like the financial system, and just like the mere 4 days between the laudatory article about how great Cover Oregon was going to be and the admission of total failure.

    False Assessments

    Here's where the rubber meets the road. Who is incented to blow the whistle on a failing software project? How, when and by whom is a software project judged to have failed? Most importantly: what are the consequences of having failed?

    We all know the answer. Who has even heard of a software engineer who was fired for failure to deliver? And the people in charge? Never. It wasn't their fault! And the project didn't fail anyway! The requirements changed every month, the target kept moving, and blah, blah, blah.

    Conclusion

    Your kid comes up to you and asks, "can I play my video game now?" You briefly think about how your question when you were that age was "Can I go out and play now," but the kid isn't interested, and is bouncing around waiting for your "sure." Being the aspiring adult you are, you act responsibly and ask "Have you done your homework?" There's a brief pause. The kid is doing a quick risk-reward ratio calculation. If he says "yes," he probably gets to do what he wants. But you might ask to check. Hmmm.

    This is the breeding ground of perverse incentives. We all learn to balance honesty, openness and getting what we want. Some of us go for honesty and openness, deciding that anything else just isn't worth it. But loads of people make an informed judgment on a case-by-case basis, much like the kid and his homework.

    Whatever the morality of the case, the facts are clear: software projects fail left and right, and perverse incentives are a significant factor in making them fail. Without changing the incentives, we're unlikely to abandon the Bad Old Way of building software and achieve success.

  • Joe Torre and Software Development

    Joe Torre had an outstanding run as manager of the NY Yankees baseball team. While managing baseball seems pretty distant from managing software development, there are nonetheless a couple of important lessons to be learned. Put simply, baseball has it right and software has it wrong: if we chose software managers using the common-sense methods that are widely accepted in baseball, our software development track record would emerge from its current long, dismal, always-agonizing depression.

    Joe Torre

    Maybe not everyone knows who Joe Torre is. Now retired, he was a baseball player and manager.

    JoeTorre1982
    Joe Torre had an excellent career as a player, from 1960 to 1977. He was an all-star 9 times, was NL MVP once, and was the NL batting champion once. Unusually for a baseball player, he had extended playing time at multiple positions: catcher, first base and third base.

    Joe_Torre 2005
    He went on to have a stellar career as a manager, from 1977 to 2010. His Yankees won the World Series 4 times. He was AL manager of the year twice. His NY Yankees #6 was retired.

    Players and Managers

    Are most managers former players? Is Joe Torre the exception? Loads of baseball fans imagine they can do a better job than their home team manager. The owners have their own opinions on the subject. How hard can it be?

    I looked into this question. There is a list of every baseball team manager from the start of the game. The list gives lots of information, including the manager’s history as a player (or not).

    Here are the facts: as of today, there have been 686 managers of major league baseball teams, starting in 1871. Of those, 566 were former players, while 120 were never players. So the numbers show that the vast majority of managers have been former players. Just 17% of managers since 1871 were never players.
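    The arithmetic from the list is easy to check for yourself; this little sketch just re-derives the percentages from the counts given above.

    ```python
    # Sanity-checking the percentages from the manager list:
    # 686 managers total, 566 former players, 120 never players.
    total = 686
    former_players = 566
    never_players = 120
    assert former_players + never_players == total

    pct_former = round(100 * former_players / total)
    pct_never = round(100 * never_players / total)
    print(pct_former, pct_never)  # 83 17
    ```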

    Is it just Sports?

    My go-to example in music is, of course, Franz Liszt, who excelled as a performer, composer and conductor.

    Liszt
    But he was hardly alone. The NY Times says

    Times have not really changed. In Bach's day composers played their music at keyboards and conducted the instrumentalists about them. Beethoven conducted. So did Berlioz, Mendelssohn, Wagner, Mahler and Strauss. In our day composers are still conducting…

    Here’s Gerard Baker, managing editor of the Wall Street Journal. Gerard Baker WSJ
    Mr. Baker is an accomplished writer, an excellent reader and judge of other people's writing.

    A pattern seems to be emerging here

    Yes, it’s a pattern. Can you imagine a CFO who can’t add? A managing editor who not only can’t write, but can’t even read? How about a museum director who not only isn’t an artist, but can’t see?

    Let’s apply the pattern to software!

    Oh! Ummmm…

    "Well, software is just another thing that can be managed by good management techniques!"

    "I don’t need to know the details – I manage for results!"

    Can we talk about something else now please?

    Conclusion

    The best qualifications for managing software in general and programmers in particular have never been a hot topic. In spite of all the evidence of massive failure, I doubt it will become a hot topic any time soon. But it should be! Just think about the basics here: however peculiar you may think writers are, do you really think editors don’t need to be able to read and write themselves? You may think of accountants as people with thick glasses hunched over desks with green-shaded lights, but do you really think the CFO doesn’t need to be able to add? Programmers may be weird, but doesn’t similar thinking apply?

    Postscript

    While 83% of baseball managers were players, 17% were not, among them some excellent managers. I'm not saying that only former programmers can manage programming efforts, and I know a couple of truly excellent non-programmer managers. But in each case, they do interesting, special things that are not widely understood, and those things are what enable them to achieve excellent results.

  • Building Software: the Bad Old Way and the Good New Way

    Software is hard to build. There are lots of failures. When the stakes are high, all parties concerned are “highly qualified” and failure isn’t an option – there are still lots of failures.

    What is it about software? If cars failed anywhere close to the rate at which software fails, everyone would be afraid to ride in them. If houses failed at the rate of software, we’d see an explosion in people living in tents. Like it or not, building software is different from building almost anything else.

    The reason why there are so many software failures is simple: most people think that building software is pretty much the same as building anything else, and they apply the same methods and criteria to it. Even collections of supposedly smart people who start out designing processes specific to software end up throwing in the towel and admitting that their processes are applicable to nearly anything – which is a good way of distracting people from the fact that they don’t work for software!

    Knowledge of the bad old way to build software is widespread. It’s pretty simple, when you come right down to it. It’s not much different than, say, building a house. You start with requirements (how many bedrooms do you need, etc.) and budget. You work with an architect and get a set of plans (the detailed design). You put it out to bid, and select a contractor based on price, time and quality. The contractor gets permits, builds the house, you make progress payments along the way, there are inspections, a final payment and finally you move in.

    When you try to apply this process to software, things fall apart quickly. I won’t go through the awful details, but imagine you weren’t allowed anywhere close to the job site until move-in day, and that most of the work is done by kids who are learning as they go, led by managers who have never used the tools or materials the kids are using. The Dirty Secret of Peacetime Development is gruesome.

    The Good New Way to build software is just as simple, but rarely practiced. Here it is. Have the developers talk with the users and figure out what’s needed. Not for weeks – for a couple of hours. Then they should build something, taking hours or a couple of days at most. The “something” may not do much, but it should kinda work. Then they should show it to the users and have another discussion about what should be added, what should be changed. The programmers get back to work. No control freaks allowed! “That’s not what I meant” and “I forgot to add that” are just fine. It’s best if everyone agrees that progress is more important than perfection. When the software gets to elementary school age, it should be sent to school – how “real” users react to your precious child will add value and give perspective. If things are going well, the kid can skip grades. Repeating grades is OK too. Gradually you let the kid out on play dates and even summer camp. There’s never a date when the kid is “done;” there’s just increasing independence, and fewer visits back home.
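    The shape of that loop can be sketched in a few lines of code. To be clear, this is purely illustrative: the feature names and cycle structure are invented, and the functions stand in for conversations between people, not any real tool or API.

    ```python
    # Illustrative sketch of the build-a-little, show-it, fold-in-feedback
    # loop described above. All feature names are made up for the example.
    def run_cycles(initial_asks, feedback_per_cycle, max_cycles=10):
        """Build the next small thing, show it, add the feedback, repeat."""
        shipped = []
        backlog = list(initial_asks)
        for cycle in range(1, max_cycles + 1):
            if not backlog:
                break
            shipped.append(backlog.pop(0))  # build the next small thing
            # "That's not what I meant" and "I forgot to add that" are fine:
            backlog.extend(feedback_per_cycle.get(cycle, []))
        return shipped, backlog

    shipped, remaining = run_cycles(
        ["login", "search"],
        {1: ["sort results"], 2: ["forgot: export to CSV"]},
    )
    print(shipped)    # ['login', 'search', 'sort results', 'forgot: export to CSV']
    print(remaining)  # []
    ```

    Note that the loop never needs a grand up-front design: items that users forgot, or that came out wrong, simply join the backlog and get handled in a later cycle.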

    One of the biggest advantages of this method is the elimination of the classic “big surprise,” that heart-pounding moment of the big demo a few days prior to launch. In an article published September 24, 2013, officials of the Oregon Health Exchange stated:

    Oregon optimistic

    The big surprise happened just 4 days later, on September 28, 2013:

    Oregon shut it down

    There you have it. The nearly-ubiquitous Bad Old Way of building software, as illustrated at great trouble and expense by Cover Oregon (among many others).

    The Bad Old Way doubles as a heart stress test and as reputation roulette for practically everyone involved. On the other hand you have the Good New Way, the apparently chaotic but amazingly effective way of building software, as exemplified by start-ups, people under pressure, and, generally speaking, programmers who don’t have the time, money or perhaps patience to build things the bad way, and who end up adopting Wartime Software methods. And simply whupping the competition.


  • Delivering Software is a Nightmare

    Blackliszt is down!

    Capture

    That led me to reflect on how nice things are when they work, but how many things can go wrong.

    Building and delivering software is nightmare-hard. Given all the difficulties, it sometimes amazes me that anything ever works. This is not news – it’s just the way things are. But I’m going to illustrate it with current events.

    Attacks from the outside

    My blogging platform has been under attack for days. Here's what I got when I went there. Typepad error

    Here's a little bit about what's going on.

    Techcrunch typepad ddos
    Bad guys attacking! For days and days at a time!

    Attacks from the inside

    It's bad enough that there are hordes of tough, effective bad guys roaming around outside the walls looking to cause trouble. But loads of folks inside the castle walls, people who are supposed to be good guys, aren't.

    Government cyber security

    This is a huge problem, and it isn't widely acknowledged.

    Brute incompetence in government systems

    There’s an ongoing, massive failure of government computer systems. DOJ statement
      NYP 4-22 1
    10 days ago as of this writing. A whole nationwide part of the Department of Justice has been thrown back to pre-computer days. Details were revealed here.

    When you build things the right way, no single fault can take a system down. When you build things the wrong way, a failure might take an hour or two to fix. If you build a system in an unimaginably stupid way, it might take a day. Like an anti-Manhattan Project, only a government bureaucracy could possibly run a system that everyone knows will take a couple of weeks to repair. But that's how it is here!

    NYP 4-22 2

    Amazing brain-dead fails in non-government systems

    I got an e-mail today telling me about a wonderful new storage system. They claimed that Gartner had dubbed them a “cool vendor,” which I’d never heard Gartner doing and sounded weird. Exablox email

    So I thought I’d check it out. Here’s the result.

    Exablox DB error
    This wasn’t a fluke. Their whole website, not just the e-mail landing page, gave the same result. Good work, guys! Good to see the private sector showing everyone how the discipline of profit-making leads to great computer systems results!

    Chaos and obsolescence among the “experts”

    When you walk into any computer systems organization, you have no idea what you’re in for. This is because most computer organizations make the Keystone Kops

    KeystoneKops
    look like Eliot Ness and the Untouchables.

    Put aside all the politics and infighting; software “experts” can be continents and decades away from each other in terms of how to get things done. Most software people can’t even follow the arguments, much less decide what the right answer is. It’s amazing these people manage to get out of bed in the morning.

    When it “works” is it vaguely usable?

    It’s not under attack, from the outside or the inside. It’s not broken. The internal comedy of errors has managed to deliver a piece of software that arguably “works.” Wonderful!

    Sadly, this is where reasonable debate on software would start, not end. Because there are endless ways to get things done in software, and endless ways to interact with users. When a software company with loads of reputedly smart employees and tens of billions of dollars in the bank trumpets its proud new creation, and it turns out to be a barely-usable piece of crap, the debate should be about how crappy-but-not-broken software gets shipped. Instead it's far worse.

    Conclusion

    This is why smart, motivated start-ups beat rich established companies all the time. You’ve got 100 programmers for every one of mine? Great! The odds are on my side! Of course, the truth is that start-ups mostly fail. Their stuff doesn’t work or nobody cares or they just think they’re great when really they’re just like everyone else.

    Nonetheless, a tough, smart, hard-working team can be literally 100 times more effective than the established players, including the cool modern ones like social media companies. Because building and delivering software that actually works and people want is a shockingly difficult thing to do.

  • Lessons for Software from the History of Scurvy

    Software is infected by horrible diseases. These awful diseases cause painfully long gestation periods requiring armies of support people, after which deformed, barely-alive products struggle to be useful, live crippled existences, and are finally forgotten. Software that functions reasonably well is surprisingly rare, and even then typically requires extensive support staffs to remain functional.

    Similarly, sailors suffered from the dread disease of scurvy until quite recently in human history. The history of scurvy sheds surprising light on the diseases which plague software. I hope applying the lessons of scurvy will lead to a world of disease-free, healthy software sooner than would otherwise happen.

    Scurvy

    Scurvy is caused by a lack of vitamin C. It's a rotten disease. First you get depressed and weak. Then you pant while walking and your bones hurt. Next your skin goes bad,

    378px-A_case_of_Scurvy_journal_of_Henry_Walsh_Mahon
    your gums rot and your teeth fall out.

    Scorbutic_gums
    You get fevers and convulsions. And then you die. Yuck.

    The Impact of scurvy

    Scurvy has been known since the Egyptians and Greeks. Between 1500 and 1800, it's been estimated that it killed 2 million sailors. For example, in 1520, Magellan lost 208 out of a crew of 230, mainly to scurvy. During the Seven Years' War, the Royal Navy reported that it conscripted 184,899 sailors, of whom 133,708 died, mostly due to scurvy. And long after most British sailors were scurvy-free, expeditions to the Antarctic in the early 20th century were still plagued by it.

    The Long path to Scurvy prevention and cure

    The cure for scurvy was discovered repeatedly. In 1614 a book was published by the Surgeon General of the East India Company with a cure. Another was published in 1734 with a cure. Some admirals kept their sailors healthy by providing them daily doses of fresh citrus. In 1747 the Scottish naval surgeon James Lind proved (in the first-ever clinical trial!) that scurvy could be prevented and cured by eating citrus fruit.

    JamesLind

    Finally, during the Napoleonic Wars, the British Navy implemented the use of fresh lemons and solved the problem. In 1867, the Scot Lachlan Rose invented a method to preserve lime juice without alcohol, and daily doses of the new product were soon standard for sailors, which is how "limey" became synonymous with "sailor."

    B_scurvy

    Competing Theories and Establishment Resistance

    The effective cures that had been known and used by some people for centuries did not exist in a vacuum. There were competing theories. Cures included urine mouthwashes, sulphuric acid and bloodletting. As recently as 100 years ago, the prevailing theory was that scurvy was caused by "tainted" meat. How could this be?

    We've seen this movie before. Over and over again. I told the story of Lister and the discovery of antiseptic surgery — and the massive resistance to the new method by the leading authorities at the time.

    Software Diseases

    This brings us back to software. However esoteric and difficult it may be, software is a human endeavor: people create, change and use software and the devices it powers. Like any human endeavor, some of what happens is because of the subject matter, but a great deal is due to human nature. People are, after all, people, regardless of what they do. Patients were killed for lack of antiseptic surgery — and the surgical establishment fought it tooth and nail. Millions of sailors were killed by scurvy, when a cure had been known, practiced and proved for centuries. Why would we expect any other reaction to cures for software diseases, when the "only" consequence of the diseases are explosive growth in the time, cost and risk to build and maintain software, which is nonetheless crappy and late?

    Is there a general outcry about this dismal software situation? No! Why would anyone expect there would be? Everyone thinks it's just the way software is, just like they thought scurvy in sailors and deaths after surgery were part of life. Government software screws up,

    Healthcare-gov-wait
    software from major corporations is awful,

    Hertz fail

    software from cool new social media companies is inexcusably bad. Examples of bad software can be listed at endless, boring, tedious, like-forever length.

    Toward Healthy Software Development

    If I had spent my life in the normal way (for a software guy), I wouldn't be on this kick. But I didn't and I am on this most-software-sucks kick. Early on, I had enough exposure to large-group software practices to convince me that I wanted none of it. I'd rather actually get stuff done, thank you very much. Now, looking at many young software ventures over a period of a couple decades, the patterns have emerged clearly.

    I have described the main sources of the problems. I have described the key features of disease-free software development. I have explained the main sources of the resistance to a cure, for example in this post. And I have no illusion that things will change any time soon.

    It will sure be nice when the pockets of healthy software excellence that I see start proliferating more quickly, and when an anti-establishment consensus consolidates and gains visibility. In the meantime, there is good news: groups that use healthy, disease-free software methods will have a massive competitive advantage over the rest. It's like ninjas vs. a collection of retired security guards. It's just not fair!

  • Who Makes the Software Decisions?

    When the home team loses game after game, everyone starts wondering who's in charge, and shouldn't changes be made? Well, this is exactly what's happening in software. It's gotten so bad that it's making the front page of the tabloids.

    Post
    Who's in charge in software? Who makes the decisions? Could it possibly be that the wrong people are empowered to make crucial decisions in software, and that things will only get better if we make a change?

    Software decision makers

    Who makes important decisions in software? The answer is obvious: anyone but programmers (ABP)! Programmers are the ones who do the real work: create and modify source code. They work with various tool sets in one or more programming environments and create the code that leads to the results needed by the business. You might think that programmers, therefore, would be front-and-center in selecting the toolsets and methods to use to get the desired results most effectively. This is rarely the case. Such decisions are typically made by people who are not, and in many cases never were, programmers.

    What the industry thinks

    I receive e-mail solicitations every day for various software products. I'm often invited to attend seminars or webcasts. Here is a typical example I received recently: What

    I'm not drawing attention to it because it's exceptional in any way. The next part of the solicitation is also typical, who should attend: Who

    Who should attend? ABP, of course! Again, the company that sent the solicitation is not doing anything wrong. In fact, they're being smart. They are inviting the people who make important decisions about programming, i.e., ABP.

    How things work in other fields

    In just about any field you can think of, the more highly specialized and skilled the person doing the work, the more involved that person tends to be in all important decisions about the work. While kids starting out in baseball are told what gloves and bats to use, accomplished players have their own gloves and bats they have selected.

    Even when the front-line people in other fields don't make the ultimate decisions, the important managers who do tend to be former front-line people.

    Software is different

    Things are different in software, of course. In sports, anyone can watch the game. They see the players on the field or court. TV commentators can circle a player on the screen and tell you to watch the thing they did, whether stupid or smart. Whatever it was, it makes sense to the viewer.

    There is no equivalent in software! Software is invisible! (to everyone but programmers…) All those important decision-makers ever see is reports. Not only don't they open the hood and look underneath, they can't even see the car! The decision-makers largely rely on rumors and hearsay, but nonetheless develop strong opinions about how best to win on a playing field they can't see, where a game is played they can't play, following rules about which they are entirely ignorant. Hmmm, how is this going to turn out, I wonder???

    It doesn't have to be this way!

    There are places where business-as-usual in software decision-making is … blatantly violated! Someone who … wrote the code! … is actually in charge of things. Of course, since he's a guy entirely without management training — OMG, he doesn't even have an MBA, that's how bad it is! — the place must be a disaster, right?

    One such place is Athena Health. Athena powers doctor's offices. I first encountered them more than ten years ago, when I had the pleasure of having a phone interview with the guy who wrote their code. I was hearing lots of skepticism, which is why I was having the call. This was still the time when internet bubble thinking ruled technology, and the rumor was that this guy was using "toy" technology and building something that "wasn't scalable." Heh.

    The real problem was that this guy's skill was … get ready for this … writing good code, and making excellent decisions about the code along the way! He had no experience in or talent for telling ignorant investors what they expected to hear. Bless him!

    We invested, and the company has done, and continues to do, great. A couple of years ago, I was pleased to have Ed Park, who is now EVP and COO of the company, attend my nerdfest, a gathering of top CTOs of companies I'm associated with. Here he is explaining something.

    2011 07 03 Nerdfest Sunday 006s
    Ed wrote Athena's original code. He still knows stuff, and continues to make decisions based on the substance, not just go-through-the-motions process.

    Conclusion

    While lots of spinning goes on to disguise the fact, software projects typically fail, and even ones that "succeed" have crappy software. The Post was right (see above) to feature the question, "who makes the software decisions?" The industry's answer is clearly and unambiguously ABP (anyone but programmers). If you want the software your organization produces to sort of, actually, you know, work, you might want to think about removing the "ABP" restriction from the job requirement for software decision-makers.

  • When is Software Development “Done?”

    Almost any activity you can think of, from building a road to composing a symphony, gets to a point where it's done. If not, something awful has happened, and you declare failure and move on. Software projects seem to be different, for no obvious reason. Quite frequently, software isn't a throw-it-out failure, but then it's not done either. What's going on here?

    Building a house

    Why can't software be like building a house? My uncle built a house for himself back in the 1950's. First came the foundation:

    1955 Arch St construction 14s

    He did a great deal of the work himself, as much as he could:

    1955 Arch St construction 67s

    All the way up to finishing the chimney for the fireplace and furnace:

    1955 Arch St construction 97s

    And then it was done! He and his wife could enjoy a nice time with their nephews in front of the fireplace of the completed house:

    1959 01 01 Mountoursville 4-10c

    Which actually was completed, unlike all those software projects, which drag on and on, refusing to get completed or to die. Perhaps this is why books and movies about zombies have become so popular!?

    If houses were like software…

    If houses were like software, instead of actually being done with them, they'd all be like the house built by Sarah Winchester, who bought an unfinished farm house in 1884, and spent the 38 years from then until her death having it worked on and expanded continuously, all day and all night. Here's a clip about it from an old magazine:

    Winchester mystery house

    Building Software

    Frequently, software projects are just failures. In spite of the traditional massive padding of estimates, things take even longer than projected. After the usual remedies (denial, punishing the innocent, rewarding the guilty, etc.) are exhausted, more money and resources are thrown at the project to "rescue" it. This inevitably has the effect of adding to the mountain of evidence supporting the thesis advanced by Fred Brooks in his classic "Mythical Man-Month" that adding resources to a late project makes it even later. Finally, the project is declared to be a "success" and promptly put on the shelf, never to be mentioned in polite company again; or, in rare cases, the project is declared a failure so that blame can be put on the innocent target of some politically powerful person's agenda.

    However, there are exceptions. I see such exceptions constantly in the growing, innovative companies I work with. These companies don't just grow. They learn, experiment, evolve, extend and sometimes take great leaps. As modern companies, they do this in close collaboration with their software, and frequently software is all or a major part of the service they provide.

    Instead of thinking of the software as a house that needs to be designed and built, it's more appropriate to think of these companies as starting out with baby software that needs to keep growing and becoming stronger and more independent, like an infant grows to be a toddler and so on. If you stop developing software in this context, you guarantee the demise of the business. With a static business, it's appropriate to think of "finish or fail" as the relevant choices for software. With an innovative, growing business, it's appropriate to think of "evolve or die" as the relevant choices for software.

    Conclusion

    Everyone wants software to be like everything else: figure out what you want, build it, declare completion or failure, and move on. But when software is the engine that runs your business and you're trying to get on track to be a big success, the rules are different. In that case, the rules for software are: make the most important changes, figure out what's most important next, do it, clean up the software a bit, run some experiments, refine the winning approach, and keep evolving. Work fast, work accurately, be responsive, and never stop learning. That's how you win with software.

     

  • Wartime Software Book Available

    I've been threatening to release my book on Wartime Software. It is now available as a Kindle book.

    BBSB cover WTS
    Wartime Software is all about writing software when competition and speed matter. It's about releasing more often. It's about using new methods, as different as building bridges in peacetime and in time of war.

    Here is the introduction, which should give you the idea.

    Most people assume there is one “right” way to build software, and that’s that. While there are various fashion trends that infect software from time to time, none of them are as different as they like to think they are.

    There are some important but little-discussed facts about the mainstream consensus of software development:

    • It is mostly organized to give non-technical people confidence that things are OK, meaning on-time and on-budget. Its highest principle is predictability. Not speed.
    • It mostly doesn’t work. Studies support what everyone in the field knows: most projects fail outright, or have their goals changed to avoid admitting failure.

    So what we have are methods that are slow – and produce crappy results! What happened to slow but sure, or slow but steady? What we’ve got is slow and stupid.

    If everyone you compete against uses the same crappy methods, you’ll be OK. Your projects will be perpetually late and disappointing, but so will everyone else’s, so you’ll be performing “up to standard.”

    But what if you’re not? What if you’re competing against a group that gets way more done in much less time? I’m not talking 10 or 20% here; I’m talking many whole-number factors, like 10, 50 or more. What’s going to happen? It’s simple: you’re going to lose! If that’s OK with you, stop reading right now, close your eyes, and get lost in your muzak. You’ll be happier.

    If your goal is to learn the standard, accepted techniques of software as widely practiced, don't waste your time with this book. But if you're pioneering or really under the gun and need to find a way to program the way software ninjas program, you'll find some useful information in this book.

  • Wartime Software: Optimizing for Speed

    Software Development is a mission-critical issue for increasing numbers of organizations, particularly the growing number of "software-enabled service" organizations. Which makes it all the more surprising that there is a lack of consensus about how best to do it.

    I've written about software development quite a bit on this blog. Now, I'm in the final stages of preparing my small book on Wartime Software Development for publication as an inexpensive Kindle book. This post about bridges in war and peace gives some of the flavor. 

    Wartime Software is all about optimizing the process for speed instead of predictability. Here's a short excerpt from the book about what optimizing for speed really means.

    The usual procedures for producing code are supremely arrogant. They are arrogant because we decide that we can figure out what the customer wants, and the customer should simply wait while we “get it right.” We’re so sure that we know what the customer wants that we build it, and not just any old way, but we build it industrial strength, loaded up with piles of documentation, test plans for every little jot and tittle, so that when we (finally) roll it out, it’s on silver platters and with bands playing, with code ready to stand the test of time…and sadly, all too often, we’re wrong! We’ve misunderstood the customer, built things they don’t want, failed to build things they do want, built some things they need in confusing, incomplete or simply perverse ways. We frequently spend a year solving last year’s problem, and when we deliver our well-intentioned mess next year, the customer and the market have moved on and sometimes our competitors have leapfrogged us. Most software projects resemble your worst nightmare of a pork-barrel politics public works project, like the “bridge to nowhere,” the proposed project in Alaska that was projected to cost nearly $400 million to build a bridge as long as the Golden Gate Bridge and higher than the Brooklyn Bridge to Gravina Island, an island with only 50 residents, no stores, no restaurants and no paved roads. Who cares how well the bridge was designed?

    The design of the bridge (or the software) is not the most important thing – the most important thing is the unmet needs of the people who will use the thing you intend to build. And so the number one priority is to discover what those needs are, from the only authoritative source. And by the way, the customer’s opinions may be more relevant than your opinions, but they are not truly authoritative – only the customer’s actions are authoritative.

    And that means that you have to find a way to write code really quickly, so that you can turn your ideas (that hopefully you’ve mostly stolen from customers or other successful services) into services, modify them quickly based on customer feedback, and either discard them and move on, or evolve them until you’ve improved your service, using the real actions of real customers at every step of the path to make your critical decisions. You have to optimize all your processes for speed in order to pull this off.

    And remember – if you’re not doing things this way, you’re probably building a software “bridge to nowhere.”

  • Software Development: the Relationship between Speed and Release Frequency

    There is a deep, fundamental relationship between the velocity of software development and the frequency of releases. I hope this relationship will be studied in detail and everything about it understood, but the basics are clear: with minor qualifications, the more frequently you release your software, the more rapidly it will advance by every relevant measure. It will advance not only in feature/function, but in quality!

    Mainstream thinking on Releases and Development Speed

    The relationship I propose, "more releases = more features & better quality," is counter to the vast majority of mainstream thinking in software. In fact, in those terms, it's counter-intuitive. Here's why.

    Think about software development in the simplest possible terms. You've got to define it, plan it, do it, check it and release it. Five basic steps, which apply across a wide variety of process methodologies. Each step takes some time, right? After you do the work, you've got to check it and then release it. And you can't just check what you did — you also have to make sure you didn't break anything that used to work, the "keep it right" part of quality, which grows ever larger as your software evolves.

    This "check and release" process is a kind of necessary evil, the way most people think of it, and as quality failures hit you, it tends to get bigger and longer. A clever project manager (an oxymoron if there ever was one, except when intended ironically, as it is here) will naturally think, gee, let's go from 6 releases a year to 4. By cutting the overhead of the two extra releases, we'll be able to buy some development time back.

    Yup, that really is how people think! Fewer releases = more time to do other stuff = we get more done.

    Not!

    A Real-life Example

    A good example of a company that illustrates the proper relationship between release frequency and development speed is RebelMouse. RebelMouse is a next-generation, socially-fueled publishing platform. It can be used to turn boring-appearing blogs like BlackLiszt from this:


    BL snip

    to this, a snapshot from my RebelMouse page:

    RM page

    Increasingly, they are used by big-media places, for example for Glee:


    Glee

    and the recently released real-time publishing curation features were used for The Following to create a social firestorm:


    The Following

    RebelMouse — the Facts

    The CTO/founder of RebelMouse is Paul Berry. Here he is below explaining something to his fellow nerds at the nerdfest I held a while ago.


    2011 07 02 nerdfest first day 008s

    RebelMouse has grown like crazy in its short life. Currently there are about 280,000 websites powered by RebelMouse, and that number is growing over 100% month-to-month. Their sites have over 2 million unique visitors a month.

    Does RebelMouse have just a handful of releases a year? Duhhh. Try over 10 a day. A day! And there are more than 30 developers, who are not all in the same location.

    Digging in

    There is a lot to be said on this subject. For now, I'm just going to keep it to a single simple but important observation.

    The relationship between development speed and frequency of releases does not hold up at a fine-grained level; so, for example, given two organizations, one of which has a release every 10 weeks and the other every 11 weeks, any difference in speed will be random. Similarly, if the two organizations release 5 times a day and 10 times a day, any difference in speed will also be random. But at a coarse-grained level, I observe large differences. HUGE differences.

    Conclusion

    RebelMouse is far from the only example, but they show the relationship between development speed and release frequency very nicely. They move much more quickly than most development organizations of their size — in fact, they manage to push hundreds of releases in the time most organizations would have been able to limp through an "agile" (heh) development cycle or two.

     

  • Software: Comparing Waterfall and Agile

    Lots of people talk about the evils of waterfall-style development. They aspire to move to something they think is better. Agile is high on most short lists for the something better. How different are waterfall and agile? Answer: not much.

    Waterfall

    The Waterfall model is an ordered, systematic method for determining what a computer system needs to do (the requirements) and then getting it done and into production. Like this:

    Waterfall_model_(1).svg
    The method is well-named. It really does look like a waterfall, like that big one famous for honeymoon visits on the Niagara River:

    2012 08 08 Niagara Falls 008
    Above is a picture of Niagara Falls I took a little while ago, and it's good for understanding software waterfalls. See the big river of water flowing from the upper right? See how everything is clear as it starts to fall? Then you see there's all the mist, making it very hard to see anything clearly at the end. Kind of like most software projects… This one gives you a good sense of the transition from clarity to mist:

    2012 08 08 Niagara Falls 014
    Of course everyone hopes for the good outcome, for the rainbow emerging out of the mist:

    2012 08 08 Niagara Falls 010
    But, I'm sad to say, the experience of Ms. Annie Edson Taylor comes closer to the common experience of waterfall software development:

    2012 08 08 Niagara Falls 020
    While there is a vast array of software development philosophies, waterfall appears to be the standard against which most of them are compared. Her concluding remarks say it all: "nobody ought ever do that again."

    Agile

    Naturally, people look for better ways, and find lots and lots of ways that are thought to be better. It is incredible the number of software development philosophies there are. They go on and on! At least in my experience, Agile is the one I most often hear as a replacement for waterfall.

    Like with all these things, people have a lot to say about Agile. There are books and books and conferences and training and certification, endlessly. Here is a summary diagram, given at roughly the same level of detail as the waterfall diagram above:

    800px-Generic_diagram_of_an_agile_methodology_for_software_development
    Lots of strong claims are made for Agile. It's faster, leads to better results, etc. Stuff that everyone says they want. But what are the real differences?

    Comparing waterfall and agile

    Take a close look at the two diagrams. Both of them start from requirements and go through design, development, test, integration and delivery. Here's the difference: with waterfall, you determine all the requirements up front and then drive through to delivery. The requirements are fixed, and you determine the time from there. In Agile, you determine a bunch of starting requirements, deliver them in a fixed time period (for example 2 to 6 weeks), and then get another set of requirements, and keep cycling until the project is done.

    Waterfall: first fix the requirements, figure out the time.

    Agile: fix the time periods, and then repeat until you're done.

    Putting all the rhetoric aside, the difference between the two methods is simple: one determines the time from fixed requirements, and the other takes fixed time periods and fits requirements into them as appropriate. In other words, Agile is little more than a series of time-fixed waterfalls!

    2004 02 16 Belize Waterfall (8)
    Remember, it's all just Process!

    It's easy to get caught up in all this and forget that the most important thing isn't what makes Waterfall and Agile different — it's how they're the same. Not exactly the same, but the same kind of thing: process!

    You can build 100,000 lines of really crappy code using Agile. You can build 10,000 lines of great code that accomplishes the same thing using Waterfall. Or the other way round.

    In Simple Terms

    In simple terms, Waterfall is:

    Do once: {Define. Design. Do. Check. Deliver.}

    and Agile is:

    Do until done: {Define. Design. Do. Check. Deliver.}
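    The contrast can be sketched in a few lines of Python. This is a toy model, not a real methodology: the phase functions and the batching are hypothetical stand-ins I'm inventing, and the only point is where the loop sits.

    ```python
    # Toy model of the two processes. The phase names and data shapes are
    # hypothetical; real "define/design/do/check/deliver" work is much messier.

    def run_phases(requirements):
        """Define -> Design -> Do -> Check -> Deliver for one batch."""
        design = [f"design:{r}" for r in requirements]          # design it
        build = [d.replace("design", "code") for d in design]   # do it
        assert all(b.startswith("code") for b in build)         # check it
        return build                                            # deliver it

    def waterfall(all_requirements):
        # One pass: fix ALL the requirements up front, one big delivery.
        return [run_phases(all_requirements)]

    def agile(all_requirements, batch_size=2):
        # Repeated passes: fix the time box (modeled here as batch size),
        # fit requirements into each iteration, and deliver every time.
        releases = []
        remaining = list(all_requirements)
        while remaining:
            batch, remaining = remaining[:batch_size], remaining[batch_size:]
            releases.append(run_phases(batch))
        return releases

    reqs = ["login", "search", "billing", "reports", "export"]
    print(len(waterfall(reqs)))  # 1 delivery
    print(len(agile(reqs)))      # 3 deliveries of the same work
    ```

    Same phases, same work; the only structural difference is that one wraps the phases in a loop.
    
    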

    Conclusion

    There are many good things about Agile. It's more iterative and can allow for more feedback loops than pure Waterfall. But its difference from Waterfall is easily exaggerated, which helps explain why the results in practice are so often disappointing. In the end, switching the precedence of the two key variables (requirements and time) can't make that much difference when the fundamentals of software and its Postulates are not addressed.

  • Software Postulate: the Measure of Success

    There is a Postulate of software development that, like all postulates, has huge impact on much of what goes on in software. This postulate concerns what is the measure of success in software. There are many ways to formulate it, but at heart, it's simple: success is measured by meeting expectations that have been set. Just as getting to modern physics requires changing the parallel postulate in geometry, so does getting to modern software require changing the "meet expectations" postulate in software development.

    Expectations in Software

    Two typical CIOs are talking with each other. They get together to share experiences because, while they don't compete with each other, their groups manage technologies of similar size and complexity.

    CIO A is real happy today. "I guess they finally listened last time. They had been late once too often. I dished out a pretty blistering speech about how awful it was, and I added on a couple of threats about what was going to happen to careers if it happened again. We just had our project review meeting, and for the first time in memory, most items were green, with just a smattering of yellows and a couple reds. I breathed such a sigh of relief."

    CIO B isn't so happy. "That's where I was a couple months ago. Why can't these guys just keep it up? What is it with programmers? We were mostly green, but now green projects are a fading memory. Mostly we're in the yellow and red. Yuck."

    What are these guys talking about? Project Management. They're talking about whether the expectations set by their staff have been met (green) or not (yellow and red).

    The green, yellow and red are at the end of a road that starts with requirements, moves on to the crucial, notoriously difficult art of estimation, and then proceeds to implementation. Green says that the estimates are being met, and yellow and red say, well, maybe not.

    Introducing Absolute Measurement in Software

    The CIOs get over their griping and start to compare notes on some recent projects. As it turns out, they're both building a Data Warehouse. They're in the same industry, and the projects are similar in nearly every way, at least from the outside. Common sense tells them that the internals of their projects should be pretty similar. So they compare notes.

    CIO A (the happy one): "My project sounds about the same as yours. It's such a relief that we're on track. We've got a lean team of just 10 working on it (at one point I thought it might take 20 people), and we're just 6 months from the end of the 18 month project."

    CIO B (the unhappy one): "What? I've got 2 people working on what sounds like the same project as yours. I'm 4 months into the project, and instead of finishing in 2 more months, they're telling me it's going to stretch out 2 more weeks, a 25% overrun of the remaining time, which is why I'm so annoyed."

    CIO A: "Are you kidding me? You've got just 20% of the staff with a target of 1/3 of my timeline, and you're mad? Your whole project is a rounding error compared to mine."

    CIO B: "I guess my guys are doing OK after all. I just wish they could set expectations better."

    On this rare occasion, the software managers confronted the reality of absolute measurement in software — but only by chance, and only by comparing two projects to each other. They're not really even approaching absolute measurement — if their two projects had been run equally incompetently, there would have been no surprise!
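    The absolute comparison the CIOs stumbled into is simple arithmetic. Here it is as a sketch in Python; the staffing and timeline numbers come from the dialog above, and the half-month figure is my rounding of the 2-week slip.

    ```python
    # Comparing the two data-warehouse projects in absolute person-months.
    # Numbers are taken from the dialog; everything is approximate.

    def person_months(people, months):
        return people * months

    cio_a = person_months(people=10, months=18)      # 180 person-months
    cio_b = person_months(people=2, months=6 + 0.5)  # 13, including the slip

    print(cio_a)                 # 180
    print(round(cio_a / cio_b))  # 14
    ```

    Roughly 14 times the effort for a similar project, yet CIO A was the happy one, because his team's estimates were padded to match.
    
    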

    What is the Measure of Success?

    What this dialog reveals is the near-universality of measuring success by comparing results to expectations. The CIO who was mad was spending very little money and getting results in a fraction of the time of his compatriot, while the CIO who was happy was doling out the money like confetti and taking the slow boat — but since his team had started by giving him even worse estimates, he thought he was doing great.

    Neither CIO was measuring the success of the projects in absolute terms, the way we measure most things. They started with estimates. If the work was coming in better than the estimate, it was judged successful; if the work was delivered worse than the estimate, it was not successful.

    This is the way it works in software. It's an unspoken assumption, a postulate that underlies nearly everything that is done in software. People don't propose alternatives, just as no one proposed an alternative to the parallel postulate in geometry for more than a thousand years. It's considered the one and only way to do things.

    There are other measures of success

    Groups that do an outstanding job of producing software often achieve it with a different measure of success. Just as you can optimize your work for setting expectations and meeting them, you can optimize your work to achieve maximum velocity. Estimates are less important than maximum speed. For example, there's the well-known answer to the question of how to avoid being eaten by a hungry tiger. It has nothing to do with expectations. It's simple: run faster than the other guys.

    Conclusion

    This is a point of deep theoretical interest, and also great practical application. It's related to building bridges in war and peace. If you're under no time or budget pressure, then maybe the meet-expectations assumption is the way you should measure your software efforts. But if you are under competitive pressure, then you might want to think about organizing your software efforts according to a different measure of success: the velocity method.

  • Software Development Process in Simple Terms

    Software development is complicated to understand, and even more complicated to do. What's worse, developers disagree among themselves about nearly everything. Nonetheless, it's worth understanding at least the basics of what they do, confining ourselves here just to software process, ignoring (for now) the far more important software substance.

    Software Terminology

    Most of the talk you hear about software is about process, things like requirements, design, how and when testing should be involved, etc. There is a sea of specialized language about every aspect of software process, much of it coming from conflicting methodologies.

    All of software process can be boiled down to a small number of basic, understandable things. The main steps are nearly always:

    • defining what you're going to do
    • how you're going to do it
    • doing it
    • checking it
    • delivering it

    What are you going to do?

    Whether it's called "requirements" or "user stories," pretty much every software process starts here.

    How are you going to do it?

    This one amounts to the design phase. Are you going to use a DBMS? Existing libraries? Are you going to apply design patterns? Usually groups have strong preferences for these things, so the usual decisions are endorsed and people move on.


    Do it

    Finally! People actually do stuff!

    Check it

    If there were a software equivalent of the Garden of Eden, in which software happened without bugs (sin), I am unaware of it. So everyone assumes that someone (probably the other guy) screwed up, and we need to fix it.

    Deliver it

    Finally the software needs to get from where it's built to where it's used. The methods and destinations vary, but that's what happens in this final step.
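    The five steps above can be strung together as a trivial pipeline. This is a minimal sketch: each step function is a hypothetical placeholder for whatever your process actually does at that stage.

    ```python
    # The five universal steps as a pipeline. Each function is a hypothetical
    # placeholder; real processes differ wildly in substance, not in the steps.

    def define(idea):   return {"requirements": [idea]}       # what to do
    def design(spec):   spec["design"] = "how"; return spec   # how to do it
    def do(spec):       spec["code"] = "the work"; return spec  # do it
    def check(spec):    assert "code" in spec; return spec    # check it
    def deliver(spec):  spec["shipped"] = True; return spec   # deliver it

    def process(idea):
        spec = define(idea)
        for step in (design, do, check, deliver):
            spec = step(spec)
        return spec

    result = process("build a starter home")
    print(result["shipped"])  # True
    ```

    Whether the idea is a starter home or the Taj Mahal, the pipeline is the same; all the interesting differences live inside the steps.
    
    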

    This is all process

    What I've done here, as critics would say, is "over-simplify." Given the incredible number of different software philosophies, this is understandable. Even within a philosophy, differences that seem minor to outsiders are of crucial importance to those who care about that kind of thing.

    This is all just process! We're just talking about formalities. For example, the process I've described also applies to building a physical structure. The same steps apply whether you're building a simple house
    1956 07 43921 Bagley Rd 01-21
    or the Taj Mahal.
    1996 10 12 Taj mahal 01
    If essentially the same process can result in a world-wide tourist destination or a starter home, is process really the most important thing? Clearly not: substance is vastly more important than process.

    Nonetheless, Process still matters

    If substance is so important, should process be ignored? Of course not — having a sound process is essential. The five steps I defined above need to happen, and depending on the process, appropriate sub-steps as well. For example, unless and until you hire programmers directly from the programmer equivalent of Eden, checking is a non-negotiable requirement. That's exactly why it's important to understand software process in these extremely simple terms. It's got to happen.

    But then you spend most of your time and effort on building your starter homes or your Taj Mahal. In other words, you concentrate on the substance.

    Conclusion

    Software development is plagued with warring methodologies and a surfeit of terminology. It's worth remembering that, in the end, it all boils down to a set of simple, understandable steps that are universal.

  • Process and Substance in Software Development

    High among the concerns of software management are questions of organization and process. While these are reasonable concerns to have, I generally find that paying attention to substance is more productive. If you think of your organization as being like a software factory (a line of thought I generally discourage), this means you should pay more attention to the widgets that come out than the organization of the shop floor.

    Process

    It is easy to be totally consumed by process, organization and people. Everyone wants to know who's their boss. When there are disputes, who has the deciding vote? Many people want to know their "next step" in the organization, the path to greater responsibility, power and pay. Such concerns tend to be greater in the minds of the people on the upper part of the ladder, not to mention the top, since they usually had to work at getting where they are.

    Process and organizational structure are tightly tied. Is QA a separate group with its own head? Or are there QA people as part of each small group of developers? If QA is distributed, what is the reporting structure? This is complicated by the myriad of process fashions that sweep through the industry — there are literally dozens of them in play at any given time, things like Agile and Extreme, with Lean coming up fast.

    Substance

    Substance is embodied in the code that is produced. Given a set of general requirements, the substance of what is produced can differ wildly. Suppose you're extending your application to mobile. Do you use HTML 5? How do you bridge to the details of the local device? Do you write in Objective C (the native language for the Apple devices)? How much do you store locally, and how do you communicate with the servers? What about all the Android devices?

    And I'm just talking about the simplest questions here. Real substance is contained in the details of how the code is written in the chosen environment. For example, the code can be pretty "straight," it can have loads of parameters, it can be layered to varying extents, it can be driven to varying extents by meta-data, etc. These choices have a huge impact on the outcome.

    Process vs. Substance

    Dilbert illustrates the point nicely, as he often does. In the cartoon below, the pointy-haired boss focusses, as you would expect, on process. He is concerned about dates and whether Wally has met expectations that have been set, completely ignorant of the substance.

    Wally, crafty as ever, claims to have created a disastrous substance. The pointy-haired boss, unable to determine whether Wally's claims about substance are true, and unwilling to risk that they may be true, gives in.


    Dilbert Mar 10 2013
    Conclusion

    Don't be Wally — but also don't be the pointy-haired boss. Pay attention to substance. Make it your business to understand it. Your attention will provide an example to your group, telling them what's important to you. Your attention to substance will be like a chef who cares that the diners love the food that comes out of the kitchen, and does so by — what an idea — paying attention to the food itself.

  • Postulates of Software Development

    A great deal of what we do in software is a direct consequence of a couple of fundamental assumptions we make: postulates of software development. Only by questioning and changing those assumptions can we bring about fundamental change in the way we build software.

    Postulates or axioms are rarely discussed or thought about. We just accept them, like breathing air or walking on the ground. Changing a postulate or assumption normally results in a cascade of consequences that changes a great deal.

    Geometry: The Parallel Postulate

    We can understand postulates in software by seeing how they work in geometry. In Euclidean geometry, there are four fundamental postulates, and a pivotal fifth one, the parallel postulate. This is the one that says, basically: if two lines cross a third line and the interior angles on one side add up to less than 180 degrees, the two lines will eventually cross on that side; if the angles add up to exactly 180 degrees, the lines are parallel, and never meet.


    Parallel

    What's important about this postulate (and the others) is that all the rest of Euclidean geometry is derived from them. Given the postulates, all the theorems are implied.

    For example, the famous Pythagorean Theorem is one of the many theorems whose truth grows out of the small seeds of the postulates.


    Pythag

    In the diagram above, the theorem states that a² + b² = c².

    Non-Euclidean Geometries

    What if parallel lines can meet? Think it's impossible? Well, think about lines on a globe.


    729px-Triangles_(spherical_geometry)

    Lines that start out parallel end up meeting — and this is business as usual in Elliptic Geometry. What's worse, the Pythagorean Theorem does not hold in non-Euclidean geometries in general, and in spherical geometry in particular.
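    The failure of the Pythagorean Theorem on a sphere is easy to check numerically. For a triangle with a right angle on a unit sphere, the legs and hypotenuse obey the spherical relation cos(c) = cos(a)·cos(b) instead of a² + b² = c². A quick sketch:

    ```python
    import math

    # On a unit sphere, a right triangle obeys the *spherical* Pythagorean
    # theorem, cos(c) = cos(a) * cos(b), not a^2 + b^2 = c^2.
    a, b = 0.6, 0.8              # the two legs, in radians of arc
    c_sphere = math.acos(math.cos(a) * math.cos(b))  # hypotenuse on the sphere
    c_flat = math.hypot(a, b)    # what Euclid's theorem predicts: exactly 1.0

    print(c_sphere < c_flat)     # True: the spherical hypotenuse is shorter
    ```

    The gap between the two answers grows with the size of the triangle, which is why flat-earth geometry works fine for a backyard and fails for a continent.
    
    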

    This isn't just textbook stuff. For example, Einstein's General Theory of Relativity is based on non-Euclidean geometry. In fact, questioning the Parallel Postulate and devising ways of thinking about and describing non-Euclidean spaces was essential to the development of modern physics. So long as geometry was Euclidean and only Euclidean, progress was impossible.

    The Postulates of Software

    So what are the software equivalents of the Euclidean Postulates? There are few questions that are more important, because only when the foundation is questioned and changed is rational, constructive, internally consistent change possible. Only with new postulates can we derive a whole new set of theorems to define software practice. Only then is fundamental change and improvement possible.

     

  • The Disease of Software Project Management

    There are a lot of books on the market about project management in general and software project management in particular. More than 6,000 of them.

    They all appear to think that software project management is a good thing — at least the brand they preach.

    I've threatened to publish a book saying that it ain't so. Giving details, arguments and examples. Sounds radical — but it's not. Most sensible, productive software people know that software project management's effectiveness is best compared to the fineness of the emperor's new clothes:


    Emperor_Clothes_01

    In publishing this book, I'm not doing any more than the little boy in the story, who cried out "But he's not wearing anything at all!" In other words, I'm just saying what everyone who isn't blind already knows.

    The book is now available on Amazon for Kindle. I even made a nerdy cover for it:


    BBSB cover SPM
    My hope with this book is to assure the people who know there's something deeply wrong with project management orthodoxy that they're sane people, but living in an asylum that the inmates have taken over. I hope the book will arm them with the concepts they need to make a break for it, so they can experience the fresh air and freedom they deserve.


  • Software Project Management Book

    I've written a fair amount about software project management in this blog. I've also written a short book about it. Like the software quality book, so far I've only distributed it privately. But also like that book, I'm thinking of publishing it as a Kindle book.

    Tidbits on the blog

    It's hard to be seriously involved with software and avoid run-ins with project management (not to mention complete co-option by it). You can hardly start to think about writing some code before someone pops out with "how long do you think it will take?", the question of estimates. If you resist or act uncomfortable, you're put on the spot. Everyone, you see, wants their software group to be as predictable as a factory. The people who talk this way clearly don't understand that dates are evil, but there are so many of them, it's like living in a land of zombies.

    Background

    While many programmers resist it, most come to accept project management as a necessary evil, something they can't avoid. As they age, sadly, most programmers accept this perverse thought as though it were a natural accoutrement of adulthood: wild young programmers may resist the bridle, but mature ones accept that it's part of life.

    I too resisted it, and I too came to appreciate some of the rhetoric of software project management. But then reality intervened.

    A bit more than 20 years ago I ran a small software group doing pioneering work in document imaging and workflow. A new management team took over, and were appalled that we just wrote code. I was guilty of about the worst thing a manager could be accused of (in their eyes): running an out-of-control, seat-of-the-pants operation in which people just did stuff, without the comfort and support of project management.

    Things changed. Expensive project management software got bought. Expensive consultants came in, and lots of formerly productive people sat in excruciatingly long training classes. For days! Then we settled into a regimen in which lots of reports and dense charts were generated regularly, and we threw around terms like "critical path."

    Well, we "got under control." And stopped writing much code. And fell behind the market. As we became more predictable, we became more inflexible. Timelines stretched out so far that sales people lost heart. It was sad.

    After that baptism by torture, the first of many, I really began to think about what was going on, especially once I got involved with Oak. I had a chance to see lots of companies producing software with varying doses of project management involved.

    I noticed that the Indian outsourcing companies were pushing project management big-time, and winning business with it. It must be a good idea, right? When you dove into the details, they did not win by being faster and more flexible; they were completely rigid and slower, but predictable and marginally less expensive. Here's the bottom line: they won business by costing less. They cost less because they paid their programmers only about one tenth of what an equivalent programmer in the U.S. made. But their methods had so much overhead that they staffed every project so heavily that the final bill to the customer ended up being only about 30% less than doing it in-house. So, oddly enough, the outsourcers with their devotion to project management proved the point of how bad it is.
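    The arithmetic above can be sketched in a few lines. This is a toy model with hypothetical numbers taken from the rough figures in this post (about 1/10 the per-programmer pay, a final bill about 30% less), assuming labor dominates the bill:

    ```python
    # Back-of-envelope model of the outsourcing economics described above.
    # All numbers are hypothetical, taken from the post's rough figures.

    def implied_staffing_multiplier(pay_ratio, bill_ratio):
        """Outsourced headcount per in-house programmer implied by the
        pricing, assuming labor dominates total cost.

        pay_ratio:  outsourced pay per programmer / in-house pay (e.g. 0.1)
        bill_ratio: outsourced total bill / in-house total bill (e.g. 0.7)
        """
        return round(bill_ratio / pay_ratio, 2)

    # Paying 10% per head but billing 70% of the in-house cost implies
    # roughly 7x the headcount: process overhead eats most of the savings.
    print(implied_staffing_multiplier(0.1, 0.7))  # → 7.0
    ```

    In other words, under these assumptions the process itself consumes roughly seven programmers' worth of labor for every one that in-house development would need.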

    On the positive side, I saw entrepreneurial companies doing more work with less, having more flexibility, less overhead, and shorter cycle times. Had they found clever new ways to implement project management? No. They just found better ways to develop better software with fewer bugs, more quickly. That's all!

    Systematic Thought about Project Management

    These experiences led me to try to understand what project management was really all about — why everyone kept trying to apply it to software, why it never works (except if you don't care about time or money), and what the alternatives are.

    It was a long journey, and I was surprised that I ended up with a short book. As I state in the book:

    “Project management” is as effective at guiding software projects to success as hopping and grunting is at helping pool balls to drop in the intended pockets – it may be entertaining to watch, but it has no constructive impact on the outcome. More important, to the extent that we focus on our hopping and grunting technique, we fail to pay attention to what really matters – hitting the ball correctly with the cue. Similarly, in software projects, the more things get off track, the more we seem to focus on project management hopping and grunting activities, so much so that the shaking floor actually makes things worse.

    Project Management needs to be taken down a few notches

    Part of the problem is that it just doesn't work. Another part is that everyone with experience knows it doesn't work. The crowning part of the problem is that even people who know it doesn't work, and who push it aside when they really have to get something done, continue to kowtow to it. This is illustrated by a story I personally experienced and tell in the book.

    I recently spent some time with the seasoned, non-technical leader of one of our portfolio companies, and some of his lead technical people. We discussed one of their most successful products. The CEO described how he got involved with a couple customers who had a problem that no one could solve, how he promised them a solution and got his programming team to throw something together that sort of worked. They then scrambled, fixing problems and coming out with a flurry of new releases, always listening to the customer and evolving their code until things settled down, the customer’s needs were met and the company had a new product line.


    “Of course,” said the CEO, glancing over at his technical people, “that was the wrong way to do things. Later, we settled down and got back to proper project management.” Of course – the CEO had to intervene and make sure something important actually got done. Later, “project management,” i.e., doing very little but trying hard to do that little on time and on budget, could be allowed to return.

    Conclusion

    Again, I'm thinking of pulling the trigger on the Project Management book. But first I need to finish formatting it.

    Update:

    Trigger pulled. Book available.

  • Software Productivity: the ABP Factor

    There is one central, screamingly obvious factor that impacts programmer productivity. It is unknown, ignored and/or undiscussed. But it matters more than most other factors. It's the ABP factor, "Anything But Programming." When a programmer isn't programming, that programmer isn't, well, writing code. I don't care what that programmer is doing! If it's ABP, that person is not — repeat not — writing code!

    The ABP Factor

    If your job is reading, any time you spend not reading is time you're not doing your job. Call it anything you like: getting ready, preparing, taking a break, recovering, digesting, blah, blah, blah. Whatever you call it, if you're not reading, you're … (get ready now) … not reading! You're doing something else.

    If your job is cleaning the house, any time you spend not cleaning the house is time you're spending not doing your job.

    I hope you get the idea by now.


    (Photo: a typical programmer/manager meeting in Beijing. Photo by me.)

    Understanding the Status of ABP

    Non-technical people, people who don't program, people who used to program but don't any more, and people who still think they're programmers but have regressed to a lower form of life (like managers) often place a high value on what they can do, which is clearly ABP. Makes sense. If you do it, it must be a good thing. If you can't, don't or won't do it, it must not be the kind of thing really valuable people like you do. This applies to programming in spades. ABP is highly valued. Programmers quickly get the idea that the way to increase their status is to spend increasing amounts of time indulging in that valuable thing, ABP.

    Is ABP Worth Something?

    Putting my cynicism aside for the moment, the answer is a clear, resounding yes. A certain amount of planning, coordination and other stuff is necessary. Not doing it well leads to really bad things. It's even OK for some people to spend most of their time in such non-programming activities!

    But let's make it even clearer. Think about manufacturing or customer service. Either you're directly contributing to the production of goods or services, or you're not. If you are, we can start talking about how effective and efficient you are. If you're not … either your personal productivity is lower than it could be or, much worse, you're overhead!!

    Conclusion

    Are you writing code? Good. Then we can have a grounded discussion about your productivity — at least you're trying; at least your shoulder is at the wheel and you're pushing. Are you doing ABP? You're probably self-important overhead; stop wasting my time and yours.

    Maybe you really are doing something that makes productivity better in some mysterious way. I'm open to the possibility! But the burden is on you to prove it.

    Personal note: I spent most of every day writing code for a couple decades. I know all about programming overhead. I was acutely conscious of my personal overhead, and of contributing to other people's. Now I don't write code. I feel guilty wasting the time of programmers. The only way I can justify it is if, as a result of our interaction, their productivity goes up, so that the time spent not programming ends up being a net productivity increase. I always think: minimize the time, maximize the value.

  • Software Productivity Divisors: Doubling the Work

    Software productivity is incredibly important, but hard to measure and hard to achieve. Nonetheless, good people seek ways to make their software productivity better. They seek out productivity multipliers.

    All too often, widely hailed advances in method and technique not only fail as productivity multipliers, they are actually productivity divisors!

    There is a whole family of software productivity divisors whose obvious impact is to double (or more!) the labor for any given body of code. They don't double the pleasure or double the fun — they double the work.
    And yet, these methods are taken seriously by all too many people in the field.

    What are software productivity divisors?

    The field of software is awash with a plethora of tools and techniques, each of which claims amazing benefits, usually boiling down to producing better code with less effort. They are, in effect, promoted as productivity multipliers.

    Some of the techniques are simply useless. But a surprisingly large number of supposed software improvements actively make things worse. They aren't productivity multipliers — they are productivity divisors.

    Doubling the Work

    One large class of fancy new software methods essentially involve doubling the work. There are two main methods for doing this.

    One favorite method of work doubling is very simple. Instead of using one programmer to do a job, use two.

    Another widespread method of work doubling is to write the code twice, in different forms. Usually, one form of the code is the code that you actually want to write. The second form of the code is normally a mirror of the original code, whose nominal purpose is to test the original code.

    Each of these methods is completely brain-dead. They normally double the labor of everything that is done, while not improving the value of the results at all. Costs more … later completion … no greater value … Hmmmmmmmm … what's wrong with this picture??

    Pair Programming, the Hot New Productivity Divisor

    Programming is, by definition, a task that requires great mental concentration. Kind of the opposite of what you do in social situations. Two people working on the same task relate to each other socially. In fact they are supposed to interact. If one person codes and the other one sits there, you've doubled the labor with no gain at all. To the extent that the second person interrupts (as he/she is supposed to), you've made things even worse.

    I know, I know, the whole idea is that the whole is greater than the sum of the parts. The second person catches mistakes the first person missed. The second person comes up with a much better way of doing something than the first person was part way through doing. Etc. It's all a bunch of baloney. I'm relieved that silly parodies of this silly idea are beginning to appear.

    Unit Testing, the Classic Productivity Divisor

    There are many variations of unit testing for code, from test-driven development on. People get into arguments about whether you should write the test before or after you write the code, whether the person who writes the code should also write the test, and endlessly on and on. They all amount to arguing over which is worse: dirty, stinking, rotten junk or slimy, awful junk. Who cares?? It's all bad!!!

    No one claims it's easy to write good code: code that's accurate, fast, does the job, does only the job, has no side effects and doesn't crash. Still, there may be bugs. The idea with test code is that someone, in some order, writes test code that assures that the original code is accurate, fast, does the job, does only the job, has no side effects and doesn't crash. If you leave out any one of these items (as is typically the case), you haven't tested everything.

    The person who writes the test code has to understand the requirements just as thoroughly as the person who writes the base code. Just as the person who writes the base code can make a mistake, so can the person who writes the test. It is just as hard to write the test as it is to write the code being tested. So you've doubled the amount of work, at minimum.
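    To make the mirroring concrete, here's a hypothetical sketch (both the function and its test are invented for illustration): a small routine and the test code that shadows it. Notice that the test re-derives the same formula, so it encodes the same understanding of the requirements and can harbor the same misunderstandings.

    ```python
    # Hypothetical illustration of the "write it twice" pattern:
    # the code, and a mirror of the code that tests it.

    def monthly_payment(principal, annual_rate, months):
        """Fixed monthly payment on an amortized loan."""
        r = annual_rate / 12
        if r == 0:
            return principal / months
        return principal * r / (1 - (1 + r) ** -months)

    def test_monthly_payment():
        # The test reconstructs the amortization formula independently,
        # doubling the work -- and doubling the places a bug can hide.
        r = 0.06 / 12
        expected = 10_000 * r / (1 - (1 + r) ** -36)
        assert abs(monthly_payment(10_000, 0.06, 36) - expected) < 1e-9
        # Zero-interest edge case: payment is just principal / months.
        assert monthly_payment(10_000, 0.0, 10) == 1_000.0

    test_monthly_payment()
    ```

    Every future change to `monthly_payment` now requires a matching change to `test_monthly_payment`, which is exactly the doubling described above.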

    Now think about making changes and debugging. Instead of just changing the code, you've got to change the test code, and either one can have bugs. The test code can fail to test something that is supposed to work, or declare that something works when in fact it does not, as a result of a bug in the test code. So changes are now twice as much work, with twice as many places for bugs, in addition to bugs of interaction.

    Any variation of unit testing has the certain result of at least doubling the cost of building a piece of code, usually increasing the elapsed time, with vague promises that are never proven and never measured about the improved value of the results.

    Conclusion

    In many organizations, there is a dead-simple way to improve software productivity: STOP the madness of institutionalized software productivity divisors! You don't have to be best-in-class; you just have to avoid doing things that for sure double the work while doing some enthusiastic but empty arm-waving about the future benefits.

  • What is Software Productivity?

    People in every field are concerned about productivity. They want to know how to increase their own productivity and the productivity of their group.

    Software is a peculiar field in which history is ignored and fashion too often trumps objective fact. In software, we don't even think about productivity, much less have any idea how to measure it.

    What is Productivity?

    Productivity measurement is central to our economy as a whole, to industries as a whole, and to individual companies.

    In general, the more productive you are, the more outputs you generate for a given set of inputs.

    Productivity can be measured by various means, depending on the purpose of the measurement. At the level of the firm, the most basic measurement is comparing the cost of the inputs to the value of the outputs, adding in overhead and the cost of the capital and labor required to produce the outputs.
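    As a toy illustration of that basic firm-level measurement (all figures invented), the ratio can be sketched as:

    ```python
    # Toy sketch of the basic productivity ratio described above:
    # value of outputs per dollar of total inputs. Figures are invented.

    def productivity(output_value, input_cost, overhead, capital_and_labor):
        """Output value per dollar of inputs, overhead, capital and labor."""
        total_cost = input_cost + overhead + capital_and_labor
        return output_value / total_cost

    # A firm producing $500k of value from $100k of materials,
    # $50k of overhead, and $250k of labor and capital:
    print(productivity(500_000, 100_000, 50_000, 250_000))  # → 1.25
    ```

    A ratio above 1.0 means the outputs are worth more than the inputs cost; raising that number is what "increased productivity" means in this basic sense.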

    Increased productivity can be accomplished by greater automation, improved processes, reduced cost of inputs, labor or capital, and by more effective use of labor: labor that is higher quality, more disciplined, better trained and motivated, etc.

    This is basic economics. It is applied from the national level down to mom-and-pop firms.

    What is Productivity in Software?

    Measuring software productivity is incredibly important. But practically no one actually does it! It's not even studied in any meaningful way — if it were, you'd find competing theories, and experiments to settle the issue. While there are a couple studies, their paucity and narrowness prove the case that software productivity is largely ignored, both in theory and in practice. Given that software is increasingly the engine that drives our enterprises, this is amazing.

    The situation with software productivity is a bit different than in your basic widget factory.

    • The cost of the inputs is pretty much zero.
    • For all the talk of software factories, the cost of "machines" is generally insignificant.
    • While the quantity of output (lines of code, LOC) is tangible, it's not really correlated with value.
    • Except in a few cases of commercial software products, the value of the output can be hard to measure, and even then it can be dicey.
    • The overwhelmingly important factor in cost tends to be labor!

    Software Productivity: the best case

    To understand software productivity, let's start with an extreme case, but one that does happen a small percentage of the time.

    A single person understands the requirements of what is to be written. That individual, perhaps with the assistance of one or two other people, writes, tests and delivers the code. The code runs in production for a number of years, and the few bugs that arise and the enhancements that are needed are quickly supplied by the original author(s).

    In a case like this, the value of the outputs can still be hard to measure. But at least the other costs are easy: it's basically the cost of the people, with overhead.

    Is this a fantasy or does it happen in real life? Well, all I can say is that it happened in my life, more than once, and I know people who have done similar things. In my first job after college, I wrote a commercial FORTRAN 66 compiler and run-time system (in assembler language) for a computer company. I had an assistant for part of the run-time environment. It was in production use for about ten years. I fixed a couple minor bugs in the first year, and that was it. I personally did a couple similar projects in different software fields.

    In the case of the compiler, the cost was easy to measure. The value was more challenging, as is typical, but not too hard. The company sold the compiler. But its greater value was enabling sales of the display computer on which it ran. Potential buyers were refusing to buy because there was no high-level language available at the time; my compiler enabled many sales that otherwise would have been lost.


    It's easy to imagine the software productivity in this simple case. You can pick your measures of value (pick a few — why not?), including lines of code, number of users, value of the code, indirect value. You pick your measures of cost (again, pick a few), including person-days, elapsed time, salaries, etc. And finally, you relate them into simple, calculable numbers that you can track over time.
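    For instance, here is a sketch of relating a few such measures into trackable numbers. The function name and every figure are invented for illustration; they are not measurements from the compiler project.

    ```python
    # Hypothetical sketch: relating a few measures of value and cost
    # into simple numbers you can track over time. Figures are invented.

    def track(lines_of_code, users, person_days, salary_cost):
        return {
            "loc_per_person_day": round(lines_of_code / person_days, 2),
            "users_per_dollar": users / salary_cost,
        }

    # Suppose a project took 300 person-days and $40,000 in salary,
    # produced 20,000 lines of code and served 400 users:
    r = track(20_000, 400, 300, 40_000)
    print(r["loc_per_person_day"])  # → 66.67
    print(r["users_per_dollar"])    # → 0.01
    ```

    The point is not which ratios you pick but that you pick some and compute them consistently, so trends become visible from one project, or one year, to the next.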

    Software Productivity: the Reality

    In practice, software productivity is an absolute bear to measure. Cost isn't too hard. But valuing the outputs? That's the big problem. The other problem is that software just isn't like a widget factory. The items ("pieces" of software) cranked out are simply not comparable to each other, as I've discussed.

    Conclusion

    Productivity is important in general, perhaps even more so in software than in other fields, precisely because it is so hard to measure there. Software won't get better until we increase its productivity, and that won't happen until we are hard-nosed and objective about exactly what software productivity is. All the discussions in this blog about software quality, estimating and other things are best understood in the broader context of software productivity, the great unknown frontier of computer engineering.

