Category: Computer Science

  • What is your Software Project Manager’s Batting Average?

    How often do software projects fail? While it’s never talked about and rarely makes the news, the fact is that software projects fail with alarming frequency. Exactly how often do they fail? Despite august professors of Computer Science and Computer Engineering falling out of the woodwork, doing research, pronouncing their deep knowledge in papers and teaching the next generation of students, the truly shocking answer is that no one knows! Or cares! Oh, they say they care about teaching people how to create great software; all you have to do is learn the intricacies of Object-Oriented Programming (or whatever). But none of them studies the results. There is no science in what is called “Computer Science.” They all declare they know how to create good software – so long as no one takes the trouble to measure and count, judge and evaluate, to see how often it happens.

    In the past, I’ve compared bridges falling with software failures. We can understand this from another angle by looking at … baseball.

    Baseball Science

    Aaron Judge is an amazing professional baseball player for the New York Yankees. He’s an excellent fielder, but he’s best known as a hitter. In his nine seasons, he’s already hit over 300 home runs, including 50 this season as of this writing. His batting average this season is an awesome .331 so far, with a .288 average for his career.

    [Image: Aaron Judge batting]

    Let’s think about that enviable .331 batting average he’s got so far this year. Any player would love to have such an excellent average. But that’s in the context of professional baseball. Simple arithmetic tells you that this “excellent” average means that he only gets a hit in one third of his at-bats! More than two thirds of the time he strikes out or otherwise fails to get a hit! What would you think of someone who managed to drive safely to work just a third of the time, getting into accidents or otherwise screwing up two thirds of the time? What would you think of a car that only worked a third of the time you tried to drive it? And so on … you get the idea.

    Why is this? Are all these highly paid pro ball players really losers? Of course not. You can see what’s going on if you watch the home run derby, when top hitters go to the plate and get soft, easy pitches from a pitcher standing behind a screen so he doesn’t get hit. Those guys nearly always hit the ball, and lots of those hits are home runs! But we all know that’s a once-a-year event.

    [Image: Home Run Derby]

    Most of the time, there’s a highly skilled pitcher on the mound whose goal in life is to strike out the batter or sucker him into hitting an out. Pitchers like Gerrit Cole, who is so good that he’s being paid $36 million this year.

    [Image: Gerrit Cole pitching]

    When you’re a batter and walk up to the plate facing Gerrit Cole, you know you’ve got someone who’s in an excellent position to lower your batting average.

    So is creating software like trying to get a hit with a fearsome skilled pitcher on the mound? Or is it more like trying to drive your car to work or designing and building a reliable car that just works, with remarkably few exceptions?

    The sad fact is that building software resembles the home run derby, except that instead of trying to hit the ball out of the park, all the batter has to do is … not miss. With a rule like this, you’d expect something close to a 1.000 batting average. They try to make it even easier in the software world by messing with requirements, tripling estimates and doing everything they can to make the project a “success.”

    Is software really that bad? Yup. Just for fun, I’m going to share a couple of the rarely publicized stories of software failures I made note of a dozen years ago. With things like ransomware exposing the ugly underside of most software operations — not to mention the awfulness of computer security in general — you can be sure things haven’t gotten better.

    Sample Failures

    In 2014, the VA admitted that it had over 57,000 patients waiting for their first visit. What was its excuse?

    "The official also said the VA needs to update its scheduling software package, which the department has been using since 1985. “It predates the internet and Blockbuster’s rise and fall,” he said."

    Does that count as a software failure? It's more like the software department died years ago and no one noticed.

    Here's a good one from 2012:

    The U.S. Air Force has decided to scrap a major ERP (enterprise resource planning) software project after spending US$1 billion, concluding that finishing it would cost far too much more money for too little gain.

    Dubbed the Expeditionary Combat Support System (ECSS), the project has racked up $1.03 billion in costs since 2005, “and has not yielded any significant military capability,” an Air Force spokesman said in an emailed statement Wednesday. “We estimate it would require an additional $1.1B for about a quarter of the original scope to continue and fielding would not be until 2020. The Air Force has concluded the ECSS program is no longer a viable option for meeting the FY17 Financial Improvement and Audit Readiness (FIAR) statutory requirement. Therefore, we are cancelling the program and moving forward with other options in order to meet both requirements.”

    The Air Force will instead need to use its “existing and modified logistics systems for 2017 audit compliance,” the statement adds.

    They started spending money in 2005, spent over a billion dollars by 2012, got nothing of value, and estimated they'd need to spend the same again over eight more years to get about a quarter of the original plan done. If anyone were paying attention to batting averages in software, that one would be a standout entry.

    Here is some hard-to-find information on the frequency of failures, from a book.

    The odds of a large project finishing on time are close to zero. The odds of a large project being canceled are an even-money bet (Jones 1991).

    In 1988, Peat Marwick found that about 35 percent of 600 firms surveyed had at least one runaway software project (Rothfeder 1988). The damage done by runaway software projects makes the Las Vegas prize fights look as tame as having high tea with the queen. Allstate set out in 1982 to automate all of its office operations. They set a 5-year timetable and an $8 million budget. Six years and $15 million later, Allstate set a new deadline and readjusted its sights on a new budget of $100 million. In 1988, Westpac Banking Corporation decided to redefine its information systems. It set out on a 5-year, $85 million project. Three years later, after spending $150 million with little to show for it, Westpac cut its losses, canceled the project, and eliminated 500 development jobs (Glass 1992). Even Vegas prize fights don't get this bloody.

    If you care to look, you will find loads more examples of failures that the groups involved have been unable to keep secret. The failures keep rolling in, in spite of the huge efforts to reduce requirements, inflate estimates, extend timelines, increase staff and everything else. You have to ask the question: how many software successes are really failures in disguise? If anyone were serious about calculating software batting averages, this would be a key factor.

    This pattern has resulted in some fairly widespread humor that you can be sure isn't mentioned in project management meetings. For example, here are the stages of a software development project:

    1. Enthusiasm
    2. Disillusionment
    3. Panic and Hysteria
    4. Search for the Guilty
    5. Punishment of the Innocent
    6. Praise and Honor for the nonparticipants

    Why do software projects fail, and how do you win?

    When lots of human beings work at something for a long time, they tend to figure out how to do it. Building software appears to be a huge exception to that rule. With decades of experience under our belts, why is software the exception?

    This is a long subject. I have gone into great detail spelling out the causes … and the cures!

    Start with history and evolution:

    https://blackliszt.com/2023/08/summary-computer-software-history-and-evolution.html

    Everyone knows that software project management is essential to producing software that works, on time and on budget. In spite of decades of "innovation," it doesn't get better. The winners follow a different set of rules.

    https://blackliszt.com/2023/04/summary-software-project-management.html

    Software quality assurance is an important specialty within the non-science of computing, but in spite of all the time and money spent, quality continues to be a major issue. There are solutions that have been proven in practice that are ignored by the experts and authorities.

    https://blackliszt.com/2023/04/summary-software-quality-assurance.html

    How do you win with software? Nearly everyone starts with requirements, makes estimates and is judged on whether they deliver on time and on budget. This optimizes for expectations and is a proven path to failure. The winning path optimizes for speed, customer satisfaction and continuous quality. It's what the people who need to win do.

    https://blackliszt.com/2023/07/summary-wartime-software-to-win-the-war.html

     

  • Cat litter is more scientific than Computer Science

    Which is more scientific: Computer Science, which is all about numbers and data and exact algorithms, or cat litter, which is all about giving cats a way to poop and pee while reducing the annoyance of their human servants? Seems obvious, right?

    It’s obvious until you understand the practical disaster of creating software as detailed in these posts,

    https://blackliszt.com/2023/04/summary-software-quality-assurance.html

    https://blackliszt.com/2023/04/summary-computer-science.html

    and until you pay careful attention to the information typically found on a box of cat litter and observe how closely the litter does what it’s supposed to do.

    Result: the litter wins by a mile.

    Check out the information on this box of cat litter, a type I often use.

    [Image: back panel of the cat litter box]

     

    Let's see in detail how software stands up against cat litter.

    • Problem clearly defined

              The problem solved by a given piece of software is typically stated in documents that are huge and too often vague.

              By contrast, the problem solved by cat litter is clearly defined and already known to everyone who knows how mammals work. There are lots of variations of the details of how cats do what they do; cat litter handles them all.

    • Verifiable promises

              When you buy software or have it built, it’s hard to tell whether you actually got what you were promised. Departments tend to go through extensive “acceptance testing” with new software, or even a new release of existing software, because it’s too often not right.

              With cat litter, you can tell pretty quickly whether the promises are kept. You can see and smell the results.

    • See the results

              Yes, you can see the results of software. But there are a remarkable number of invisible results, like databases and files being created and updated, messages being sent and received, and other programs being invoked.

    • Consistent results

              Even when software works, it doesn’t work all the time for everyone. Sometimes it goes down totally. Sometimes it just produces bad results – or none at all – for some people under some circumstances. Sometimes you can tell the result is bad; sometimes it seems right but is wrong. Or it crashes.

              People buy new boxes of litter when the old one is used up. They buy it again and again because … it works! All the time, regardless of the variations of the cat’s output.

    Conclusion

    I could go on, but it should be clear by now: cat litter is an exemplary model of science and the associated engineering and production. What is laughably called “Computer Science” may be a lot of things, but “Science” is not one of them.

  • Summary: Computer Science

    If you want to learn math, physics or chemistry, you go to the respective departments to find the people who are true, verified experts. Same with software and Computer Science. What could be more obvious? The trouble is, it’s wrong. Whatever it is they do, it has little to do with real-world software. Worse, it isn’t even a science.

    At the beginning and continuing in many universities, software was part of the math department. What could be a stronger endorsement of its precision and status as a science?

    https://blackliszt.com/2015/04/math-and-computer-science-in-academia.html

    The sad fact is, math and computer science are at fundamental odds with effective software development.

    https://blackliszt.com/2015/05/math-and-computer-science-vs-software-development.html

    How does Computer Science stack up against real sciences like physics? Not well.

    https://blackliszt.com/2019/11/computer-science-is-propaganda-and-computer-engineering-is-a-distant-goal.html

    https://blackliszt.com/2019/04/software-is-a-pre-scientific-discipline.html

    Real science isn’t about experts or authority; it’s about evidence. Whatever you yammer about in academia, if there aren’t verifiable results in the real world, it’s nonsense. In the medical world, for example, drugs are subjected to a process that verifies that they do what they’re supposed to do.

    https://blackliszt.com/2015/07/the-science-of-drugs-vs-the-science-of-computers-and-software.html

    It took a long time for medicine to pay real attention to evidence. Authority and accepted practice are hard to break. For example, think about the long practice of blood-letting.

    https://blackliszt.com/2019/02/what-software-experts-think-about-blood-letting.html

    The histories of scurvy and antiseptic surgery also provide lessons. So does the development of the steamboat!

    https://blackliszt.com/2014/02/lessons-for-software-from-the-history-of-scurvy.html

    https://blackliszt.com/2012/07/what-can-software-learn-from-steamboats-and-antiseptic-surgery.html

    In medicine there are studies that are peer-reviewed and published with results. This is called “evidence-based medicine.” You would think there would be the equivalent in software, wouldn’t you?

    https://blackliszt.com/2017/02/evidence-based-software.html

    The vast majority of software experts strongly resemble medical doctors from those earlier times. The evidence is overwhelming that the "cures" they promote make things worse, but since all the software doctors give nearly the same horrible advice, things continue.

    In real science, there is general recognition that what is accepted may not be perfect or complete. Sometimes there are anomalies, evidence that doesn’t support the current theory. Then there’s a paradigm shift that revises and/or replaces the accepted theory.

    https://blackliszt.com/2021/11/computer-science-and-kuhns-structure-of-scientific-revolutions.html

    Computer Science is flooded with problems that the experts largely ignore. It’s bad. If bridges fell down at anywhere close to the rate that software systems break and become unavailable, there would be mass revolt. Drivers would demand that bridge engineers make radical changes and improvements in bridge design and building.

    One of the interesting things is that Computer Scientists have sometimes acknowledged there are serious problems. For example, there was a big conference to address the “crisis in software” … in 1968!

    https://blackliszt.com/2021/08/the-crisis-in-software-is-over-50-years-old.html

    What did they do? They promoted “structured programming” and decided that the essential “go to” statement was evil.

    https://blackliszt.com/2021/09/software-programming-language-evolution-the-structured-programming-goto-witch-hunt.html

    Aside from a new wave of rhetoric in the software world, nothing of substance changed. When you compare the "batting average" of typical software project managers with baseball, the issues become extremely clear. The typical software project has trouble getting to first base, much less scoring a run – and the opposing "pitcher" is like the one in the home run derby, throwing nothing but easy pitches.

    https://blackliszt.com/2024/08/what-is-your-software-project-managers-batting-average.html

    It is enlightening to compare the training to become an MD with the training to be qualified in software. In medicine, your job is to keep people healthy and cure them when they’re not. In software, your job is to build healthy software and cure it when it’s not. The training and testing requirements couldn’t be more different. Hint: it’s way better for doctors.

    https://blackliszt.com/2020/07/job-requirements-for-software-engineers-should-stop-requiring-cs.html

    “Fashion” is a word we associate with clothes. Software is hard, it’s objective, it’s taught in schools as “computer science.” Software can’t have anything to do with “fashion” if it’s a “science,” can it? Sadly, software is infected by fashion trends and styles at least as much as clothes are. Fashion has a huge impact on how software is built.

    https://blackliszt.com/2018/11/what-are-software-fashions.html

    https://blackliszt.com/2019/05/recurring-software-fashion-nightmares.html

    When software history becomes as important a part of computer science education as physics history is of physics, we'll know it's approaching credibility. Until then, everything about computer science, education and practice will continue to be a cruel joke.

    https://blackliszt.com/2012/03/computer-history.html

    https://blackliszt.com/2019/06/the-evolution-of-software.html

    What can be done? One thing is to understand and accept the goals of software. In physics, the goal is to make accurate predictions of space/time events. What is the equivalent in software? There is actually widespread tacit agreement. It’s just not talked about and contradicted by the reigning dogmas.

    https://blackliszt.com/2022/05/the-goals-of-software-architecture.html

    Until such time as Computer Science becomes scientific, no one who wants to do software should bother with academic degrees, and anyone hiring for software should not require a college degree, much less one in Computer Science. The degree isn't worth as much as you might think.

    https://blackliszt.com/2015/05/how-much-is-a-computer-science-degree-worth.html

    Things will get better only when there is wide acknowledgement of the fact that today's Computer Science is LESS scientific than … ready for it? … cat litter.

    https://blackliszt.com/2023/12/cat-litter-is-more-scientific-than-computer-science.html

     

  • The Goals of Software Architecture

    What goals should software architecture strive to meet? You would think that this subject would have been intensely debated in industry and academia and the issue resolved decades ago. Sadly, such is not the case. Not only can't we build good software that works in a timely and cost-effective way, we don't even have agreement or even discussion about the goals for software architecture!

    Given the ongoing nightmare of software building and the crisis in software that is still going strong after more than 50 years, you would think that solving the issue would be top-of-mind. As far as I can tell, not only is it not top-of-mind, it’s not even bottom-of-mind. Arguably, it’s out-of-mind.

    What is Software Architecture?

    A software architecture comprises the tools, languages, libraries, frameworks and overall design approach to building a body of software. While the mainstream approach is that the best architecture depends on the functional requirements of the software, wouldn’t it be nice if there were a set of architectural goals that were largely independent of the requirements for the software? Certainly such independence would be desirable, because it would shorten and de-risk the path to success. Read on and judge for yourself whether there is a set of goals that the vast majority of software efforts could reasonably share.

    The Goals

    Here’s a crack at common-sense goals that all software architectures should strive to achieve and/or enable. The earlier items on the list should be very familiar. The later items may not be goals of every software effort; the greater the scope of the effort, the more important they are likely to become.

    • Fast to build
      • This is nearly universal. Given a choice, who wants to spend more time and money getting a software job done?
    • View and test as you build
      • Do you want to be surprised at the end by functionality that isn't right or deep flaws that would have been easy to fix during the process?
    • Easy to change course while building
      • No set of initial requirements is perfect. Things change, and you learn as you see early results. There should be near-zero cost of making changes as you go.
    • Minimal effort for fully automated regression testing
      • What you've built should work. When you add and change, you shouldn't break what you've already built. There should be near-zero cost for comprehensive, on-going regression testing (a concrete sketch follows this list).
    • Seconds to deploy and re-deploy
      • Whether your software is in progress or "done," deploying a new version should be near-immediate.
    • Gradual, controlled roll-out
      • When you "release" your software, who exactly sees the new version? It is usually important to control who sees new versions when.
    • Minimal translation required from requirements to implementation
      • The shortest path, with the least translation from what is wanted to the details of building it, yields speed and accuracy and minimizes mis-translations.
    • Likelihood of slowness, crashes or downtime near zero
      • 'Nuff said.
    • Easily deployed to all functions in an organization
      • Everything that is common among functions and departments is shared
      • Only the differences between functions and departments need to be built
    • Minimal effort to support varying interfaces and roles
      • Incorporate different languages, interfaces, modes of interaction and user roles into every aspect of the system’s operation in a central way
    • Easily increase sophisticated work handling
      • Seamless incorporation of history, evolving personalization, segmentation and contextualization in all functions and each stage of every workflow
    • Easily incorporate sophisticated analytics
      • Seamless ability to integrate online and offline analytics, ML, and AI into workflows
    • Changes the same as building
      • Since software spends most of its life being changed, all of the above for changes

    Let’s have a show of hands. Anyone who thinks these are bad or irrelevant goals for software, please raise your hand. Anyone?

    I'm well aware that the later goals may not be among the early deliverables of a given project. However, it's important to acknowledge such goals and their rising importance over time so that the methods to achieve earlier goals don't increase the difficulty of meeting the later ones.

    Typical Responses to the Goals

    I have asked scores of top software people and managers about one or more of these goals. I detail the range of typical responses to a couple of them in my book on Software Quality.

    After the blank stare, the response I've most often gotten is a strong statement about the software architecture and/or project management methods they support. These include:

    • We strictly adhere to Object-oriented principles and use language X that minimizes programmer errors
    • We practice TDD (test-driven development)
    • We practice X, Y or Z variant of Agile with squads for speed
    • We have a micro-services architecture with enterprise queuing and strictly enforced contracts between services
    • Our quality team is building a comprehensive set of regression tests and a rich sandbox environment.
    • We practice continuous release and deployment. We practice dev ops.
    • We have a data science team that is testing advanced methods for our application

    I never get any discussion of the goals or their inter-relationships, just a leap to the answer. I also rarely get "this is what I used to think and do, but experience has led me somewhere different." I don't hear about the concerns or limitations of the strongly asserted approaches. After all, the people I ask are experts!

    What's wrong with these responses?

    In each case, the expert asserts that his or her selection of architectural elements is the best way to meet the relevant goals. Yet the results produced by the typical answers listed above rarely stand out from the crowd.

    The key thing that's wrong is the complete lack of principles and demonstration that the approaches actually come closer to meeting the goals than anything else.

    The Appropriate Response to the Goals

    First and foremost, how about concentrating on the goals themselves! Are they the right goals? Do any of them work against the others?

    That's a major first step. No one is likely to get excited, though. Most people think goals like the ones listed above don't merit discussion. They're just common sense, after all.

    Things start to get contentious when you ask for ways to measure progress towards each goal. If you're going to the North Pole or climbing Mt. Everest, shouldn't you know where it is, how far away you are, and whether your efforts are bringing you closer?

    Are the goals equally important? Is their relative importance constant, or does the importance change?

    Wouldn't it be wonderful if someone, somewhere took on the job of evaluating existing practices and … wait for it … measured the extent to which they achieve the goals. Yes, you might not know what "perfect" is, but surely relative achievement can be measured.

    For example, people are endlessly inventing new software languages and making strong claims about their virtues. Suppose similar claims were made about new bats in baseball. Do you think it might be possible that the batter's skill makes more of a difference than the bat? Wouldn't it be important to know? Apparently, this is one of the many highly important — indeed, essential — questions in software that never gets asked, let alone answered.

    Along the same lines, wouldn't it be wonderful if someone took on the job of examining outliers? Projects that didn't just turn out in the typical dismal way, but failed spectacularly? On the other end of the spectrum, wouldn't amazingly fast jobs be interesting? This should be done for start-from-scratch projects, but it's equally important for changes to existing software.

    A whole slew of PhD's should be given out for pioneering work on identifying and refining the exact methods that make progress towards the goals. It's likely that minor changes to the methods used to meet the earlier goals well would make a huge difference in meeting later goals such as seamlessly incorporating the results of analytics.

    Strong Candidates for Optimal Architecture

    After decades of programming and then more of examining software in the field, I have a list of candidates for optimal architecture. My list isn't secret — it's in books and all over this blog. Here are a couple of places to start:

    Speed-optimized software

    Occamality

    Champion Challenger QA

    Microservices

    The Dimensions

    Abstraction progression

    The Secrets

    The books

    Conclusion

    I've seen software fashions change over the years, with things getting hot, fading away, and sometimes coming back with a new name. The fashions get hot, and all tech leaders who want to be seen as modern embrace them. No real analysis. No examination of the principles involved. Just claims. At the same time, universities hand out degrees in Computer Science, granted by professors who are largely unscientific. In some ways they'd be better off in Art History — except they rarely have taste and don't like studying history either.

    I look forward to the day when someone writes what I hope will be an amusing history of the evolution of Computer Pseudo-Science.

  • Computer Science and Kuhn’s Structure of Scientific Revolutions

    If bridges fell down at anywhere close to the rate that software systems break and become unavailable, there would be mass revolt. Drivers would demand that bridge engineers make radical changes and improvements in bridge design and building. If criminals took over bridges and held vehicles for ransom anywhere near as often as criminals steal organizations' data or lock up their systems until a ransom is paid, there would be mass revolt. In the world of software, this indefensible state of affairs is what passes for normal! Isn't it time for change? Has something like this ever happened in other fields that we can learn from?

    Yes. It has happened often enough that it has been studied, along with the process of resistance to change until the overwhelming force of a new paradigm breaks through.

    Thomas Kuhn was the author of a highly influential book published in 1962 called The Structure of Scientific Revolutions. He introduced the term “paradigm shift,” which is now a general idiom. Examining the history of science, he found that there were abrupt breaks. There would be a universally accepted approach to a scientific field that was challenged and then replaced with a revolutionary new approach. He made it clear that a paradigm shift wasn’t an important new discovery or addition – it was a whole conceptual framework that first challenged and then replaced the incumbent. An example is Ptolemaic astronomy in which the planets and stars revolved around the earth, replaced after long resistance by the Copernican revolution.

    Computer Science is an established framework that reigns supreme in academia, government and corporations, including Big Tech. There are clear signs that it is as ready for a revolution as the Ptolemaic earth-centric paradigm was. Many aspects of the new paradigm have been established and proven in practice. Following the pattern of all scientific revolutions, there is massive establishment resistance, led by a combination of ignoring the issues and denying the problems.

    The Structure of Scientific Revolutions

    Thomas Kuhn received degrees in physics, up to a PhD from Harvard in 1949. He was into serious stuff, with a thesis called “The Cohesive Energy of Monovalent Metals as a Function of Their Atomic Quantum Defects.” Then he began exploring. As Wiki summarizes:

    As he states in the first few pages of the preface to the second edition of The Structure of Scientific Revolutions, his three years of total academic freedom as a Harvard Junior Fellow were crucial in allowing him to switch from physics to the history and philosophy of science. He later taught a course in the history of science at Harvard from 1948 until 1956, at the suggestion of university president James Conant.

    [Image: The Structure of Scientific Revolutions, first edition cover]
    His path for coming to his realization is fascinating. I recommend reading the book to anyone interested in how science works and the history of science.

    After studying the history of science, he realized that it isn't just incremental progress.

    Kuhn challenged the then prevailing view of progress in science in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity where there is cumulative progress, which Kuhn referred to as periods of "normal science", were interrupted by periods of revolutionary science. The discovery of "anomalies" during revolutions in science leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, change the rules of the game and the "map" directing new research.[1]

    Real-life examples of this are fascinating. The example often given is the shift from "everything revolves around the earth" to "planets revolve around the sun." What's interesting here is that the planetary predictions of the Ptolemaic method were quite accurate. The shift to Copernicus (sun-centric) didn't increase accuracy, and the calculations grew even more complicated. The world was not convinced! Kepler made a huge step forward with elliptical orbits instead of circles with epicycles and got better results that made more sense. The scientific community was coming around. Then, when Newton showed that Kepler's laws could be derived from his own laws of motion and gravity, the revolution won.

    While the book doesn't emphasize this, it's worth pointing out that the Newtonian scientific paradigm "won" among a select group of numbers-oriented people. The public at large? No change.

    Anomalies that drive change

    One of the interesting things Kuhn describes is what drives a paradigm shift in science — anomalies, results that don't fit the existing theory. In most cases, anomalies are resolved within the paradigm and drive incremental change. When anomalies resist resolution, something else happens.

    During the period of normal science, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher, contra Popper's falsifiability criterion. As anomalous results build up, science reaches a crisis, at which point a new paradigm, which subsumes the old results along with the anomalous results into one framework, is accepted. This is termed revolutionary science.

    The strength of the existing paradigm is shown by the strong tendency to blame things on mistakes of the researcher — or in the case of software, on failure to follow the proper procedures or to write the code well.

    The Ruling Paradigm of Software and Computer Science

    There is a reigning paradigm in software and Computer Science. As you would expect, the paradigm is almost never explicitly discussed. It has undergone some evolution over the last 50 years or so, but not as radical as some would have it.

    At the beginning, computers were amazing new devices and people programmed them as best they could. Starting over 50 years ago, people began to notice that software took a long time and lots of effort to build and was frequently riddled with bugs. That's when the foundational aspects of the current paradigm were born and started to grow, continuing to this day:

    1. Languages should be designed and used to help programmers avoid making mistakes. Programs should be written in small pieces (objects, components, services, layers) that can be individually made bug-free.
    2. Best-in-class detailed procedures should be adapted from other fields to assure that the process from requirements through design, programming, quality assurance and release is standardized and delivers predictable results.

    The ruling paradigm of software and computer science is embodied in textbooks, extensive highly detailed regulations, courses, certifications and an ever-evolving collection of organizational structures. Nearly everyone in the field unconsciously accepts it as reality.

    Are There Anomalies that Threaten the Reigning Paradigm?

    Yes. There are two kinds.

    The first kind is the failures of delivery and quality that continue to plague by-the-book software development, in spite of decades of piling up the rules, regulations, methods and languages that are supposed to make software development reliable and predictable. The failures are mostly attributed to errors and omissions by the people doing the work — if they had truly done things the right way, the problems would not have happened. At the same time, there is a regular flow of incremental "advances" in procedure and technology designed to prevent such problems. This is textbook Kuhn — the defenders of the status quo attributing issues to human error.

    The second kind is bodies of new software created by small teams of people who ignore the universally taught and prescribed methods and get things done that teams hundreds of times larger couldn't do. Things like this shouldn't be possible. Teams that ignore the rules should fail — but instead most of the winning teams are ones that did things the "wrong" way. This is shown by how often new software products are created by such rule-ignoring small groups, rocket to success, and are then bought by the rule-following organizations, including Big Tech, who can't do it themselves — in spite of their giant budgets and paradigm-conforming methods. See this and this.

    When will this never-ending stream of paradigm-breaking anomalies make a paradigm-shifting revolution take place in Computer Science? There is no way of knowing. I don't see it taking place any time soon.

    Conclusion

    The good news about the resistance of the current consensus in Computer Science and software practice to a paradigm shift is that it provides room for creative entrepreneurs to build new things that meet the unmet needs of the market. The entrepreneurs don't even have to go all-in on the new software paradigm! They just need to ignore enough of the bad old stuff and use enough of the good new stuff to get things done that the rule-followers are incapable of. Sadly, the good news doesn't apply to fields that are so outrageously highly regulated that the buyer insists on being able to audit compliance during the build process. Nonetheless, there is lots of open space for creative people to build and grow.

  • Software Programming Language Evolution: the Structured Programming GOTO Witch Hunt

    In prior posts I’ve given an overview of the advances in programming languages, described in detail the major advances and defined just what is meant by “high” in the phrase high-level language. I've described the advances in structuring and conditional branching that brought 3-GL’s to a peak of productivity.

    The structuring and branching caught the attention of academics. Watch out! What happened next was that a theorem was proved, a movement was declared and named, and a certain indispensable part of any programming language, the GO TO statement, was declared to be something only bad programmers use and something that should be banned. Here's the story of the nefarious GOTO.

    Structures in Programming Languages

    I've described how structures were part of the first 3-GL's and how they were soon elaborated to more clearly express the intention of programmers, making code even more productive to write. The very first FORTRAN compiler, delivered in 1957, included primitive versions of conditional branching and loops, two of the foundations of programming structure. It was so powerful that early users figured it reduced the number of statements needed to achieve a result by a factor of more than 10.

    These are the people who actually WRITE PROGRAMS! They wanted to make their work easier and jumped on anything that gave a dramatic improvement.

    “Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used cross-platform programming language.”

    Before long, the structuring capabilities of the original IF (conditional branching) and DO (controlled looping) statements were enhanced and augmented to something close to their current form. I describe this here. The result was a peak of programmer productivity that has not been substantially increased since, and has often been degraded.

    The Bohm-Jacopini Theorem

    Completely independent of the amazing advances in languages and programming productivity that were taking place, math-oriented non-programmers were hard at work deciding how software should be written. Here is the story in brief:

    The structured program theorem, also called the Böhm–Jacopini theorem,[1][2] is a result in programming language theory. It states that a class of control-flow graphs (historically called flowcharts in this context) can compute any computable function if it combines subprograms in only three specific ways (control structures). These are

    1. Executing one subprogram, and then another subprogram (sequence)
    2. Executing one of two subprograms according to the value of a boolean expression (selection)
    3. Repeatedly executing a subprogram as long as a boolean expression is true (iteration)

    The structured chart subject to these constraints may however use additional variables in the form of bits (stored in an extra integer variable in the original proof) in order to keep track of information that the original program represents by the program location. The construction was based on Böhm's programming language P′′.

    The theorem forms the basis of structured programming, a programming paradigm which eschews goto commands and exclusively uses subroutines, sequences, selection and iteration.

    This theorem got all the academic types involved with computers riled up. The key to good software has been discovered! The fact that math theorems are incomprehensible to the vast majority of people, and the fact that perfectly good computer programs can be written by people who aren't math types didn't concern any of these self-anointed geniuses.

    The important thing to note about the theorem is that it was NOT created in order to make programming easier or more productive. It just "proved" that it was "possible" to write a program under the absurd and perverse constraints of the theorem to compute any computable function. Assuming you were willing to use a weird set of bits to store location information in ways that would make any such program unreadable by any normal person. Way to go, guys — let's go back to the days of writing in all-binary machine language!
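
    To see what the theorem's construction actually looks like to a working programmer, here is a minimal sketch in C. The example is a made-up, deliberately tiny flow; the point is the trick the proof relies on: the gotos disappear, but only because an extra state variable re-encodes the very information the program location used to carry.

    ```c
    #include <stdio.h>

    /* A toy flow written the ordinary way. (Yes, a plain if/else would do;
     * the example is kept tiny on purpose.) */
    void with_goto(int x) {
        if (x < 0) goto negative;
        printf("non-negative\n");
        goto done;
    negative:
        printf("negative\n");
    done:
        printf("finished\n");
    }

    /* The same flow rebuilt in the spirit of the Bohm-Jacopini construction:
     * one loop, one selection, and a "state" variable standing in for the
     * program location. It computes the same thing, but the flow the reader
     * cares about is now buried in bookkeeping. */
    void without_goto(int x) {
        int state = 0;   /* 0: decide, 1: print negative, 2: print non-negative,
                            3: print finished, 4: stop */
        while (state != 4) {
            if (state == 0)      state = (x < 0) ? 1 : 2;
            else if (state == 1) { printf("negative\n");     state = 3; }
            else if (state == 2) { printf("non-negative\n"); state = 3; }
            else                 { printf("finished\n");     state = 4; }
        }
    }

    int main(void) {
        with_goto(-5);     /* prints: negative, finished */
        without_goto(-5);  /* prints the same */
        return 0;
    }
    ```

    Both functions do exactly the same thing. The second is what the theorem guarantees is always possible; whether anyone should want to write it is the question the structured programming crusade skipped.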

    The Crisis in Software and its solution

    Not long after this, the academic group of Computer Science “experts” formed. They had a conference. They looked at the state of software and declared it to be abysmal. The whole conference was about the "crisis" in software. See this for details.

    One of the most prominent of those Computer Scientists was Edsger W. Dijkstra. He looked at the powerful constructs for conditional branching, loops and blocks that had been added to 3-GL's and invented the term "structured programming" to describe them. He related those statements to the wonderful but useless math proof about the minimal requirements for programming a solution to any "computable function." The proof "proved" that such programs could be written without the equivalent of a GOTO statement. BTW, I do not dispute this. He wrote the influential "Go To Statement Considered Harmful" open letter in 1968.

    Among the solutions to the software crisis they proclaimed was strict adherence to the dogma of what Dijkstra called “structured programming,” which prominently declared that the GOTO statement had no place in good programming and should be eliminated.

    Does the fact that it is POSSIBLE to program a solution to any computable function without using GOTO mean that you SHOULD write without using GOTOs? When children go to school, it's POSSIBLE for them to crawl the whole way, without using "walking" at all. Everyone accepts that this is possible. When you're on your feet all sorts of bad things can happen — you can trip and fall! Most important, you can get the job done without walking … and therefore you SHOULD eliminate walking for kids getting to school. QED.

    This is academia for you – a prime example of how Computer Science works hard to make sure that programs are hard to write, understand and deliver, all in the name of achieving the opposite.

    The debate about structured programming

    There was no debate about the utility of the conditional branching, controlled looping and block structures that rapidly became part of any productive software language. They were there and programmers used them, then and now. The debate was about "structured programming," which by its academic definition outlawed the use of the GOTO statement. That wasn't all. It also outlawed having more than one exit to a routine, breaks from loops and other productive, transparent and generally useful constructs.

    I remember clearly as a programmer in the 1980's having a non-technical manager type coming to me and quizzing me about whether I was following the rigors of structured programming, which was then talked about as the only way to write good code. I don't remember my answer, but since I knew the manager would never go to the trouble of actually — gasp! — reading code, my answer probably didn't matter.

    The most important thing to know about the leader of the wonderful movement to purify programming is his lack of interest in actually writing code:

    [Image: Dijkstra quote]

    Fortunately, there are sane people in the world, including the incomparable Donald Knuth (an academic Computer Scientist who's actually great!) and a number of others.

    An alternative viewpoint is presented in Donald Knuth's Structured Programming with go to Statements, which analyzes many common programming tasks and finds that in some of them GOTO is the optimal language construct to use.[9] In The C Programming Language, Brian Kernighan and Dennis Ritchie warn that goto is "infinitely abusable", but also suggest that it could be used for end-of-function error handlers and for multi-level breaks from loops.[10] These two patterns can be found in numerous subsequent books on C by other authors;[11][12][13][14] a 2007 introductory textbook notes that the error handling pattern is a way to work around the "lack of built-in exception handling within the C language".[11] Other programmers, including Linux Kernel designer and coder Linus Torvalds or software engineer and book author Steve McConnell, also object to Dijkstra's point of view, stating that GOTOs can be a useful language feature, improving program speed, size and code clarity, but only when used in a sensible way by a comparably sensible programmer.[15][16] According to computer science professor John Regehr, in 2013, there were about 100,000 instances of goto in the Linux kernel code.[17]
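
    For anyone who hasn't seen the patterns Kernighan, Ritchie and Knuth are talking about, here is a minimal sketch in C of the two most commonly cited sensible uses: the end-of-function error handler and the multi-level break. The file-handling details and names are hypothetical.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* End-of-function error handler: acquire resources in order and, on any
     * failure, jump to a single cleanup point that releases whatever was
     * already acquired. The file/buffer details here are only illustrative. */
    int process_file(const char *path) {
        int rc = -1;
        char *buffer = NULL;
        FILE *f = fopen(path, "rb");
        if (!f) goto out;

        buffer = malloc(4096);
        if (!buffer) goto out_close;

        if (fread(buffer, 1, 4096, f) == 0) goto out_free;

        /* ... the real work would go here ... */
        rc = 0;

    out_free:
        free(buffer);
    out_close:
        fclose(f);
    out:
        return rc;
    }

    /* Multi-level break: standard C has no labeled break, so leaving two
     * nested loops at once means either a goto or an extra flag variable. */
    int find_pair(const int *a, int n, int target, int *i_out, int *j_out) {
        int found = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if (a[i] + a[j] == target) {
                    *i_out = i;
                    *j_out = j;
                    found = 1;
                    goto done;   /* break out of both loops at once */
                }
            }
        }
    done:
        return found;
    }

    int main(void) {
        int a[] = {3, 9, 4, 7};
        int i = 0, j = 0;
        if (find_pair(a, 4, 11, &i, &j))
            printf("a[%d] + a[%d] == 11\n", i, j);   /* prints a[2] + a[3] == 11 */
        return 0;
    }
    ```

    Used this way, the goto makes the control flow easier to read, not harder, which is exactly Knuth's point.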

    Any programmer can make mistakes. Any statement type can be involved in those mistakes. For example, I think nearly everyone accepts that cars are a good thing. But over 30,000 people a year DIE in car accidents! So where's the movement to eliminate cars because of this awful outcome? Outlawing the GOTO because it's sometimes used improperly makes just as much sense. Like every other statement type, it can be misused.

  • The Crisis in Software is over 50 years old

    Everyone knows there’s a problem in software. No one likes to talk about it. Waves of new tools and techniques are introduced, hyped and fade away. Are there waves of new tools and techniques in accounting? During interviews, are accountants asked if they practice the currently “hot” methods? Of course not! What about cars? Hah, you might say – look at the wave of new electric cars! Yes, let’s look at them – do electric cars crash more than normal ones? If cars crashed or mis-performed at the same rate as software, we would all be afraid to ride in them!

    The existence of a software crisis was first prominently identified in 1968. The results of the crisis were clear at that time, and haven’t changed a bit since then. Nothing – no wonderful new language, paradigm, project management or quality method, architectural technique or anything else has made a dent in the crisis.

    The solutions to the crisis have largely been identified and repeatedly proven in practice by small groups of programmers who desperately need to get things done. Their methods are ignored or suppressed by the ruling elites in academia, government and corporations – including Big Tech. Sadly, this is unlikely to change any time soon. Happily, effective methods for building software that are ignored and suppressed provide a way for small, ambitious groups to do new stuff and win!

    What is the software crisis?

    The term “software crisis” was coined during the 1968 NATO Software Engineering Conference. Many computer science experts attended. There was general agreement that there was a serious problem. Part of the proposed solution was to create a discipline of “software engineering,” which would structure and formalize the techniques for building software. Papers identifying the problems and solutions were generated as a result of the conference and a similar one held the following year.

    The problems are summarized in the Wiki article:

    Software crisis

    Look familiar?

    How do we know there’s a crisis?

    When things are really bad, it’s typical for people to avoid talking about it. When there’s a whiff of a solution, though, it’s likely to receive attention. When people engage in something that’s likely to go wrong, most of them will get the best advice and be sure to follow widely accepted practices in order to avoid trouble. If something goes wrong anyway, having followed the standard accepted techniques provides a good defense – I did what people in my position are supposed to do! When hiring people, the hiring organization will often trumpet their use of these authorized and accepted methods to assure avoidance of blame. And finally, when things go wrong, great care is taken to assure that as few people know about the problem as possible.

    When you hire someone in physics, do you advertise that the position requires knowledge and acceptance of relativity theory? When you hire a doctor, do you need to make sure that the doctor adheres to the germ theory of disease?

    We know there’s a crisis because, in spite of all the effort to keep it quiet, disasters happen that are so public, widespread and annoying that they get talked about. The trouble is that the highly publicized disasters are just the visible tip of a truly gigantic iceberg that takes up most of the space in the ocean of software.

    We know there's a crisis because the academic field devoted to the subject, Computer Science, isn't a "science." Not even close. Check out these.

    We know there's a crisis because the famous big tech companies have amazing reputations, like they're all geniuses — while the facts show that they're pretentious bumblers with astounding salaries.

    We know there's a problem because of the fashion-driven nature of the field. Here is a discussion of specific fashions, and see these for examples.

    Conclusion

    Look again at the list of problems identified over 50 years ago: software that's late, over budget, not what you wanted, poor quality, and sometimes has to be thrown out. This is one of the reasons managers take estimates from programmers and multiply the time by 3X before passing them on up the chain. Higher managers pad even more. People try to avoid publicizing the wonderful thing that's coming real soon because, all too often, days before completion, it blows up.

    There have been decades of widely-touted solutions to the problem that never solve the problem. Here is a review of 50 years of "progress" in programming languages, for example.

    There are solutions to this 50-year-old problem. Here is one of those solutions. Small groups of programmers, pressured to produce great results in a short period of time, re-invent the solutions. There are underlying errors of thought that are the primary factors behind the problem and its incredible persistence. How else can you explain the near-universal resistance to winning methods – proven in practice! – that continue to be ignored?

     

  • Job requirements for software engineers should stop requiring CS

    I've read thousands of job requirements for computer programmers over the years, and written or edited quite a number. I’ve interacted with hundreds of software groups and seen the results of their work. I’ve spent a couple of decades cranking out code myself in multiple domains. There are a couple of near-universal problems with job requirements that, if changed, would improve the quality of software groups and their productivity.

    Of course, it’s not just the job requirements and what the hiring people do – it’s also the managers, from the CEO down. They also have to not just support but champion the changes I describe. If they do, everyone will enjoy better results from the software team.

    In this post, I'm going to concentrate on just one of the issues: academic degrees.

    A near-universal job requirement is an academic CS degree. When analytics, ML or AI is involved, the requirement is often “upgraded” to a Master’s or PhD.

    There are many capable programmers who have degrees of this kind. Often it doesn’t seem to hurt or hold them back much. But all too often it does! The more specialized the training, for example in project management or quality, the more likely the “education” the person has received is an active impediment to getting good work done.

    Here’s the simple, raw, brutal fact: Getting a degree in Computer Science does NOT train you to become an effective programmer in the real world. All too often, the degree results in the person performing worse than someone with self-training and apprentice experience.

    That is the fact. Surprised?

    Let’s do a little compare and contrast with medicine. Yes, I know that an MD is a graduate degree, while a CS degree is often undergrad. Medical training has evolved to what it is now after centuries of finding out what it really takes to make people healthy. By contrast, CS degrees started being granted only 50 or so years ago, and the programs are far from even trying to figure out what kind of training helps create good programmers.

    First let’s look at the training and testing:

    • You don’t even get into med school without taking the MCAT, a challenging test that takes over 7 hours and that few do well on.
    • Once you’re in med school you take a four-year course to get your MD.
      • The first two years are academic, including hands-on labs. Then you take the USMLE-1. If you don’t pass this challenging test you’re out. End of med school.
      • The second two years are clinical! You’re in hospitals, clinics and offices seeing patients under supervision. And you’re graded. And then you take the USMLE-2, which is harder than part 1 and has lots of clinical stuff. If you fail, you’re not an MD.
    • To practice, even as a general practitioner, you have to apply and be accepted into a Residency. Depending on specialty, this can be 3-7 years of mostly hands-on practice, under close supervision.
      • During your first year, you have to take and pass the USMLE-3. Fail and you’re out.
      • During your last year you have to take and pass the test specific to your specialty. Fail and you’re out.

    Here’s the equivalent of the training and testing in CS:

    • There is NO equivalent in CS. No entry testing. No exit testing. Just grades on courses determined by professors who usually pass everyone.

    A little compare and contrast between medicine and CS:

    • Medicine is taught by doctors who practice medicine
      • CS is taught by professors, most of whom have never practiced programming in the real world.
    • A large part of medical training is working with real patients with real problems, under the supervision of practicing doctors.
      • CS is primarily classroom teaching with textbooks and homework exercises. You have to write programs as exercises, but it’s completely artificial. There is nothing apprentice-like or truly clinical about it.
    • Medical training is led by doctors who are incented to produce great doctors.
      • CS training is led by academic PhD’s with no real-world experience who are incented to publish papers read by people like them.
    • Medical journals publish essential information for practicing doctors, giving advances and new discoveries.
      • CS journals are read by the tiny group of academics who publish in them. Practicing programmers pay no attention for good reason.
    • Bad doctors are fired for incompetence and barred from practicing.
      • CS graduates are rarely fired for incompetence. If CS graduates can’t program well, they usually shift into using their non-skills in “management.”
    • In medicine, best practices are increasingly codified. You rapidly fall under scrutiny for deviating.
      • CS grads seek out and follow fashions that are the software equivalent of blood-letting, enthusiastically promoting them and getting them adopted with disastrous results.
    • Hospitals are compared with each other in terms of results. It’s not hard to find which are the best hospitals.
      • Groups of CS grads make it impossible to compare one group with another, with the result that huge groups produce major disasters at great expense, while tiny groups of effective programmers outperform them by 10X or more.

    All this doesn’t make things uniformly wonderful in medicine. But it goes a long way towards explaining why software is so bad. It’s awful. The awfulness is so widespread that it’s rarely talked about! If bridges fell down at 1/100 the rate that software projects fail, there would be a revolt! Instead, everyone in the industry just sighs and says that’s the way things are.

    You think things are great in software? Check out a couple of these:

    https://blackliszt.com/2015/09/software-quality-at-big-companies-united-hp-and-google.html

    https://blackliszt.com/2014/12/fb.html

    https://blackliszt.com/2014/02/lessons-for-software-from-the-history-of-scurvy.html

    The fact is, CS training leads to horrible results because Computer “Science” is roughly at the same level as medicine was when bleeding patients was the rule. See this:

    https://blackliszt.com/2019/11/computer-science-is-propaganda-and-computer-engineering-is-a-distant-goal.html

    Conclusion

    There are lots of things you can do to improve the results of hiring software programmers and managers. Here's how the usual interview process goes; here is specific advice about interviewing. There is a whole pile of advice in my book on software people. If all you did was drop the CS degree requirement, you would have taken a big step forward in quality improvement.

  • Computer Science is Propaganda and Computer Engineering is a Distant Goal

    To call what is taught in the “computer science” departments of universities a “science” is a mind-game to get everyone involved to believe that what is taught meets the normal criteria for being a “science.” It doesn’t come close. Well, you might say, some of those departments are more humbly and accurately called “computer engineering.” True. At some point in the distant future, what is taught in computer engineering might rise to the level of what is taught in, say, mechanical or electrical engineering. Until that goal is in sight, it would be more accurate to call the classes something like “Fads, fashions and sects in computer software practice.”

    Physics, Chemistry and Biology

    I hope we can all agree that physics, chemistry and biology are sciences. It wasn’t always that way! They have only gotten to be sciences after long struggles. Physics started emerging about 400 years ago, chemistry a couple hundred years ago, and biology just in the last 150 years or so. In each case, they are studies of reality, with generally accepted statements of how that reality works, as shown by many experiments. In each case, they advance by someone making a hypothesis that can be dis-proven by experiment. If the hypothesis is supported by experiment, more tests are done by many people to refine it, and then it becomes part of the accepted science. Sometimes a new hypothesis contradicts something that’s accepted, but more often refines it – for example, the best way to understand most of Einstein’s work is that it refined Newton’s for special, highly unusual cases. In the vast majority of cases, you can safely use Newton’s laws of motion without being concerned about relativity theory.

    How does Computer Science stand up to these paragons of science?

    Right away, there's a problem. It's this thing called evidence. You know, like when Newton comes up with his law of gravity, how well does it predict how gravity works? Experiments! Evidence! Same with Einstein's relativity theory — no one believed it until crucial experiments proved it. In medicine there are studies that are peer-reviewed and published with results. This is called “evidence-based medicine.” You would think there would be the equivalent in software, wouldn’t you? Check out the link — doesn't exist.

    The problem is deeper than no evidence, which by itself would be enough to prove it's not a science. In physics, you learn the rules of matter, energy and motion. In chemistry, you learn, first of all, the periodic table of elements, and then all the molecules into which they assemble themselves and how they interact. In biology, you learn about all the living things, from viruses through plants and animals. What’s the equivalent in computer science? You can say, “oh, it’s the science of computers;” except that physics, chemistry and biology are things that exist in the world. Humans didn’t create them. Computers are 100% human creations, no less than spoons and baseballs. There is no science of spoons – there is a bit of history and a range of modern methods and styles of making them, just like computers. Until we decide that it’s OK to create an academic department of Tablecloth Science, we should be able to agree that there is no such thing as Computer “Science.”

    How did it happen that loads of departments with courses about obviously bogus Computer “Science” came to be?

    The innocent explanation is that, in the early days, computers were closely related to math, and were seen basically as giant programmable calculators. In fact, the word “computer” was originally the name of a person, almost always female, who “computed” the answers to math problems. While math isn’t a “science” in the normal sense of the word, it is a kind of pre-existing non-physical reality that, in ways and for reasons that have never been explained, pervades and underlies everything about us and the world we live in. It’s the grounding of all of science. Computers were often studied by the math people in academia, and the precision naturally associated with computers seems to justify the association with science.

    But in the end, computers are just fancy machines that people design and build to do stuff. How would you feel about “Refrigerator Science?” Refrigerators are great, and I like what they do. The advances in Refrigerator Science since we used to call them “ice boxes” are amazing. Uhhh, maybe not.

    The less innocent explanation is that everyone involved realized that no way is there such a thing as computer science, if we’re at all serious about the term “science.” But there’s no doubt that it makes everyone involved feel better about themselves, so as propaganda, “computer science” gets an A+.

    OK, OK, We’ll call it Computer Engineering

    Many places do call it Computer Engineering. Is that any better? Think about mechanical engineering, for example. Let’s look at my favorite example of building bridges. I’ve gone into huge detail about the differences between bridge-building in peace and war and how it applies to software.

    Let’s step back and compare the normal peace-time bridge engineering project with the normal computer software one. At the outset, they seem pretty similar. They start with requirements, then an overall design, then build, testing and inspection and finally production. In fact, many engineering-type methods are used in software, including project management.

    For bridges, it works out pretty well. Bridges get built and work, 24 by 7, year after year, with routine maintenance. There are exceptions, but they're rare. Software? Not so much. Just to hit some highlights, the problems include:

    • Bridge-building project management methods are sound and generally produce reasonable results. Software project management methods don’t work
    • Bridges are generally built in-place, so installation is an integral part of the build process. Installing standard software is a nightmare
    • Once built, bridges work. Software QA is broken
    • Bridges just keep working. Software is full of horrible bugs
    • No one stealthily sneaks onto bridges and steals the cars that are on them. Software is full of horrible security holes
    • Bridge-building is driven by solid, proven engineering practice, backed by scientific principles. Software is driven by fashion instead of sound practice

    Given all this, I guess we can still call it Computer Engineering. But to be fair, we should have it on probation, and call it “Computer Sadly-Deficient Engineering” until it rises to the level of mediocrity — which it may, with some luck, someday achieve.

    Is there any value in Computer Science?

    Yes, there is value in Computer Science. Some of the value is real. For example, I greatly respect Donald Knuth and the study of algorithms. The trouble is, this kind of thing is a tiny part of what software developers do when building code. There is also some value in learning the basics of how to write code, and a degree in CS continues to be a help in getting a job — even though it shouldn't be.

    Computer Scientists have sometimes acknowledged there are serious problems. For example, there was a big conference to address the “crisis in software” … in 1968! What did they do? They promoted “structured programming” and decided that the essential “go to” statement was evil.

    In the end, the value of much of Computer Science, apart from things like studying algorithms, should be forming the basis of producing good software that works. Here is how that's working out.

    Conclusion

    Computer Science is not a "science." Not close. To call it science is propaganda, fraud, ignorance, whatever. Computer Engineering is another matter. Computer Engineering, both as taught and as practiced, is horrible. The methods, largely lifted from other disciplines, just don't produce good results most of the time. There isn't even a movement that recognizes this and agitates to fix it!

    Happily, the solutions — good computer engineering practices — exist and have been proven in practice, many times over by different groups in different times and places. I have discussed what I know of these methods extensively in this blog and in books. Top programmers discover large parts of them on their own, and use them to achieve stellar results — outside the purview of corporate types, regulators and bureaucrats. For entrepreneurs versed in these methods, it's a serious competitive advantage, while the crippled, lumbering giants of software shuffle along.

  • Recurring Software Fashion Nightmares

    Computer software is plagued by nightmares. The nightmares vary.

    Sometimes they are fundamentally sound ideas that are pursued in the wrong way, in the wrong circumstances or at the wrong time. Therefore they fail – but usually come back, sometimes with a different name or emphasis.

    Sometimes they’re just plain bad ideas, but sound good and are cleverly promoted, and sound like they may be relevant to solving widely acknowledged problems. Except they just don’t work. Sometimes these fundamentally bad ideas resurface, sometimes with a new name.

    Sometimes they’re a good idea for an extremely narrow problem that is wildly applied beyond its area of applicability.

    The worst nightmares are the ones that slowly evolve, take hold, and become widely accepted as part of what “good programmers” believe and do. Some of these even become part of what is ludicrously called “computer science.” Foundation stones such as these, accepted and taught without proof, evidence or even serious analysis, make it clear to any objective observer that computer “science” is anything but scientific, and that computer “engineering” is a joke. If the bridges and buildings designed by mechanical engineers collapsed with anything close to the frequency that software systems malfunction, the bums and frauds in charge would have long since been thrown out. But since bad software that breaks is so widespread, and since after all it “merely” causes endless delays and trouble, but doesn’t dump cars and trucks into the river, people just accept it as how things are. Sad.

    Following is a brief review of sample nightmares in each of these categories. Some of what I describe as nightmares are fervently believed by many people. Some have staked their professional lives on them. I doubt any of the true adherents will be moved by my descriptions – why should they be? It was never about evidence or proof for the faithful to start with, so why should things change now?

    Sound Ideas – but not here and now

    • Machine learning, AI, OR methods, most analytics.
      • These things are wonderful. I love them. When properly applied in the right way by competent people against the right problem at the right time, amazing things can be accomplished. But those simple-sounding conditions are rarely met, and as a result, most efforts to apply these amazing techniques fail. See this.

    Bad ideas cleverly promoted

    • Microservices.
      • Microservices are currently what modern, transformative CTO’s shove down the throats of their innocent organizations, promising great things, including wonderful productivity and an end to that horror of horrors, the “monolithic code base.” While rarely admitted, this is little but a re-incarnation of the joke of Extreme Programming from a couple decades ago, with slightly modified rhetoric. A wide variety of virtues are supposed to come from micro-services, above all scalability and productivity, but it’s all little but arm-waving. If software valued evidence even a little, micro-services would be widely accepted as the bad joke they are.
    • Big Data
      • There's nothing wrong in principle with collecting data and analyzing it. But in practice, the whole big data effort is little but an attempt to apply inapplicable methods to random collections of data, hoping to achieve generic but unspecified benefits. See here.

    Great ideas for a narrow problem

    • Blockchain.
      • My favorite example of an excellent solution to a problem that has been applied way beyond its area of legitimate application is blockchain. Bitcoin was a brilliant solution to the problem of having a currency that works with no one in charge. How do you get people to “guard the vault” and not turn into crooks? The way the mining is designed, with its incentives and distributed consensus scheme, brilliantly solves the problem (see the sketch just below). However, the second you start having loads of crypto-currencies, things get weaker. Once you introduce “smart contracts,” there’s a fly in the ointment. Take away the currency and keep just the blockchain and you’ve got a real disaster. Then “improve” it and make it private and you’ve got yourself a full-scale contradiction in terms that is spectacularly useless. See here.
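
    To make the mining point concrete, here is a minimal, hypothetical sketch of the proof-of-work idea Bitcoin mining rests on. It is not real Bitcoin code; the function names, the difficulty setting and the block string are all made up for illustration. The only thing it demonstrates is the asymmetry that makes the scheme work: finding a valid block is expensive, checking one is a single hash.

        # Minimal proof-of-work sketch in Python (illustrative only, not real Bitcoin code).
        import hashlib

        def mine(block_data: str, difficulty: int = 4) -> int:
            """Search for a nonce so that sha256(block_data + nonce) starts with
            `difficulty` hex zeros. Deliberately costly to find."""
            target = "0" * difficulty
            nonce = 0
            while True:
                digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
                if digest.startswith(target):
                    return nonce
                nonce += 1

        def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
            """Checking a claimed block is one hash; the cheap side of the asymmetry."""
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            return digest.startswith("0" * difficulty)

        nonce = mine("prev-block-hash|pending-transactions")
        print(nonce, verify("prev-block-hash|pending-transactions", nonce))

    Costly to produce, nearly free to check, and rewarded when accepted: that combination is what lets strangers agree on a ledger with no one in charge, and it is exactly the part that gets lost when the currency and its incentives are stripped away.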

    Bad ideas that are part of the Computer Science and Engineering canon

    • Object-orientation.
      • Languages can have varying ways of expressing data structures and relating to them. A particular variant on this issue called “object-orientation” emerged decades ago. After enjoying a heyday of evangelism during the first internet bubble, O-O languages are taught nearly universally in academia to the exclusion of all others, and adherence to the O-O faith is widely taken to be a sign of how serious a CS person you are. For some strange reason, no one seems to mention the abject failure of O-O databases. And no one talks about serious opposing views, such as that of Linus Torvalds, the creator/leader of the Linux open source movement, which powers 90% of the world's web servers. Programmers who value results have long since moved on, but academia and industry as a whole remain committed to the false god of O-O-ism.
    • Most project management methods.
      • I’ve written a great deal about this, including a book. Project management methods, from the modern fashion of Agile with its various add-ons such as SCRUM on down, ripped from other disciplines and applied mindlessly to software programming, have been an unmitigated disaster. The response? It ranges from “you’re not doing it right,” to “you need to use cool new method X.” Yet project management is a profession, with certifications and courses on it taught in otherwise respectable schools. A true nightmare. See these.
    • Most software QA methods.
      • All software professionals accept what they’ve been taught, which is that some kind of QA automation is an essential part of building software that works and keeping it working. Except the methods are horrible, wasteful, full of holes and rarely make things much better. See this.

    Conclusion

    I make no claim that the list of items I’ve discussed here is comprehensive. There are things I could have included that I have left out. But this is a start. Furthermore, if just half the items I’ve listed were tossed and replaced with things that actually, you know, worked, much better software would be produced more quickly than it is today. I don’t call for perfection – but some progress would sure be nice!

  • Software is a Pre-Scientific Discipline

    For a wide variety of human-understandable reasons, software is perceived as a science. In academia, it's taught in the Computer Science department, often part of the math department. What could be more precise and scientific than that?

    Whatever its pretensions, software is anything but scientific. It's mostly driven by fashions and fads, led by "experts" who promote theories that sound good when described — but which entirely lack any form of scientific process, testing or evidence.

    Software diseases will continue to severely hamper our computer systems until we wake from our long, pre-scientific sleep. We will know we're making progress when software practice rises at least to the level medicine had achieved 150 years ago. See these examples: the history of scurvy; the history of bloodletting; Yahoo and Hadoop.

    The Evolution of Science

    Science is not one thing, although the core principles of hypotheses, testing and evidence are always the same. Sciences don't pop up full-grown from nothing. Each field of human endeavor evolves towards being a science (or not) at its own time and pace. There is typically a long gestation period during which resistance is deep and widespread. Physics is a classic example.

    Even when an area of science is well-established, the temptation to simply declare and assert the truth of something without careful proof remains strong and happens all too often. Science is something that human beings do, so it's never "perfect." It's also never "done." The resistance of the entire physics establishment to the now-accepted facts that time changes as the speed of an object approaches the speed of light, and that light has no mass but nonetheless exists and travels at a measurable rate, is a classic example of the fits-and-starts evolution of even the well-established science of physics.

    It was a long, hard slog for medicine to emerge from its pre-scientific state. While there is loads of room for improvement, as I have often pointed out in this blog, medicine has clearly and explicitly embraced scientific discipline in the large majority of its practices, of course with the occasional embarrassing slippage.

    The non-evolution of software towards science

    Let's compare the emergence of powered flight as a science to the methods for building software projects. Here are the key points:

    • Powered flight was widely recognized as important more than 100 years ago. There were widely accepted experts, and the entire establishment gave them money and support. After a couple spectacular failures by the experts, the obscure people who actually figured out how to create a heavier-than-air flying machine got it done, and their methods were soon universally followed. Here is the story in more detail.
    • Building effective software is widely recognized as important today. There are widely accepted experts, whose methods are taught in schools, practiced in all major institutions and mandated by government regulations. After spectacular, repeated failures, everyone says "oh, that's software, what can you do," and moves on. Meanwhile, obscure people sometimes build amazing new software quickly and well, most often using unauthorized methods. Their software is widely used and their companies acquired by the big organizations who can't build anything. Nothing changes. Here is an example and details.

    To take another example, while there are many ways that the science of medical drug development could be improved, there is little doubt that it is a scientific venture. In terms of science, in spite of its problems, limitations and inefficiencies, drug development is probably a hundred years ahead of "computer science" in general and software development in particular. See this for a comparison.

    If software were a science

    Think of a list of established principles in software — the things that, if software were in fact a science, would be like the basic equations of non-relativistic motion. What's on the list? I suspect it includes things like: object-oriented programming, comprehensive test automation, architecting for scalability.

    Now think of a list of hot new methods or techniques in software, things that are widely accepted but early in widespread adoption. The list may include, depending on the circles in which you travel, things like micro-services, the Clojure language, Agile methodology with SCRUM, test-driven development.

    Which of the items on either list — your version of the lists, not mine — went through anything like this process:

    There was a bad problem that accepted methods weren't solving. People hypothesized an underlying cause and/or a cure, tested it first on a small scale and then on a larger scale. The evidence was overwhelming in A:B comparison that the cure was effective, so it became accepted.

    Or,

    There were observations that didn't fit existing theories, data that wasn't explained, or discrepancies that couldn't be accounted for. Someone came up with a theory that made sense out of the rogue data. Others formulated the theory exactly and conducted careful experiments; the results were made public, and maybe there was a period of refinement. Finally, the new theory was accepted, because it was experimentally proven to account for all the measured data, something the old theory could not do.

    Anyone with reasonable software experience knows the answer to these simple questions: software doesn't work that way! Not even a little bit! Instead, new practices are invented, promoted and sometimes accepted into common practice. In no case is there a scientific vetting process! People just accept the theory because

    • it makes sense to them, or
    • it's what they've been taught, or
    • it's required by the mandated practice of the group in which they work, or
    • it somehow advances their career or enhances their prestige
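
    To make the first of those processes concrete, here is a minimal sketch, with entirely made-up numbers, of what the A:B comparison described above could look like if anyone ran one for competing development methods. The method names, project counts and outcome measure are hypothetical; the point is only that the arithmetic is old, simple and routinely done in other fields.

        # Hypothetical A:B comparison of two development methods (made-up numbers).
        # Question: is the difference in on-time delivery rates bigger than chance alone would explain?
        from math import sqrt, erf

        def two_proportion_test(hits_a, n_a, hits_b, n_b):
            """Two-sided z-test for the difference between two success proportions."""
            p_a, p_b = hits_a / n_a, hits_b / n_b
            pooled = (hits_a + hits_b) / (n_a + n_b)
            se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
            z = (p_a - p_b) / se
            p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
            return p_a, p_b, z, p_value

        # Suppose 40 of 60 "Method A" projects shipped on time vs. 22 of 60 "Method B" projects.
        print(two_proportion_test(40, 60, 22, 60))

    Medicine runs trials of roughly this shape before a therapy becomes standard practice. Nothing comparable is run before a method becomes the way software gets built.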

    Sometimes, happily, software fads just fade away for as little reason as they started. A fairly recent example is pair programming, which I describe and examine here.

    In the face of this evidence you may swallow hard and admit that software may not be a science, but it is an established discipline with standards and processes that are widely accepted, as for example you can see in the FDA software regulations. Sadly, that makes it even worse, if it's possible to imagine that. The standards and processes that constitute modern software practice are taken from other fields and jammed onto software. They don't fit and they don't work.

    Conclusion

    No testing. No hypothesis with controlled experiment. No evidence. No process that resembles either medicine (we know there's a problem, we have a possible solution, let's prove it works before using it widely) or physics (we have this data we can't explain, let's propose a theory that accounts for it and run experiments that will prove it right or wrong) or anything else.

    You can say that building software is part of "computer science" until you're blue in the face. You can require CS degrees for your new hires. But the evidence is that software is, without question, pre-scientific.

    We need to at least start building towards a true Science of Software.

     

  • What Software Experts think about Blood-letting

    Software experts do NOT think about blood-letting. But ALL medical doctors thought about blood-letting and considered it a standard and necessary part of medical practice until well into the 1800's. They continued to weaken and kill patients with this destructive "therapy," even as the evidence against it piled high.

    The vast majority of software experts strongly resemble medical doctors from those earlier times. The evidence is overwhelming that the "cures" they promote make things worse, but since all the software doctors give nearly the same horrible advice, things continue.

    Blood-letting

    Blood-letting is now a thoroughly discredited practice. But it was standard, universally-accepted practice for thousands of years. Here is blood-letting on a Grecian urn:

    Blood-letting depicted on a Grecian urn

    Consider, for example, the death of George Washington, a healthy man of 67.

    GW death

    Washington rode his horse around his estate in freezing rain for 5 hours. He got a sore throat. The next day he rode again through snow to mark trees he wanted cut down. He woke early in the morning the next day, having trouble breathing and a sore throat. Leaving out the details, by the time of his death, after treatment by multiple doctors, about half the blood in his body had been purposely bled in an attempt to "cure" him of his sickness!!! If he hadn't been sick before, losing half the blood in his body would have killed him.

    If you are at an accident and you or someone else is bleeding badly, what do you do? You stop the bleeding, because if you don't, the person will bleed to death. That's now. Then? You bleed the sick person because it's the universally accepted CURE for a wide variety of sicknesses.

    Blood-letting was first disproved by William Harvey in 1628. The disproof had no effect: blood-letting remained the primary treatment for over 100 diseases. Leeches were a good way to keep the blood flowing. France imported over 40 million leeches a year for medicinal purposes in the 1830's, and England imported over 6 million leeches from France in the next decade.

    While blood-letting faded in the rest of the 1800's, it was still practiced widely, and recommended in some medical textbooks in the early 1900's. We are reminded of it today by the poles on barber shops — the red was for blood and the white for bandages; barbers were the surgeons who did the cutting prescribed by doctors.

    Blood-letting in software

    By any reasonable criteria, software is at the state medicine was in 1799, when everyone, all the experts, agreed that removing half the blood from George Washington's body was the best way to cure him.

    If you think this is an extreme statement, you either don't have broad exposure to the facts on the ground or you haven't thought about what is taken to be "knowledge" in software compared to other fields.

    I hope we all know and accept that the vast majority of what we learn and come to believe is based on authority and general acceptance. This is true in all walks of life. Of course not everyone believes the same thing — there are different groups to which you may belong that have widely varying belief systems. But if you're somehow a member of a group, chances are very high that you accept most things that most members of that group believe.

    This is no less true in science-based fields than others. The difficulty of changing widely-held beliefs in science has been deeply studied, and the resistance to change is strong. See for a start The Structure of Scientific Revolutions. I have described this resistance in medical-related subjects, and in particular showed how the history of scurvy parallels software development methods all too well.

    But at least, to its great credit, medicine has gone through the painful transition to demanding facts, trials and real evidence to show that a method does what it's supposed to do, without awful side-effects. That's why we hear about evidence-based medicine, for example, while there is no such thing in software!

    I hear from highly-qualified and experienced software CTO's that they are going to lead a transition of their code base so it conforms to some modern cool fashion. One of the strong trends this year has been the drive to convert a "monolithic code base" (presumed to be a bad thing) to a "micro-service-based architecture." When I ask "why" the initial response ranges from surprise to a blank stare — they never get such a question! It's always smiling and nodding — my, that CTO is with-it, no question about it.

    Eventually I get the typical list of virtues, including things like "we've got a monolithic code base and have to do something about it" and "we've got to be more scalable," none of which solves problems for the company. When I press further, it becomes obvious that the CTO has ZERO evidence in favor of what will be a huge and consequential investment, and has never seriously considered the alternatives.

    As is typical in cases like this, when you scan the web, you see all sorts of laudatory paeans to the micro-service thing, very little against it. Most important, you find not a shred of evidence! No double-blind experiments! No evidence of any kind! No science of any kind! What you also don't find is stories of places that have embarked on the micro-service journey and discovered by experience all the problems no one talks about, all the problems it's supposed to solve but doesn't, and the all-too-frequent declarations of success accompanied by a quiet wind-down of the effort and moving on to happier subjects. Because of my position working with many innovative companies, this is exactly the kind of thing I do hear about — quietly.

    Conclusion

    We've got a long way to go in software. While software experts don't wear white coats, the way they dress, act and talk exudes the authority of 19th century doctors, dishing out impressive-sounding advice that is meekly accepted by the recipients as best practice. No one dares question the advice, and the few who demand explanations generally just accept the meaningless string of words that usually result — empty of evidence of any kind. It's just as well; the evidence largely consists of "everyone does it, it's standard practice." And that's true!

    Software experts don't think about blood-letting. But they regularly practice the modern equivalent of it in software, and have yet to make the painful but necessary transition to scientific, evidence-based practice.

     

  • What are Software Fashions?

    “Fashion” is a word we associate with clothes. Software is hard, it’s objective, it’s taught in schools as “computer science.” Software can’t have anything to do with “fashion” if it’s a “science,” can it?

    Sadly, software is infected by fashion trends and styles at least as much as clothes. Fashion has a huge impact on how software is built. Understanding this, along with other key concepts like those involved in Wartime Software, can contribute greatly to building great software that powers a business to great success.

    Fashion

    We all know what fashion is, exemplified by fashion shows like this one:

    Lady models

     with impossibly thin female models strutting down catwalks wearing clothes that no one is likely to wear in real life.

    Not as often, but guys too: Male models

    Far more important than models wearing extreme clothes is everyday fashion. I grew up looking at men dressed like this: Suit

    It's what men wore to church and to aptly-named "white-collar" businesses all the time. But there's nothing special about a suit and tie. Here's a look at Dutch fashion in the early 1600's:

    Dutch portraits from the early 1600's

    Fashion is arbitrary! It's just what people wear. Everyone judges you by what you wear or don't wear, according to prevailing fashions.

    Of course, "fashion" goes way beyond how people dress. It's how you act, how you speak, the accent you use, the interests you express, just about everything. If you don't think it matters, just try wearing NY Yankee regalia into a South Boston sports bar and start spouting trash about the Red Sox. Lots of people who think they're the kind of people who are "above" fashion are driven by it nonetheless — just look how they respond to people walking into a room, and if there's any doubt, seconds after the newcomer opens his mouth.

    Fashion is about people. It's about belonging, status, fitting in or "making a statement." We live in a fashion-dominated world, like it or not.

    Fashion in Software

    I was one of those people who was convinced I was "above" fashion. Being above it made me superior, in my mind, to those who were slaves to it. During and beyond college, I bought the few clothes I wore from a local used clothing store and wore hiking boots most of the time. Once when I was in my first post-college programming job, I was called out of the cubicle where I spent most of my time, heads-down, programming away, and asked to come into a meeting in the front of the building. I walked into a meeting populated entirely by men in suits. One of the men I didn't know glanced at me and immediately exclaimed "finally! We get to talk with someone who knows things!"

    I had been called into a sales meeting, and one of the visitors had software questions no one knew the answer to, so "the suits" had called in the guy who knew the answers. How I dressed and acted in fact made a statement to the visitors — I dressed the way a programmer dressed, the kind of programmer who wanted to program, not one who aspired to management. So, like it or not, I was making a "fashion statement," while fooling myself thinking that I was "above" fashion. The hard fact is, no one is "above" fashion. The way we dress and act and talk, the choices we make, says loads about us. Those unavoidable choices clearly establish our place in various groups, social and status hierarchies.

    Software Fashions

    Given how fashion-driven our lives are, it would be shocking if programmers weren't fashion-driven in their shared activity of software. In fact, they are! Sadly, the vast majority don't think their choices are fashion-driven. They believe they're modern, with-it software professionals who are using the proven, advanced methods for doing software. The trouble is, few of them take the trouble to cast a knowledgeable, cold, hard eye on the arguments, experience and facts concerning their chosen methods and tools. They've made their choices so they can be with whatever software social group they identify with and/or aspire to. It's all about relationships and status. If you're ambitious, you may want to be with the "cool kids," members of an elect social group, yes, in software. And it works! If it didn't "work" (elevate their software group status), they wouldn't do it.

    We like to think of people who wear fashion-forward clothes as being empty-headed, shallow people. Surely, programmers aren't that! But to the extent that they adopt fashion-forward software, that's exactly what they're doing — only worse! They're lying to themselves, deceiving others, and making believe they're pushing some trendy software thing because it's advanced technology, yielding results superior to the obsolete stuff that used to be the standard.

    Just like with clothes, software fashions evolve. Fads start and may become hot. The fad may evolve into a fashion as it spreads, with people who haven't adopted it taking notice. The fashion may further evolve into standard practice, with eyebrows being raised for anyone who dares to question it. More often, the fashion simply fades away.

    It's rare for any fad or fashion to be explicitly repudiated — oh, that was a terrible idea, people are turning away from it for good reason and here's why. No one says that! In the "advanced clothing" area, everyone knows that fashion is "just fashion." In software, fashions aren't considered "fashions;" they are considered "advances," emerging modern techniques that are objectively better than what came before, like a new drug or operation that has emerged from clinical trials and now saves lives that used to be lost! Saying that a widely adopted software fashion was never proven, was always a bad idea, but got widely used and promoted anyway would expose the game. So when software fashions die, they fade slowly away and simply stop getting used and talked about.

    After a fashion fades away, it's generally forgotten by nearly everyone, usually except for a band of true believers. Some of the more intellectually heavy-weight fashions retreat to academia, where they live on, always with "exciting futures."

    Some of these flourished-but-died fashions rise to live renewed lives. In one pattern, the fashion was so broadly accepted but such a failure (though rarely discussed as such) that when it becomes fashionable again, it has a new name. No one ever refers to last time, why the older fashion didn't work out, and why this slightly altered version of the same thing will. In another pattern, the fashion baldly re-emerges with exactly the same name, and nearly the same blazing-bright future as the last time. Sometimes there are even some successes. But it remains a fashion and therefore has disappointing results to anyone who cares to look, which is essentially no one — such is the social power of fashion!

    Not all fashions die. Some fashions have such powerful support that they become locked in as part of modern mainstream practice, sometimes even becoming part of so-called Computer Science, or at least IT Management. I don't fully understand how and why this happens, but I know that in part, the fashions that become standards address some widely felt need in the people involved in software. When this happens, there is often a series of waves of renewal or reform — while the reformers refuse to acknowledge fundamental problems with the fashion-enshrined-as-best-practice, they latch onto some minor tweak or addition and promote it, usually with a new name, as the best way to get results with standard-practice X.

    What are these software fashions exactly?

    I have already talked about a few important toxic software fashions. I have gone into huge detail for a couple of them, with multiple blog posts and even books. I'm gradually starting to understand this bizarre phenomenon in terms of powerful social fashions with bad results, masquerading as "advances." I'm seeing resistance to recognizing the Emperor's New Clothes for what they are, because of the self-delusional conception of software as a science/math-based STEM field, rather than as the pre-scientific collective group-think that it largely is.

    I have already challenged a couple of the modern hot fashions, for example my series of posts on AI/ML — a classic example of a once-hot fashion that has died away and been re-born multiple times. This particular fashion is distinguished from some of the others because there are some truly excellent algorithms at the heart of it that can be applied to great benefit — and this has been true for decades! But because of widespread fashion-itis, the money and effort spent on them is mostly wasted.

    In future posts and at least one future book (in process), I will continue to dive into and expose specific software fashions for what they are. I do this in part to strengthen the resolve of those special people and groups (some of them ones we've invested in) with the understanding that the "wrong" or "uncool" things they're doing give them a fundamental business/technical advantage, and they should stick to their guns and ride their truly effective methods to success, to the benefit of all concerned.

    Further reading:

    Resistance to treating scurvy compared to software disease treatments.

    The modern AI/ML fashion.

    The story of how I discovered the fashion vs. what-really-works issue.

    Deconstructing project management.

    The story of fashions-becoming-standard-practice.

    The Cloud fashion.

    Big Data fashion. Big Data bubble.

    Evidence-based software methods don't exist.

    The recurring fashion of data definition location.

     

  • Evidence-based Software

    Have you heard of "evidence-based medicine?" It's a relatively new trend in medicine based on the idea that what doctors do should be based on the evidence of what works and what doesn't. What's scary as a patient is the thought that this is a new idea. What is it replacing? Voodoo-based medicine?

    At least the field of medicine has accepted that evidence matters. So much better than not!

    Let's turn to software. Have you ever heard of evidence-based software? Of course not! There is no such thing! How software is built is based on loads of things, but sadly, evidence is not among them. Among other things, this explains why software projects fail, and/or result in expensive, rigid bloat-ware that is riddled with errors and security holes.

    The Golden Globes 2016

    One of the reasons to watch the Golden Globes awards ceremony is for the fashion. Everyone knows it — which is why there's a multi-hour Red Carpet pre-show, and even a pre-show to the red carpet show.

    You watch the show if you want to see what the new fashions are. You wouldn't want to look silly, would you? If you watched this year's show, you could see Amanda Peet looking really nice:

    Amanda Peet at the Golden Globes

    And you could see Sarah Jessica Parker looking like something else altogether:

    Sarah Jessica Parker at the Golden Globes

    I heard the expert on one of the shows talking about the new colors and lines in the dresses, something we'd see more of in the upcoming year.

    What's the "best" fashion? The one leading people seem to like. What will be the best fashion next year? About all you can be certain about is that it will be something different from what was most-liked this year.

    Software development fashions

    The methods used in software development are selected with just about the same criteria as the leading fashions in dresses. Who's wearing what? What do leading people think? What did I use (wear) last time that got admiring looks?

    Fashions come into software development. They get promoted. They get used and mis-used, adapted and morphed. Programmers take them with varying degrees of seriousness. Wherever you're programming, you have to more or less go along with the prevailing fashion. If everyone else crosses themselves, you'd better too. If there's a daily stand-up, you'd better stand up when everyone else does, and not look too abstracted or bored.

    Effectiveness, Productivity and Quality

    In fashion, you want the admiration of other people who look at what you're wearing. In software, since you spend most of your time building software, it's nice to have the admiration of people who look at you building software.

    But unfortunately, other points of view sometime intrude. Managers want to know about budgets and productivity and deadlines. After the software is put into use, there are ignorant and annoying users to contend with. What you've worked so hard to build is never enough. They complain about it! Crashes, performance, quality issues? Sometimes people get upset. And security? Rule number one is keep it quiet! The last thing we need is this getting into the papers!

    Then you find out that most outsiders couldn't care less what goes on in the sausage factory. Whether it's organized or chaotic, ugly or pretty, in the end all they seem to care about is how the sausage tastes. These simple-minded people can only keep one thing in their heads at a time, and that one thing is most often: the results!

    Wouldn't it be nice if we had a way of picking through the dozens of software methods that are in widespread use, and based on the evidence, settle on just a couple that were the best that actually … produced the best results!!?? Or maybe that's just too radical a thought.

    That's why we need something like evidence-based software — or at least acknowledgement that it could help things out.

    Coda: EBSE: Evidence-Based Software Engineering

    I started writing this blog post based on the comparison to evidence-based medicine as a way to frame the fashion-based chaos that surprisingly rules the day in this highly exacting field of work. I certainly had never heard the phrase "evidence-based software." But as a check before clicking "publish," I thought I'd better do a quick search. Imagine my surprise when I found that there is, indeed, something called EBSE, evidence-based software engineering, explicitly inspired by the analogy in medicine!

    I've interacted with a large number of software engineering groups over the last twenty-plus years, and been inside a few for many years prior to that. The groups have been highly varied and diverse, to put it mildly. I've seen loads of trends, languages, methodologies and tools. And never — not once! — have I heard of the existence of EBSE. It should be just what we need, right?

    So I dove in. It's sad. Or pathetic. Both, actually.

    There's a moribund website on the subject:

    The EBSE website

    • It doesn't have a domain name; it's just hosted at some obscure university in the UK Midlands.
    • The last "news" is from 2011. Not much happenin'…
    • All the "evidence" appears to come from published academic papers — you know, those things that practicing software people absolutely depend on.
    • "The core tool of the evidence-based paradigm is the Systematic Literature Review (SLR)…" The SLR is basically a meta-analysis of lots of published academic papers. Whoopee!
    • The whole thing is organized "using the knowledge areas defined by the SWEBOK."
    • I couldn't find a single useful thing in the whole pile of words.

    The "SWEBOK"??? Another thing I've never heard of. It turns out it's an official IEEE guide to the Software Engineering Body of Knowledge. This essential guide tells us everything that leading academics are convinced are must-knows, "generally accepted knowledge about software engineering." If only I had known! Think how much trouble I could have saved myself and others over the years! Best of all, it's practically up-to-date — just over 12 years old!

    EBSE and SWEBOK are great demonstrations of just how bad things are in the software field: even when you start with a great metaphor, you still make no progress if you continue to accept as gospel the broken assumptions that the field's academics take to be eternal TRUTH. The sad fact is, math and computer science are at fundamental odds with effective software development. As I've shown. Sad, but true.

    Having something like evidence-based medicine for software instead of the ugly, ineffective chaos we have today would be nice. EBSE is a nice name, but as a reality, a non-starter.

  • The Science of Drugs vs. the Science of Computers and Software

    Prescription drugs are important elements of our lives. There is a strict, scientific, testing-based process to assure that drugs that become widely used are safe and effective, with known side effects. Computers and software are also important elements of our lives. There is a chaotic, fashion-trend-based process used to select the mixture of tools and techniques used to build, maintain and operate our IT systems, resulting in widespread failures, along with cost and quality problems. Worse, there is no recognition that this is the state of affairs, and no movement to correct the situation.

    Pharmaceuticals

    Everyone knows that drugs are important, and an important part of our economy. Here are some numbers from the CDC. Medical spending
    In 2013, we spent about $271 billion on prescription drugs. That's quite a bit, but just about 10% of national health spending.

    I won't recount the process drugs go through to get approval from the FDA, but I think everyone knows it's an elaborate, multi-year and multi-stage process, with testing at each step to assure that we know how a proposed new drug will work in human beings. While I have my complaints, there is a process, and it's scientific and evidence-based.

    IT

    The IT industry is also a large one. Here's a breakdown of it worldwide.

    Techcrunch

    There are conflicting estimates of its size in the US, but here's a representative one.

    US IT spend

    Note that the definition of IT does not include the activities of well-known IT-centric companies like Google.

    I was fascinated to see that in 2013, IT was three times the size of the entire pharmaceutical industry. Amazing.

    Drugs and IT

    Drugs are developed by scientists. They are vetted by a strict scientific process. Only drugs that make it through all the tests are widely used. As a result, the vast majority of drugs are used safely and effectively by the vast majority of patients, with a few experiencing side effects that have already been identified.

    IT is run by professionals and staffed with computer scientists and engineers, using tools and techniques developed over many years by scientists and engineers. No matter how high-profile and important the project, regardless of the involvement by government or private companies, a shocking fraction of IT projects end up late, too costly, ineffective or worse. Industry-accepted certifications seem to make no difference. New methods and techniques emerge, become talked about and are deployed widely without any evidence-based process being used to assure their safety and effectiveness. The industry is rife with warring camps, each passionately committed to the effectiveness of their set of tools and techniques. But there isn't even postmortem testing to see which ones were better at gaining their adherents admittance into IT "heaven."

    Conclusion

    I think the FDA-run drug acceptance process could be much better than it is. But the important thing is, everyone involved in prescription drugs understands and acts scientifically about the process. No one, including me, wants that to change.

    The IT industry is at least three times the size of the drug industry. There are computer science and computer engineering departments in every major university, and their graduates staff the industry. It's hard to imagine that they don't understand science, scientific process and evidence-based reasoning. However: they adhere to faith-based processes and vendor-driven products that yield horrible results year after year. None of them say, "hey, this stinks, maybe we can apply that thing that Galileo, Newton and Einstein did, what's it called, science?"

    The last thing I want is government involvement in IT, given how horribly government handles its own IT affairs, and I'm not suggesting it here. But it's a sign of just how bad things are in IT that the bureaucratic, government-run FDA does a more scientific job with drugs than anyone does with IT.

  • How much is a computer science degree worth?

    The median annual wage of a college grad with a computer, math or statistics degree is over $70,000. This is better than the vast majority of college majors, and compares really well with the median annual wage of high school grads, which is under $40,000. The conclusions are clear:

    • Go to college
    • Major in computers, math, statistics, architecture or engineering
    • Otherwise, you’re screwed.
    • Well, all right, majoring in education or psychology leads to crappy salaries, but at least it’s better than being just a high school grad.

    Here is the data: Wages of college grads

    This is a test!

    Trigger Warning! From here to the end of this post could trigger feelings of inadequacy among certain people. Others could feel anger towards the author, causing potentially dangerous heightening of the pulse rate. Others could feel that the author is hopelessly arrogant or elitist, resulting in generally uncomfortable feelings. So read on at your own risk.

    This post is a test of whether you’re qualified to be a top computer programmer, or an outstanding achiever in any technical/quantitative field. The thoughts in this post up to this point summarize what the article accompanying the chart intends you to conclude, and what most people will think on looking at the chart.

    The author of the article clearly failed the test.

    Did you?

    Understanding the data

    If you haven’t already, look at the chart again. Note the big, fat explanation at the top. The endpoints of the lines represent 25th and 75th percentiles. The 75th percentile for high school grads is about $50,000. This means that a quarter of high school grads have salaries above that. The 25th percentile for computer etc. grads is roughly $50,000, perhaps a little more. Which means that a quarter of the computer etc. grads make less than $50,000. In summary: a quarter of high school grads have salaries that are greater than a quarter of college grads with degrees in computers, math or statistics. Read that sentence again. Get it? Did you figure it out before reading this?
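
    If you want to see the quartile logic spelled out, here is a minimal sketch using only the rough figures quoted above, read off the chart rather than taken from the underlying data:

        # Approximate figures from the chart discussed above (rounded, illustrative).
        hs_grads = {"median": 39_000, "p75": 50_000}             # top quarter earn more than p75
        cs_math_stats_grads = {"median": 71_000, "p25": 50_000}  # bottom quarter earn less than p25

        # The top quarter of high school grads sit above hs_grads["p75"], and the bottom
        # quarter of CS/math/stats grads sit below cs_math_stats_grads["p25"].
        if hs_grads["p75"] >= cs_math_stats_grads["p25"]:
            print("Roughly a quarter of HS grads out-earn roughly a quarter of CS/math/stats grads.")

    The medians are far apart, but medians say nothing about how much the distributions overlap, and the overlap is exactly what the quartile endpoints reveal.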

    Implications for Hiring Computer Programmers

    I hope you’ve just seen why, when I’ve hired people, I really haven’t given a %^* about their education or their degree – in fact, the higher the education and the fancier the degree, the more concerned I am to weed out the folks with bad attitudes, the ones who have been granted the knowledge and the certification to prove it, and want to spend their lives resting on and/or milking their degrees. Some of the best programmers I’ve met in decades of programming did not have college degrees. Most of the ones who are less than excellent and/or have “risen” in management are experts at glancing at things and reaching the wrong conclusions. Like most people do when looking at the salary chart above. FWIW, here are some good examples of drop-outs who did pretty well. Including the Wright Brothers — after all, how hard can inventing the airplane be?

    The people who are best in computing combine big-picture, visual/conceptual abilities with an utterly uncompromising attention to detail. Computer programs shouldn’t have even a single byte wrong, and the bytes should be selected and arranged according to a deep conceptual understanding of the problem at hand. Amateurs and pretenders don’t do well at either of these jobs, much less in combination.

    Conclusion

    If you care about attracting, selecting and retaining the very best software people, you would be well advised to alter your hiring practices as required to select the people who … get ready for it … can actually do the work! Really well! Having degrees or whatever is not nearly as correlated to that outcome as you might think.

  • Math and Computer Science vs. Software Development

    In a prior post, I demonstrated the close relationship between math and computer science in academia. Many posts in this blog have delved into the pervasive problems of software development. I suggest that there is a fundamental conflict between the perspectives of math and computer science on the one hand, and the needs of effective, high quality software development on the other hand. The more you have computer science, the worse your software is; the more you concentrate on building great software, the more distant you grow from computer science.

    If this is true, it explains a great deal of what we observe in reality. And if true, it defines and/or confirms some clear paths of action in developing software.

    A Math book helped me understand this

    I've always loved math, though math (at least at the higher levels) hasn't always loved me. So I keep poking at it. Recently, I've been going through a truly enjoyable book on math by Alex Bellos.

    Bellos cover

    It's well worth reading for many reasons. But this is the passage that shed light on something I've been struggling with literally for decades.

    Bellos quote

    When we learn to count, we're learning math that's been around for thousands of years. It's the same stuff! Likewise when we learn to add and subtract. And multiply. When we get into geometry, which for most people is in high school, we're catching up to the Greeks of two thousand years ago.

    As Alex says, "Math is the history of math." He notes that kids who are still studying math at the age of 18 have gotten all the way to the 1700's!

    These are not new facts for me. But somehow when he put together the fact that "math does not age" with the observation that in applied science "theories are undergoing continual refinement," it finally clicked for me.

    Computers Evolve faster than anything has ever evolved

    Computers evolve at a rate unlike anything else in human experience, a fact that I've harped on. I keep going back to it because we keep applying methods developed for things that evolve at normal rates (i.e., practically everything else) to software, and are surprised when things don't turn out well. The software methods that highly skilled software engineers use are frequently shockingly out of date, and the methods used for management (like project management) are simply inapplicable. Given this, it's surprising, and a tribute to human persistence and hard work, that software ever works.

    This is what I knew. It's clear, and seems inarguable to me. Even though I'm fully aware that the vast majority of computer professionals simply ignore the observation, it's still inarguable. The old "how fast do you have to run to avoid being eaten by the lion" joke applies to the situation. In the case of software development, all the developers just stroll blithely along, knowing that the lions are going to eat a fair number of them (i.e., their projects are going to fail), and so they concentrate on distracting management from reality, which usually isn't hard.

    What is now clear to me is the role played by math, computer science and the academic establishment in creating and sustaining this awful state of affairs, in which outright failure and crap software is accepted as the way things are. It's not a conspiracy — no one intends to bring about this result, so far as I know. It's just the inevitable consequence of having wrong concepts.

    Computer Science and Software Development

    There are some aspects of software development which are reasonably studied using methods that are math-like. The great Donald Knuth made a career out of this; it's valuable work, and I admire it. Not only do I support the approach when applicable, I take it myself in some cases, for example with Occamality.

    But in general, most of software development is NOT eternal. You do NOT spend your time learning things that were first developed in the 1950's, and then if you're good get all the way up to the 1970's, leaving more advanced software development from the 1980's and on to the really smart people with advanced degrees. It's not like that!

    Yes, there are things that were done in the 1950's that are still done, in principle. We still mostly use "von Neumann architecture" machines. We write code in a language and the machine executes it. There is input and output. No question. It's the stuff "above" that that evolves in order to keep up with the opportunities afforded by Moore's Law, the incredible increase of speed and power.

    In math, the old stuff remains relevant and true. You march through history in your quest to get near the present in math, to work on the unsolved problems and explore unexplored worlds.

    In software development, you get trapped by paradigms and systems that were invented to solve a problem that long since ceased being a problem. You think in terms and with concepts that are obsolete. In order to bring order to the chaos, you import methods that are proven in a variety of other disciplines, but which wreak havoc in software development.

    People from a computer science background tend to have this disease even worse than the average software developer. Their math-computer-science background taught them the "eternal truth" way of thinking about computers, rather than the "forget the past, what is the best thing to do NOW" way. Guess which group focuses most on getting results? Guess which group would rather do things the "right" way than deliver high quality software quickly, whatever it takes?

    Computer Science vs. Software Development

    The math view of history, which is completely valid and appropriate for math, is that you're always building on the past, standing on the shoulders of giants.

    The software development view of history is that while some general things don't change (pay attention to detail, write clean code, there is code and data, inputs and outputs), many important things do change, and the best results are obtained by figuring out optimal approaches (code, technique, methods) for the current situation.

    When math-CS people pay attention to software, they naturally tend to focus on things that are independent of the details of particular computers. The Turing machine is a great example. It's an abstraction that has helped us understand whether something is "computable." Computability is something that is independent (as it should be) of any one computer. It doesn't change as computers get faster and less expensive. Like the math people, the most prestigious CS people like to "prove" things. Again, Donald Knuth is the poster child. His multi-volume work solidly falls in this tradition, and exemplifies the best that CS brings to software development.
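
    To make that abstraction concrete, here is a minimal sketch of a Turing machine simulator (my own toy illustration, not anything from Knuth or from this post); the sample machine simply flips the bits of its input and halts:

```python
# A toy Turing machine: a finite control, a sparse tape, and a transition table.
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("did not halt within max_steps")

# (state, symbol read) -> (next state, symbol to write, head movement)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001_
```

    Notice that nothing in this little model depends on the speed or price of any particular computer, which is exactly why results about it never go stale.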

    The CS mind wants to prove stuff, wants to find things that are deeply and eternally true and teach others to apply them.

    The Software Development mind wants to leverage the CS stuff when it can help, but mostly concentrates on the techniques and methods that have been made possible by recent advances in computer capabilities. By concentrating on the newly-possible approaches, the leading-edge software person can beat everyone else using older tools and methods, delivering better software more quickly at lower cost.

    The CS mind tends to ignore ephemeral details like the cost of memory and how much is easily available, because things like that undergo constant change. If you do something that depends on rapidly shifting ground like that, it will soon be irrelevant. True!

    In contrast, the Software Development mind jumps on the new stuff, caring only that it is becoming widespread, and tries to be among the first to leverage the newly-available power.
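
    A small, hedged illustration of that difference in mindset (my own example, not from the post): the memory-frugal habit re-scans the data on every request, while the "use today's hardware" habit spends a little of the now-abundant RAM once and answers every later lookup instantly.

```python
# Toy example: trading memory for speed, a trade that only became a
# no-brainer once RAM got cheap and plentiful.
records = [(i, f"value-{i}") for i in range(1_000_000)]

def lookup_by_scanning(key, records):
    """Memory-frugal approach: re-scan the records on every call. O(n) per lookup."""
    for k, v in records:
        if k == key:
            return v
    return None

# "Leverage the hardware" approach: build an in-memory index once
# (a few tens of MB -- trivial on a modern machine); lookups become O(1).
index = dict(records)

print(lookup_by_scanning(999_999, records))
print(index[999_999])
```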

    The CS mind sits in an ivory tower among like-minded people such as the math folks, sometimes reading reports from the frontiers, mostly discarding the information as not changing the fundamentals. The vast majority of Software Development people live in the comfortable cities surrounding the ivory towers doing things pretty much the way they always have ("proven techniques!"). Meanwhile, the advanced Software Development people are out there discovering new continents, gold and silver, and bringing back amazing things that are highly valued at home, though not always at first, and often at odds with establishment practices.

    Qualifications

    Yes, I'm exaggerating the contrast between CS and Software Development. Sometimes developers are crappy because they are clueless about simple concepts taught in CS intro classes. Sometimes great CS people are also great developers, and sometimes CS approaches are hugely helpful in understanding development. I'm guilty of this myself! For example, I think the fact that computers evolve with unprecedented speed is itself an "eternal" (at least for now) fact that needs to be understood and applied. I argue strongly that this fact, when applied, changes the way to optimally build software. In fact, that's the argument I'm making now!

    Nonetheless, the contrast between CS-mind and Development-mind exists. I see it in the tendency to stick to widely used, accepted practices that are no longer optimal, given the advances in computers. I see it in the background of developers' preferences, attitudes and general approaches.

    Conclusion

    The problem in essence is simple:

    Math people learn the history of math, get to the present, and stand on the shoulders of giants to advance it.

    Good software developers master the tools they've been given, but ignore and discard the detritus of the past, and invent software that exploits today's computer capabilities to solve today's problems.

    Most software developers plod ahead, trying to apply their obsolete tools and methods to problems that are new to them, ignoring the new capabilities that are available to them, all the while convinced that they're being good computer science and math wonks, standing on the shoulders of giants like you're supposed to do.

    The truly outstanding people may take computer science and math courses, but when they get into software development, figure out that a whole new approach is needed. They come to the new approach, and find that it works, it's fun, and they can just blow past everyone else using it. Naturally, these folks don't join big software bureaucracies and do what everyone else does. They somehow find like-minded people and kick butt. They take from computer science in the narrow areas (typically algorithms) where it's useful, but then take an approach that is totally different for the majority of their work.

  • Math and Computer Science in Academia

    Math and music are incredibly inter-related, as has been understood at least since Pythagoras. But they are never studied in a single academic department. Math and music are arguably more intimately bound than math and computer science. But math and music are never in the same department, while math and computer science frequently are. Hmmm….

    Math and Computer Science are joined at the hip in Academia

    Math and Computer Science are so intimately related in academia that they are frequently part of the same department. This is true at elite institutions like Caltech.
    Mathcs caltech

    Math and Computer Science are in the same department at private liberal arts schools, too, like Wesleyan.
    Mathcs wesleyan

    They're a single department at major state universities, like Rutgers.
    Mathcs rutgers

    Same thing at lesser state schools. Here's how it goes at Cal State East Bay.
    Mathcs CSEB

    I make no argument that this is universal. Don't need to. If you search like I did, you'll find that putting math and computer science in a single department is a common practice.

    Why are Math and Computer Science so Academically Intimate?

    Most people seem to think that math and computer science are pretty much the same thing. Consider this:

    • Most "normal" people who try either of them don't get very far.
    • The people who are way into either of them are really nerdy.
    • If you're good at one of them, there's a good chance you'll do well at the other.
    • They are incredibly detail-oriented. They're full of symbols and strange languages.
    • What you do doesn't seem to be physical at all. What are you doing while programming or doing math? Mostly staring into space or scribbling strange symbols, it seems.
    • You can write programs that do math, and math applies broadly to computing.

    Meanwhile, there are other remarkably similar things that don't end up in the same department. Consider the "life sciences." They all have loads of things in common. Everything they all study starts life, develops, lives for a while, maybe has offspring, and dies. DNA is intimately involved. Oxygen and carbon dioxide play crucial roles. But since when have you ever seen a department of botany and zoology? Like never, right? In the humanities it's just as extreme. Ever hear of a department of French and German? Academics already fight enough among themselves without that…

    Academics clearly think that math and computer science aren't just similar or highly related. If they were merely that, they'd be treated the way languages or the life sciences are. Instead, a broad spectrum of academics think they're so interwoven that there are compelling reasons for studying them together. Thus a single department that has them both.

    Math and Computer Science, a Marriage made in ????

    It's a common practice for math and computer science to be studied together. Obviously, most people have no trouble with the concept. Of all the things to question or worry about in the world, this seems pretty low on the list.

    I would like to change this. I'd like to cause trouble where there is none today — or rather, I'd like to EXPOSE the deep-seated, far-reaching, trouble-causing consequences of the fact that everyone thinks it's quite alright that math and computer science are thought of as pretty much two sides of the same coin. In fact, I will argue that the math-computer-science marriage is just fine for math — but the root cause of a remarkable variety of intractable problems that plague software development.

    Note that I did a quick shift there. I have no problem with math and computer science being together. They kinda belong together. My problem is that everyone thinks that you study computer science in school so that you're qualified to do software development after graduating. And that software development shops require CS degrees, and pay more for advanced degrees in CS, on the theory that if some is good, more must be better.

    I will flesh this out and explain why it's the case in future posts. But I thought throwing down the gauntlet was worth doing. Or at least fun!

  • Lessons for Software from the History of Scurvy

    Software is infected by horrible diseases. These awful diseases cause painfully long gestation periods requiring armies of support people, after which deformed, barely-alive products struggle to be useful, live crippled existences, and are finally forgotten. Software that functions reasonably well is surprisingly rare, and even then typically requires extensive support staffs to remain functional.

    Similarly, sailors suffered from the dread disease of scurvy until quite recently in human history. The history of scurvy sheds surprising light on the diseases which plague software. I hope applying the lessons of scurvy will lead to a world of disease-free, healthy software sooner than would otherwise happen.

    Scurvy

    Scurvy is caused by a lack of vitamin C. It's a rotten disease. First you get depressed and weak. Then you pant while walking and your bones hurt. Next your skin goes bad,

    378px-A_case_of_Scurvy_journal_of_Henry_Walsh_Mahon
    your gums rot and your teeth fall out.

    Scorbutic_gums
    You get fevers and convulsions. And then you die. Yuck.

    The Impact of scurvy

    Scurvy has been known since the time of the Egyptians and Greeks. It's been estimated that between 1500 and 1800 it killed two million sailors. In 1520, for example, Magellan lost 208 out of a crew of 230, mainly to scurvy. During the Seven Years' War, the Royal Navy reported that it conscripted 184,899 sailors, of whom 133,708 died, mostly due to scurvy. And even though British sailors had long been scurvy-free, expeditions to the Antarctic in the early 20th century were still plagued by it.

    The Long path to Scurvy prevention and cure

    The cure for scurvy was discovered repeatedly. In 1614 the Surgeon General of the East India Company published a book containing a cure. Another book with a cure appeared in 1734. Some admirals kept their sailors healthy by providing them daily doses of fresh citrus. In 1747 the Scottish naval surgeon James Lind proved (in the first-ever clinical trial!) that scurvy could be prevented and cured by eating citrus fruit.

    JamesLind

    Finally, during the Napoleonic Wars, the British Navy implemented the use of fresh lemons and solved the problem. In 1867, the Scot Lachlan Rose invented a method to preserve lime juice without alcohol, and daily doses of the new product were soon standard for sailors, which is how "limey" became synonymous with "sailor."

    B_scurvy

    Competing Theories and Establishment Resistance

    The effective cures that had been known and used by some people for centuries did not exist in a vacuum. There were competing theories, with proposed cures that included urine mouthwashes, sulphuric acid and bloodletting. As recently as 100 years ago, the prevailing theory was that scurvy was caused by "tainted" meat. How could this be?

    We've seen this movie before. Over and over again. I told the story of Lister and the discovery of antiseptic surgery — and the massive resistance to the new method by the leading authorities at the time.

    Software Diseases

    This brings us back to software. However esoteric and difficult it may be, software is a human endeavor: people create, change and use software and the devices it powers. Like any human endeavor, some of what happens is because of the subject matter, but a great deal is due to human nature. People are, after all, people, regardless of what they do. Patients were killed for lack of antiseptic surgery — and the surgical establishment fought it tooth and nail. Millions of sailors were killed by scurvy, when a cure had been known, practiced and proved for centuries. Why would we expect any other reaction to cures for software diseases, when the "only" consequence of the diseases is explosive growth in the time, cost and risk to build and maintain software, which is nonetheless crappy and late?

    Is there a general outcry about this dismal software situation? No! Why would anyone expect there would be? Everyone thinks it's just the way software is, just like they thought scurvy in sailors and deaths after surgery were part of life. Government software screws up,

    Healthcare-gov-wait
    software from major corporations is awful,

    Hertz fail

    software from cool new social media companies is inexcusably bad. Examples of bad software could be listed at endless, boring, tedious, like-forever length.

    Toward Healthy Software Development

    If I had spent my life in the normal way (for a software guy), I wouldn't be on this kick. But I didn't and I am on this most-software-sucks kick. Early on, I had enough exposure to large-group software practices to convince me that I wanted none of it. I'd rather actually get stuff done, thank you very much. Now, having looked at many young software ventures over a couple of decades, I see the patterns clearly.

    I have described the main sources of the problems. I have described the key features of disease-free software development. I have explained the main sources of the resistance to a cure, for example in this post. And I have no illusion that things will change any time soon.

    It will sure be nice when the pockets of healthy software excellence that I see proliferate faster, and when an anti-establishment consensus consolidates and gains visibility sooner. In the meantime, there is good news: groups that use healthy, disease-free software methods will have a massive competitive advantage over the rest. It's like ninjas vs. a collection of retired security guards. It's just not fair!

  • Computer History

    In software, history is ignored and its lessons spurned. What little software history we are taught is often simply wrong. Everyone who writes or uses software pays for this, and pays big.

    But we know about history in software — there's Babbage, the ENIAC, etc.

    Yes, we've all heard about various people who are said to have invented modern computing. A shocking amount of what we are taught is WRONG.

    Babbage is a case in point. People just love to go on and on about him. There are problems, though. I'll just mention a couple.

    220px-Charles_Babbage_-_1860

    One problem is that his machines simply didn't work, even after decades of work, and huge amounts of skilled help and money. He must have known they wouldn't; although he was personally wealthy, it was other people's money he spent on his famous dalliance.

    Another problem is that his best idea wasn't his. The idea of using punched cards

    220px-Jacquard.loom.cards
    to contain the program was invented in France and was a key aspect of the Jacquard Loom — a machine that pre-dated all his work, and a machine that actually worked and was in widespread use.

    The ENIAC is another good example of what appears to be the typical pattern in computing: someone invents a good thing and makes it work, and then someone else steals it, takes credit for it and tries to cover up the theft, often without delivering results as good as the original.

    250px-Eniac

    If you only read the standard literature, you would still be convinced that the ENIAC and its inventors were giants of the field. Once you read everything, you discover that reality is more interesting. It turns out that the inventors of the ENIAC were "inspired" by prior inventions, much like Babbage and the Jacquard Loom. In this case, the inspiration was the Atanasoff-Berry Computer.

    ABCdrawing
    Here is an excerpt describing the ruling in the patent dispute that settled the issue:

    Judge Larson had ruled that John Vincent Atanasoff and Clifford Berry had constructed the first electronic digital computer at Iowa State College in the 1939-1942 period. He had also ruled that John Mauchly and J. Presper Eckert, who had for more than twenty-five years been feted, trumpeted, and honored as the co-inventors of the first electronic digital computer, were not entitled to the patent upon which that honor was based. Furthermore, Judge Larson had ruled that Mauchly had pirated Atanasoff's ideas, and for more than thirty years had palmed those ideas off on the world as the product of his own genius.

    Other fields don't need history — why should software?

    Not true. Other fields are saturated with history.

    Politicians study history in general and the last election in particular. Fiction writers frequently read fiction, current and historic. Generals study old battles for their lessons; even today at West Point, they read about the Civil War. Learning physics is like going through the history of physics, from Galileo and Newton and through Planck and Einstein to the present. Even the terms used in physics remind you of its history: hertz, joules and Brownian motion.

    Software, by contrast, is almost completely ahistorical. Not only are most of the people involved uninterested in what happened ten years ago; even the last project is unworthy of consideration – it’s “history.”

    Consequences of the lack of history

    War colleges study past wars for the highly pragmatic purpose of finding out how they were won or lost. What was it the winner did right? Was it better weapons? Better strategy? Better people? Some combination? And how exactly did the loser manage to lose? Was it a foregone conclusion, or was defeat snatched from the jaws of victory? People who conduct wars are serious about their history — they want to win!!

    In software, no one is interested in history. Everyone thinks they know the "right" way to build software, and thinks that the only possible source of loss is failing to do things the "right" way — the requirements weren't clear; the requirements were changed; I wasn't given enough time to do a proper design; there was no proper unit testing; the lab for testing was insufficiently realistic. The list of complaints and excuses is endless, and their net effect is always the same: crappy software and whining: I need more people, more time and more money. Because studying history is so rare, few are exposed to the software "wars" that are fought and won by teams that didn't follow their rules.

    There is only one conclusion to be drawn: software people would rather lose with lots of excuses than win by doing things the "wrong" way. Ignoring history is a great way to stay in this comfortable cocoon.

    When software history becomes as important a part of computer science education as physics history is of physics, we'll know it's approaching credibility. Until then, everything about computer science, education and practice will continue to be a cruel joke.
