Category: Software Development

  • Coupling in Software: Loose or Tight?

    Some of the biggest mistakes in software architecture do not involve making the wrong choice; they involve failing to consider an important choice altogether, simply assuming an approach without ever weighing the alternatives. A great example of this is whether software components should be "loosely coupled" or "tightly coupled."

    What is Coupling?

    "Coupling" is closely related to layering in software, which I've discussed elsewhere. Given that two bodies of software are going to "talk" with each other, how "tight" is the communication layer?

    There is a spectrum of how tight the coupling is.

    At one end of the spectrum is "tight" coupling, which is like two people who are physically together, talking with each other.
    Just a little in from that end, two people are not physically together but still communicate in real time, as in a phone conversation.

    At the far loose end of the spectrum is when the people are separated by space, time and perhaps other factors. A newspaper is an example of loose coupling in this way, particularly when you throw in letters to the editor.
    The more you stretch the time and the less personal the communication, the looser the coupling. Notice that when you stretch time, normally there isn't just a communications medium involved, but also a storage medium of some kind, like the newspaper.

    Tight Coupling in Software

    When two computer modules are tightly coupled, they communicate with each other in real time. If they're in the "same room," the communication is simply via an interface of some kind. If they're not, they use some kind of API involving networking; RESTful APIs are a typical current example of this. In any case, the communication is pretty much real time, like a phone call.
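
    To make that concrete, here's a minimal sketch in Python (the URL and field name are made up for illustration) of what tight coupling looks like in code: the caller blocks until the other side answers, just like waiting for someone to pick up the phone.

    ```python
    import requests  # third-party HTTP library, assumed installed

    def get_account_balance(account_id: str) -> float:
        # Tight coupling: call the other component directly and block
        # until it answers. If it is down or slow, so are we.
        response = requests.get(
            f"https://bank.example.com/api/accounts/{account_id}/balance",
            timeout=5,  # a real-time conversation: don't wait forever
        )
        response.raise_for_status()
        return response.json()["balance"]
    ```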

    Loose Coupling in Software

    Loose coupling is everywhere in software. Everyone experiences a common form of it with e-mail. You send your e-mail and it sits around until the recipient grabs it from the mail server and reads it. Queuing systems are a bit tighter, but still pretty loose. Databases and document repositories are also a form of loose coupling.
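
    And here's a matching sketch of loose coupling, with Python's standard-library queue standing in for a real mail server or message broker: the sender drops off a message and moves on, and the receiver picks it up whenever it gets around to it.

    ```python
    import queue

    mailbox = queue.Queue()  # stands in for a mail server or message broker

    def send(message: str) -> None:
        # Loose coupling: enqueue and move on. The recipient doesn't have
        # to be listening right now -- the queue is the storage medium.
        mailbox.put(message)

    def read_later() -> None:
        # The recipient drains the mailbox on its own schedule.
        while not mailbox.empty():
            print("Received:", mailbox.get())

    send("Quarterly report attached.")
    send("Lunch on Friday?")
    read_later()  # could run seconds, minutes, or days later
    ```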

    Which is the best Coupling: Tight or Loose?

    You know the answer: it depends on the application. If you're standing at an ATM waiting for money, you demand tight coupling, and so does the bank that holds your money. But e-mail wouldn't be e-mail unless it was loosely coupled. The whole point is that the other guy doesn't need to be sitting around anxiously awaiting your e-mail.

    If the application itself doesn't determine the answer, there are other factors that can tip the scale. Tightly coupled systems are generally harder to upgrade and test, because both sides need to be upgraded and tested together, in real time. So when you can get away with loose coupling, it makes the communicating components more independent of each other and makes life easier all around.

    Loose and Tight Coupling in the Literature

    Warning: the computer literature defines tight and loose coupling in different ways than I have here. I'm talking about a rather different concept, one that IMHO deserves more attention than it gets.

    Conclusion

    While you're thinking about layering, you would benefit from thinking about coupling at the same time. A clever, appropriate coupling strategy, using loose and tight to greatest advantage, can help unleash software efforts and make everything go faster.


  • What Can Software Learn from Steamboats and Antiseptic Surgery?

    Software is among the most advanced, rapidly changing fields of technology. Only the "kids" who grew up with the latest techniques seem to be able to master them. At the same time, really bad ideas spread through software groups like the plague; they take hold and resist cure, in spite of producing terrible results. How can we make sense out of a field that advances rapidly and resists change at the same time?

    History

    As I've pointed out, software people are strongly averse to learning about computer history. In some fields (e.g. physics), the very terms used are named after historical figures; in others, history is treated with reverence (e.g., Santayana: "Those who cannot remember the past are condemned to repeat it."); in software, by contrast, we use the phrase "that's history" to dismiss anything that happened in the past as obviously irrelevant to the present.

    I think studying history is the only way to understand the present, software included. I think we can understand the strange software phenomenon of rapid change combined with resistance to change by taking two examples from history: one in which new methods in technology were rapidly accepted by all concerned parties, and the other in which clearly superior new methods were resisted for many long years by the leading people in the field.

    Steamboats

    It would be great if software advances were adopted quickly, like the way steam technology rapidly overcame wind as a method for moving boats.

    The displacement of wind by steam is clearly laid out in T.J. Stiles's excellent biography of one of the major figures in the transformation, Cornelius Vanderbilt.

    Vanderbilt started in business by running a sailing-boat "taxi" service from Staten Island to Manhattan. He transitioned into the rapidly emerging steamboat transportation business, not only as a captain and owner, but (surprisingly to me) as an engineer.

    The public took to the new steamboats quickly. The reason is clear: speed. There was, at the time, no quicker way to get from point A to point B if there were a water route between them. The speed of the boat was immediately obvious to the simple observer, and easy to verify by noting departure and arrival times. To prove whose boat was the fastest, there were races.

    Vanderbilt's steamboats were judged by a clear standard: whose was the fastest? The criteria were easy to measure.

    Antiseptic Surgery

    The benefits of antiseptic surgery, as introduced by Joseph Lister, were clear: instead of a large number of patients dying of infection after surgery, they would live. Ego clearly played a role in resisting the adoption of the new method. But, to be fair, there is another important factor.

    What made surgery different from steamboats? They were both major technical advances. They both involved major changes in what you did and how you did it — more so with boats than with surgery! So why did steam catch on quickly, even though it required whole new boats of radically different design and operation, while the antiseptic method was resisted for decades, even though it was subsidiary to the surgery itself, which was left largely unchanged?

    Boats and Surgery

    The fact that steamboats were faster than sailboats was easy and unambiguous to measure, while the surgery outcomes were difficult and ambiguous to measure.

    The time of each boat trip is easy to measure. It's just a time duration. When you watch two boats, anyone can see which one moves more quickly. By contrast, every surgery is different. The patient is different, the trouble being fixed is different, and the ultimate outcome may not be determined for weeks. Many surgery patients continued to die with antiseptic methods because it wasn't the only factor influencing the outcome. Furthermore, excellent surgeons who were dirty could save patients that would have been killed by crappy surgeons who happened to use antiseptic methods, since after all not every patient got infected.

    In retrospect, it's completely maddening that surgeons failed to be swayed by the arguments and evidence in favor of Lister's carbolic acid methods, and ego certainly played a role. But the case of the rapid acceptance of the more radical change to steam in boats makes it clear that something more than ego is at work here. Simply put, the question is: how comparable and measurable are the outcomes of the new technology? With steamboats, you can tell the difference in seconds with the naked eye, and verify it with a stopwatch. No arguments. With surgery, the cases are not clearly and unambiguously comparable, statistics are needed, and there is major variability. There is room for arguments.

    Software, steamboats and antiseptic surgery

    Is any given advance in software like moving from sailboats to steamboats, or is it more like adding antiseptic methods to surgery?

    That's easy: unlike straightforward competitions like races, every software project is different. In a race, the competitors take off from a starting line at the same time, and whichever crosses the finish line first is the winner. Simple! But in the real world of software, every project is different; you can always point to differences in requirements, conditions, deployment, or other things to explain why this project took more time and resources than that project. It sounds like software is kind of like surgery!

    Conclusion

    It is my personal experience and judgment that ego can play a significant role in explaining why many software groups stay mired in the same old methods, getting the same lousy results, year after year. But I think that if software projects were as comparable as transportation schedules, the evidence would simply force more rapid change, like it or not, on intransigent software groups. But because of how genuinely challenging it is to compare software projects to each other, it is at least understandable how only the most enterprising and eager-to-be-the-best software groups seek out and adopt the very best methods. 

  • A Lesson from Joseph Lister: Ego, the Killer of Software Projects

    In the 1880's, American surgeons lost a large fraction of their patients to post-operative infection, including assassinated president James A. Garfield. They simply refused to adopt the antiseptic methods pioneered by Joseph Lister, although those methods were thoroughly documented and proven. In the 2010's, programmers fail to effectively complete a large fraction of their software projects, including ones that are essential to the survival of the organizations that employ them. They, and the educators who train them, simply refuse to adopt effective methods of modern software development. What is the common thread? Ego.

    Joseph Lister

    Lister was far more significant than the source of the name for the mouthwash Listerine.

    A surgeon at the University of Glasgow, he read the work of Louis Pasteur, who showed that rotting and fermentation were due to micro-organisms. After confirming Pasteur's results with his own experiments, Lister experimented with antiseptic techniques for treating wounds.

    Lister found that carbolic acid solution swabbed on wounds remarkably reduced the incidence of gangrene. In August 1865, Lister applied a piece of lint dipped in carbolic acid solution onto the wound of an eleven-year-old boy at Glasgow Infirmary, who had sustained a compound fracture after a cart wheel had passed over his leg. After four days, he renewed the pad and discovered that no infection had developed, and after a total of six weeks he was amazed to discover that the boy's bones had fused back together, without the danger of suppuration.

    Lister wrote papers, a book, and did his best to spread the word of his life-saving technique.

    The Doctors' Response

    You would think that doctors would have quickly and enthusiastically adopted the antiseptic method to help improve their awful survival rates. But you know what actually happened. In her excellent book, Candice Millard tells the story:

    Although the results were dramatic — the death rate among Lister's surgical patients immediately plummeted — antisepsis had provoked reactions of deep skepticism, even fury. In England, Lister had been forced repeatedly to defend his theory against attacks from enraged doctors. "The whole theory of antisepsis is not only absurd," one surgeon seethed, "it is a positive injury." Another charged that Lister's "methods would be a return to the darkest days of ancient surgery."

    Things got better in Europe. But not in the US.

    By 1876, Lister's steady and astonishing success had silenced nearly all of his detractors at home and in Europe. The United States, however, remained inexplicably resistant. Most American doctors simply shrugged off Lister's findings, uninterested and unimpressed. Even Dr. Samuel Gross, the president of the Medical Congress and arguably the most famous surgeon in the country, regarded antisepsis as useless, even dangerous. "Little, if any faith, is placed by any enlightened or experienced surgeon on this side of the Atlantic in the so-called carbolic acid treatment of Professor Lister," Gross wrote imperiously.

    James A. Garfield

    Garfield was one of the most extraordinary men ever elected president. He was shot in the back four months after being inaugurated in 1881, which is the event that brings us to Lister.

    In short, had he been under the care of English or European doctors, he would almost certainly have survived the attack.

    The Medical Treatment of Garfield

    This site provides a summary of what happened:

    The first doctor on the scene administered brandy and spirits of ammonia, causing the president to promptly vomit. Then D. W. Bliss, a leading Washington doctor, appeared and inserted a metal probe into the wound, turning it slowly, searching for the bullet. The probe became stuck between the shattered fragments of Garfield's eleventh rib, and was removed only with a great deal of difficulty, causing great pain. Then Bliss inserted his finger into the wound, widening the hole in another unsuccessful probe. It was decided to move Garfield to the White House for further treatment.

    Leading doctors of the age flocked to Washington to aid in his recovery, sixteen in all. Most probed the wound with their fingers or dirty instruments. Though the president complained of numbness in the legs and feet, which implied the bullet was lodged near the spinal cord, most thought it was resting in the abdomen. The president's condition weakened … It was decided to move him by train to a cottage on the New Jersey seashore.

    Shortly after the move, Garfield's temperature began to elevate; the doctors reopened the wound and enlarged it hoping to find the bullet. They were unsuccessful. By the time Garfield died on September 19, his doctors had turned a three-inch-deep, harmless wound into a twenty-inch-long contaminated gash stretching from his ribs to his groin and oozing more pus each day. He lingered for eighty days, wasting away from his robust 210 pounds to a mere 130 pounds. The end came on the night of September 19. Clawing at his chest he moaned, "This pain, this pain," while suffering a major heart attack. The president died a few minutes later.

    A whole nation of doctors simply ignored the evidence of the effectiveness of Lister's methods. Lister himself came to America and lectured on them, and American doctors were well aware of the methods. Full of themselves and comfortable in their ways, they continued wantonly killing patients who could have lived — including President Garfield.

    Antiseptic techniques and Software

    What can possibly account for this behavior? I know one answer, because I see the equivalent in software groups all too frequently. Put simply, it's ego.

    Each group, and particularly its leader, is convinced that they're doing the best possible job that can be done, against steep obstacles. They feel starved for resources, pressed for time, but nonetheless performing at a very high level. They are educated, experienced, and feel they've made the best possible choices of tools, methods, designs and architectures.

    Somehow, the leader or someone in the group hears about something new that's supposed to be really effective for tasks like theirs. Maybe they hear about it from some know-it-all who's somehow associated with investors, or someone else they are "supposed" to listen to.

    In my experience, the reaction of the software group is nearly identical to that of the American doctors. Whatever the new thing (tool, method, technique, design, architecture), it can't possibly be as good as what they're already doing. Even when they hear stories about great results, they are deeply skeptical, and are convinced that their own methods are still better. Period. Why, to consider the possibility that someone else is doing things better than they are implies that they are not the best at what they do. Impossible!

    What explains this profound lack of interest? While other factors are sometimes involved (I'll explore them another time), it's hard to discount the impact of human ego, pure and simple.

  • Layers in Software: Fuss and Trouble without Benefit

    Most everyone in software seems to accept that layers are a good thing. In general, they're not. They take time and effort to build, they are a source of bugs, and make change more difficult and error-prone.

    What are layers in software?

    It's possible to get all sophisticated with this, but let's keep it simple. Imagine that your application stores data, presents some of it to users, the users change and/or add data, and the application then stores it again. Everyone thinks, OK, we've got the UI, the application and the storage. That's three layers to create, and for data to pass through and be processed. This is the classic "three-tier architecture," usually implemented with three tiers of physical machines as well.

    Everyone knows you use different tools and techniques to build each layer. You'll use something web-oriented involving HTML and javascript for the UI, some application language for the business logic, and probably a DBMS for the storage. Each has been adapted to its special requirements, and there are specialists in each layer. Everyone agrees that this kind of layering is good: each specialist can do his/her thing, and changes in each layer can be made independent of the others. We end up with solid, secure storage, a great UI and business logic that isn't dependent on the details of either.

    More layers!

    If layers are good, more layers must be better, right? It's definitely that way with cakes, after all. We know layer cakes are in general wonderful things. In some places, having 12 layers or more is what's done.

    It's not unusual for application environments to have six layers or even more. Among the additional layers can be: stored procedures in the database; a rules engine; a workflow engine; a couple layers in the application itself; an object-relational mapper; web services interfaces; layers for availability and recovery; etc.

    It's hard to find anyone say this isn't a good thing. Imagine a speaker and a group of software developers. He says "Motherhood!" Everyone smiles and nods. He says "Apple pie!" Everyone smiles and licks their lips. He says "Layer cake!" Everyone can picture it, perhaps remembering blowing out the candles on just such a cake as a kid, opening wide and biting into a nice big piece of birthday layer cake. He says "Software should be properly layered!" Everyone gets a look that ranges from professional to sage and nods in agreement at such a statement of the obvious.

    Layers are good, aren't they?

    Layer Cake, yes; Software Layers, uh-uh

    In a layer cake, cake alternates with icing, whether there are 3 layers or 15. There's a way to make the icing and a way to make the cake, and usually one person makes both, assuring that a wonderful, integral layer cake is the result.

    It's a whole different story in software. Even though the data flowing down from the top (UI) to bottom (storage) may be the same (date, name, amount, etc.), each layer has its own concerns and pays attention to different aspects of that data. Here's the real rub: when a change is made to data, far from being isolated, each component that touches the data has to be changed in different but exactly coordinated ways. The data is even organized differently — that's why ORMs exist, for example.

    One of the fundamental justifications for thinking layers are good is separation of concerns: you can change each component independently of the others (the same fraudulent justification that lies at the heart of object-orientation, BTW). But this is just wrong (except in trivial cases)! Any time you want to add, remove or non-trivially change a field, all layers are affected. Each specialist has to go to each place the data element is touched and make exactly the right change.

    But it gets worse. Because each layer has its own way of representing data, there are converters that change the data received from "above" to this layer's preferred format, and then when the data is passed "down" it is converted again. If you are further saddled with web services or some other way to standardize interfaces, you have yet another conversion, to and from the interface's preferred data representation. Each one of these conversions takes work to build and maintain, takes work to change whenever a data element is changed, and can have bugs.

    Think it can't get worse? It can and does! Each group in charge of a layer feels the need to maintain the integrity of "their" layer. Those "foreign" layers — they're so bad — they do crappy work — we better protect ourselves against the bad stuff they send us! So we'd better check each piece of data we get and make sure it's OK, and return an error if it's not. Makes sense, right? Except now you have error checking and error codes to give on each piece of data you receive, and when you send data, you have to check for errors and codes from the next layer. Multiplied by each layer. So now when you make a change, just think of all the places that are affected! And where things can go wrong!

    Here's the bottom line: every layer you add to software is another parcel of work, bugs and maintenance. With no value added! Take a simple case, like moving to zip plus 4. Even in a minimal 3-layer application, 3 specialists have to make exactly the right changes in each place the field is received, in-converted, error checked, represented locally, processed, out-converted and sent, with code to handle errors from the sending.
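
    To make the cost concrete, here's a deliberately simplified Python sketch (all the names are hypothetical) of what one field looks like once it has been declared, converted and checked in each of just three layers. Now imagine widening it to zip plus 4.

    ```python
    # UI layer: the field arrives as a string from an HTML form.
    def parse_zip_from_form(form: dict) -> str:
        zip_code = form["zip"]               # the UI's name for it
        if len(zip_code) != 5:               # UI-side error checking
            raise ValueError("zip must be 5 digits")
        return zip_code

    # Application layer: its own representation, its own checking.
    class Customer:
        def __init__(self, postal_code: str):   # the app's name for it
            if not postal_code.isdigit():        # app-side error checking
                raise ValueError("bad postal code")
            self.postal_code = postal_code

    # Storage layer: converted again, under yet another name.
    def to_db_row(customer: Customer) -> dict:
        return {"ZIP_CD": customer.postal_code}  # the DB's name for it

    # Moving to zip plus 4 means visiting every one of these spots, in
    # three codebases, owned by three specialists -- and this is the
    # *minimal* three-layer case.
    ```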

    In Software, the Fewer Layers the Better

    I'm hardly the first person to notice this. Why is the Ruby on Rails framework so widely considered to be highly productive? Because it exemplifies the DRY principle, specifically because it eliminates the redundancy and conversion between the application and storage layers. What Rails is all about is defining a field, giving it a name, and then using it for storage and application purposes! Giving one field a column name in a DBMS schema and another name in a class attribute definition adds no value. What a concept! (Although far from a new one. Several earlier generations of software had success for similar reasons, for example, PowerBuilder with its DataWindow.)
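
    Contrast that with the define-it-once approach. This isn't Rails itself, just a rough Python sketch of the idea: derive both the storage schema and the application attribute from a single field definition.

    ```python
    from dataclasses import dataclass, fields

    @dataclass
    class Customer:
        # One definition serves both the application code and the
        # storage schema: no per-layer renaming, no converters.
        name: str
        zip_code: str

    def create_table_sql(cls) -> str:
        # Derive the schema from the single definition
        # (toy type mapping: everything becomes TEXT).
        cols = ", ".join(f"{f.name} TEXT" for f in fields(cls))
        return f"CREATE TABLE {cls.__name__.lower()} ({cols})"

    print(create_table_sql(Customer))
    # -> CREATE TABLE customer (name TEXT, zip_code TEXT)
    ```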

    It's simple: In cakes, more layers is good. In software, more layers is not good.

  • The Nightmare of Software Estimation

    We make estimates of effort all the time. No big deal. But estimating software projects is a big problem, often a nightmare. Why should software be uniquely difficult?

    Estimating effort is a reasonable thing to want

    How long will it take you to get to the meeting? When can I have the summary report? How long will it take to crank out the press release?

    We ask for and get estimations of effort all the time. It's a reasonable thing to want. If you think something will take a day and the person tells you it will take a week, probably something is wrong, and it's helpful to expose the issue and resolve it.

    Estimating software effort seems like a reasonable thing to want

    If estimating effort before you start is generally reasonable, it would seem to be an even better idea when it comes to software. After all, software projects tend to be expensive and take a long time. Worse, they have a nasty habit of running over time, over budget, and then not even giving you what you expected. So it would appear that creating detailed requirements and then providing careful time and effort estimates would be even more advised in software than in other fields. At least then we can track our progress and have a chance to fix things when they go off track.

    Mainstream thinking in software management thoroughly endorses this view. The highest virtue a software effort can have, in this view, is being "in control." A software project is deemed to be "in control" when it is working its way up to creating estimates (i.e., gathering requirements, etc.), or logging work against estimates already made and tracking how well the estimates are being met.

    There is a body of work on this subject that is both broad and deep. You may be impressed by this, but actually it's an indication of how difficult giving good estimates is, and how rarely it's accomplished. When projects go badly, get "out of control," the blame is usually placed on the process not being taken seriously enough. Inevitably, people with experience respond to bad experiences by laying on the process deeper and thicker, and making sure the estimates are long enough to account for all sorts of unforeseen events. In other words, even more time is spent on things other than actual programming, and the estimates are padded like crazy — even more so than last time.

    This, in a nutshell, is why programming groups get larger and larger; accomplish less and less; and never do anything new. It's why startups with bright programmers, unencumbered by the ball-and-chain of estimation and process, usually are the ones that accomplish the cool new stuff that programming groups with literally hundreds more programmers cannot.

    Software Estimations are Different

    The facts on the ground are that estimate-based programming leads to no good. How can we make sense out of this? Is programming really that different? Yes, programming really is way different than all those things that are reasonable to estimate. Here are some of the ways to understand this:

    Huge variations in Productivity

    Software estimates are in terms of the work — not the person doing the work. But there are HUGE variations in the productivity of different programmers, more than 10X. So when you estimate that a given task will take ten days, is that 10 days of the 1X-er's time (the 10X-er will get it done in a day), or 10 days of the 10X-er's time (the 1X-er will take 100 days, and the 10-20% of programmers who are 0X-er's will never deliver)?
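
    A toy calculation (the numbers are invented for illustration) shows how hollow a task-only estimate is once you ask whose days we're talking about:

    ```python
    # A "10-day" task, estimated without saying whose 10 days they are.
    task_estimate_days = 10  # implicitly calibrated to an average (1X) programmer

    productivity = {"10X-er": 10.0, "1X-er": 1.0, "0X-er": 0.0}

    for label, factor in productivity.items():
        if factor == 0:
            print(f"{label}: never delivers")
        else:
            print(f"{label}: {task_estimate_days / factor:g} days")
    # 10X-er: 1 day, 1X-er: 10 days, 0X-er: never delivers.
    # The same "estimate" spans everything from a day to forever.
    ```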

    Skeptical of this huge variability? Here's a summary of the literature. McConnell concludes, "…the body of research that supports the 10x claim is as solid as any research that’s been done in software engineering."

    Unknowns

    What kills you with any estimate are the unknowns. The newer the kind of software you're developing, the greater the unknowns tend to be. Worst of all are the things you don't know you don't know. If estimating resolved the unknowns, I would warm up to it — but in practice, estimating tends to make everyone feel better about how "in control" the project is, and delay the discovery of the unknowns until a later time, when they are even less convenient, a.k.a. "nightmares."

    How do you discover unknowns sooner? Dive in and start doing the work sooner!

    Requirements Evolution

    You can only estimate if you know what you're building. The more exact the requirements, the better the estimate — or so the theory goes. It's why everyone who's big into estimating is even bigger into requirements, and all sorts of squishy-sounding things like getting all the "stake-holders" on board.

    In the real world, requirements change. Call it "evolution" if you want, but it always happens — worst of all is when the real requirements in the real world are evolving while the project's requirements remain static — in the name of maintaining the "integrity" of the software project, real world be damned! Only in academia (or silo-ed software departments of bureaucracies) are requirements immune to outside-world changes, discoveries made during the project, etc.

    Learning by Doing

    All programming starts with thinking. Thinking about the existing code, the tools, the architectural structure, the functionality that needs to be built and where and how you'll accomplish that functionality in the code. An integral part of this process is exploration — filling in your knowledge of missing pieces as you think things through. In fact, it's hard to separate exploration from thinking; as you think, you discover holes in your knowledge, which you go off and fill, which helps flesh out and evolve your thinking. The more clear and complete your combination of thinking and exploring, the more cleanly, concisely and quickly you'll code it.

    In practice, the best people write and change code as they go through this process. Nothing like trying to actually write the code to clarify the issues with writing it. The best way to discover road-blocks? Get on the road!

    Do the people who create estimates do this? If not, what possible value is their estimate? If so, why don't they dive in and do the work, while everything is in the foreground of their mind?

    Implementation paths

    There is more than one way to peel an apple (what a stupid phrase — who wouldn't want to just eat the darn thing, apple peels are really good!), and there is more than one way to get stuff done with computers. I'm in the middle of looking at some new functionality an Oak company is trying to build. Naturally, many people at the company want to take their existing, comfortable implementation path and apply it to the new problem. It's a good deal of work, involving a DBMS, stored procedures, a couple development environments, some business logic, a rules engine, a workflow engine and some UI designing. There is no doubt in anyone's mind, including mine, that the existing tool set can get the job done.

    But there's another implementation path. The functionality is pretty simple, basically offering choices to consumers and getting them to fill in fields, with some branching and edit checking. The whole thing can be done using documents, XML and javascript, with a highly responsive UI. Taking this new approach would require a tiny fraction of the effort that using the existing implementation approach would require. Anyone involved can see it. It meets the inter-ocular test. Is it being warmly embraced by everyone at the company? Of course not!

    A kind of estimating causes me to strongly prefer the new implementation path — I'm sure it will be a lot less work, with results that will be more pleasing to consumers. A sense of estimation is indeed important, as one of the factors that helps you choose the optimal implementation path for getting a given job done.

    Moore's Law

    The software approaches and implementation paths taken by most companies for getting their work done were established many Moore's-Law-generations ago, when computers were 10, 30, or 100 times slower and/or smaller and/or more expensive than they are today. For example, most programmers continue to think in terms of databases, when Moore's Law has changed the game for databases and for storage.

    It is far more fruitful to figure out how to take advantage of changes like this than to put effort into estimating the work using what are likely to be obsolete paradigms for building the software. And the best way to accomplish this is usually to try it.

    Conclusion

    Asking how long something will take to program seems so reasonable, so innocent. Who would ever guess that it tends to start a cascade of disastrous consequences?

    The truth is that estimation does play a role in good software development, but in a completely subsidiary way, as a minor aspect of your technical and conceptual approach to building the software. It's like running a race: yes, you have in mind your past times, but if you ever get into making estimates and then padding them and then judging success by hitting your estimated time, you're going to lose the race. It's just not what winners do. Winners focus on … winning!

  • How Effective are Software Factories?

    Software factories are truly excellent. They are highly reliable, with an error rate near zero. But here's the catch: software factories may be something different than what you think they are.

    What Are Factories?

    We all know what factories are.

    A factory is one of those big plants where parts and assemblies go in one end, and through a series of steps, get turned into finished goods.

    Factories have played a major role in creating the modern world, by magnifying the effort of humans with machines and power.

    What are Factories Conceptually?

    The purpose of a factory is to produce identical copies of a thing you already have and know how to build. Henry Ford's factory didn't produce the very first Model T — his design engineers did that.

    Then they created a factory to churn out lots of copies of the original Model T. Everyone credits the Ford factory with producing cars at low cost. All factories accomplish their core function of producing copies at low cost by replacing labor with machines, and by reducing the amount and the skill of the remaining labor.

    In addition, huge amounts of effort have been poured into factory cost-effectiveness and quality. Supply-chain optimization (how to most effectively get the inputs to a factory) is well-understood at this point. Similarly, both the theory and practice of factory output optimization is highly advanced. Methods for assuring consistent high quality have also been developed, starting for example with statistical process control.

    The Dream of the Software Factory

    We all know that software departments aren't very good at churning out great software. Wouldn't it be great if we could build a factory for software — a factory that spits out great, high-quality software, on time and on budget, just like all the other factories?

    No need to dream — it's been done! There are big companies behind these factories, there are lots of books about how to build them, scary books about how it's being done better in Japan, everything you'd want.

    Of course, when you look more closely, it's all hype. Most software isn't built in that kind of software factory; if it were, they'd have long since taken over software development, and no such thing has taken place.

    The Reality of the Software Factory

    Fortunately, there really is a software factory. It's effective, efficient, and its quality is so near to 100% that it's not worth measuring. Furthermore, software factories are widely used — they're so much a part of programming life that no one thinks much about them.

    One of the most widely used software factories is the cp utility. This amazing utility does exactly what a factory is supposed to do — it makes you an exact copy of the thing you want to have. Amazing! This factory is also highly adaptable. If you change what it is you want to churn out, it can still make a copy of it.
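
    Here's this "factory" in action, using Python's shutil as a stand-in for cp (the file names are made up):

    ```python
    import shutil

    # The entire "software factory": given a finished product,
    # manufacture a perfect copy at essentially zero cost.
    shutil.copyfile("great_app.py", "great_app_copy.py")

    # Change the design (edit great_app.py) and the same factory
    # instantly churns out copies of the new model -- no retooling.
    shutil.copyfile("great_app.py", "great_app_v2_copy.py")
    ```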

    I wish the cp utility worked for cars. If it did, I could point it at my car and it would give me an identical copy. I could then make changes to my car, sic ol' cp on it again, and shazaam — a copy of the modified car! Cool! Sadly, cp doesn't work on cars (yet) — it only works on software — though let's not forget it also works on data, which isn't too shabby…

    Factories and Software

    I can only hope that people keep coming back to comparing the process of building software to a factory because computer software is so consistently bad. The motivation must be great indeed for so many people to fall for such an obviously flawed metaphor.

    After all, factories are for building copies of things that are fully known and understood — like Model T cars — things that have already gone through an extensive design and prototyping process. Ooohh — I like that — let's make lots of them! Kind of like the iPhone, right? Foxconn didn't get a call from Steve Jobs asking them to start cranking out iPhones until the Apple design engineers had already designed and built them. That widely-reported last-minute switch of the glass surface of the phone happened after Steve played with the prototype and didn't like it — before the factory started doing its thing.

    So repeat after me: designing is what you do to create the first copy of something; if you like it, a factory is used to crank out copies.

    If you like a piece of software, cp (or the relevant copy utility) is all you need to make a copy of it! The only reason software engineers get involved is if you want something different. For which a "software factory" as properly understood is simply not relevant.

    Conclusion

    There's good news: Software factories exist! They are universally used in the software community! They work: they work consistently; they work quickly; they work flawlessly. Be happy.

  • Field-Tested Software

    When you're at war, your software needs to work — not in the lab, but in reality. In the field! You don't have time to test your software in the lab, and you don't care whether it works in the lab. You need field-tested software. Software that works — in the field — where you need it to work.

    Normal Software QA

    Normal software QA pays lots of attention to the process of defining, building and deploying software. You hear phrases like "Do it once and do it right;" "quality by design;" "we don't just test, quality is part of our process." There are lots of them. They all, one way or another, promote the illusion that mistakes can somehow be avoided, and that we can — finally — have a software release that works, and works the way it's supposed to. This time we're going to take the time, spend the money, and do it right!

    How did that work out for you? Most often, it's like predictions of the end of the world. The date comes, the world is still here, and people try to avoid talking about it. Similarly with that great this-time-we're-doing-it-right release, the release comes, there are roughly the usual problems, and people try to avoid talking about it. Or there were fewer problems, but the expense and time were astronomical. Or there were fewer problems, but not much got released. Whatever.

    Here are some favorite phrases: "It worked in the lab!" "How could we have anticipated that case?" "The test database just wasn't realistic enough." "Joey So-and-So really let us down on test coverage." "We had the budget to do it right, but not enough time." "We didn't have enough [tools] [training] [experienced people] [cooperation from X] [lab equipment]…" Excuses, every one. Perhaps there's a fundamental reason why we always fail?

    This is a subject your CTO and your chief architect should stop ignoring and pay serious attention to. It isn't the only subject, but it sure should be #1 on the list.

    QA Should be Field-Based

    Who cares how the software operates anywhere except in production?? The lab environment is always different from the production environment. And the most embarrassing problems are the ones where it worked in the lab but failed when deployed. The number of potential causes is endless: different machines; different loads; different network delays; different database contents; different user behavior; different practically-anything!

    Given this, why wouldn't you test your software on the actual machines it will run on when it's put into production? Of course, you don't load-balance normal user traffic to the test build as though everything were hunky-dory. That's just asking for trouble. But it's not hard to send a copy of the traffic to the test machine. That alone tells you huge amounts. Did it crap out with a normal load? Now there's a real smoke test. And there's lots more you can do as well.
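
    Here's a minimal sketch of the send-a-copy-of-the-traffic idea, as a toy mirroring handler in Python. The host names are made up, and a real setup would more likely use a load balancer's mirroring feature; the point is just the shape of it:

    ```python
    import requests  # third-party HTTP library, assumed installed

    PRODUCTION = "https://prod.example.com"
    TEST_BUILD = "https://test.example.com"

    def handle(path: str, payload: dict) -> dict:
        # Serve the user from production, as always.
        real = requests.post(f"{PRODUCTION}{path}", json=payload, timeout=5)

        # Mirror the same request to the test build; ignore its answer,
        # and never let its failures touch the user.
        try:
            requests.post(f"{TEST_BUILD}{path}", json=payload, timeout=5)
        except requests.RequestException:
            pass  # the test machine crapping out is data, not an outage

        return real.json()
    ```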

    Conclusion

    Your customers don't care how your software worked in the lab. They only care how it works for them. Yes, that's awfully self-centered, but that's just how they are, and no one is likely to talk them out of it. So live with it, and shift from pointless lab testing and back-office quality methods to actual field-testing of your software. Yes, it's messy, dirty and uncontrolled — but it's real life! It's where your software has to run! Better it should get used to it sooner rather than later.

     

  • The Dirty Secret of Peace-time Software Development

    We use a large number of intimidating words and abstruse concepts to make our software development methods sound like they're highly evolved and refined. But all too often they take way too long to produce software that doesn't work and isn't what we need.

    The Propaganda

    The normal, "peace-time" process of software development sounds pretty deep. There are requirements.


    There are designs.


    There is all sorts of testing to assure quality.


    That's just for a start — there's loads more.

    And on top of it all, there are levels and levels of analysis to assure you've got a repeatable process that is documented, measured, and continuously improved.


    The Reality

    When we try to create software in this way, we resemble Dr. Frankenstein in his laboratory, with his carefully crafted plans to bring something into being that has never before existed.

    And when you've done all that impressive-sounding stuff, what do you have???

    A monster is what you have.

    He's late.

    He cost way too much.

    He doesn't work.

    And worst of all…

    …he's not what you wanted in the first place!

    Conclusion

    The reality is all those software methods we argue about amount to pretty much the same thing. They are all "peace-time" software methods. They promise to be careful and deliberate. They promise to deliver safety and predictability. But they can't! Which explains why, in the vast majority of cases, they don't! That's the dirty secret of all those high-minded concepts and abstruse words — they put a high-minded gloss on incompetence and ineptitude.

  • Bridges and Software in Peace and War

    We build bridges in times of peace. They take a long time to build; they tend to last a long time, but sometimes they crash. We also build bridges in war-time. Built in the face of enemy fire, they go up really quickly, and tend to serve the purpose well.

    What is war-time software? Are there methods that enable us to build software in a fraction of the usual time in highly competitive circumstances, while still serving its purpose well? The answer is yes.

    Peace-time bridge building

    The bridge over the Firth of Forth in Scotland was the world’s first major steel bridge. It took about seven years to build, was completed in 1890, and is in use to this day.


    As many as 4,000 men worked on the bridge at a time, with 57 losing their lives.

    The Golden Gate bridge in San Francisco is more recent, having been completed in 1937 after about 4.5 years of work.


    Peace-time bridge building: the results

    I’ve given just a couple of examples, but they are typical: bridges take years to build in peace-time, and people die while building them. And while we expect them to never crash, in fact they do. It’s not as rare as you may think! A bridge in Canada collapsed in 1907, killing 95 people.

    The Silver Bridge was built in 1928 over the Ohio River.

    It collapsed in 1967 during heavy rush hour traffic. 46 people were killed.


    And a portion of Route 95 in Connecticut collapsed in 1983.

    There are many more examples. Peace-time bridges take years to build and are expected to work without problems, but in fact they sometimes collapse and kill people.

    War-time bridge building

    Building bridges in war time is a whole different matter. The bridges aren’t allowed to collapse and kill people any more than those built in peace-time, and the loads they’re required to carry can be much greater. Frequently they are built under enemy fire. Yes, they look different and are constructed using different techniques.

    But that’s the whole point. The time constraints are severe: instead of years to build a bridge, it must be done in days.

    Here’s the story of one such bridge:

    It was during this week, in late March of 1945, that the U.S. Third Army under Gen. Patton, began its famous bridging and crossing operations of the Rhine.

    The first unit to cross was the 5th Infantry Division that used assault rafts to cross the raging Rhine … in the early morning hours of March 23. … By 1800 that evening, a class 40 M-2 treadway bridge was taking traffic. The following day, a second 1,280 foot class 24 bridge was completed in the same area. It was later upgraded to a class M-40 bridge. Without the benefit of aerial bombardment or artillery preparation, units landed quickly and established a beachhead that was seven miles wide and six miles deep in less than 24 hours…When daylight came, the Luftwaffe attacked the enclave with 154 aircraft in an attempt to dislodge the foothold on the east bank. Effective anti-aircraft fires brought down 18 of the attacking planes and destroyed 15 more.

    By March 27, five divisions with supporting troops and supplies had crossed the three bridges constructed at Oppenheim. The entire 6th Armored Division crossed in less than 17 hours. During the period of March 24-31, a total of 60,000 vehicles passed over these bridges.

    Peace-time software

    Most of the software built today is built using “peace-time” methods. Those methods are so ubiquitous that they are simply considered to be “the right way to do things.” We document everything. We have a nice, orderly flow from requirements through design, coding, testing and deployment. Whether waterfall or “agile” is used, everyone is given time to do their job, and frequently asked how long it will take. Estimates are critical, and the most important thing is delivering on the expectations you set.

    In this environment, it’s important to make sure your estimates are long enough to account for things you forgot about. Taking a long time to get a job done isn’t a problem; taking longer than you said you would take is the problem.

    War-time software

    So what is war-time software? It’s a looooong subject, and can’t be done right in a short post. But the principles should be obvious from the bridge-building metaphor:

    • Time is the most important thing; if you take a year to do what the other guy gets done in a month, you’ve lost the war.
    • Solving immediate problems is far more important than effort put towards some imagined future.
    • Something is better than nothing.
    • Finding and fixing problems is more important than preventing them.
    • Did I mention that nothing is more important than speed, except possibly avoiding getting killed (usually)?

    Those war-time bridge-building guys made it up as they went along, but they couldn’t have done it without an elaborate tool kit, appropriate supplies and matching skills and procedures. The scene on the river may seem chaotic, but there’s a pattern and lots of coordinated activity, with everyone working towards a common goal: the least they can possibly do that gets things safely over the river. When is peace-time software ever subjected to that kind of parsimonious discipline?

    War-time software development is development that is organized and optimized for speed: getting the least acceptable solution built and deployed in the shortest possible amount of time, and rapidly iterating from there. And then doing it again. Obviously, you spend time gathering and organizing your supplies and improving your technique.

    War-time software is not doing things the usual way, only skipping steps, doing things sloppily and writing half-done crap code. That’s doing a bad job using peace-time methods. War-time software is doing things in a war-time way using war-time techniques.

    Conclusion

    Are you truly operating in peace-time? Is your competition frozen? Do you have no time constraints or money limitations? Then, by all means, continue to use peace-time software methods — take huge amounts of money, incredible amounts of time, document and plan and manage everything with precision, and build your software. Software that will crash when you least expect it.

    If, on the other hand, you are at war, and if you, you know, want to, like, survive — well, you may want to consider building software that actually meets the immediate need.

     

    Find a way to get that data, those screens and workflows over that threshold, soldier. Now! Yes, I know that in your previous life, it would have taken you a week to write a proposal for creating a plan to get it done. These screens, workflows and databases are going to be on the other side of that threshold in under a week, while enemy forces and programmers are doing their best to kill us in the marketplace. Move it!

    War-time software. It’s the way to win.

  • Fundamental Concepts of Computing

    There are a small number of truly fundamental concepts in computing. They are not generally taught or talked about, but they underlie most of the smart things you can do in computing.

    The fundamental concepts are like "the fundamentals" in a sport, the very basic things you have to do, like dribbling in basketball or blocking in football. It's where the phrase "blocking and tackling" comes from. Woe to the team that puts all its energy into fancy stuff — it will be beaten by the team that does the fundamentals.

    The fundamentals are generally recognized in sport because there are objective measures of scoring and determining which team won. Eventually, people figure out which activities contribute most to winning. But in computing, it's a sad story.

    Competition would help us understand what are "computing fundamentals"

    If we competed in programming the way we do in sports, teams from different places would take on the same job at the same time. Each would complete the job, roll it into production and run it for a while. For each team and their product, we would collect a variety of information, including: the size and cost of the team; the elapsed time they spent building; the resources required to build and operate; the number of bugs; the level of user satisfaction; etc.

    Who won? Well, we'd take some combination of the information above.
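
    As a toy illustration (the metrics and weights are invented), the scoring might look something like this:

    ```python
    # Hypothetical per-team results from building the same system.
    teams = {
        "Team A": {"cost": 1.0e6, "months": 12, "bugs": 240, "satisfaction": 6.5},
        "Team B": {"cost": 0.4e6, "months": 5,  "bugs": 80,  "satisfaction": 8.9},
    }

    def score(m: dict) -> float:
        # Lower cost, time and bugs are better; higher satisfaction is better.
        # The weights are arbitrary -- the point is having *some* agreed measure.
        return (m["satisfaction"] * 10
                - m["cost"] / 1e5
                - m["months"]
                - m["bugs"] / 10)

    for name, metrics in teams.items():
        print(name, round(score(metrics), 1))
    # Run this across many match-ups and the habits of winning teams --
    # the "fundamentals" -- would start to show.
    ```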

    I bet if we did this a lot in programming, the "fundamentals" of programming would gradually become clear to everyone. Everyone would want to know what the winning team did to win, what winning teams had in common, and over time, the programming equivalent of "blocking and tackling" would become obvious.

    But we never compete!

    Oh, you think we do? Like when there are competing products, or when competing companies have similar computer systems?

    Well, sure, at a business level there is competition, but at a programming level? Think about football. How would you feel if the team that won had twice as many players as the other team? What if the winning team got to use a different ball than the other team? What if the winning team was given ten downs per possession, and the other only had three? What if one team always had a goal post that was half as high and twice as wide as the other team? With differences like this, it's obvious that the game is rigged and there's not much to learn from the game. There is as little to learn from examining the programming practices of companies or products that win in business.

    So what are the fundamentals of programming?

    There is no generally accepted answer to this dead simple but incredibly important question! And given the lack of meaningful competition, there is no objective way to prove what they are!

    I've spent way too much time:

    • programming, and
    • trying to get better at it, and
    • wracking my brain to determine exactly what "getting better at programming" means, and
    • trying to identify the key factors that lead to better results.

    So I've got opinions. I've written about some of the fundamental concepts in some of my private papers, and I intend to post about some of them here.

    For this post, it's sufficient to establish the basic concept: in fields that we care about, there are measures of goodness and a not-too-large collection of "fundamentals" that constitute the "blocking and tackling" for the field. And that, sadly, a broad understanding of those things is lacking in the field of computing.

     

  • Thanksgiving Meals and Software Turkeys

    Software development is normally conducted the same, common-sense way that Thanksgiving feasts are created. Perhaps this is why software so often resembles a post-Thanksgiving mess.

    Requirements

    The "requirements" (i.e., the menu and number of guests) for Thanksgiving don't differ all that much from one year to the next, and after all, the menu is a variation on a well-practiced theme: a dinner. But you'd still better be exact in defining the menu: "some kind of vegetable," for example, won't do. Furthermore, the requirements are made by experts for highly experienced users, none of whom will need documentation or training to "use" the resulting product.

    The requirements for a typical software project, by contrast, are not made by people who are experienced consumers of the product. They are usually made by people who are experienced at making requirements, which is roughly as effective as having menus designed by people who never eat.

    Design and Construction

    Once the menu has been planned, recipes are selected (not many people wing it and risk cooking a dish that neither they nor anyone else has ever cooked before), the shopping list is made, the shopping done and finally the dishes are cooked.

    Software developers try to follow roughly the same method of finding recipes (designs) and getting ingredients (lines of code). But they're always making dishes neither they nor anyone else has ever made, so the recipes they find need severe adaptation, and basically they have to make it up. The same goes for ingredients: they find people, try to get them to understand the made-up recipes, and then have them create ingredients (lines of code) that work together. They try to convince themselves and anyone who will listen that everything will turn out OK because strict project management techniques are being adhered to. Uh huh.

    The Finished Product

    The finished Thanksgiving meal is often a sight to behold. So are the people assembled to admire and to consume it, who typically have at it with skill, experience and vigor. While some of the younger participants may need talking to, in the end things work out remarkably well. Everyone knows their job and does it.

    Not everything goes perfectly when cooking the Thanksgiving meal, but most of it works out really well, and in the end all the requirements of the end users (that they end the meal being happily full) are satisfied.

    In software, once the "meal" is cooked, there is usually an extensive testing, integration and quality process involving labs, staging areas and other things to which the supposed "cooked and ready" meal is subjected for fear that it simply won't be edible. In spite of all these measures, everyone knows disaster is not just possible but likely, and so before being brought to the table, the meal is served to special people who are used to eating half-cooked, never-been-cooked-before dishes. This is called an "alpha" release. It resembles getting some poor fool to eat bites of the meal intended for the king to assure that it wasn't poisoned; the trouble is, in the world of software, it usually has been, if only inadvertently as a result of the usual chaos of building never-been-built-before software. In the world of software, there is usually no equivalent of that happy dinner table.

    The Aftermath

    In the world of Thanksgiving dinners, the aftermath is pretty typical. Here are typical remains of the kind of meal cooked by the ladies pictured above:

    [Photo: Thanksgiving leftovers, November 1961]

    Looking at a mess like this is generally a happy thing, which is why my dad took the picture. You remember how good the meal was and chuckle about what was left.

    In software, however, this picture resembles the meal that was actually served: a ripped-apart, cold, coagulated mess that you may be able to pick at. Hey, maybe we can make a turkey sandwich by bringing in some extra tried-and-true ingredients (bread, mayo)! The sad fact is, by the time most software is developed and delivered, the original cast of characters has given up, moved on or descended into open cynicism, aided by the fact that the software doesn't work and/or the situation has changed so much that the software is no longer relevant, at least as it is.

    Summary

    Software development techniques, even today, have a remarkable parallel to making a Thanksgiving meal. But Thanksgiving meals have a track record of working out pretty well for all concerned, certainly in the are-you-full-afterwards department. And software development techniques have a track record of not working out so well, except for the turkeys who run the projects, who rarely seem to be fired for the messes they so consistently deliver — after all, we learned a lot from this, and things are going to be different next time! Sure!

    If you like hearing gobble-de-gook babble about project management and late software that doesn't work, by all means continue to model your software development after Thanksgiving. But if you look forward to legitimately associating the concepts of "software" and "grateful," without sarcasm, then I suggest you leave the turkeys to Thanksgiving and try something else for software.

  • Developers, Designers and Project Managers at War

    There is a natural conflict between the various groups that create computer products. This graphic captures it pretty well.

    [Image: developers, designers and project managers at war]

    Credit: http://alextoul.posterous.com/the-war-between-developers-designers-project

  • Software: Move Quickly While Breaking Nothing

    In places that want to succeed (why would you be anywhere else?), the pressure is on the software team. Change more stuff. Move more quickly. Where are the results?

    OK, great, you say. I'm already working as hard as I can. The only other thing I can do is relax the process, the safeguards that ensure good, high-quality results. I'll give it a try.

    Maybe you (and your company) get lucky for a bit. Then the inevitable happens. The release that just got pushed had bugs and/or unintended consequences. There are panicked calls, late nights, frayed nerves and angry managers. The managers are extra angry at the programmers, who did what the managers said rather than what the managers thought they had implied (i.e., don't break anything). The software people fume about management that acts like the ancien régime, making insufferable, impossible demands of the oppressed workers. No one is happy.

    "Push More Releases" is NOT the Answer (by itself)

    Sometimes when I talk with programmers, I'll give some version of my "releases are stupid" talk or my "dates are evil" talk and some brave soul comes out with the classic "I tried that and got screwed" response, as summarized at the beginning of this post. And I always agree with the brave soul: if all you do is push more releases, you're not just inviting disaster, you're sending a chauffeured limo for disaster and welcoming it with a red carpet and cheering crowds.

    The classic, project-management-centric methodology with its infrequent releases is broken. Broken!! So who in his right mind would think that keeping that methodology, minus a few of its essential, integral parts, is a good idea?

    Let me see if I've got this right: you don't like your methodology, so … you're going to cripple it and hope things will improve??!!

    By the way, if you're sitting there with a smug expression thinking "we don't have that problem, we use agile," think again. Agile does not change the situation.

    The Answer is a New Methodology, Optimized for Speed

    The details of the new methodology are important; I've written about them at great length in my private-distribution papers. It's a seismic shift from the normal way of doing things. As with any big change, it's useful to start with baby steps. So here's a start:

    Grow the Baby

    The mainstream process of software development resembles a car factory. There are lots of inputs. The inputs are assembled into sub-assemblies and then assemblies. There is a final integration phase, after which the car rolls off the assembly line. When it rolls off, people hope it works; there is no expectation that it will work prior to the end of assembly; until then, it's "work-in-process." This is typically the way things work whether you release once a month or once a week.

    The speed-oriented method resembles growing a baby. There is one and only one overriding rule: Don't Kill the Baby! The baby is grown by tiny incremental additions; each addition takes a while to get right, but none of them is fatal. The baby takes a while to learn to walk, for example, and spends some time walking poorly. But while this is going on, it doesn't lose its ability to crawl, eat, burp or roll over.

    Don't Kill the Baby!

    When the subject is babies, there is near-universal agreement that killing them is something to be avoided. But when software is developed with the usual methods, it's alive only some of the time — mostly it's dead! The cornerstones of the speed-oriented method are:

    • Small, frequent changes. Make the most progress you can toward your goal every single day.
    • The new stuff doesn't need to work at the beginning.
    • The old stuff can't be allowed to break. This is usually achieved by some kind of continuous integration and live parallel testing; a minimal sketch follows this list. How you do it isn't important. That you do it is. Your software must not break. Clear?
    • Iterate. Don't lay out months' worth of work. Set overall goals, then spend some time looking at what you did yesterday to help decide what to do today.
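
    To make the don't-break-the-old-stuff rule concrete, here is a minimal sketch in Python of one common way to honor it: ship the new, half-grown code path dark behind a feature flag, keep the proven path as the default, and pin a regression test to the proven path so every tiny change gets checked against it. All the names here (the flag, the pricing functions, the test) are hypothetical illustrations, not any particular product's API.

        # A minimal feature-flag sketch (all names hypothetical).
        # The new path ships immediately but stays dark until it's ready;
        # the old path keeps running and keeps its regression test,
        # so the "baby" never loses an ability it already had.

        FLAGS = {"new_pricing_engine": False}  # flipped on only when ready

        def price_order_old(quantity: int, unit_price: float) -> float:
            """The proven path: plain quantity times price."""
            return quantity * unit_price

        def price_order_new(quantity: int, unit_price: float) -> float:
            """The growing path: adds a bulk discount. May be half-built."""
            subtotal = quantity * unit_price
            return subtotal * 0.9 if quantity >= 100 else subtotal

        def price_order(quantity: int, unit_price: float) -> float:
            """Route through the flag; default to the path that already works."""
            if FLAGS["new_pricing_engine"]:
                return price_order_new(quantity, unit_price)
            return price_order_old(quantity, unit_price)

        def test_old_path_still_alive() -> None:
            """Run on every tiny change: the old behavior must not break."""
            assert price_order(3, 10.0) == 30.0

        if __name__ == "__main__":
            test_old_path_still_alive()
            print("Old path intact:", price_order(3, 10.0))

    The point of the sketch is the shape, not the pricing logic: each day's change lands behind the flag, the flag stays off until the new path earns its keep, and the test guards the abilities the software already has.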

    Baby Yourself!

    One of the Oak companies was having trouble meeting all their commitments. The pressure was on to deliver more, and deliver it quickly. But the company was also getting beat up because of serious flaws in recent releases. Talk about being caught between a rock and a hard place!

    It took quite a bit of work, but they shifted to the kind of speed-oriented, always-alive method described here. They had some fun with it along the way. Part of the fun was the mascot they adopted, which I got to see during a recent visit.

    [Photo: the "Grow the Baby" mascot, March 2011]

    Grow the baby!

  • Software Project Management: "Releases" are Stupid and Outmoded

    Tell me, do you use Google by any chance? Yes? OK, now tell me which release of Google you use now, which ones you've used recently, and which one you like best; and, by the way, how was the upgrade process? Oh, you don't know? There's no way to find out, you say?

    What the heck is WRONG with those people at Google? Don't they know that modern software development processes demand a disciplined release and planning methodology? There is NO WAY customers will put up with a product when the vendor just shoves new releases at them any time they feel like it, without even proper notification! And what customer is going to sign up with a vendor who won't commit to a future roadmap, with at least a year's worth of features laid out, tied to hard-commit release dates? That Google, they violate every rule of the game, there's no way they're going to make it as a software/service company!

    What, how can it be! NOOOOoooooo…. Google is the world's most valuable software company!!?? When they don't even have the most basic element of proper software methodology, releases?? I must be sleeping! This must be a nightmare!!

    Get over it!

    Here are the facts:

    • Classic project management has been a disaster for software development.
    • All the heavy-weight process things that people do to make things better … invariably make them worse!
    • The classic big-bang release is the cornerstone of the temple of evil.
    • The classic little-bang release (a.k.a. "agile") is a brick in the temple of evil.
    • "Releases" are … stupid! … and not only that, they're … outmoded!

    There is life after "releases." It is a better life. The software is better. The users are happier. Go there. Enjoy it.

  • What is the Best Programming Environment?

    What is the best programming environment? Is it Microsoft C#? What about Java? On the other hand, there are the open source scripting languages: are they all about the same, or is Python way better than PHP? While we're at it, how about databases and operating systems? Isn't it true that you really need Oracle if you want a truly scalable application? And if you're really serious, shouldn't you take a close look at DB/2?

    As a long-time techie who has the opportunity to work closely with a wide variety of software/hardware groups, and often has the chance to take a close look at yet more groups my firm is considering for investment purposes, I confront this question frequently. I also get it thrown at me, sometimes by anxious investors or business leaders. They are worried about the possibility of making the "wrong" choice. They are bombarded with conflicting advice, frequently from techies who are truly knowledgeable people and speak with authority and confidence. It's tough!

    OK, Mr. Smart Guy, dish it out! You've got inside information on all these efforts using the different tool sets. You see which ones are productive, and which are not. You see which scale and which can't. What's the answer?!?!

    The good news is, I do have the answer. And I'm going to tell you. But you have to sit through a story first.

    The scene is fifth grade. The playground was a competitive place for me. Running games were important. I was pretty fast, but not the fastest. I needed to get just a touch faster. After much begging, I finally got the new sneakers I had been pining for. The sneakers that would make me run faster, just like the ads said. I was really excited. I put them on and tried them out. Darn! It's true! I really can run faster in these sneakers. I would do little speed bursts, and was amazed at what a difference those sneakers made!

    [Image: Pro-Keds sneaker]
    Then I went to the playground. I wore my new secret weapons and a smirk on my face. I felt no need to brag; I would let my amazing new speed do my bragging for me. Then the games began.

    Something was wrong. VERY wrong. HOW COULD THIS BE?? I just KNOW I run faster in these sneakers! But I'm not winning!? And with that experience I took a small step towards growing up…

    Thanks for sitting through that vignette from my childhood. Here's why it's relevant: programming environments are like sneakers, and many of the people who use them are like fifth grade boys who don't actually have to compete against other boys on the playground to find out how much difference those sneakers really make.

    Here's the answer to the original question: differences between sneakers (programming environments) are tiny compared to differences between kids (the skills, sophistication and raw horsepower of the people who use them). Put great shoes on weak, slow, unmotivated kids and it won't help them much; force strong, fast, passionate kids to wear crummy shoes and it won't slow them down much.

    This is not my "natural" way of thinking about this question. It is the conclusion that scores of data points over many, many years have forced on me. The data points don't just come from things I've heard; they come from things I know up close and personal. I could give loads of examples.

    That this conclusion about technology surprises many people tells us how isolated the field really is from normal human experience. Who, for example, would be surprised to hear that:

    • in baseball, the batter matters more than the bat
    • in art, the painter matters more than the paint or brush
    • in writing, the writer matters more than the word processing program

    Are there differences between the major programming environments? Yes. Can you "prove" that one is better than another for a particular task? Yes; vendors do this all the time. They want you to assume that tools are like trains: all you have to do is pick the "fast" train and it will take you to your destination quicker than the slow one. But the reality is that tools are more like sneakers than trains: things that capable people use to get their jobs done, rather than machines that transport people to where they want to go.
