Category: Medical practice & training

  • Job requirements for software engineers should stop requiring CS

    I've read thousands of job requirements for computer programmers over the years, and written or edited quite a number. I’ve interacted with hundreds of software groups and seen the results of their work. I’ve spent a couple of decades cranking out code myself in multiple domains. There are a couple of near-universal problems with job requirements that, if changed, would improve the quality and productivity of software groups.

    Of course, it’s not just the job requirements and what the hiring people do – it’s also the managers, from the CEO down. They also have to not just support but champion the changes I describe. If they do, everyone will enjoy better results from the software team.

    In this post, I'm going to concentrate on just one of the issues: academic degrees.

    A near-universal job requirement is an academic CS degree. When analytics, ML or AI is involved, the requirement is often “upgraded” to a Master’s or PhD.

    There are many capable programmers who have degrees of this kind. Often it doesn’t seem to hurt or hold them back much. But all too often it does! The more specialized the training, for example in project management or quality, the more likely the “education” the person has received is an active impediment to getting good work done.

    Here’s the simple, raw, brutal fact: Getting a degree in Computer Science does NOT train you to become an effective programmer in the real world. All too often, the degree results in the person performing worse than someone with self-training and apprentice experience.

    That is the fact. Surprised?

    Let’s do a little compare and contrast with medicine. Yes, I know that an MD is a graduate degree, while CS is often undergrad. Medical training has evolved to what it is now after decades and centuries of finding out what it really takes to help make people healthy. By contrast, CS degrees just started being granted 50 years or so ago, and are far from even trying to figure out what kind of training helps create good programmers.

    First let’s look at the training and testing:

    • You don’t even get into med school without taking the MCAT, a challenging test lasting over 7 hours on which few do well.
    • Once you’re in med school you take a four-year course to get your MD.
      • The first two years are academic, including hands-on labs. Then you take the USMLE-1. If you don’t pass this challenging test you’re out. End of med school.
      • The second two years are clinical! You’re in hospitals, clinics and offices seeing patients under supervision. And you’re graded. And then you take the USMLE-2, which is harder than part 1 and has lots of clinical stuff. If you fail, you’re not an MD.
    • To practice, even as a general practitioner, you have to apply and be accepted into a Residency. Depending on specialty, this can be 3-7 years of mostly hands-on practice, under close supervision.
      • During your first year, you have to take and pass the USMLE-3. Fail and you’re out.
      • During your last year you have to take and pass the test specific to your specialty. Fail and you’re out.

    Here’s the equivalent of the training and testing in CS:

    • There is NO equivalent in CS. No entry testing. No exit testing. Just grades on courses determined by professors who usually pass everyone.

    A little compare and contrast between medicine and CS:

    • Medicine is taught by doctors who practice medicine.
      • CS is taught by professors, most of whom have never practiced programming in the real world.
    • A large part of medical training is working with real patients with real problems, under the supervision of practicing doctors.
      • CS is primarily classroom teaching with textbooks and homework exercises. You have to write programs as exercises, but it’s completely artificial. There is nothing apprentice-like or truly clinical about it.
    • Medical training is led by doctors who are incented to produce great doctors.
      • CS training is led by academic PhDs with no real-world experience who are incented to publish papers read by people like them.
    • Medical journals publish essential information for practicing doctors, giving advances and new discoveries.
      • CS journals are read by the tiny group of academics who publish in them. Practicing programmers pay no attention for good reason.
    • Bad doctors are fired for incompetence and barred from practicing.
      • CS graduates are rarely fired for incompetence. If CS graduates can’t program well, they usually shift into using their non-skills in “management.”
    • In medicine, best practices are increasingly codified. You rapidly fall under scrutiny for deviating.
      • CS grads seek out and follow fashions that are the software equivalent of blood-letting, enthusiastically promoting them and getting them adopted with disastrous results.
    • Hospitals are compared with each other in terms of results. It’s not hard to find which are the best hospitals.
      • Groups of CS grads make comparisons between groups impossible, with the result that huge groups produce major disasters at great expense, while tiny groups of effective programmers perform 10X or more better.

    All this doesn’t make things uniformly wonderful in medicine. But it goes a long way towards explaining why software is so bad. It’s awful. The awfulness is so widespread that it’s rarely talked about! If bridges fell down at 1/100 the rate that software projects fail, there would be a revolt! Instead, everyone in the industry just sighs and says that’s the way things are.

    You think things are great in software? Check out a couple of these:

    https://blackliszt.com/2015/09/software-quality-at-big-companies-united-hp-and-google.html

    https://blackliszt.com/2014/12/fb.html

    https://blackliszt.com/2014/02/lessons-for-software-from-the-history-of-scurvy.html

    The fact is, CS training leads to horrible results because Computer “Science” is roughly at the same level as medicine was when bleeding patients was the rule. See this:

    https://blackliszt.com/2019/11/computer-science-is-propaganda-and-computer-engineering-is-a-distant-goal.html

    Conclusion

    There are lots of things you can do to improve the results of hiring software programmers and managers. Here's how the usual interview process goes; here is specific advice about interviewing. There is a whole pile of advice in my book on software people. If all you did was drop the CS degree requirement, you would have taken a big step forward in quality improvement.

  • What Software Experts think about Blood-letting

    Software experts do NOT think about blood-letting. But ALL medical doctors thought about blood-letting and considered it a standard and necessary part of medical practice until well into the 1800's. They continued to weaken and kill patients with this destructive "therapy," even as the evidence against it piled high.

    The vast majority of software experts strongly resemble medical doctors from those earlier times. The evidence is overwhelming that the "cures" they promote make things worse, but since all the software doctors give nearly the same horrible advice, things continue.

    Blood-letting

    Blood-letting is now a thoroughly discredited practice. But it was standard, universally-accepted practice for thousands of years. Here is blood-letting depicted on a Grecian urn:

    [Image: blood-letting on a Grecian urn]

    Consider, for example, the death of George Washington, a healthy man of 68 at the time of his final illness.

    Washington rode his horse around his estate in freezing rain for 5 hours. He got a sore throat. The next day he rode again through snow to mark trees he wanted cut down. He woke early the next morning, having trouble breathing, his sore throat worse. Leaving out the details: by the time of his death, after treatment by multiple doctors, about half the blood in his body had been purposely bled in an attempt to "cure" him of his sickness!!! If he hadn't been sick before, losing half the blood in his body would have killed him.

    If you are at an accident and you or someone else is bleeding badly, what do you do? You stop the bleeding, because if you don't, the person will bleed to death. That's now. Then? You bleed the sick person because it's the universally accepted CURE for a wide variety of sicknesses.

    Blood-letting was first discredited by William Harvey in 1628: it had no curative effect. Yet it remained the primary treatment for over 100 diseases. Leeches were a good way to keep the blood flowing. France imported over 40 million leeches a year for medicinal purposes in the 1830's, and England imported over 6 million leeches from France in the following decade.

    While blood-letting faded in the rest of the 1800's, it was still practiced widely, and recommended in some medical textbooks in the early 1900's. We are reminded of it today by the poles on barber shops — the red was for blood and the white for bandages; barbers were the surgeons who did the cutting prescribed by doctors.

    Blood-letting in software

    By any reasonable criteria, software is at the state medicine was in 1799, when everyone, all the experts, agreed that removing half the blood from George Washington's body was the best way to cure him.

    If you think this is an extreme statement, you either don't have broad exposure to the facts on the ground or you haven't thought about what is taken to be "knowledge" in software compared to other fields.

    I hope we all know and accept that the vast majority of what we learn and come to believe is based on authority and general acceptance. This is true in all walks of life. Of course not everyone believes the same thing — there are different groups to which you may belong that have widely varying belief systems. But if you're a member of a group, chances are very high that you accept most things that most members of that group believe.

    This is no less true in science-based fields than others. The difficulty of changing widely-held beliefs in science has been deeply studied, and the resistance to change is strong. See for a start The Structure of Scientific Revolutions. I have described this resistance in medical-related subjects, and in particular showed how the history of scurvy parallels software development methods all too well.

    But at least, to its great credit, medicine has gone through the painful transition to demanding facts, trials and real evidence to show that a method does what it's supposed to do, without awful side-effects. That's why we hear about evidence-based medicine, for example, while there is no such thing in software!

    I hear from highly qualified and experienced software CTOs that they are going to lead a transition of their code base so it conforms to some cool modern fashion. One of the strong trends this year has been the drive to convert a "monolithic code base" (presumed to be a bad thing) to a "micro-service-based architecture." When I ask "why," the initial response ranges from surprise to a blank stare — they never get such a question! It's always smiling and nodding — my, that CTO is with-it, no question about it.

    Eventually I get the typical list of virtues, including things like "we've got a monolithic code base and have to do something about it" and "we've got to be more scalable," none of which solves problems for the company. When I press further, it becomes obvious that the CTO has ZERO evidence in favor of what will be a huge and consequential investment, and has never seriously considered the alternatives.

    As is typical in cases like this, when you scan the web, you see all sorts of laudatory paeans to the micro-service thing, very little against it. Most important, you find not a shred of evidence! No double-blind experiments! No evidence of any kind! No science of any kind! What you also don't find is stories of places that have embarked on the micro-service journey and discovered by experience all the problems no one talks about, all the problems it's supposed to solve but doesn't, and the all-too-frequent declarations of success accompanied by a quiet wind-down of the effort and moving on to happier subjects. Because of my position working with many innovative companies, this is exactly the kind of thing I do hear about — quietly.

    Conclusion

    We've got a long way to go in software. While software experts don't wear white coats, the way they dress, act and talk exudes the authority of 19th century doctors, dishing out impressive-sounding advice that is meekly accepted by the recipients as best practice. No one dares question the advice, and the few who demand explanations generally just accept the meaningless string of words that usually result — empty of evidence of any kind. It's just as well; the evidence largely consists of "everyone does it, it's standard practice." And that's true!

    Software experts don't think about blood-letting. But they regularly practice the modern equivalent of it in software, and have yet to make the painful but necessary transition to scientific, evidence-based practice.


  • Lessons for Software from the History of Scurvy

    Software is infected by horrible diseases. These awful diseases cause painfully long gestation periods requiring armies of support people, after which deformed, barely-alive products struggle to be useful, live crippled existences, and are finally forgotten. Software that functions reasonably well is surprisingly rare, and even then typically requires extensive support staffs to remain functional.

    Similarly, sailors suffered from the dread disease of scurvy until quite recently in human history. The history of scurvy sheds surprising light on the diseases which plague software. I hope applying the lessons of scurvy will lead to a world of disease-free, healthy software sooner than would otherwise happen.

    Scurvy

    Scurvy is caused by a lack of vitamin C. It's a rotten disease. First you get depressed and weak. Then you pant while walking and your bones hurt. Next your skin goes bad,

    [Image: a case of scurvy, from the journal of Henry Walsh Mahon]

    your gums rot and your teeth fall out.

    [Image: scorbutic gums]

    You get fevers and convulsions. And then you die. Yuck.

    The Impact of scurvy

    Scurvy has been known since the time of the Egyptians and Greeks. Between 1500 and 1800, it is estimated to have killed 2 million sailors. For example, in 1520, Magellan lost 208 out of a crew of 230, mainly to scurvy. During the Seven Years' War, the Royal Navy reported that it conscripted 184,899 sailors, of whom 133,708 died, mostly due to scurvy. Even after most British sailors were scurvy-free, expeditions to the Antarctic in the early 20th century were still plagued by it.

    The Long path to Scurvy prevention and cure

    The cure for scurvy was discovered repeatedly. In 1614 a book was published by the Surgeon General of the East India Company with a cure. Another was published in 1734 with a cure. Some admirals kept their sailors healthy by providing them daily doses of fresh citrus. In 1747 the Scottish naval surgeon James Lind proved (in the first-ever clinical trial!) that scurvy could be prevented and cured by eating citrus fruit.


    Finally, during the Napoleonic Wars, the British Navy implemented the use of fresh lemons and solved the problem. In 1867, the Scot Lachlan Rose invented a method to preserve lime juice without alcohol, and daily doses of the new product were soon standard for sailors, which is how "limey" became synonymous with "sailor."


    Competing Theories and Establishment Resistance

    The effective cures that had been known and used by some people for centuries did not exist in a vacuum. There were competing theories. Cures included urine mouthwashes, sulphuric acid and blood-letting. As recently as 100 years ago, the prevailing theory was that scurvy was caused by "tainted" meat. How could this be?

    We've seen this movie before. Over and over again. I told the story of Lister and the discovery of antiseptic surgery — and the massive resistance to the new method by the leading authorities at the time.

    Software Diseases

    This brings us back to software. However esoteric and difficult it may be, software is a human endeavor: people create, change and use software and the devices it powers. Like any human endeavor, some of what happens is because of the subject matter, but a great deal is due to human nature. People are, after all, people, regardless of what they do. Patients were killed for lack of antiseptic surgery — and the surgical establishment fought it tooth and nail. Millions of sailors were killed by scurvy, when a cure had been known, practiced and proved for centuries. Why would we expect any other reaction to cures for software diseases, when the "only" consequence of the diseases is explosive growth in the time, cost and risk to build and maintain software, which is nonetheless crappy and late?

    Is there a general outcry about this dismal software situation? No! Why would anyone expect there would be? Everyone thinks it's just the way software is, just like they thought scurvy in sailors and deaths after surgery were part of life. Government software screws up, software from major corporations is awful, and software from cool new social media companies is inexcusably bad. Examples of bad software can be listed at endless, boring, tedious length.

    Toward Healthy Software Development

    If I had spent my life in the normal way (for a software guy), I wouldn't be on this kick. But I didn't and I am on this most-software-sucks kick. Early on, I had enough exposure to large-group software practices to convince me that I wanted none of it. I'd rather actually get stuff done, thank you very much. Now, looking at many young software ventures over a period of a couple decades, the patterns have emerged clearly.

    I have described the main sources of the problems. I have described the key features of disease-free software development. I have explained the main sources of the resistance to a cure, for example in this post. And I have no illusion that things will change any time soon.

    It will sure be nice when the pockets of healthy software excellence that I see proliferate more quickly than they now do, and when an anti-establishment consensus consolidates and gains visibility. In the meantime, there is good news: groups that use healthy, disease-free software methods have a massive competitive advantage over the rest. It's like ninjas vs. a collection of retired security guards. It's just not fair!

  • What Can Software Learn from Steamboats and Antiseptic Surgery?

    Software is among the most advanced, rapidly changing fields of technology. Only the "kids" who grew up with the latest techniques seem to be able to master them. At the same time, really bad ideas spread through software groups like the plague; they take hold and resist cure, in spite of producing terrible results. How can we make sense out of a field that advances rapidly and resists change at the same time?

    History

    As I've pointed out, software people are strongly averse to learning about computer history. In some fields (e.g. physics), the very terms used are named after historical figures; in others, history is treated with reverence (e.g., Santayana: "Those who cannot remember the past are condemned to repeat it."); in software, by contrast, we use the phrase "that's history" to dismiss anything that happened in the past as obviously irrelevant to the present.

    I think studying history is the only way to understand the present, software included. I think we can understand the strange software phenomenon of rapid change combined with resistance to change by taking two examples from history: one in which new methods in technology were rapidly accepted by all concerned parties, and the other in which clearly superior new methods were resisted for many long years by the leading people in the field.

    Steamboats

    It would be great if software advances were adopted quickly, like the way steam technology rapidly overcame wind as a method for moving boats.

    The displacement of wind by steam is clearly laid out in T. J. Stiles's excellent biography of one of the major figures in the transformation, Cornelius Vanderbilt.

    Vanderbilt started in business by running a sailing-boat "taxi" service from Staten Island to Manhattan. He transitioned into the rapidly emerging steamboat transportation business, not only as a captain and owner, but (surprisingly to me) as an engineer.

    The public took to the new steamboats quickly. The reason is clear: speed. There was, at the time, no quicker way to get from point A to point B if there were a water route between them. The speed of the boat was immediately obvious to the simple observer, and easy to verify by noting departure and arrival times. To prove whose boat was the fastest, there were races.

    Vanderbilt's steamboats were judged by a clear standard: whose was the fastest? The criteria were easy to measure.

    Antiseptic Surgery

    The benefits of antiseptic surgery, as introduced by Joseph Lister, were clear: instead of a large number of patients dying of infection after surgery, they would live. Ego clearly played a role in resisting the adoption of the new method. But, to be fair, there is another important factor.

    What made surgery different from steamboats? They were both major technical advances. They both involved major changes in what you did and how you did it — more so with boats than with surgery! So why did steam catch on quickly, even though it required whole new boats of radically different design and operation, while the antiseptic method was resisted for decades, even though it was subsidiary to the surgery itself, which was left largely unchanged?

    Boats and Surgery

    The fact that steamboats were faster than sailboats was easy and unambiguous to measure, while the surgery outcomes were difficult and ambiguous to measure.

    The time of each boat trip is easy to measure. It's just a time duration. When two boats run side by side, anyone can see which one moves more quickly. By contrast, every surgery is different. The patient is different, the trouble being fixed is different, and the ultimate outcome may not be determined for weeks. Many surgery patients continued to die with antiseptic methods because antisepsis wasn't the only factor influencing the outcome. Furthermore, excellent surgeons who were dirty could save patients who would have been killed by crappy surgeons who happened to use antiseptic methods, since after all not every patient got infected.

    In retrospect, it's completely maddening that surgeons failed to be swayed by the arguments and evidence in favor of Lister's carbolic acid methods, and ego certainly played a role. But the case of the rapid acceptance of the more radical change to steam in boats makes it clear that something more than ego is at work here. Simply put: how comparable and measurable are the outcomes of the new technology? With steamboats, you can tell the difference in seconds with the naked eye, and verify it with a stopwatch. No arguments. With surgery, the cases are not clearly and unambiguously comparable, statistics are needed, and there is major variability. There is room for arguments.

    Software, steamboats and antiseptic surgery

    Is any given advance in software like moving from sailboats to steamboats, or is it more like adding antiseptic methods to surgery?

    That's easy: unlike straightforward competitions like races, every software project is different. In a race, the competitors take off from a starting line at the same time, and whichever crosses the finish line first is the winner. Simple! But in the real world of software, every project is different; you can always point to differences in requirements, conditions, deployment, or other things to explain why this project took more time and resources than that project. It sounds like software is kind of like surgery!

    Conclusion

    It is my personal experience and judgment that ego can play a significant role in explaining why many software groups stay mired in the same old methods, getting the same lousy results, year after year. But I think that if software projects were as comparable as transportation schedules, the evidence would simply force more rapid change, like it or not, on intransigent software groups. Because of how genuinely challenging it is to compare software projects to each other, it is at least understandable that only the most enterprising and eager-to-be-the-best software groups seek out and adopt the very best methods.
