• Summary: Blackliszt and David B Black Writing

    I have written over 540 blog posts on Blackliszt.com; they have gotten over 170,000 views. I’ve published five books on computer software. This is an introduction with links to summary posts on the topics I’ve written about.

    Throughout my career I followed the low-level strict rules, i.e., my programs compiled, ran and did what they were supposed to do. As the years passed, I learned that most of the widely accepted high-level rules consume huge amounts of time and effort and produce mostly bad software. I first learned this as a programmer and CTO and then as a VC investor, seeing that the winners often follow a different set of rules. I’ve tried to explain the status quo and the better alternatives in my blog and books. The contrarian ideas and methods I describe are ignored and/or forbidden in software teaching and management. They represent a revolution in quality, efficiency and effectiveness, leading to superior results in a fraction of the elapsed time and often less than a tenth of the labor. Along the way, I enjoy making fun of empty fads like blockchain as strong examples of the fashion-driven, floating-in-the-clouds nature of much software “thinking.”

    I have had extensive experience with the medical system as an investor and patient. To say that massive improvements are possible is both obviously true and false. My posts detail some of the problems and solutions, which supports the “obviously true” response. However, so long as the current crew of players (insurance, financing, pharma, regulation and training, incentives for treatment) remains in place, the only sane response is to recognize that while trivial improvements are possible, massive ones are not.

    For details of my experience, see https://www.linkedin.com/in/david-black-49b2b4a2

    The Summaries

    Computer Science doesn’t come close to being a science. It takes baseless beliefs and promotes them as proven truths.

    https://blackliszt.com/2023/04/summary-computer-science.html

    For something that claims to be a science, software is amazingly fashion-driven.

    https://blackliszt.com/2023/07/summary-software-fashions.html

    Because software is invisible to most people, it’s described in metaphors, which are mostly inappropriate and lead to trouble. Even the programmers use the metaphors, which often support methods that are consistently bad.

    https://blackliszt.com/2023/06/summary-bad-software-metaphors-lead-to-bad-results.html

    If you want to understand what’s going on in software, it’s natural to turn to the experts, insiders who share their knowledge so that most of us can understand the mysteries of software. There’s just one little problem: software experts are as valuable as doctors were when blood-letting was standard practice.

    https://blackliszt.com/2023/11/summary-software-experts.html

    Everyone wants to know what’s “new” in software; the new thing is most often just a fashion. Innovation? It’s supposed to be what fuels software and changes the world. In reality, there is almost no innovation in software itself, and software-driven innovation is most often what I call in-old-vation, i.e., an advance that’s proven in one domain that often takes decades to crawl into new domains. This summary also covers business strategies for taking software innovations to success.

    https://blackliszt.com/2023/07/summary-software-innovation.html

    How do you win with software? Nearly everyone starts with requirements, makes estimates and is judged on whether they deliver on time and on budget. This optimizes for expectations and is a proven path to failure. The winning path optimizes for speed, customer satisfaction and continuous quality. It’s what the people who need to win do.

    https://blackliszt.com/2023/07/summary-wartime-software-to-win-the-war.html

    Software people don’t learn the fundamentals of computing; they may be aware of any one of them if asked, but don’t apply them. In any other field, knowing and applying the fundamentals — like blocking and tackling — is demanded. Not in computing.

    https://blackliszt.com/2023/06/summary-software-fundamentals.html

    A key aspect of software fundamentals is software architecture. A surprising amount of the received wisdom about architecture is counter-productive, the goals are rarely discussed and the key to effective software architecture, Occamality, is largely ignored. Occamality is a foundational tool of wartime software: get the job done quickly and flawlessly, meet the real need and beat the competition.

    https://blackliszt.com/2023/08/summary-occamality-and-software-architecture.html

    If you want to architect and build good software, a major helpful factor that is usually ignored is to learn the history and evolution of software. Patterns repeat and knowing where you are in an evolution can help you predict what is likely to happen. Not to mention avoiding going down paths that usually fail.

    https://blackliszt.com/2023/08/summary-computer-software-history-and-evolution.html

    What do software people do? They write programs. What language do they use? What are programming languages, anyway? Are there good ones and bad ones? How have things changed?

    https://blackliszt.com/2023/06/summary-software-programming-languages.html

    Software people are indeed people, but a good fraction of them are different from non-programming people, with differences that are important to understand. Similarly, there are important differences in how to manage them most effectively.

    https://blackliszt.com/2023/05/summary-software-people.html

    Regulations are a problem in many areas, but are particularly pernicious in software and technology. By commanding that developers use largely obsolete methods for building technology, they produce a huge increase in cost and time with no improvement in quality or outcome. This is a direct result of ignorant bureaucrats telling technical people in great detail HOW to get things done instead of simply defining WHAT should be accomplished, like the criminal law does when it states simply that murder should not be committed.

    https://blackliszt.com/2023/11/summary-regulations.html/

    Computer security is assured by mountains of regulations, tools, standard practices and audits. But computer security continues to be breached without pause. There are major improvements possible that are largely ignored.

    https://blackliszt.com/2023/05/summary-the-ongoing-problem-of-computer-security.html

    Software quality assurance is an important specialty within the non-science of computing, but in spite of all the time and money spent, quality continues to be a major issue. There are solutions that have been proven in practice that are ignored by the experts and authorities.

    https://blackliszt.com/2023/04/summary-software-quality-assurance.html

    Everyone knows that software project management is essential to producing software that works, on time and on budget. In spite of decades of “innovation,” it doesn’t get better. The winners follow a different set of rules.

    https://blackliszt.com/2023/04/summary-software-project-management.html

    Huge amounts of attention are paid to AI, ML, Big Data, and the rest. It’s a fashion that too often results in vast amounts of money being spent with little to show for it. Results can be amazing when achieved, but it takes a combination of deep knowledge and common sense to do it.

    https://blackliszt.com/2023/08/summary-ai-machine-learning-big-data-math-optimization.html

    Data centers and networks are the underpinning of computing. The transition to the Cloud has been a big factor, along with the rapid evolution of hardware and the glacial evolution of software. Net Neutrality has been an annoying furor, utterly divorced from reality.

    https://blackliszt.com/2023/08/summary-computer-data-centers-and-networking.html

    The vast majority of major organizations studiously follow the established methods of software design, construction, operation and security. They learn from academia, industry advisory groups and leading experts. The results are consistent. Disastrously consistent. This is true not only of governments but also of cool big tech companies including Google, Facebook, Twitter and the rest.

    https://blackliszt.com/2023/11/summary-software-in-government-big-business-and-big-tech.html

    Cryptocurrency and blockchain are highly visible for many reasons, including the fact that crypto fuels the explosion of ransomware. Leading thinkers are convinced that blockchain is a technology that will revolutionize computing.

    https://blackliszt.com/2023/01/cryptocurrency-and-blockchain-summary.html

    Learning about the mechanics and technology of the healthcare system provides an education on the huge opportunities to reduce costs, increase efficiency and make things better for patients and providers.

    https://blackliszt.com/2023/08/summary-the-business-of-healthcare-medicine-and-insurance.html

    I’ve had some personal experiences with the healthcare system that have resulted in a sobering education about what really goes on. Healthcare is incredibly valuable with many dedicated professionals involved, but there are some serious flaws.

    https://blackliszt.com/2023/08/summary-the-medical-industrial-complex.html

    I have published five books on these topics. The books are Software Business and Product Strategy, Software Quality, The Disease of Software Project Management, Wartime Software, and Software People. Here are introductory blog posts to them with links to the books.

    https://blackliszt.com/2023/05/summary-david-b-black-books.html

     

  • What’s So Great about being Normal?

    What's so good about people who are "normal?"

    I'm a software guy. As in hard-core programming for decades. Many of the great programmers I've known aren’t "normal." Managers usually don’t like them. They don’t “fit in.” They’re not “team players.” And so forth. Society puts labels on such people, and thinks they need “treatment” to “fix” them. Would such programmers have been "better adjusted" had they been diagnosed early in life and given all sorts of therapy?

     https://blackliszt.com/2012/03/nerds-autism-deficiency-advantage.html

    Great programmers aren’t perfect, of course. But then who is? If you want great software, you should want great programmers. You should also want software development methods that fit their preferred way of working, instead of the usual MBA-style project management that is dictated, resulting in the near-universal disaster of software development.

    The way things work, all the criticism flows from the world of the “normal” to the small group of outstanding programmers. It’s long overdue to turn the tables. As a start on that project, here are some of the highlights of how many "normal" people act:

    Loads of "Normal" people:

    lie a great deal;

    rarely say what they really think;

    sow misinformation and confusion constantly;

    can't remember much or focus on anything for long;

    are hopelessly vague and consumed with generalities;

    are incapable of paying attention to details, or remembering them;

    constantly try to trick others into doing things they wouldn't otherwise do, or thinking things they wouldn't normally think;

    are emotionally cruel and bullying;

    form cliques and feel superior, while trying their best to make others feel inferior;

    become masters of incredibly complex, ever-changing games involving interpersonal relationships;

    use their ability to manipulate others to advance themselves;

    associate success at this game with personal superiority;

    pride themselves on shaping their opinions and behaviors to the situation, thinking of this as being adaptable and flexible, instead of untrustworthy and without moral fiber.

    This list might strike some people as being a description of people who are psychopaths or sociopaths. I wouldn't disagree. The elites of academia, business and society are heavily populated by the highest-performing “normal” people in terms of the list above.

    Isn’t “normal” wonderful? Normal people run the world. Do you think it’s possible that many of the things that are screwed up about the world have something to do with the domination of “normalcy?”

    Hey, "normal" people — if you actually want what you say you want, which is software that works and does what it's supposed to do, produced quickly, then maybe you should think about getting software people who are good at producing that and let them have at it in the way they think best. 

  • Chatgpt and the Covid Vaccine

    We've learned a great deal about the covid vaccine since it came out. Part of what we learned is that the statistics underlying "safe and effective" contradict what all the authorities want us to believe. There's a statistic called NNT, disclosed in the FDA's EUA (Emergency Use Authorization) document, showing that only about one in a hundred people who took the shot could possibly be helped by it — and not every time! And there's been all the emerging real-world data.
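
    For readers unfamiliar with it, NNT (“number needed to treat”) is simply the reciprocal of the absolute risk reduction: how many people must take the treatment for one of them to benefit. Here is a minimal sketch of the arithmetic in Python; the case counts are illustrative, roughly in the range of the published trial figures (the exact numbers are in the EUA document and my NNT posts):

```python
# NNT (number needed to treat) = 1 / absolute risk reduction (ARR).
# The case counts below are illustrative, roughly in the range of the
# published phase-3 trial; see the EUA document for the exact figures.

placebo_cases, placebo_n = 162, 18_325   # confirmed cases / participants, placebo arm
vaccine_cases, vaccine_n = 8, 18_198     # confirmed cases / participants, vaccine arm

risk_placebo = placebo_cases / placebo_n   # absolute risk without the shot
risk_vaccine = vaccine_cases / vaccine_n   # absolute risk with the shot
arr = risk_placebo - risk_vaccine          # absolute risk reduction
nnt = 1 / arr                              # shots given per case prevented

print(f"ARR = {arr:.2%}, NNT = {nnt:.0f}")
# Prints roughly: ARR = 0.84%, NNT = 119 -- on these numbers, only about
# one person in a hundred-plus could possibly be helped, the point made above.
```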

    So how knowledgeable (like about NNT) and up to date is Chatgpt?

    Question to Chatgpt: Are covid vaccines safe and effective?

    Yes, COVID-19 vaccines are both safe and effective—especially at preventing severe illness, hospitalization, and death.

    Safety

    COVID-19 vaccines authorized by major health agencies (like the FDA, CDC, WHO, and EMA) have gone through:

    • Large clinical trials with tens of thousands of participants
    • Ongoing safety monitoring in millions of people worldwide
    • Transparent, independent review of data

    Common, mild side effects:

    • Sore arm
    • Fatigue
    • Headache or low fever
    • Lasts 1–3 days

    Rare risks (but very uncommon compared to COVID-19 itself):

    Condition | Risk (approx.) | Notes
    Myocarditis (mostly in young males after mRNA vaccines) | ~12 per million second doses | Usually mild and resolves with rest
    Blood clots (linked to J&J/AstraZeneca) | ~4 per million | Risk led to preference for mRNA vaccines in many places
    Severe allergic reaction (anaphylaxis) | ~2–5 per million | Treatable; monitored after vaccination for 15 mins

    Conclusion: For nearly all people, COVID-19 vaccines are safer than the virus itself.

    Effectiveness

    COVID-19 vaccines:

    • Greatly reduce risk of severe disease and death
    • Reduce—but don't eliminate—transmission, especially newer variants
    • Are most effective when boosted appropriately (e.g., annual boosters for high-risk individuals)

    Example (mRNA vaccines):

    • 90–95% effective at preventing severe illness with earlier variants
    • With new variants like Omicron, protection from infection wanes, but protection from hospitalization/death remains strong, especially with boosters

    Boosters and Variants:

    • COVID evolves, so boosters help adapt to new strains (like the flu shot)
    • People 65+ or with health conditions are strongly encouraged to stay up-to-date

    🧠 Bottom Line:

    • Yes, COVID-19 vaccines are safe, well-studied, and highly effective—especially at preventing severe outcomes.
    • The risks of serious COVID illness far outweigh the rare risks of vaccination.

    So how knowledgeable and up to date is Chatgpt on the Covid vaccines? Not knowledgeable about basic medical statistics, and grossly out of date.

    First there are my posts on NNT showing that the vendors and the FDA knew perfectly well that the shots were at best useless and at worst actively harmful.

    Then there is the massive body of evidence that has accumulated to show that the shots are actively harmful. Here is one source among many.

    Sure, Chatgpt does some neat stuff. But why would any person actually use it for anything other than amusement? You really shouldn't use it for health, as I demonstrate with blood pressure, cholesterol and colonoscopies here.

     

  • Will AI Give Better Healthcare Advice?

    A substantial and growing number of people are turning to AI tools like Chatgpt for medical advice — even doctors! Can you get valuable health advice from a generative AI tool like Chatgpt? Answer: the experiments described below show that you’ll get the advice that is most beneficial to the medical-industrial complex (pharma and medical establishments) — which all too often will create significant medical problems, including more than doubling your chances of going blind!

    Background

    Various forms of AI have been around for decades. Sometimes there’s been a hype-bubble, which usually quietly fades away when all the promised wonders fail to emerge. In other much-less-publicized cases there have been amazing practical advances. Often the advance isn’t from an invention but from a proven algorithm FINALLY getting applied to a problem to which it’s always been well-suited. See this for an example of one that's stretched out over 50 years and is still going.

    Today’s LLMs are forging new extremes (I won’t say heights) in hype. They’re changing the world! Students are using them to do their homework, while billions are being poured in by groups that want to lead this world-changing new technology. Many people are finding AI-generated answers to questions more attractive than search results from Google, which will change the massive flow of search-related advertising.

    There’s little doubt that most people find it more helpful to get a written-out answer to a question they put into a box rather than an endless list of search results to sift through and evaluate. It’s like getting an answer from an authoritative doctor instead of figuring it out yourself. Even better – what if the doctor is out of touch with the latest science – the bot knows it all!

    But how accurate is the AI "doctor" actually — are you getting the truth, or what the profit-seeking medical establishment wants you to hear? 

    Experiments with Chatgpt

    I asked Chatgpt some simple questions about healthcare specially chosen to expose bias and error. In each case, I knew there was a mainstream approach to the issue that generates huge revenue streams, and that in each case there was solid, proven evidence that the mainstream approach was badly broken, yielding profits to the medical industry while actively harming patients.

    The subjects I chose are ones that I have studied and for which there is solid, proven evidence that the mainstream advice is actively harmful.

    High blood pressure

    Question: What level of blood pressure is too high and should be treated?

    Chatgpt answer:

    Blood pressure that's too high and should be treated depends on the specific numbers and whether you have other risk factors like diabetes, kidney disease, or heart disease. Here’s a general breakdown, based on the American College of Cardiology (ACC) and American Heart Association (AHA) guidelines:

    Normal Blood Pressure:

    • Systolic (top number): < 120 mmHg
    • Diastolic (bottom number): < 80 mmHg

    Elevated Blood Pressure (Not yet hypertension):

    • Systolic: 120–129 mmHg
    • Diastolic: < 80 mmHg
      Action: Lifestyle changes recommended. No medication yet.

    Stage 1 Hypertension:

    • Systolic: 130–139 mmHg
    • Diastolic: 80–89 mmHg
      Action:
    • Yes, treatment is considered, especially if:
      • You have heart disease, diabetes, or a 10-year risk of heart attack/stroke over 10%.
      • Lifestyle changes are the first step. Medication may follow if risk is high.

    Stage 2 Hypertension:

    • Systolic: ≥ 140 mmHg
    • Diastolic: ≥ 90 mmHg
    • Action:
    • Yes, medication is typically recommended along with lifestyle changes.

    Hypertensive Crisis (Seek immediate help):

    • Systolic: ≥ 180 mmHg
    • Diastolic: ≥ 120 mmHg
      Action:
    • Emergency care needed, especially if you have symptoms like chest pain, shortness of breath, vision changes, or confusion.
    Summary Chart:

    Category | Systolic | Diastolic | Action
    Normal | <120 | <80 | No treatment needed
    Elevated | 120–129 | <80 | Lifestyle changes
    Stage 1 Hypertension | 130–139 | 80–89 | Consider meds if risk factors
    Stage 2 Hypertension | ≥140 | ≥90 | Lifestyle + medication
    Hypertensive Crisis | ≥180 | ≥120 | Emergency care

    Let me know your blood pressure numbers and health background if you'd like help interpreting your specific situation.

    The above answer from Chatgpt might as well have been copied directly from the leading medical authorities in the field, which it even names. Expert opinion on the subject is united.

    https://blackliszt.com/2022/05/the-experts-are-clear-control-your-blood-pressure.html

    When you dig past the pronouncements of authorities, you discover that hypertension isn’t even a disease.

    https://blackliszt.com/2022/06/the-facts-are-clear-hypertension-is-not-a-disease.html

    We're told to take pills to "cure" this thing that isn't a disease. Hmm. Have you ever heard of pills whose only purpose is to lower your body temperature when you have a fever without addressing the underlying cause (infection, etc.)? Let's dig deeper.

    High blood pressure medication dangers 

    Question: Is there any danger in taking blood pressure medication prescribed by a doctor?

    Chatgpt answer:

    Yes, there can be risks or side effects from taking blood pressure medication, even when prescribed by a doctor—but for most people, the benefits outweigh the risks when blood pressure is high enough to need treatment.

    🔹 Common Risks or Side Effects (vary by drug type):

    Drug Class | Examples | Possible Side Effects
    Diuretics ("water pills") | HCTZ, furosemide | Frequent urination, low potassium, dehydration
    ACE inhibitors | Lisinopril, enalapril | Dry cough, elevated potassium, kidney effects
    ARBs | Losartan, valsartan | Dizziness, high potassium, rare allergic reactions
    Calcium channel blockers | Amlodipine, diltiazem | Swelling in ankles, constipation, dizziness
    Beta-blockers | Metoprolol, atenolol | Fatigue, slow heart rate, cold hands/feet

    Other Possible Concerns:

    • Too-low blood pressure: Can cause dizziness, fainting, or falls—especially in older adults.
    • Drug interactions: With other prescriptions, supplements, or alcohol.
    • Kidney effects: Some medications may affect kidney function, especially in people with pre-existing kidney disease.
    • Electrolyte imbalance: Especially with diuretics.

    But here's the key:

    When used correctly, blood pressure medications reduce your risk of heart attack, stroke, kidney failure, and death. Doctors usually start with low doses and monitor for side effects. Most side effects can be managed by:

    • Adjusting the dose
    • Switching medications
    • Monitoring labs (like kidney function or electrolytes)

    Bottom Line:

    • Yes, there are potential dangers.
    • No, they are not common or severe for most people.
    • Yes, it’s safer to take them under doctor supervision than to leave high blood pressure untreated.

    If you’re having side effects or concerned about your medication, tell your doctor. Never stop taking BP meds abruptly without medical advice—it can lead to dangerous rebound hypertension.

    Unfortunately, the side effects of blood pressure pills are far more common, and far more destructive, than the "experts" would have you believe.

    https://blackliszt.com/2022/07/the-destructive-treatment-of-hypertension.html

    High blood pressure vision dangers 

    Note that the answer to my prior question about side effects said nothing about vision. So I asked it.

    Question: can blood pressure medication hurt my vision?

    Chatgpt answer:

    Yes, some blood pressure medications can affect vision, though this is not very common. When it does happen, it's usually mild and reversible. Here’s how this can occur:

    Ways Blood Pressure Medication Can Affect Vision:

    Cause | Explanation | Examples
    Too-low blood pressure (hypotension) | If BP drops too much, it can reduce blood flow to the eyes and brain, causing blurry vision or lightheadedness. | Any BP med at too high a dose
    Fluid retention or swelling | Some calcium channel blockers (like amlodipine) can cause swelling, including around the eyes. | Amlodipine
    Electrolyte imbalance | Diuretics can alter potassium/sodium levels, sometimes leading to visual disturbances. | Hydrochlorothiazide
    Rare allergic or inflammatory reactions | ACE inhibitors or ARBs can rarely cause inflammation in the eyes (like uveitis). | Lisinopril, losartan
    Beta-blockers (especially eye drops) | If used for glaucoma, beta-blocker eye drops can cause blurry vision, dry eyes, or even systemic effects. | Timolol

    Signs to Watch For:

    • Sudden blurred vision
    • Double vision
    • Eye pain or pressure
    • Flashing lights or visual floaters
    • Loss of part of your visual field

    If you experience any of the above—especially if sudden—it’s important to seek prompt medical evaluation.

    What to Do:

    • Tell your doctor if your vision changes after starting or adjusting medication.
    • Don’t stop taking your medication without medical guidance.
    • Your doctor might:
      • Adjust your dose
      • Switch to a different medication
      • Refer you to an eye specialist if needed

    Important Note:

    Uncontrolled high blood pressure is far more likely to damage your vision long-term—through conditions like hypertensive retinopathy, retinal artery occlusion, or stroke affecting the visual cortex—than well-managed medication is.

    The "expert" answer to the question about vision is that you're in more danger of vision problems by NOT taking medications.

    There's just a little problem, namely a large study demonstrating that taking blood pressure pills more than doubles your chances of getting AMD, to over 19%! What's AMD? Age-related Macular Degeneration, which makes you blind.


    https://blackliszt.com/2022/07/blood-pressure-pills-can-make-you-blind.html

    Wouldn't it be nice if Chatgpt or your doctor would mention such a thing?

    Cholesterol

    Question: If my doctor prescribes statins to lower my cholesterol, should I take them?

    Chatgpt answer:

    If your doctor prescribes statins, there's usually a good reason—most often to lower your risk of heart attack, stroke, or other cardiovascular disease. That said, whether you personally should take them depends on your overall risk, not just your cholesterol numbers.

    When Statins Are Clearly Recommended:

    You should strongly consider taking statins if any of the following apply:

    1. You’ve had a heart attack, stroke, or other cardiovascular event
    2. Your LDL (“bad”) cholesterol is ≥190 mg/dL
    3. You have diabetes and are over age 40
    4. Your 10-year risk of heart disease or stroke is ≥7.5% (based on factors like age, BP, smoking, cholesterol)
    Weighing the Risks and Benefits:

    Benefits of Statins | Possible Side Effects
    Lower LDL cholesterol | Muscle aches or cramps (in 5–10%)
    Reduce risk of heart attack or stroke (by 20–30%+) | Elevated liver enzymes (rare)
    Help stabilize plaque in arteries | Slight increase in diabetes risk in some
    May reduce inflammation in arteries | Brain fog or memory issues (very rare, reversible)

    Most people tolerate statins well, and serious side effects are rare. If side effects do occur, doctors often:

    • Lower the dose
    • Switch to a different statin (some are better tolerated)
    • Try alternate-day dosing

    Should you take them?

    Ask yourself (and your doctor):

    • Do I have high cardiovascular risk? (Not just cholesterol, but blood pressure, diabetes, smoking, age, etc.)
    • Do I have a family history of early heart disease?
    • Is my lifestyle enough, or do I need medication support?
    • Do I understand the risks of not treating high cholesterol?

    Bottom Line:

      • If your doctor recommends statins based on guidelines and your risk, yes—it’s usually a smart and safe choice.
      • If you're unsure, ask for a 10-year risk calculation and discuss the numbers and options.
    • Lifestyle changes (diet, exercise, weight loss) can complement or sometimes replace statins, especially if your risk is borderline.

     

    Lots of things, like eating saturated fat, are supposed to increase your blood cholesterol, which leads to heart trouble, they say.

    https://blackliszt.com/2022/02/the-experts-are-clear-keep-your-cholesterol-low.html

    The widely-prescribed drugs that lower cholesterol don’t help with heart disease and cause problems of their own.

    https://blackliszt.com/2022/04/the-facts-are-clear-dont-take-cholesterol-drugs.html

     

    Colonoscopy

    Question: Is getting a recommended colonoscopy a good thing to do?

    Chatgpt answer:

    Yes, getting a recommended colonoscopy is a very good thing to do—especially if you're age 45 or older, or have risk factors like a family history of colon cancer. It's one of the most effective tools for preventing colon cancer, not just detecting it.

    Why a Colonoscopy Matters:

    Benefit | Why It’s Important
    Prevents cancer | Doctors can remove precancerous polyps before they turn into cancer
    Detects cancer early | If cancer is found early, it’s usually much more treatable
    Reduces risk of dying from colon cancer | Colonoscopy screening significantly lowers mortality from colon cancer
    Usually only every 10 years | If normal, you typically don’t need another for a decade

    Who Should Get One?

    • Everyone age 45 and older, even if you have no symptoms
    • Earlier or more often if you have:
      • Family history of colon cancer or polyps
      • Personal history of inflammatory bowel disease (e.g., Crohn’s, ulcerative colitis)
      • Certain genetic conditions (like Lynch syndrome or FAP)

    Risks (Rare but Possible):

    • Bleeding (especially if a polyp is removed)
    • Perforation (a small tear in the colon wall, very rare)
    • Complications from sedation (generally safe)
    • Mild cramping or bloating after the procedure

    These are rare, and for most people, the benefits far outweigh the risks.

    Common Concerns (and Reassurance):

    • “The prep is awful”: It’s not fun, but newer prep options are more tolerable.
    • “What if they find something?”: That’s the point. Finding it now is far better than later.
    • “I feel fine”: Colon cancer often has no symptoms until it's advanced.

    Bottom Line:

    • Yes, you should do it if it’s recommended.
    • It’s one of the few tests that can prevent cancer, not just detect it.
    • It's safe, effective, and could save your life.

    Want to talk through your personal risk or what the prep is like? I can walk you through it.

     

    Screening for colon cancer is a multi-billion dollar industry. Chatgpt is strongly in favor of it, no misgivings. However, the only large-scale study that’s ever been done shows that it doesn’t result in longer life.

    https://blackliszt.com/2023/01/value-of-colon-cancer-screening.html

     

    Conclusion

    Generative AI, the hot current trend in AI, is supposed to make things better. Everyone wants to make sure they don't miss the boat. The problem is the same problem we have with experts in general: they are often corrupt, self-serving, often wrong, and explain to us with deep, authoritative voices why we should do what they say. It's SCIENCE! It's proven!

    As I have shown above, today's AI just makes things worse, by tricking its users into believing it's telling the truth, when in reality all that's happening is that it's repeating the same destructive, harmful nonsense that the human experts assert as scientific truth. 

    See this for another angle on AI in medicine, focusing on a technology that will actually produce results. https://blackliszt.com/2025/02/can-ai-improve-medical-diagnosis.html

    See this for more on AI: 

    https://blackliszt.com/2023/08/summary-ai-machine-learning-big-data-math-optimization.html

     

     

  • Voter ID and Paper Ballots Don’t Prevent Cheating

    Voter ID and Paper Ballots are a great first step for preventing voting fraud. But they aren’t nearly enough. What happens when the paper ballots are submitted? How are they counted – by humans or by machines? How can we be sure that what’s counted matches what’s on the paper ballots? How are the totals counted at each voting location, forwarded to a central place and totaled? These are each opportunities for massive fraud to take place – unless systems are put in place to prevent it.

    There are low-tech ways to solve each of these problems that can and should be implemented quickly. However, people determined to manipulate the results will be able to get around the low-tech methods in some places. Open-source, completely auditable high-tech methods following proven success patterns can be built that eliminate the remaining opportunities for cheating. Both the low-tech and high-tech methods enable continuous counting of ballots as they are submitted, with visible running totals and final results possible minutes after the polls close. In either case, no custom voting equipment would be required anywhere.

    I can think of no reason other than inertia and the desire to enable cheating why neither of these approaches has been implemented in the US. The low-tech one should be implemented immediately and the high-tech one as soon as possible – it should take no more than weeks for a first version to be implemented and months for a solid version to get working. Both approaches can and should be implemented and operated in parallel, each serving as a check for the other.

    What’s the problem?

    Everyone talks about Voter ID and paper ballots. Good topics. No one seems to talk about what happens next in the “back office” of the voting centers where ballots are counted, usually by machines even with paper ballots.

    One important issue is that every voting district has its own unique ballot! You experience this when you vote, but with all the concentration on national and state-wide candidates, it’s easy to ignore the fact that county and local races mean each locality (city or town) needs its own unique ballot. When the paper ballots are counted by machine, a local administrator has to use complex administrative software to customize the counting machine for the local ballot. This is an opportunity for error and cheating. Of course the machine also generates the totals and somehow – by means that are never disclosed! – sends them to a central location where they are summed – again in secret! This is the problem: the secret, unauditable local back office operations, usually with proprietary computer hardware and software.

    Who else has renounced or ignored voting automation?

    Computer automation is everywhere. Isn’t it ignorant and backward to resist or, worse, to throw out computers? Is it even possible to handle huge voting rolls without computers?

    Yes it is.

    The United States remains one of the few major democracies in the world that continue to allow computerized vote counting—not observable by the public—to determine the results of its elections. Countries such as Germany, Norway, the Netherlands, France, Canada, Denmark, Italy, the United Kingdom, Ireland, Spain, Portugal, Sweden and Finland protect the integrity and trust of their elections with publicly observable hand-counting of paper ballots.

    Here is more.

    Some countries have implemented automated counting but have dumped it. Japan stopped using voting machines in 2018. The Netherlands stopped in 2007.

    Norway trialed voting machines in 2003 but hasn’t used them since. El Salvador moved to hand-counting after election irregularities.

    In France, machine voting was allowed as an experiment starting in 2002, but the purchase of new machines has been frozen since 2008 due to security concerns. Only a few dozen towns still use them.

    How about mail-in voting? Terrible idea. Consider France, which has more than 48 million voters.

    People who can’t go to the polls for various reasons can authorize someone else to vote for them.

    To do so, a voter must fill out a form ahead of time and bring it to a police station. A person can be the proxy of no more than one voter living in France — and potentially one additional person living abroad.

    Up to 7% of people voted by proxy in the last presidential election five years ago.

    Mail-in voting was banned in 1975 amid fears of potential fraud.

    We should follow the lead of other countries that have reverted to hand-counting because, in the case of voting, it’s simpler, less expensive and more reliable.

    Are there people who argue strongly in favor of machine counting with invisible software in locked back offices? Yes. Their arguments sound impressive unless you look at the facts and reality.

    The low-tech solution

    The low-tech solution is simple: hand-counting and hand-tallying of the ballots. Lots of places do it quickly, accurately and in great volume. Nothing needs to be invented. It just needs to be systematized, probably state-by-state, practiced to train people and weed out issues, and then implemented with full openness, including on-site auditors and cameras with visual and audio feeds that are publicly available.

    The high-tech solution

    I like this approach because I'm a long-time software guy, but truthfully, we can do just fine with the low-tech approach by itself.

    The high-tech solution has never been implemented, to my knowledge, but can be done following widely proven success patterns. It starts with the voter placing his paper ballot in an off-the-shelf paper scanner that scans the ballot, stores it in the cloud, and displays it on the screen. The voter verifies that it’s a good image. Then the ballot is “read” by multiple pieces of software created by different groups and the totals displayed. The voter again verifies the accuracy. The votes are then sent from the cloud ballot readers to multiple cloud totallers, which make their results publicly readable in real-time, with a transaction stream that shows the origins and ID of each ballot that has been added. This enables each total to be tracked back to the physical ballots that contributed to it. Physical auditing can and should be done to expose cheats. Even better, run the low-tech and high-tech solutions in parallel, each serving as a check for fraud in the other.
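
    To make that architecture concrete, here is a minimal Python sketch of the auditable-tally idea: each scanned ballot gets an ID derived from its image, multiple independently written readers must agree on how it reads, and every count is appended to a public transaction log so any running total can be traced back to specific ballots. Everything here (the names, the toy “readers”, the in-memory log) is a hypothetical illustration, not a specification.

```python
# Hypothetical sketch of an auditable ballot-tally pipeline (not a spec).
import hashlib
from collections import Counter

def ballot_id(image_bytes: bytes) -> str:
    """Stable ID derived from the scanned ballot image itself."""
    return hashlib.sha256(image_bytes).hexdigest()[:16]

def reader_a(image_bytes: bytes) -> str:
    # Stand-in for one group's independently written ballot-reading software.
    return image_bytes.decode().split(":")[1]

def reader_b(image_bytes: bytes) -> str:
    # Stand-in for a second group's reader; in reality a separate codebase.
    return image_bytes.decode().split(":")[1]

transaction_log = []   # public, append-only: (ballot_id, candidate)
totals = Counter()     # running totals, publicly readable in real time

def process_ballot(image_bytes: bytes) -> None:
    bid = ballot_id(image_bytes)
    readings = {reader_a(image_bytes), reader_b(image_bytes)}
    if len(readings) != 1:
        # Readers disagree: route the physical ballot to the hand count.
        raise ValueError(f"readers disagree on ballot {bid}")
    candidate = readings.pop()
    transaction_log.append((bid, candidate))   # traceable back to the image
    totals[candidate] += 1

# Simulated scans; real ones would be images from off-the-shelf scanners,
# each differing pixel-by-pixel and so producing a unique ID.
for scan in [b"ballot:Smith", b"ballot:Jones", b"ballot:Smith"]:
    process_ballot(scan)

print(totals)           # Counter({'Smith': 2, 'Jones': 1})
print(transaction_log)  # every total traceable to specific ballot IDs
```

    In a real system the readers would come from different groups, the log and totals would be replicated across independently operated cloud services, and any disagreement or audit mismatch would send the physical ballot into the hand-count process described above.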

    Summary

    The current voting system gives losers of elections lots of opportunities to claim cheating. And cheating probably has taken place! Because of the system’s opacity, we have no way of knowing whether or how much cheating has taken place. Moving to at least the low-tech system described here will make voting completely transparent, removing the cause of widespread suspicions that a group’s preferred candidate has lost. It’s not mysterious, expensive or difficult. Let’s do it!

    Here is more about the problem of local control.

    Here are details on how the high-tech solution could be implemented.

  • Can AI improve medical diagnosis?

    A version of what most people think of as AI (enhanced random forest decision tree machine learning) can indeed first automate medical diagnosis, and then go on to use the feedback data to personalize and enhance medical diagnosis. The result should be fast, accurate and cost-effective, and should greatly reduce medical costs. However, not only will there be massive resistance, there is an even larger danger with automation.

    The training of doctors

    Doctors undergo a huge amount of expensive, challenging education. After high school, they have to get through 4 years of college, 4 years of medical school, and a minimum of 3 more years of internship/residency. They have to pass multi-hour tests along the way. By the time they fully enter independent practice, they have huge amounts of knowledge in their heads, along with lots of practical knowledge about diagnosis, treatment and outcomes. And of course they are thoroughly inculcated with a wide variety of medical standards, which they are required to meet in order to remain in good standing.

    It doesn’t end there! There are ongoing efforts to organize and systematize this knowledge to make it easier to apply in practice. There are general clinical practice guidelines. There are highly specific flow charts for cases published in journals. There are further modifications of guidelines, often called clinical pathways, created by local care groups that adjust guidelines for their own practices and standards. None of this replaces the extensive training of doctors – it’s meant to add to what they know and/or refine/correct what they think they know. How can they possibly keep up?

    Introducing AI to these amazing doctors

    Now they’re in practice in a large medical system and some administrator comes along and tells them some AI program (or whatever) is:

    • Available for them to consult if needed, or
    • Will look over their shoulder and evaluate everything they do, or
    • Somewhere in the middle.

    How is this going to go? We already know. The multi-billion dollar flop of IBM’s Watson Health gives us the answer. Here is the post I wrote at the start of the Watson furor ten years ago. I predicted failure. It failed.

    So how can AI improve medical diagnosis? Do we need better AI? There is a solution. Variations of it have been proven at scale in other industries.  

    The core of the issue is this: Watson is fed all the knowledge doctors are given in the form of written language. Watson works with the language. Note that for doctors, textbook training is a small part of the overall education – the bulk of it is clinical! You see, hear, touch and get responses. You get lots of real-life cases and handle them from start to finish. You can only learn so much from descriptions of injuries; there is nothing like seeing them and listening to the person who has them.

    “All” that AI does is regurgitate a small part of what doctors learn during their 11-year journey to doctor-hood. Little bits of what’s in AI may fill holes or correct tiny parts of a doctor’s knowledge, but that’s a small gain for a huge disruption and time-sink. No wonder it gets rejected.

    The path to success

    The key to success, as I described here, is to start from scratch and copy relevant successes from other domains.

    One major multi-domain success pattern of automation and optimization is to use the superior technology to replace the people doing the work manually. Not augment/help/advise; replace. Of course there are true experts guiding the new technology and extending/correcting it as needed.

    The pattern showed itself early in the Jacquard loom. In oil refinery optimization, teams of skilled engineers were replaced by optimization software that got better results in the 1960’s. Similar changes were made in retail inventory management and replacement part stocking. Mortgage and other personal loan decisions used to be made by skilled bank managers and are now entirely electronic. Everyone involved in the pre-automation versions of those efforts (and many others) believed that the personal element was crucial and impossible to replace. It undoubtedly was crucial when people were doing the job; but the effectiveness of the automation more than made up for whatever the “personal element” was adding.

    A case I saw over ten years ago was a major computer and software technology support operation serving major vendors such as Dell. The people answering the phone were taught how to use the computer system, but nothing about the systems they were supporting; the training was less than a week. The operator mostly needed to learn how to take his cues and direction from the software. The user had no way of knowing whether what he was being asked to do or say had been done by many people for years, or was a new instruction just for this unusual situation.

    This approach enabled every customer service person to be consistent, completely up-to-date, and even personalized based on information known or gathered about the person needing help. You avoid the painful process of building customer service training materials, training the trainers and getting everyone into classes, only in the end to have inconsistent, incomplete and out-of-date execution of your intentions. Now, of course, the operators could be replaced by a computer voice like Alexa or Siri for self-service.

    This case translates directly to automated medical diagnosis: instead of delivering up-to-date knowledge to a human expert, the computer system is the ever-improving expert, ultimately monitored by a small number of human experts. This post describes in more detail the issue of the computer interface. The knowledge in the computer system is complete, up-to-date and personalized, and covers what both generalists and specialists know.

    Current medical practice already has support staff performing things like taking measurements, drawing blood, etc. Visual inspection of the body can be done by camera and analyzed by computer better than humans can do it. Similarly, medical images (CAT scans, MRI, etc.) can already be read by software more accurately than by humans, but the medical establishment refuses to adopt it.

    The people who are being replaced by automation never welcome being replaced. The greater their training, expertise and status, the more they resist. This is a huge issue. Most automation efforts to date have downplayed those issues, saying that technology will “help” doctors. No it won’t. If it’s done right, it will put most of them out of work, the same way cars and trucks put horses out of work.

    What is the nature of the technology that does this? The core of the technology is an extended version of what in Machine Learning is called "random forest." This is effectively a collection of decision trees just like you see in many medical journal papers. The trees need to be extended to incorporate more details about the patient and their medical history than is normally done in medical papers, and also more alternatives with probabilities, costs, risks and benefits.
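
    As a hedged illustration of that core idea (a sketch under assumed data, not a description of any existing product), here is what a random-forest diagnostic step looks like in Python with scikit-learn: structured patient features go in, and the forest returns probabilities over candidate diagnoses. The features, labels and numbers are purely hypothetical.

```python
# Minimal random-forest diagnosis sketch; all data here is made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical structured features: age, systolic BP, temperature (C), white-cell count
X = np.array([
    [25, 118, 36.8,  6.1],
    [67, 152, 38.9, 14.2],
    [41, 130, 37.1,  7.0],
    [73, 145, 39.4, 16.8],
    [35, 122, 38.2, 11.5],
    [58, 138, 36.9,  6.8],
])
# Hypothetical diagnoses (labels) for those rows
y = np.array(["healthy", "infection", "healthy", "infection", "infection", "healthy"])

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

new_patient = np.array([[62, 148, 39.1, 15.0]])
probs = dict(zip(model.classes_, model.predict_proba(new_patient)[0]))
print(probs)  # probabilities over diagnoses, not a verdict

# The extensions described above (richer patient history in the features,
# costs/risks/benefits attached to the alternatives, retraining from
# closed-loop outcome feedback) would sit on top of this basic structure.
```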

    This infrastructure would be ideally suited to accommodating deeply personalized diagnosis, taking into account the individual's DNA, specific responses and other things. This article describes the approach as a path to dramatic improvement, an alternative to the expensive and impersonal RCT trials that are today's gold standard.

    What I've described could be implemented today. No massive computing centers, no LLMs or anything particularly new. Some work would have to be done to add ongoing monitoring of results to provide the basis for modifying and extending the ML models with real-world feedback.

    The massive risk of going to automation of diagnosis

    The issue that concerns me a great deal is the long-standing, ongoing corruption of medical standards and knowledge by powerful interest groups. Once the practice standards are fully computerized, they can be changed in an instant — or data-driven change could be blocked. With the continuous closed-loop feedback of patient health that is essential to a computerized system of this kind, the system results will clearly show the expensive, harmful practices that are part of today’s standards for what they are, and should be dropped. Will the powers that be permit this, given the great lengths they have gone to make destructive policies into standard practice? Hundreds of billions of dollars are at stake. No one involved in the massive, ongoing fraud is going to shrug their shoulders and say, "you're right. Sorry."

    A system of this kind enables champion/challenger trials to be run at a scale never before attempted at minimal cost. If this were permitted and acted on, we would be able to bid farewell to the vast majority of “wellness” and preventive medicine. This by itself would be a huge contribution to improved health and cost reduction. The powers in charge fought like crazy to maintain the covid "vaccine" fraud, whose value was a small fraction of what's at stake here. I suspect they'd go nuclear over this.
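
    To make the champion/challenger idea concrete, here is a rough sketch (hypothetical pathways, simulated outcomes and a plain two-proportion z-test, not a clinical protocol): patients are randomly routed to the current standard pathway or the alternative, outcomes flow back through the same closed loop, and the comparison itself is tiny once that data exists.

```python
# Hypothetical champion/challenger comparison on logged outcomes.
# "champion" = current standard pathway, "challenger" = proposed alternative.
import random
from statistics import NormalDist

random.seed(0)
outcomes = {"champion": [], "challenger": []}

for patient in range(10_000):
    arm = "champion" if random.random() < 0.5 else "challenger"   # random routing
    # Assumed adverse-event rates, purely for the simulation:
    true_rate = 0.08 if arm == "champion" else 0.05
    outcomes[arm].append(random.random() < true_rate)

p1, n1 = sum(outcomes["champion"]) / len(outcomes["champion"]), len(outcomes["champion"])
p2, n2 = sum(outcomes["challenger"]) / len(outcomes["challenger"]), len(outcomes["challenger"])

# One-sided two-proportion z-test: does the challenger have fewer adverse events?
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
z = (p1 - p2) / se
p_value = 1 - NormalDist().cdf(z)

print(f"champion {p1:.3f} vs challenger {p2:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```

    The statistics are a century old; the new part is that a computerized, closed-loop system produces the outcome data continuously and at full scale, instead of only inside occasional, expensive trials.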

    With human doctors at the forefront of healthcare, a significant subset of them fail to follow standard authorized practice. Some of this is just making mistakes. But an important part of it is that the doctors who deviate from standard practice know that standard practice is wrong! They know that cholesterol numbers shouldn't be lowered, blood pressure medications should be prescribed rarely instead of regularly, and that much of the rest of preventative medicine is not just a waste of money, it is positively destructive. Some of these doctors speak out and get others to see that they're right. A movement to bring about change starts up. Positive change can result, if only by patients getting educated and refusing treatments that hurt them.

    Conclusion

    I have addressed this issue recently from a different angle. The key is to realize that we don’t have to invent from scratch the way to automate what doctors do – in fact, doing so is a proven path to failure, as we know from IBM’s Watson. We need to examine carefully the pattern of how automation has taken place in other domains and apply those patterns, along with proven-elsewhere techniques, to find the sure road to success in doctor automation. But only after we find a way to avoid a totalitarian dictatorship imposing corrupted medicine on everyone, with dissenters silenced and punished. In my opinion, the key to achieving this is making medical data and studies open source, following the successful pattern in software. Today's standards for vaccines, for example, are the opposite — drug makers can't be sued for harm, and their data (such as it is) is kept secret by law.

  • Medical Services Before, During and After an Event

     One good way to get an understanding of the vast array of medical services and treatments is to organize them into a spectrum based on some parameter. I earlier looked at medical innovations based on how “fancy” they are, from AI to cleanliness.

    https://blackliszt.com/2016/09/healthcare-innovation-from-washing-hands-to-ai.html

    Another useful spectrum is the time relationship between a medical event that requires attention and the treatment of that event. At one end of the spectrum is treatment to prevent the medical event from taking place. The middle is treatment at nearly the same time as the event. The other end is an event in the past that now needs treatment.

    We can understand a great deal about modern changes in healthcare practice and cost by looking at treatments in terms of this spectrum.

    First, let’s understand the spectrum a bit more.

    Treatment before the event

    This is normally called preventative medicine. It is the subject of wellness visits to the doctor, and advice given by experts to people about how they should eat and otherwise behave. Medications and even surgery can be involved. If the treatment works, the event doesn’t happen.

    Treatment to detect the event early

    This is often called disease screening, a procedure to detect the early presence of a disease that the human doesn’t notice, but could grow into something serious. It could be performed personally or by medical services. The idea is that a problem caught earlier can be treated with greater success than one caught later.

    Treatment around the time of the event

    This is classic “I know something’s wrong, I need medical help.” It ranges from a bad fever to being hurt in an accident. You don’t make an appointment, you go or get taken to an Emergency Room or Urgent Care.

    Treatment long after the event

    These are treatments for conditions that are ignored or develop slowly, conditions that might involve some pain or inconvenience. As the condition worsens, medical advice and treatment may be called for. Examples are joint replacements (knee, hip).

    Evolution of the spectrum

    We all recognize the medical spectrum today. However, it wasn’t always this way. Pre-event treatments were rare prior to the modern world. Most of those that existed had to do with communal and personal hygiene.

    Prevent the Problem and detect it early

    Discovery and implementation of disease prevention methods started in the 1700’s. It’s estimated that between the years 1500 and 1800 scurvy killed about two million sailors. Only in 1747 did a Scottish naval surgeon, James Lind, run the trial that proved a cure. A century later, Joseph Lister, working in Scotland, demonstrated the benefits of antiseptic surgery.

    We’ve come a long way since the acceptance of the long-resisted ways to prevent scurvy and minimize infection. Now we are told to visit the doctor for regular “wellness” visits, during which we may be prescribed medicine to prevent heart trouble (blood pressure and cholesterol pills) and encouraged to get procedures that will detect things like breast cancer and colon cancer early so they can be treated more effectively.

    Sounds like a good idea, right? Most folks you talk with agree. There’s just one little problem: the wellness visits are basically sales sessions during which the doctor tries to convince you with tests, numbers and serious words to spend more healthcare dollars taking pills and injections and undergoing procedures that … don’t make you healthier and don’t extend your life.

    Here are details about wellness visits, heart health visits, and health insurance company promotions. The links contain details about how and why both prevention and early detection do more harm than good, and in particular don’t lead to longer, healthier lives. Of course, there may be exceptions. But they must be proven, unlike most current practices.

    A major barrier to improvement is the wall of secrecy around medical events and tests. There is a proven path to fix this problem in the software world: open source. It will lead to a positive revolution in the medical world as it has in the software world. Here is further detail.

    The rest of the spectrum

    Emergency care has advanced a fair amount in recent decades, but the changes have been evolutionary and gradual. For patients treated by a system of which they have long been a member, having on-line access to the patient’s medical records has certainly been a plus.

    Post-event treatments have grown a fair amount in recent decades. Organ transplants can now be achieved with unprecedentedly positive outcomes. Joint and limb replacements have, in some cases, become routine, which is a big improvement in quality of life for those involved.

    Conclusion

    The spectrum of the relationship between the time of an event and the response to that event is important for understanding medical care, both quality and cost. While there have been substantial advances in the later parts of the spectrum, it’s clear that the early part of the spectrum has grown by far the most dramatically, mostly from the 1950’s on. It’s also sadly clear that the early part of the spectrum’s contributions to health and longevity are largely negative. If most early-spectrum tests and treatments were dropped (subject to open-source evaluation and testing), costs would be reduced and health improved.

  • How to Reduce Medical Costs and Improve Health

    The US has the highest per-capita cost of medical services of any nation. At the same time, average life expectancy has been going down. What can be done to reduce costs while at the same time improving our health?

    Change won’t be easy. There is an interlocking monopoly of providers, health institutions, insurers, pharma and regulating government agencies; all of them want to increase their power and business. The medical-industrial complex is a growth engine. Worse, most people accept the medical doctrines (marketed as science) that underpin this power. While much can be done to improve costs and outcomes without challenging medical doctrine, the best results can only be achieved by taking a fresh look at medical givens.

    When you want to make big changes to something as large, powerful and complex as this, incremental change is nearly impossible. It’s like the world of business in general: innovations are primarily made by small groups of motivated people who give birth to the new approach. This pattern has lasted for many decades. It's even tougher here because of the interlocking monopolies.

    At a high level, the overall ways to change are these.

    Zero-based re-engineering

    This is a fancy way of saying, imagine you were starting from scratch: no assumptions, no nothing — what would you do? Even in a supposedly fast-moving field like software, there is at least a 10X difference between doing things the “standard” way that professional managers require and the optimal way. The gains in medicine and health could be at least as large.

    https://blackliszt.com/2015/10/secrets-of-software-super-developers.html

    Here's a specific example of zero-based re-engineering for 10X gains.

    https://blackliszt.com/2019/10/software-professionals-would-rather-be-fashionable-than-achieve-10x-productivity-gains.html

    Automation

    Automation is the proven path to quality improvement and cost reduction. The principles have been established for centuries. Automation is why food production occupied about 90% of the population at the time of the founding of the US, and less than 1% today. A key part of automation is that work organization, job requirements and skills are re-thought from scratch. Otherwise, all you're doing is "paving the cow paths."

    https://blackliszt.com/2020/01/the-fundamentals-of-computer-automation.html

    Automation of clinical decision-making

    This is an essential aspect of automation that is strongly resisted. It has been a central part of general automation ever since Jacquard looms were driven by punched cards over two centuries ago. It directly addresses the always-incomplete knowledge in doctors’ heads and enables vast expansion of personalization and accuracy.

    https://blackliszt.com/2025/01/ai-can-automate-what-doctors-do.html

    https://blackliszt.com/2016/12/what-can-cats-teach-us-about-healthcare.html

    https://blackliszt.com/2020/01/luddites.html

    In-old-vation

    The medical world claims to support “innovation.” This is propaganda. The fact is that the vast majority of effective innovation has been proven in other domains for a long time, and “just” needs to be adapted. I tell the story of how an innovation that transformed the management of oil refineries in the 1960’s began to transform the management of medical operating rooms and infusion centers more than fifty years later.

    https://blackliszt.com/2019/08/the-slow-spread-of-linear-programming-illustrates-how-in-old-vation-in-software-evolution-works.html

    Data-driven evaluation of treatments

    The medical system brags about “evidence-based medicine,” randomized controlled trials and the rest. But the fact is that the vast majority of spending and treatments are not backed by solid data, and huge parts of standard practice are proven ineffective and/or harmful by data. Anything that is done should be tracked by a continuous-feedback, closed-loop system that includes patient input.

    https://blackliszt.com/medical-science-research/

    Regulation

    While often well-intentioned, government regulation drives up costs and stifles innovation without improving quality. The FDA has a staff of lawyers and bureaucrats writing mountains of regulations that spell out in voluminous detail how software should be written, for example. This stifles innovation and drives up costs dramatically for medical devices.

    https://blackliszt.com/2016/12/regulations-that-enable-innovation.html

    https://blackliszt.com/2020/05/heres-how-the-fda-can-reduce-medical-device-costs-while-improving-healthcare.html

    https://blackliszt.com/2023/01/how-to-reduce-the-cost-of-medical-imaging-and-pacs.html

    A similar concept applies to drug development.

    https://blackliszt.com/2017/01/using-software-methods-to-speed-drug-discovery.html

    Education and credentialing

    Practice licenses and requirements sound reasonable, but in practice they are a way to enrich the relevant organizations and restrict supply while keeping prices high. Example: why should a person who performs a specific kind of surgery require an MD and 5 or more years of further training? Why isn’t performing a breast lumpectomy like being a phlebotomist – an important job that requires physical skills and must be done correctly, but only requires weeks of training and then apprenticeship? This is a huge subject.

    Optimize the system for maximizing patient health

    You'd think this would be obvious, but it's not. Today, the system is optimized for the growth and profitability of the participants in the medical-industrial complex, in spite of many individual providers sincerely trying to optimize patient health. For example, the essential data in the EMR is owned and controlled by the medical system. Yes, they sometimes provide interfaces for patient access, but it's clearly theirs.

    https://blackliszt.com/2016/01/i-want-an-emr-app.html

    Conclusion

    The medical system has achieved some amazing things — just take knee replacements, which are now routine but were unheard of by earlier generations. The trouble is that it has grown into a multi-faceted monopoly that grows endlessly while the patients take ever-more drugs and submit to ever-more procedures that have the net effect of making us pay more to live shorter, less healthy lives. While the defenses of this castle are tall and thick, the peasant/patients who serve the masters inside the castle walls have got to find a way to create a revolution, helped by the growing number of disaffected medical professionals. The nightmare anti-scientific response to covid has sparked doubts in the minds of many. When are we going to storm the Bastille?

    PS: Here is a complementary earlier approach to these ideas.

    https://blackliszt.com/2015/12/healthcare-problems-and-opportunities.html

  • How to Learn to be a Computer Programmer

    Lots of people will tell you how to become a computer programmer. The most widespread advice is to go to college and major in Computer Science. In the last few years, things called “coding academies” have emerged as quicker, more affordable paths to that goal. There is a faster, better way to acquire the skill of computer programming that is rarely discussed. While this post is about computer programming, the educational concepts discussed are broadly applicable.

    How Skills are Acquired

    Stepping back from programming, let’s think about how skills are acquired. Physical skills, like playing various sports, are a good place to start. How much classroom time is required? How about textbooks? The basic way you learn a sport or other physical skill is by watching someone do it. With any sport involving a ball, you pick one up and try to throw it. Then you catch it when it’s thrown to you. Someone may give advice, but basically you try a lot, gradually learn from your mistakes and get better. It’s important to note that you see the results of your effort: you see whether the ball went where you intended it to go, for example. This can continue for years.

    How about a more intellectual skill? We all learn how to talk and listen with understanding. The way we learn is similar to learning a sport – by watching, listening and then emulating. The world is incredibly complex and varied, with words associated with a huge number of things and actions. You start with a few words, and spend years adding many thousands more to them. Yes, parents give lots of feedback – not unlike the feedback you get when you see whether the ball you threw went where you wanted it to, whether the receiver caught it. Did you catch what I was trying to say there? Yup, like that.

    The language of software

    By contrast with any human language, a computer language is amazingly simple – partly because the “world” in which it “lives” is incredibly narrow and abstract. The nouns are all data of a couple of different types, basically numbers and letters. You give each place that can hold data a name. The verbs are a handful of simple actions that grab the data that’s in a named place on your “desk,” do something with it, and put the result back or into another named place.
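
    To make this concrete, here is what those nouns and verbs look like in Python, one dialect among many; the names and numbers are made up for the sketch:

        # Named places ("nouns") that hold data: numbers and letters.
        price = 20
        quantity = 3
        customer = "Acme Hardware"

        # A handful of simple verbs: grab what's in a named place,
        # do something with it, and put the result into another named place.
        total = price * quantity                      # grab two values, multiply, store the result
        message = customer + " owes $" + str(total)   # glue letters and numbers together
        print(message)                                # Acme Hardware owes $60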

    A great deal of the power of software comes from what’s called the subroutine library, which is basically an attentive gang of little robots waiting to do your bidding. If you want to write something on a screen, you “call” the relevant robot, give it your instructions, and send it off. There are robot librarians and file clerks that are excellent with card catalogs and endless file cabinets, fetching what you ask for and putting away what you’re done with. There are robot butlers at each “door” (keyboard, mouse, etc.) that politely inform you when something has arrived and give you the package. Another butler will also send data where you want it to go. While learning how to “talk” is central to programming, learning about the team of robots at your disposal and what each can do for you is also important.
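
    And here is what calling a few of the robots looks like, again in Python; the scratch file name is made up for the sketch:

        # Call the "write something on the screen" robot.
        print("Hello from the gang of robots")

        # Ask the robot file clerk to put something away, then fetch it back.
        # ("notes.txt" is just a scratch file created for this sketch.)
        with open("notes.txt", "w") as f:
            f.write("remember to practice")
        with open("notes.txt") as f:
            print(f.read())

        # The butler at the keyboard "door" waits for something to arrive
        # and hands over the package.
        answer = input("Type something and press Enter: ")
        print("The butler delivered:", answer)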

    Yes, there are lots of different software languages. But they’re really like local dialects. They all conform to the basic description I just gave. The gang of robots available in a given language typically varies depending on what the language is mostly used for. There are business-oriented languages that pay special attention to things like financial data and collections of records such as accounts, and others whose robots are really good at fancy math. But the way of thinking about them and writing programs is remarkably similar.

    This should make you wonder exactly why you need years of courses “taught” by fancy professors, when programming is like doing things in an amazingly simple world of data using a small set of instructions that are easy to visualize and understand. The answer is that you can do it on your own, given access to a computer, some widely available tools and a bunch of organized exercises that start super-simple and get gradually more elaborate. That’s how I learned!

    How I learned to program in ancient times

    I went to high school at the newly-opened Morris Knolls HS in Rockaway, NJ. It took students from a couple of towns and had courses that were explicitly college and non-college. There were vocational tracks such as HVAC, auto repair and secretarial. The towns weren’t poor but definitely weren’t elite, with few professionals.

    My introduction to programming and computers was a course offered in my junior year of high school, the academic year 1966-7. One of the high school teachers wanted to move from teaching to software and had no training or other way to get there. He somehow persuaded the administration to let him teach a course on programming. His break-through was arranging for the class to have computer time at a nearby company, Reaction Motors.

    The teacher found a FORTRAN textbook to serve as the basis of the class. While he was the “teacher,” it was clear that his teaching amounted to trying to be a couple chapters ahead of the class in the book. But it didn’t matter. The important thing was having a book that laid things out, along with exercises and answers. And above all, the Saturday time slot when we had access to the computer.

    Programming involved writing the program on paper, and then keying the program onto cards using one of the punch card machines that were in a room adjacent to the main machine room. We got to operate the room-sized computer, something which in the normal corporate environment was an absolute no-no, as I later learned. It was terrific fun.

    I got lucky the summer after my senior year, and got a job at a local company, EMSI, Esso Mathematics and Systems Inc. The company applied computers and math to a wide variety of issues running the oil company. I started by fixing bugs and making changes to their math optimization FORTRAN program that helped them run their oil refineries better. Did I “know” how to do this? In each case I dove in and figured it out.

    Then they needed some test programs written in a new language, PL/1. No one there knew the language, so I dove into the manual and wrote the programs. Then they needed some accounting programs to help run their facilities in Asia. They wanted them written in COBOL, which made sense for financial programs, so I learned the language as I went along. I quickly fell into the pattern of getting some data from my boss, writing code that did what he wanted with it, showing him the output, and cycling for more. Each cycle required less self-education in COBOL, so I got faster. I tell more of this story here.

    What I did wasn’t that unusual at the time. Most of the early programmers basically figured things out as they went along. They needed to get stuff done. They studied programs written by other people and learned how to write their own. Compared to learning to read and write German, FORTRAN was amazingly simple; in fact, it was a relief after the endless complications and words of any human language, not to mention idioms.

    How you can learn in modern times

    You won’t have to deal with punch cards and card punch machines. You won’t have to go to a place with a room-sized computer. You have the internet to give you access to all the information about any language you want, and tools that enable you to write code in that language, try to run it and see how it goes. To start, you can work through organized exercises with worked solutions you can study. You can learn and advance at your own pace.

    If you want, you can poke around for a local or remote job. It’s good to start with some narrow domain of problems and tools that you’re interested in and have gained skill in. The first job doesn't even have to be programming — it could be testing, for example. Just get in somewhere and start producing. Once you prove value, you’ll move up from there, because people who can produce results aren’t that easy to find, and a proven producer is a keeper. The important thing is to keep learning and move up the skills hierarchy.

    Finally, count your blessings that the ridiculous world of job certification hasn’t yet latched its evil claws into programming very much. Yes, there are ignorant HR people who insist applicants have a degree in Computer Science, but the best programmers with degrees are good in spite of their mis-education in college.

  • AI can automate what doctors do

    There has been a decades-long evolution towards creating an effective clinical diagnosis and treatment AI system, essentially automating the mental part of what doctors do. A solid basis for the content of the system has already been built in the form of medical text books, procedures, published databases, studies and clinical standards such as HEDIS.

    The major elements of a fully automated system have been built and put into use in a variety of medical settings. When a comprehensive system will be built and deployed is impossible to predict. No fundamentally new tech needs to be invented for this to be created; no “breakthroughs” in AI are required! It “just” needs to be applied.

    While having an AI-driven medical diagnosis and treatment system would be amazing, much more important than the AI aspect of it would be the fact that it would be data-driven instead of human-created-policy-driven. This means that the system would, over time, determine what actually works based on the data and results, rather than what human “experts” and self-interested institutions say works. In other words, it would support true evidence-based medicine, replacing the too-often corrupt practice of studies published in medical journals. This is a huge subject.

    What do doctors do?

    They start with the patient’s complaint, why they’re seeking help.

    They then get from the patient and/or medical records a time sequence of conditions (like a cough), tests, observations, events (like falling down), related personal things (age, heritage), and finally diagnoses, treatments and outcomes.

    Based on this, they make further observations, tests and measurements. The tests may involve other people and equipment, for example a CAT scan. Depending on the expense and trouble of the test and the chances it will affect the outcome, further tests may be performed.

    The result is that the doctor recommends and/or performs treatments that will resolve the issue. The treatments can include drugs and operations. The results of all of this are stored in the patient’s EMR, partly coded data and partly written clinical notes.

    In order to do the above, doctors receive a great deal of training, both general and clinical. While in practice, they are guided by their knowledge and experience, and also by clinical guidelines and protocols, which evolve over time.

    Doctors are limited by a couple of things. First, missing information: they may not have access to and probably don’t have time to read all the patient’s medical history. Second, missing knowledge: there is a huge and ever-growing body of medical knowledge and treatments. It’s amazing that doctors have as much of this in their heads as they do, and not surprising that they sometimes forget or haven’t had time to read and absorb information that is new to them.

    Is all the technology required really available?

    The pattern of an innovation being proven in one domain and then waiting, sometimes for decades, to be applied in another has been demonstrated many times. For example, an algorithm applied in production more than 50 years ago (!) for optimizing oil refinery operations has only recently been applied to optimizing some aspects of health care scheduling. Here’s a detailed example.

    No new math or fancy algorithms are needed. The fancy new AI LLM’s (large language models) that are getting attention these days don’t apply to this problem. The vast majority of the effort is in centralizing, codifying and standardizing data that is entered into medical EMR’s, which has already been done and is being refined. Even the tricky work of extracting value from doctor-written clinical notes is largely automated. Large databases of this kind are in use today by pharma companies to help them discover and refine targets for drugs.

    The path to automation

    The word “computer” was originally applied to people, mostly women, who spent hours and days bent over desks, often with calculators, computing the result of various mathematical formulas. For example:

    Barbara “Barby” Canright joined California’s Jet Propulsion Laboratory in 1939. As the first female “human computer,” her job was to calculate anything from how many rockets were needed to make a plane airborne to what kind of rocket propellants were needed to propel a spacecraft. These calculations were done by hand, with pencil and graph paper, often taking more than a week to complete and filling up six to eight notebooks with data and formulas.

    While the analogy isn’t precise, doctors are also human computers, in the sense that they confront a new case (problem), get inputs from the patient and the database of the patient’s history, make observations (like calling a data-gathering subroutine), and search their memory for a standard to see what to do next (if X and Y, then do a blood test to see if Z). Depending on the results of that test, there may be further branches (if-then-else) to see what other tests and procedures may be required. Finally, the doctor reaches a diagnosis and a treatment plan. The results of everything, including the diagnosis and plan, are recorded in the patient’s EMR to form the basis of future medical interactions.
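
    To show the shape of this, here is a toy sketch in Python; the rule and thresholds are invented purely to illustrate the branching, and are not taken from any clinical guideline:

        # Invented, illustrative rule showing the if-then-else shape of
        # clinical decision-making; not medical advice or a real protocol.
        def next_step(complaint, age, temperature_f):
            if complaint == "cough" and temperature_f >= 100.4:
                if age >= 65:
                    return "order chest X-ray and blood test"
                return "order blood test; recheck in 48 hours"
            return "observe; no test indicated"

        print(next_step("cough", 72, 101.2))   # order chest X-ray and blood test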

    All of these things are in medical textbooks, treatment protocols, checklists, medical databases and academic papers. They are all pounded into doctors’ heads by clinical training and apprenticeships. Doctors are expected to remember everything.

    The path to automation isn’t fancy. It basically amounts to getting a computer to do what a doctor does: interacting with the patient (taking input and providing information), organizing and enhancing the records about the patient, standardizing and digitizing all the existing protocols, and creating digital channels for orders for tests, procedures and drugs. Most of these are already features of EMR’s.

    Most of the elements of this automation are already in place! WebMD.com, for example, has a huge amount of information about symptoms, diseases and treatments online. It’s medically reviewed, and organized for access by patients. Major hospital systems have similar websites. The websites are just the visible part of the iceberg, with vast underpinnings.

    The most obvious missing element is the ability to request tests and procedures – for that you have to go to a human. But the ability to input requests for such things is already a feature of the EMR’s used by most doctors. Making the connection from the EMR to software instead of a human is a minor task.

    Automating doctor decision-making is the heart of the job. It’s essential that this be done using an editable, extensible decision tree. This can be enhanced with probabilities and ever-increasing amounts of personalization. This should not be created by training of any kind; it must be human-editable and fully transparent, so that you can always know exactly how and on what basis every decision was made.
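
    Here is a minimal sketch of what “editable and fully transparent” could look like: the tree is plain data that a clinician could read and change, and every decision carries the trail of rules that produced it. The rules and thresholds are invented for illustration, and a real system would use a proper rule evaluator rather than Python’s eval:

        # A decision tree kept as editable data (it could live in a reviewed,
        # version-controlled file), not baked into trained model weights.
        TREE = {
            "question": "temperature_f >= 100.4",
            "yes": {"question": "age >= 65",
                    "yes": {"action": "order chest X-ray"},
                    "no":  {"action": "order blood test"}},
            "no":  {"action": "observe"},
        }

        def decide(node, facts, path=()):
            # Walk the tree, recording each rule applied so the decision is auditable.
            if "action" in node:
                return node["action"], list(path)
            branch = "yes" if eval(node["question"], {}, facts) else "no"
            return decide(node[branch], facts, path + (f'{node["question"]} -> {branch}',))

        action, trace = decide(TREE, {"temperature_f": 101.2, "age": 72})
        print(action)   # order chest X-ray
        print(trace)    # the full, human-readable basis for the decision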

    Among the biggest missing elements are the things that doctors learn during their clinical training, along with personalization.

    Once all these elements are put together and working, you would enter a parallel production phase, in which the computer would get the same inputs a human doctor would and propose what to do next. This would be recorded and compared to what the human doctor did in classic champion/challenger fashion. The system wouldn’t have to be 100% complete to be put into live operation, so long as a good system for bailing out of the computer and shifting to a human doctor was in place. But since such a large number of patient visits are routine, the computer is likely to be able to handle a large fraction of cases from early on.
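
    Here is a sketch of the champion/challenger bookkeeping, assuming each case records both the computer’s proposal and what the human doctor actually did; the case records are invented:

        # Champion/challenger: the human doctor (champion) keeps deciding,
        # while the automated system (challenger) proposes in parallel and is scored.
        cases = [
            {"id": 1, "doctor": "order blood test",  "computer": "order blood test"},
            {"id": 2, "doctor": "order chest X-ray", "computer": "observe"},
        ]

        agreements = sum(1 for c in cases if c["doctor"] == c["computer"])
        disagreements = [c["id"] for c in cases if c["doctor"] != c["computer"]]

        print(f"agreement rate: {agreements / len(cases):.0%}")   # 50%
        print("cases needing human review:", disagreements)       # [2]

    The disagreement list is where the data-driven improvement described above would happen: each mismatch is either a gap in the decision tree to be fixed or evidence about which choice actually produced the better outcome.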

    There is a huge amount more detail in the building of such a system. However, surprisingly little needs to be “invented” to make it work, given that large elements are already built and in production in limited ways.

    Related posts

    Doctors too often get the wrong answer. This is the kind of thing that makes some people hope that automation could do a better job:

    https://blackliszt.com/2016/12/what-can-cats-teach-us-about-healthcare.html

    Massive spending has gone into "cognitive computing" and healthcare, with nothing to show for it.

    https://blackliszt.com/2015/07/cognitive-computing-and-healthcare.html

    You don’t need AI or cognitive computing to discover or promulgate the new discoveries that humans make.

    https://blackliszt.com/2015/08/human-implemented-cognitive-computing-healthcare.html

    Health systems have trouble just making computers work. When they try to do something "fancy," the results are usually poor. But there are promising exceptions.

    https://blackliszt.com/2016/05/healthcare-innovation-can-big-data-and-cognitive-computing-deliver-it.html

    Healthcare systems spend huge amounts of money on things related to AI, but they don't know what they're doing and neglect to spend on simple things that could make an immediate difference.

    https://blackliszt.com/2016/09/healthcare-innovation-from-washing-hands-to-ai.html

    Avoiding error is hugely important.

    https://blackliszt.com/2017/06/how-to-avoid-cutting-off-breasts-by-mistake.html

    A major lesson from the above posts is this: while AI can certainly automate what doctors do, having the usual major corporations and medical systems be in charge of the effort guarantees failure — as the billions in wasted spending to date demonstrate.

    The benefits of medical automation

    The potential benefits of automation are huge.

    Cost of medical care: As medical workers are replaced by software, costs will go down. Not just salaries, but also office space, etc.

    Medical care waiting times: The software doctor is available 24 by 7, no scheduling required.

    Accuracy of care: Medical people can’t be as consistent or up to date as data-driven software. Elaborate measures such as HEDIS for judging medical care after the fact will be applied as the care is delivered, assuring its accuracy.

    Transformation of care: Dramatically better health and lower costs will result once the system is in place and real-world evidence from it supplements, personalizes and replaces existing care practices.

    Automation of medical care isn’t without problems. The institutional obstacles are huge. Mountains of regulations and standard practices would have to be changed, with entrenched forces fighting every step of the way. The people whose jobs are threatened will resist. A large number of patients value interacting with a human doctor. Corporate forces will fight to have their interests supported in the rules and data of the automation. There will have to be a way to provide alternatives and avoid centralized government control, which will be a major struggle, and a danger I fear.

    Conclusion

    Automation of medical care has been underway for decades. All the technical elements to enable it are available. The benefits of automation are large, but so are the obstacles to implementation. Centralized control of medical diagnosis and practice is already strong, and automation would make it stronger and less visible. The path forward is likely to remain slow. While there are substantial potential benefits in cost, speed and accuracy, the largest potential benefits, huge cost reduction and improved patient health, are threatened by the prospect of the centralized control embedded in the current partly-mechanized system simply being carried over into the automated one.

  • MFA won’t fix Computer Security Problems

    About a third of US citizens had their private data and medical records stolen in early 2024 in a ransomware attack on UnitedHealth Group’s Change Healthcare unit — yes, the same parent company whose insurance-arm CEO was recently murdered. Big important people are outraged.

    How could this happen? The headline of the WSJ article is

    Shun This Basic Cybersecurity Tactic and Become a Target for Hackers

    A lack of multifactor authentication opened the door to cyberattacks at UnitedHealth Group and others

    The Chair of the Senate Finance Committee, Ron Wyden, is demanding that various agency bureaucrats impose more security regulations on healthcare companies, particularly MFA. “In 2022, health care organizations reported over 600 breaches affecting nearly 42 million Americans.” Apparently, the pattern of well over a breach a day at healthcare organizations wasn’t enough to get Sen. Wyden to act.

    None of these esteemed people seem to know that organizations that “use multi-factor authentication (MFA) and other cybersecurity best practices,” as Senator Wyden demands, don’t enjoy better cybersecurity. Even organizations like the NSA that set security standards can’t keep their own data secure! History clearly demonstrates that imposing yet more regulations and overhead on organizations will NOT make them more secure.

    So what can be done? It’s simple: understand the real problem and solve it!

    Cybersecurity

    Cybersecurity is war. It’s like there are good guys in a castle who want to keep everything in the castle safe. There are bad guys wandering around outside the castle who want to get inside, figure out what’s valuable, and escape the castle with as much stolen loot as possible. Naturally, the good guys want to build the walls high and thick, with carefully guarded doors. The guards at the doors examine the documents of each person who wants to enter and only let good guys in. Anyone without proper documentation is turned away. That’s what cybersecurity is, except that the castle is a bunch of computers.

    The people who think this way seem not to be aware that the bad guys figured out how to crack the castle’s defenses literally thousands of years ago. Ever hear of the Trojan Horse? You may want to look up an old Greek named Homer who wrote some good stuff about it.

    The fact is, bad guys will find a way to get into the castle. They’re really good at tricking the castle’s employees. Maybe they’ll pay off one of them who needs some money. You can be sure that the highly motivated bad guys wandering around in the woods are thinking new thoughts about how to get in while the lawyers in charge of the castle security regulations are arguing about the right way to write about the new defensive technique the national bureau of castle bureaucrats voted in favor of at their last annual convention.

    Let’s start with an assumption that none of the high-and-mighty security bureaucrats is willing to consider: bad guys will get into the castle; what can we do to stop them from doing bad things?

    This thought, just by itself, is revolutionary to cybersecurity bureaucrats – and Senators, CEO’s, business school professors and the rest.

    However novel the thought may be to all these grand people … it’s business as usual to exotic people like the ones who run high-end retail stores. And libraries! Even better, those places experience far fewer losses than the grand corporations and government organizations. The Tiffany store on New York’s Fifth Avenue is wonderful. It is chock-full of beautiful, expensive jewelry. Anyone can walk in and admire it; no MFA required! Go in there sometime and pay attention to the observant security guards and the ubiquitous cameras. While you are admiring the beautiful gemstones, the guards and cameras are admiring (in their own way) you. No one has ever gotten away with a grab-and-go attack; it may never have been tried. Why bother, when you know you’re going to get caught?

    Can this be done in computer systems? Yes. It’s not only possible, it’s even been done. I personally know of companies that implement security this way, and products have even been developed to help. For such a thing to be widely used, thousands of august cybersecurity experts would have to drastically change course, while mountains of useless regulations would need to be discarded. It’s not that hard! All that would have been needed with Daniel Ellsberg would have been for the guard at the door to check his briefcase before he walked out. For Edward Snowden or Chelsea Manning, it would have been easier: have systems monitoring the data access of employees and contractors, and flag when anything out of the ordinary happens, particularly when more data than usual is accessed, or data of a different kind. The flag would have been raised very early in their misadventures, and the theft would have been prevented.
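
    Here is a minimal sketch of that kind of monitoring, assuming nothing more than a log of how many records each person accessed per day; the “three times the recent average” trigger is an invented placeholder, not a standard:

        from collections import defaultdict

        # Hypothetical daily access log: (user, records_accessed).
        log = [("alice", 40), ("alice", 55), ("alice", 38), ("alice", 900),
               ("bob", 10), ("bob", 12), ("bob", 11)]

        history = defaultdict(list)
        for user, count in log:
            past = history[user]
            if past:
                baseline = sum(past) / len(past)
                if count > 3 * baseline:   # crude "out of the ordinary" rule
                    print(f"FLAG: {user} accessed {count} records "
                          f"(recent average {baseline:.0f})")
            past.append(count)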

    With a ransomware attack of the kind that hit UnitedHealth, the principle is the same, except that it’s a computer program doing the accessing. The checking system shouldn’t care – by the time an access is requested, it’s electronic no matter whether it was initiated by a human or a program. Similarly with the ransomware itself – ignoring the ridiculous experts, there are simple methods for recovering quickly from the lock-down implemented by the software.

    Conclusion

    MFA is a fashion-driven, ineffective response to data thefts.

    No more evidence than the mania for MFA is needed to conclusively demonstrate that Computer Science is not a science. It’s roughly the same kind of thing as blood-letting was in medicine: always hurt, never cured and sometimes killed the patient.

    The near-universal response to the largest theft of data in history by authorities and experts is to write more regulations and add yet more overhead to the ordinary use of computers … which, as history has shown, won’t make a dent in the problem, much less solve it.

  • What is your Software Project Manager’s Batting Average?

    How often do software projects fail? While it’s never talked about and rarely makes the news, the fact is that software projects fail with alarming frequency. Exactly how often do they fail? Despite august professors of Computer Science and Computer Engineering falling out of the woodwork, doing research, pronouncing their deep knowledge in papers and teaching the next generation of students, the truly shocking answer is that no one knows! Or cares! Oh, they say they care about teaching people how to create great software; all you have to do is learn the intricacies of Object-Oriented Programming (or whatever). But none of them studies the results. There is no science in what is called “Computer Science.” They all declare they know how to create good software – so long as no one takes the trouble to measure and count, judge and evaluate, to see how often it happens.

    In the past, I’ve compared bridges falling with software failures. We can understand this from another angle by looking at … baseball.

    Baseball Science

    Aaron Judge is an amazing professional baseball player, part of the New York Yankees. He’s an excellent fielder, but he’s best known as a hitter. In his nine seasons so far, he’s already hit over 300 home runs, with 50 so far as of this writing. His batting average this season is an awesome .331 so far, with a .288 average for his career.


    Let’s think about that enviable .331 batting average he’s got so far this year. Any player would love to have such an excellent average. But that’s in the context of professional baseball. Simple arithmetic tells you that this “excellent” average means that he only gets a hit in about one third of his at-bats! Roughly two thirds of the time he strikes out or otherwise fails to get a hit! What would you think of someone who managed to drive safely to work just a third of the time, getting into accidents or otherwise screwing up two thirds of the time? What would you think of a car that only worked a third of the time you tried to drive it? And so on … you get the idea.

    Why is this? Are all these highly paid pro ball players really losers? Of course not. You can see what’s going on if you watch the home run derby, when top hitters go to the plate and get soft, easy pitches from a pitcher standing behind a screen so they don’t get hit. Those guys nearly always hit the ball, and lots of them are home runs! But we all know that’s a once-a-year event.


    Most of the time, there’s a highly skilled pitcher on the mound whose goal in life is to strike out the batter or sucker him into hitting an out. Pitchers like Gerrit Cole, who is so good that he’s being paid $36 million this year.


    When you’re a batter and walk up to the plate facing Gerrit Cole, you know you’ve got someone who’s in an excellent position to lower your batting average.

    So is creating software like trying to get a hit with a fearsome skilled pitcher on the mound? Or is it more like trying to drive your car to work or designing and building a reliable car that just works, with remarkably few exceptions?

    The sad fact is that building software resembles the home run derby, except that instead of trying to hit the ball out of the park, all the batter has to do is … not miss. With a rule like this, you’d expect something close to a 1.000 batting average. They try to make it even easier in the software world by messing with requirements, tripling estimates and doing everything they can to make the project a “success.”

    Is software really that bad? Yup. Just for fun, I’m going to share a couple of the rarely publicized stories of software failures I made note of a dozen years ago. With things like ransomware exposing the ugly underside of most software operations — not to mention the awfulness of computer security in general — you can be sure things haven’t gotten better.

    Sample Failures

    In 2014, the VA admitted that it had over 57,000 patients waiting for their first visit. What was its excuse?

    "The official also said the VA needs to update its scheduling software package, which the department has been using since 1985. “It predates the internet and Blockbuster’s rise and fall,” he said."

    Does that count as a software failure? It's more like the software department died years ago and no one noticed.

    Here's a good one from 2012:

    The U.S. Air Force has decided to scrap a major ERP (enterprise resource planning) software project after spending US$1 billion, concluding that finishing it would cost far too much more money for too little gain.

    Dubbed the Expeditionary Combat Support System (ECSS), the project has racked up $1.03 billion in costs since 2005, “and has not yielded any significant military capability,” an Air Force spokesman said in an emailed statement Wednesday. “We estimate it would require an additional $1.1B for about a quarter of the original scope to continue and fielding would not be until 2020. The Air Force has concluded the ECSS program is no longer a viable option for meeting the FY17 Financial Improvement and Audit Readiness (FIAR) statutory requirement. Therefore, we are cancelling the program and moving forward with other options in order to meet both requirements.”

    The Air Force will instead need to use its “existing and modified logistics systems for 2017 audit compliance,” the statement adds.

    They started spending money in 2005, spent over a billion dollars by 2012, got nothing of value, estimated they'd need to spend the same again for eight more years to get about a quarter of the original plan done. If there were anyone paying attention to batting averages in software, that would likely be a winner.

    Here is some hard-to-find information on the frequency of failures, taken from a book.

    The odds of a large project finishing on time are close to zero. The odds of a large project being canceled are an even-money bet (Jones 1991).

    In 1988, Peat Marwick found that about 35 percent of 600 firms surveyed had at least one runaway software project (Rothfeder 1988). The damage done by runaway software projects makes the Las Vegas prize fights look as tame as having high tea with the queen. Allstate set out in 1982 to automate all of its office operations. They set a 5-year timetable and an $8 million budget. Six years and $15 million later, Allstate set a new deadline and readjusted its sights on a new budget of $100 million. In 1988, Westpac Banking Corporation decided to redefine its information systems. It set out on a 5-year, $85 million project. Three years later, after spending $150 million with little to show for it, Westpac cut its losses, canceled the project, and eliminated 500 development jobs (Glass 1992). Even Vegas prize fights don't get this bloody.

    If you care to look, you will find loads more examples of failures that the group has been unable to keep secret. The failures keep rolling in, in spite of the huge efforts to reduce requirements, inflate estimates, extend timelines, increase staff and everything else. You have to ask the question: how many software successes are really failures in disguise? If anyone were serious about calculating software batting averages, this would be a key factor.

    This pattern has resulted in some fairly widespread humor that you can be sure isn't mentioned in project management meetings. For example, here are the stages of a software development project:

    1. Enthusiasm
    2. Disillusionment
    3. Panic and Hysteria
    4. Search for the Guilty
    5. Punishment of the Innocent
    6. Praise and Honor for the nonparticipants

    Why do Software projects fail and how do you win?

    When lots of human beings work at something for a long time, they tend to figure out how to do it. Building software appears to be a huge exception to that rule. With decades of experience under our belt, why is software the exception?

    This is a long subject. I have gone into great detail spelling out the causes … and the cures!

    Start with history and evolution:

    https://blackliszt.com/2023/08/summary-computer-software-history-and-evolution.html

    Everyone knows that software project management is essential to producing software that works, on time and on budget. In spite of decades of "innovation," it doesn't get better. The winners follow a different set of rules.

    https://blackliszt.com/2023/04/summary-software-project-management.html

    Software quality assurance is an important specialty within the non-science of computing, but in spite of all the time and money spent, quality continues to be a major issue. There are solutions that have been proven in practice that are ignored by the experts and authorities.

    https://blackliszt.com/2023/04/summary-software-quality-assurance.html

    How do you win with software? Nearly everyone starts with requirements, makes estimates and is judged whether they deliver on time and on budget. This optimizes for expectations and is a proven path to failure. The winning path optimizes for speed, customer satisfaction and continuous quality. It's what the people who need to win do.

    https://blackliszt.com/2023/07/summary-wartime-software-to-win-the-war.html

     

  • Long-Standing Software Techniques Can Help Election Integrity

    There is a massive secret process in election vote counting. It’s invisible, so observers won’t help. It’s subject to error and fraud. Existing standards, even Virginia Governor Youngkin’s Executive Order 35, ignore it. It’s not a hard problem to solve: standard practices in database and data warehousing software that have been proven and refined over decades of use can be applied. The fragmented collection of semi-custom voting equipment, largely driven by software-ignorant bureaucrats and regulators, ignores this technology, if the vendors are even aware of it – no one is asking for it, so why bother?

    The process is simple in principle; it’s just arithmetic. It’s the process of registering each vote as it’s made, accumulating the votes with strict transaction logging, gathering the votes recorded by the multiple machines in a location, summing and displaying them in real time, and then gathering the votes from multiple locations and doing the same – by county and then by state.

    A version of this process takes place today. But no one will tell you exactly how it’s done. What software exactly is used? How do the votes get from a machine to totals for a location? How is the flow done up the line to larger groups: towns, counties and states? Does someone read a display on one machine and type it into another? What’s the machine that does the summing, using what software? What are the formats used for transmission? No one wants to spell it out, keeping the whole thing invisible.

    You think this is trivial? Think about the fact that ballots are different in each town to handle local elections and issues. The ballots are designed and entered into the various machines, from different makers, that are used. When the votes are reported, exactly how is each vote identified – by the name of the person being voted for? Is the name exactly what appears on the ballot, which is always entered locally? When voting for president, is the candidate’s full first name used? Middle name or initial? Is the vice president named, and if so exactly how? If there is so much as a single-character mismatch in the naming, the totals may not be correct, depending on how they’re done, by whom and/or by what software. The problem gets worse when the votes (and names) are passed up the line to a county, which could have scores of separately programmed machines. Apart from naming issues, the right amounts have to be added to the right places – how exactly is this arranged? Not only is this an opportunity for error, it’s a massive opportunity for corruption and fraud.

    The real solution to this problem is to eliminate local machines and controls altogether. This may sound impossible, but it’s not. See this for a description of the approach, with links for more detail. But so long as paper ballot counting machines are used, this problem must be solved to assure election integrity. Fortunately, the problem is one that has been addressed and solved in the commercial computer industry. It’s been in use for decades in various forms. The fundamental technology is database (DBMS) technology, of the kind that all the major vendors provide and that is also available as open source. It is available in every major cloud platform. A form of DBMS technology called data warehousing has long been used for accumulating the results of transactions in databases for reporting, display and analysis. Moreover, there are long-established technologies for performing the essential operation of ETL (Extract, Transform and Load). ETL tooling enables a programmer to view the schemas (data definitions) of the source and target DBMS and easily define how the data from the source is transferred, and transformed as needed, to the destination.
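
    As a sketch of the ETL idea, assume a source table of per-machine vote records and a target warehouse of running totals; the layout and the canonical-name mapping are invented for the example, and a real system would use the DBMS vendors’ ETL tools rather than hand-written code:

        # Extract-Transform-Load sketch: per-machine vote records flow into
        # a warehouse of running totals. All names and numbers are invented.
        source_rows = [
            {"machine": "VA-Fairfax-031-M2", "contest": "US President",
             "candidate": "Candidate A", "votes": 412},
            {"machine": "VA-Fairfax-031-M3", "contest": "US President",
             "candidate": "Candidate A", "votes": 377},
        ]

        def transform(row):
            # Map locally entered names onto canonical, state-defined identifiers,
            # so variant spellings can't end up as separate totals.
            canonical = {"Candidate A": "STATE_CANDIDATE_ID_0001"}
            return {"contest": row["contest"],
                    "candidate_id": canonical[row["candidate"]],
                    "votes": row["votes"]}

        warehouse = {}   # stands in for the data warehouse "load" step
        for row in source_rows:
            t = transform(row)
            key = (t["contest"], t["candidate_id"])
            warehouse[key] = warehouse.get(key, 0) + t["votes"]

        print(warehouse)   # {('US President', 'STATE_CANDIDATE_ID_0001'): 789}

    The transform step is where the naming problem described above gets solved once, publicly, instead of being left to thousands of separately programmed machines.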

    The process starts with the information that goes onto each ballot in the state, each county and each jurisdiction. Each one has a list of candidates and questions that need to go on the ballots. The higher entities would define all the entries that are common to the smaller entities within them; for example, the state would define state-wide ballot entries, counties would add ones for the county, and so forth. While the ballots are designed, the schemas for the DBMS and data warehouse would also be defined. The exact names and descriptions that go onto each line of each ballot would be matched with corresponding data element definitions in the databases (for original recording) and data warehouses (for reporting, accumulating and rolling up), along with the ETL. This could all be tested prior to mail-in ballots and early voting. Moreover, everything about it could be made fully public – the ballot definitions, the schema definitions and the ETL. Multiple processes in multiple clouds (for safety) that implement it could be made visible.

    There is no reason why the whole thing couldn’t be made public. The data warehouses could publicly display their totals, updated as frequently as desired: each second, every five seconds, whatever. The totals could be displayed simultaneously for each voting location, precinct, town, county and state. Along with a real-time display of the voters who arrive to vote and the ballots that arrive in the mail for processing, you have a fully transparent system.
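
    And here is a sketch of the roll-up itself, assuming each confirmed vote record carries the precinct, county and state where it was cast; the sample data is invented:

        from collections import Counter

        # Each confirmed vote record carries where it was cast.
        votes = [
            {"state": "VA", "county": "Fairfax", "precinct": "031", "candidate": "A"},
            {"state": "VA", "county": "Fairfax", "precinct": "031", "candidate": "B"},
            {"state": "VA", "county": "Henrico", "precinct": "007", "candidate": "A"},
        ]

        by_precinct = Counter((v["state"], v["county"], v["precinct"], v["candidate"]) for v in votes)
        by_county   = Counter((v["state"], v["county"], v["candidate"]) for v in votes)
        by_state    = Counter((v["state"], v["candidate"]) for v in votes)

        # These totals could be re-published every few seconds, fully publicly.
        print(by_state)   # Counter({('VA', 'A'): 2, ('VA', 'B'): 1})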

    Can databases handle this? Amazon’s cloud database handles over 10,000 transactions per second. Multiple copies could be used, with full redundancy. Capacity is not an issue, nor is reliability or security.

    This does not fully solve the problem of intense feelings in local officials and the desire to adjust results to suit their preferences. But it goes a long way to getting those people out of the loop and making the essential back-office operations of accumulating the votes counted by the ballot counting machines transparent, until the proprietary machines can finally be eliminated altogether.

  • A Key Improvement to Youngkin’s Virginia Voting Integrity Plan

    Governor Glenn Youngkin issued Executive Order 35 to assure voting security in the state of Virginia. The summary and the Order itself are worth a read. While some of it talks about existing procedures, together it’s an excellent foundation for assuring election integrity. The procedures include effective voting list maintenance, ballot security and machine testing, using only paper ballots. If all states adhered to this standard, our elections would be more secure than they are.

    The biggest vulnerability remains the numerous local voting machines (ballot counters), supporting computers and officials. In this post, I describe the intense feelings some local officials have and their motivation to alter election results. I propose a solution that eliminates all the proprietary hardware with its attendant vulnerability to administrative corruption. Such a system cannot be built in time for this year’s elections. However, there are measures that can be taken this year that will address the vulnerabilities to corruption by local officials that are unaddressed by Executive Order 35.

    The issue is that the ballot counting machines are proprietary computer-based devices produced in small quantities by specialized companies to meet widely varied state-by-state requirements. In addition, they need to be coordinated with the design of the ballots, which cover not only national and state-wide voting, but also county-wide and often local elections on a single ballot. Each design needs to be coordinated with the ballot machines that will be fed the filled-out ballots. A ballot machine’s administrator sets it up to read a specific ballot design, with a specific list of candidates and questions. Having the machine certified in general by bureaucrats is useless. What matters is whether the machine has been set up to read the specific ballot types it will be fed, and its ability to do so accurately. Then it matters whether the machine is able to recognize that it has been fed the wrong type of ballot and refuse to process it. How are the results stored and given out for totalling? Are they on a display that a human reads and types into a computer? If so, using what program? How are the numerous ballot types, and thus vote totals, handled? And how can this be displayed along with the voter registrations, as called for in the EO? What computer, using what software, coordinates with all the local voting place computers to track totals?

    I hope that the total transformation of local voting I have proposed happens in time for the next election cycle. Gov Youngkin’s EO handles most of the key issues of voter list maintenance and paper ballot handling and tracking. Paying some attention to the remaining vulnerabilities in the administration of the ballot counting machines and associated data handling would cure the largest remaining vulnerability short of eliminating the proprietary machines altogether.

  • Voting Integrity: The Problem of Local Control

    Suppose some adult who had never played baseball, even for fun as a kid, was quickly taught the rules and played for a day. How well do you think they would play? Suppose they tried again two years later and then again a few times, each with a two year gap between tries. Do you think they would know what the infield fly rule was, much less be able to hit a pitched ball?

    That’s what it’s like for the people who administer the voting machines at the more than 170,000 voting precincts in the US on voting day.

    You might say it doesn’t matter, because they’re all skilled computer administrators. Sure. That’s like saying they’re good golfers, so when they walk onto the baseball diamond and stand on the pitcher’s mound, it’s no problem.

    The kids who practice hard in Little League play the same game of baseball as the New York Yankees, just at a different level. What if the rules varied dramatically from place to place? What if some places had balls, but the bats were flat and there were wickets, like in Cricket, because each league makes up its own rules?

    Even worse, suppose that many leagues decided that physical bats and balls were too old-fashioned, and that they should play video games instead? But each county and state would decide the rules for the video game its administrators would control and make different decisions? This is just like the mess with paper ballots and varying levels of automated counting and voting.

    What about the makers of the equipment that was supplied for the game that was played every two years? They would have to carefully understand the updated rules each jurisdiction (county, state) would require and update their sporting equipment and video games to follow the requirements – and get it right the first time so that on game day, it would actually work and the inexperienced administrators wouldn’t screw it up too badly. What small company could manage to address such a semi-custom, always-changing market like this and do it error-free?

    You’ve got people without experience playing a game once every two years with a wide variety of radically different, changing equipment made by tiny suppliers. That’s not all!

    Suppose the point of this voting equipment is to enable local people all over the country to vote on their favorite baseball players. Suppose that officials in each state, county and town (there are over 3,000 counties and over 89,000 local governments) decided what and who should be on each ballot, and exactly how they should be named. There are local Little League players, players on local adult leagues and several levels of professional teams. Officials at each level control what the names are. Suppose one official decides that the player Mr. Berra should be called “Lawrence Berra.” Another decides he should use a more recognizable name, “Larry Berra.” Another decides that a widely used nickname is best, “Yogi Berra.” When it comes time to add up the votes, there’s a reasonable chance that all three Berras could have vote totals. This may sound like making fun, but it happens often in real life!

    People feel strongly about baseball. They root for their home team, and can throw vicious insults at players for rival teams. Suppose it’s voting time for the best players. Fans come into the jurisdictions and vote in whatever way it’s supported. The inexperienced administrator controlling the local version of widely varied, changing equipment is in charge of counting the votes correctly, and other officials at the thousands of higher levels are in charge of summing up the votes they’ve been given for people who aren’t named consistently. This is baseball – the stakes are huge.


    To give an idea of the intensity of the feelings, there is a widely read book called “Red Sox Fans Are from Mars, Yankees Fans Are from Uranus: Why Red Sox Fans Are Smarter, Funnier, and Better Looking (In Language Even Yankee Fans Can Understand).” How careful and strictly rule-following do you think the voting administrators in the Bronx and in Boston’s South End are going to be while working with their peculiar, rarely-used semi-custom machines to do the counting? Not to mention the county and state-level administrators?

    Suppose Yogi Berra isn’t elected a favorite baseball player and fraud is suspected as the reason. What can be done? The natural response is to conduct an audit of the voting that is considered suspicious. The trouble is, none of the systems (or people) involved have complete, secure, unalterable audit trails! Someone could have changed the settings on a paper ballot scanning machine and no one would know. If you tried to audit for the whole state, you’d have to go to the many places scanned paper ballots were kept, set up machines for the different rules in thousands of different towns (who’s on the ballot, where they are and how they’re named). Given the problems with inexperienced people and varied equipment, there will be problems during the audit just as there were during the original tally. Auditing electronic votes is even harder, if it can be done at all. And the audit will certainly take a long time, when everyone reasonably wants accurate results soon after voting closes.

    Today’s voting and auditing is a highly suspicious mess for many reasons. Decades of incremental changes haven’t made it better. Can anything be done to achieve true voting integrity?

    Yes. High integrity voting with near-real-time results can be achieved, but it requires a revolution in the voting process – not much for voters, but a complete re-start of the voting systems and counting process.

    The main points of the new system are:

    • Eliminate electronic voting machines. These are costly, widely varying devices that require skilled, accurate, bias-free administration.
    • Eliminate paper ballot scanning machines that also count the votes; they are also computer systems requiring setup and administration!
    • If people vote at a voting location, they fill out a paper ballot in private. They take the ballot to a screened-off desk and feed it to a scanning machine connected to a simple off-the-shelf computer with a screen. After scanning, they see the image on the screen and can verify it’s accurate, or re-scan. The image is then converted to the intended votes, which are displayed for confirmation. Upon confirmation, the voter may optionally be given a print-out record of their votes, like a receipt. The voting totals are instantly updated in multiple secure clouds (see the sketch after this list).
    • If ballots are received in the mail, the process is much the same, except a poll worker does the work at a voting station that isn’t screened off.
    • The voting location is continually monitored with security cameras, including all handling of paper ballots, before and after scanning, and when boxed in a storage room. The rule is that ballots can go into the room but they can't be removed.
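
    To make the “multiple secure clouds” idea concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not a specification: the cloud stores are simulated as in-memory lists, and the record fields are invented. The point is simply that each confirmed ballot can be written as a hash-chained record to several independent stores, so that tampering with any one copy is detectable.

        import hashlib, json, time

        # Simulated independent stores; in practice these would be separate
        # cloud accounts operated by different parties (an assumption).
        CLOUD_STORES = {"cloud_a": [], "cloud_b": [], "cloud_c": []}

        def record_ballot(precinct, choices):
            """Append a confirmed ballot to every store, chained to the previous record."""
            prev = CLOUD_STORES["cloud_a"][-1]["hash"] if CLOUD_STORES["cloud_a"] else "GENESIS"
            record = {
                "precinct": precinct,
                "choices": choices,              # e.g. {"favorite_player": "Yogi Berra"}
                "timestamp": time.time(),
                "prev_hash": prev,
            }
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            for store in CLOUD_STORES.values():  # every copy gets the same chained record
                store.append(dict(record))

        def verify(store):
            """Recompute the chain; any altered or deleted record breaks it."""
            prev = "GENESIS"
            for rec in store:
                body = {k: v for k, v in rec.items() if k != "hash"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if body["prev_hash"] != prev or expected != rec["hash"]:
                    return False
                prev = rec["hash"]
            return True

        record_ballot("Bronx-12", {"favorite_player": "Yogi Berra"})
        record_ballot("Boston-03", {"favorite_player": "Ted Williams"})
        print(verify(CLOUD_STORES["cloud_b"]))   # True until someone tampers

    A real system would add authentication, digital signatures and independent operators, but even this toy version shows how near-real-time public totals and after-the-fact auditing stop being in conflict.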

    This revolutionary process eliminates all the local custom equipment and human administrators, with the attendant risk of error and corruption. The new centralized system needs to be designed and administered carefully to assure effective and secure results.

    If the typical government or corporate bureaucracy were in charge of building such a system, we could expect the usual results. But a small, entrepreneurial group using COTS equipment and software, taking an open-source approach with full transparency of results (for example, publicly showing vote totals within seconds of being cast), could build it quickly and roll it out incrementally.

    Of course government politicians and bureaucrats would have to support the approach and give up their iron-clad control of the process. But once it was proven in small-scale practice, the only reason to resist would be their support of the existing expensive and fraud-enabling process.

    There is a great deal more to be said about how such a system could be implemented, some of which I have described here.

    https://blackliszt.com/2020/12/how-to-build-a-secure-auditable-voting-system.html

    This isn’t the whole story of how to achieve voting integrity. There are the important issues of accuracy of the voting rolls, voter ID and preventing duplicate votes. But a system of this kind would be a big step ahead.

  • Why is Writing Computer Software Mostly Dysfunctional?

    While not much discussed or broadly recognized, the vast majority of efforts to build computer software are disasters. They take too long, cost too much, and result in varying degrees of crap. Lots of solutions have been promoted and tried. None of them have worked.

    There are exceptions, of course. The exceptions prove that it really is possible to create good software quickly and efficiently. It is highly unlikely that the methods taught in academia are good and merely get screwed up when applied; instead, it’s likely that nearly everyone has it wrong, just as doctors did when they bled patients to cure them and refused to sanitize before operating.

    A major cause of the dysfunction can be found in the approach to building software that was taken for good reason in the early days of computing. This approach was necessary for the first decades of computing. As computers grew more powerful, the necessity of doing things the same way began to fade and finally disappeared. Many of the early cumbersome practices were discarded, but the key focus has remained the core of software development to this day.

    What is this universally accepted, unquestioned aspect of building software that does so much harm? Simple: it’s obsessing over imperative programming language, relegating data definitions and attributes to a necessary annoyance, confined to a tiny island in an obscure corner of the vast sea of procedural language.

    Does this sound awful? No. No one who does it thinks they’re obsessing — they’re just working, writing their code! Similarly, getting water from the local community well didn’t sound awful in the 1800s – until people finally found out about water contaminated with diseases like cholera. It took decades for the need for sanitation to be taken seriously, even though the result of doing it poorly was death! We can hope that procedural language obsession will in the future be recognized as the main source of disease in software.

    Early Computing

    The roots of the language obsession are in the earliest days of computing. It was one thing to build a machine, and quite another to get it to do what you wanted it to do. The start was plugs and switches. Then the stored program computer was given step-by-step instructions in binary machine language. In the 1950s, first FORTRAN and then COBOL were invented to make creating the precise instructions easier, while still enabling the computer to operate at maximum speed. Those were indeed big advances.

    In the 1960’s it still took a great deal of careful work to get results in a timely manner from computers. While languages like FORTRAN made writing easier, the fact that a compiler translated them to maximum speed machine language made their use acceptable.

    The Apollo space capsule had a custom-built guidance system that was essential to its operation. Here is Margaret Hamilton next to a stack of code she and her team wrote for the Apollo Mission computers.

    [Photo: Margaret Hamilton standing beside a stack of printed Apollo guidance code]

    The Apollo guidance computer was a fast machine for its day, but the programmers had to get all the power out of it that they could to guide the capsule in real time. This is an extreme example, but 20 years into the computer revolution, everyone focused on using compiled procedural languages to get performance, and assembler language when necessary.

    It was already evident that getting programs written quickly and well was incredibly hard. In fact, a big conference was held in 1968 to address what was called the "crisis" in software. Nothing got fixed. Meanwhile, efforts continued then and to this day to invent new programming languages that would miraculously make the problem go away. Nothing has changed for the better.

    Partial steps towards declarative

    From the early days to the present, there have been isolated efforts to go beyond simple definitions of data for procedural commands to operate on. Generally, the idea is that procedural commands spell out HOW to accomplish a task, while data definitions and attributes define WHAT the task is. It's like having a map (what is there) and directions (how to get from place to place on the map). See this for an explanation.

    The invention of the SQL database was a small but important early step in this direction. SQL is all declarative. It is centered around a schema (a set of data definitions organized into tables of rows and columns). The SELECT statement states what you want from the database, but not how to get it. WHAT not HOW!
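
    To see the WHAT-not-HOW contrast in running code, here is a small sketch using Python’s built-in sqlite3 module (the table and column names are invented for illustration). The procedural version spells out how to walk the rows and do the arithmetic; the SQL version just states what result is wanted.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE sales (department TEXT, amount REAL)")
        conn.executemany("INSERT INTO sales VALUES (?, ?)",
                         [("hardware", 120.0), ("garden", 75.0), ("hardware", 60.0)])

        # HOW: procedural code walks every row and accumulates the total itself.
        total = 0.0
        for department, amount in conn.execute("SELECT department, amount FROM sales"):
            if department == "hardware":
                total += amount
        print(total)   # 180.0

        # WHAT: the declarative statement says what is wanted, not how to compute it.
        (total,) = conn.execute(
            "SELECT SUM(amount) FROM sales WHERE department = 'hardware'").fetchone()
        print(total)   # 180.0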

    You would think this would have led to a revolution in language (HOW) obsession. It didn't. In fact, because the language obsession stayed in charge, in some ways things got worse.

    A few years after the DBMS revolution, people started putting big collections of historic data into what were called data warehouses. The idea was to make reporting easier without impacting production databases. Before long, OLAP (OnLine Analytical Processing) was invented to complement existing OLTP (OnLine Transaction Processing). While there were many differences, the core of OLAP was having a schema definition in the form of a star (star schema), with a central table of transactions and tables related to it containing the attributes (dimensions) of the transactions, typically organized in hierarchies. So there would be a time dimension (days, weeks, months), a location dimension (office, region, state) and others as relevant (department, product, etc.), with measures like sales and profits in the central table. After constructing such a thing, it was easy to get things like the change in sales from month to month in the hardware department without writing code.

    OLAP was and is powerful. It assigned attributes to data in hierarchies, with unchanging programs that made it easy to navigate around. You could add attributes, dimensions, etc. without changing code! What an idea! But the idea was strictly confined to the isolated, distant island of OLAP and had no impact on software as a whole. The procedural language obsession continued without pause.
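
    Here is a toy sketch of the OLAP idea in plain Python (all names invented). The facts and their dimensional attributes live in data, and one small, unchanging roll-up function answers questions for any combination of dimensions. Adding a dimension or attribute means adding data, not changing code.

        from collections import defaultdict

        # Fact "table": one row per transaction, tagged with dimension members.
        facts = [
            {"month": "2024-01", "region": "Northeast", "department": "hardware", "sales": 120.0},
            {"month": "2024-01", "region": "Northeast", "department": "garden",   "sales": 75.0},
            {"month": "2024-02", "region": "Northeast", "department": "hardware", "sales": 140.0},
        ]

        def rollup(rows, by, measure="sales"):
            """Generic roll-up: sum a measure grouped by any list of dimensions."""
            totals = defaultdict(float)
            for row in rows:
                totals[tuple(row[d] for d in by)] += row[measure]
            return dict(totals)

        # Month-over-month sales for the hardware department -- no new code needed.
        hardware = [f for f in facts if f["department"] == "hardware"]
        print(rollup(hardware, by=["month"]))
        # {('2024-01',): 120.0, ('2024-02',): 140.0}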

    Declarative front and center

    Procedural code is necessary. Code is what makes a machine run. However, the time for near-exclusive obsession about procedural code has long since passed. Limitations of computer speed and storage space were a legitimate reason to obsess about using the speed you had optimally. Think about cars. Engineers worked for decades to get the maximum speed of cars from about 10 MPH to finally breaking 100. Now it's in the hundreds. In a much shorter period of time, computers have increased in speed by millions of times. Computer speed is rarely an issue.

    How can we spend a little of the mountains of excess, unused computer speed to help make creating computer software less dysfunctional? Maybe instead of concentrating on procedural languages, there's another way to get computer software to work quickly and well?

    There is a proven path. It's obsessing about WHAT is to be done. Obsessing about the data and everything we know about the data — its attributes. This means applying the fruitful but limited approach we took with OLAP, and extending it as far as possible. In other words, instead of creating a set of directions from every possible starting point to every possible destination, we create a map in the form of metadata and a tiny, rarely-changing direction-generating program that takes as input starting and ending points and generates directions. You know, like those direction-generating programs that are so handy in cars? That's how they work! See: https://blackliszt.com/2020/06/the-map-for-building-optimal-software.html
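
    Here is a literal, minimal version of the map-and-directions picture (plain Python, with an invented toy map). The map is pure data; the one small program that generates directions never changes, no matter how many places and roads get added to the map.

        from collections import deque

        # The "map" is data: place -> neighboring places. Adding a town or a road
        # means editing this data, never the direction-generating code below.
        ROADS = {
            "home":    ["main_st"],
            "main_st": ["home", "bridge", "market"],
            "bridge":  ["main_st", "stadium"],
            "market":  ["main_st"],
            "stadium": ["bridge"],
        }

        def directions(start, end):
            """Tiny, rarely-changing program: breadth-first search over the map data."""
            queue, seen = deque([[start]]), {start}
            while queue:
                path = queue.popleft()
                if path[-1] == end:
                    return path
                for nxt in ROADS.get(path[-1], []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        print(directions("home", "stadium"))   # ['home', 'main_st', 'bridge', 'stadium']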

    When we do this, we'll advance from the endless complexities of the solar system as described by Ptolemy to the simple, clear and accurate one described by Newton. What Newton did for understanding the movements of the planets, Metadata obsession will do for software. See this for more: https://blackliszt.com/2022/10/how-to-improve-software-productivity-and-quality-code-and-metadata.html

    Software development is stuck in the endlessly complex epicycles of Ptolemy; we need to get to Newton.


  • How Many Jobs Will AI Eliminate?

    The new ChatGPT technology answers questions in English in a way that can be hard to distinguish from what a human would have written. This is scary! What will happen to all those highly paid knowledge-worker jobs?

    Notice how none of the stories talk about the history of AI. AI has been on the verge of ousting human beings from important jobs since the 1950s. First it was checkers. In the mid-1960s ELIZA was having typed conversations with people in English, impressing many with its conversational, interactive abilities; by around 1970 SHRDLU could not only converse but could answer questions and perform actions in its world of blocks. By the early 1970s many experts were talking about how AI would soon rule the world. See this: https://blackliszt.com/2019/12/getting-results-from-ml-and-ai-6-fintech-chatbot.html

    All the talk about what would happen “soon” faded away and was forgotten. Then so-called “expert systems” were all over the news, with lots of investment and talk about how this time it would really happen. Before long, IBM’s Deep Blue beating the world chess champion captured the news. Lots of talk, little action.

    We’re in a hot cycle again. It’s often hard to tell whether a given paragraph was written by ChatGPT or a human being! Loads of people who write for a living will be put out of work, and before long the robots will take over!

    After over 60 years of AI being super-hot and then fading away with little change, what’s different this time? Not much.

    So how many jobs will be lost? A couple of centuries of experience with automation gives us the answer: lots of jobs will be eliminated through automation – but slowly, step by step and with lots of resistance along the way.

    At the time of the American revolution, over 90% of the population was involved in agriculture. One step at a time, those jobs were automated, so that today, under 1% of the US population is employed in agriculture. Yet there's food enough for everyone. While the distribution of jobs is vastly different today, the population as a whole is more productive, better off and largely doing jobs that did not exist back then.

    The driving force of automation is reducing or eliminating human labor. The most repetitive, easily taught jobs that have the lowest status are nearly always the ones eliminated. The people whose jobs are eliminated usually don’t like it, and sometimes fight to keep their jobs. For example, in the 1810s in England, automated Jacquard looms created textiles faster and better than human-operated looms. The workers didn’t like it; a movement grew up named after Ned Ludd, who according to legend was the first person to smash a loom in protest. A secret society called the Luddites did their best to resist the automation but lost in the end. See this: https://blackliszt.com/2020/01/luddites.html

    This may not seem relevant to computers eliminating human labor. Looms, after all, aren’t electronic computers. However, the revolutionary Jacquard looms were mechanical computers — they executed a "program" that was encoded physically in punched cards, enabling them to flawlessly execute elaborate patterns in the woven cloth.

    Bottom line: automation continues to replace people in jobs on a regular and ongoing basis, just as it has for the last couple of hundred years. The people who are affected resist it if they can, like the Luddites two hundred years ago who smashed the Jacquard looms that were putting them out of work. But in the end, the automation happens. It's tough for the workers whose jobs are eliminated, but overall, people are better off.

    But isn't the new AI, ChatGPT, a total game-changer?

    How about lawyers? They write a lot – surely ChatGPT will produce what they do more quickly and inexpensively. Nope. Lawyers who write repetitive documents like wills are already being displaced by technology much simpler than ChatGPT – technology that produces reliable, best-in-class results each and every time. LegalZoom, for example, guides you through questions and choices and assembles a will for you from boilerplate chunks and fill-ins that meets the requirements of your state. A human lawyer couldn’t do it better – and in fact could do it worse, since every decision, choice and boilerplate chunk in LegalZoom (and its competitors) is approved by legal experts.
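
    To illustrate why this kind of automation is so reliable, here is a minimal sketch of expert-approved document assembly in plain Python. The clauses, questions and per-state rule are invented stand-ins, not anything LegalZoom actually uses; the point is that every chunk is pre-approved, so the output is the same best-available text every time.

        # Hypothetical, expert-approved boilerplate chunks.
        CLAUSES = {
            "opening":   "I, {name}, residing in {state}, declare this to be my will.",
            "executor":  "I appoint {executor} as executor of my estate.",
            "guardian":  "I appoint {guardian} as guardian of my minor children.",
            "witnesses": "Signed before {witness_count} witnesses as required in {state}.",
        }

        # Hypothetical per-state rule the assembled document must reflect.
        STATE_WITNESS_COUNT = {"NJ": 2, "VT": 3}

        def assemble_will(answers):
            """Assemble a document from approved chunks based on the user's answers."""
            parts = [CLAUSES["opening"], CLAUSES["executor"]]
            if answers.get("has_minor_children"):
                parts.append(CLAUSES["guardian"])
            parts.append(CLAUSES["witnesses"])
            context = dict(answers, witness_count=STATE_WITNESS_COUNT[answers["state"]])
            return "\n".join(chunk.format(**context) for chunk in parts)

        print(assemble_will({
            "name": "Jane Doe", "state": "NJ", "executor": "John Doe",
            "has_minor_children": True, "guardian": "Mary Roe",
        }))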

    The automation of writing legal documents like wills and LLC filings illustrates the key reason why amazing AI achievements like world-class chess playing and ChatGPT don’t end up powering widespread change: narrow, subject-specific algorithms that produce consistent, transparent and auditable results are always better.

    How about in the medical world? A battle to apply AI in medicine has been going on for years. IBM’s Watson AI system has made repeated deals with prestigious hospital systems – deals that have mostly fallen apart. This is partly because the doctors resist being replaced, and partly because Watson simply can’t handle the complexity and nuance of being a good doctor. Nonetheless, automation is taking place in medical systems, mostly in the support staff, the people who help schedule and the ones you never see in the back office moving all the information among different systems that keep the system going. Much of this technology is sophisticated RPA (Robotic Process Automation), which is like a robot operating different pieces of software on different screens. See this: https://blackliszt.com/2018/08/getting-results-from-ml-and-ai-4-healthcare-examples.html

    ChatGPT is indeed amazing technology. Some version of it is likely to play some kind of role in ever-evolving automation. But, like earlier AI "revolutions," it won't have nearly as much impact as people think, and the path to practicality will remain narrow, focused applications.

  • The Amazing Path to High-Quality, Fast-to-Change Software

    Everyone wants software that does what it’s supposed to do, runs fast without down time and can be changed quickly without causing problems. Who doesn’t want this? For more details on software goals, see this.

    Everyone claims that their methods are great at achieving those goals. Sadly, such assertions are mostly baseless and in fact the touted methods do a terrible job. But they’re standard practice!

    I’ve written extensively about the way to meet the goals in dozens of blog posts and a couple of books. Here is a specific step-by-step way to move from standard architecture to one that meets the goals, in simple terms. It’s a modified version of a path taken by a small, rapidly growing software company that I worked with. Incremental steps of this kind, with value at each step, are usually superior to the massive re-write approach that some software people are tempted to take.

    When the new tech leader took over, there was no QA. So he first put in simple UI click-script-based QA and a full build process. Having done that, he wanted to move from his existing code organization to components to make change cleaner. But he had a reliability problem – it was good, but because of the nature of his customer base, it needed to be near-perfect. He started by thinking about adding a second data center, putting in database replication and then somehow doing a switch-over when the primary database went down.

    Replication

    He first focused on replication. All the transactions would arrive at a primary data center, with results stored on disk. The disk contents would be replicated to a backup data center, so that when it needed to take over, everything would be in place. It’s easy to set up a storage system to do this.

    The trouble is that replication & availability functions have been moving up the stack, for good reason. While it may be trickier to set up database replication, the results are usually much better.

    DBMS replication

    Even when handling replication at the DBMS level, the DBMS transaction is guaranteed, but the user’s task may be just half done. The database will think everything is cool, but unless the application is 100% lined up, the results could be bogus. Yuck.

    Application-level replication

    Serious applications tend to have application-level replication. This basically means recording the field-level changes a user makes during an application session, from start through confirmation. This can and should be done in a generic way, so that when the application adds or changes a field, the recording, transfer and replay at the other site don’t need to change. This has the same net effect as database replication and avoids the issues.
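
    Here is a minimal sketch of what “generic” means here (plain Python, hypothetical names). The recorder captures whatever field-level changes a session makes, and the replay at the other site applies them the same way, so adding a new field to the application never touches the recording, transfer or replay code.

        import json, time

        class SessionLog:
            """Record field-level changes generically during one user session."""

            def __init__(self, session_id):
                self.session_id = session_id
                self.entries = []

            def record(self, entity, entity_id, field, value):
                # Works for any field of any entity -- no per-field code.
                self.entries.append({"t": time.time(), "entity": entity,
                                     "id": entity_id, "field": field, "value": value})

            def confirm(self):
                """Ship the confirmed session to the backup site (here: a JSON string)."""
                return json.dumps({"session": self.session_id, "changes": self.entries})

        def replay(payload, store):
            """At the backup site, apply the same field-level changes to its own store."""
            for c in json.loads(payload)["changes"]:
                store.setdefault((c["entity"], c["id"]), {})[c["field"]] = c["value"]

        log = SessionLog("sess-42")
        log.record("customer", 7, "email", "yogi@example.com")
        log.record("order", 101, "status", "confirmed")

        backup = {}
        replay(log.confirm(), backup)
        print(backup)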

    Application log

    Building an app-level log for replication is VERY close to building a … user transaction log! Which you’d like to have anyway! So make sure the extra information is there. This has many uses, including being able to show the user what they did and when they did it.

    Instead of needing a big, intense, fast connection between the sites like you need for storage or database replication, you just need to ship and apply the session transaction log, which you can do at the end.

    Replaying application transaction logs

    Application transaction logs solve the reliability problem and give the user full access to their history with the application. They are also the crucial foundation of an incredibly important architectural advance: champion/challenger QA. This is how you can dump those QA scripts and enable rapid testing of new features for rapid, problem-free deployment.

    In addition to having two identical copies of the application running in the two data centers, you bring up the proposed new version. Once you’ve got that, you can replay logs against a new copy of the application, and make sure you get the same results as last time. Guess what – that’s real-life QA! The only extra thing you need is the comparison, which again can be built once, regardless of whether there are 10 functions or 10,000. See this for much more.
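
    Here is a minimal sketch of the champion/challenger replay idea, with the two application versions reduced to hypothetical Python functions. The replay-and-compare loop is generic: it works the same whether the application has 10 functions or 10,000.

        def champion(request):
            """Stand-in for the current production version of the application."""
            return {"total": request["qty"] * request["price"]}

        def challenger(request):
            """Stand-in for the proposed new version; it should match the champion."""
            return {"total": request["qty"] * request["price"]}

        transaction_log = [
            {"qty": 2, "price": 10.0},
            {"qty": 5, "price": 3.5},
        ]

        def compare_versions(log, old, new):
            """Replay every logged request through both versions and report differences."""
            mismatches = []
            for i, request in enumerate(log):
                a, b = old(request), new(request)
                if a != b:
                    mismatches.append((i, a, b))
            return mismatches

        print(compare_versions(transaction_log, champion, challenger))   # [] means the challenger passes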

    Live parallel test

    Once you’ve got that, you can implement live parallel test, which takes all the strain and risk out of releasing. It means doing the replication to another copy of the stack, just like you do for DR, except it’s the new code. If everything works well for however long you feel is necessary, all you do is switch which copy of the application sends results back to the user.

    What’s gone when you do this thing that a small number of successful organizations do to their great benefit? Among other things: QA scripting. Double coding. Fixing QA bugs. Changing the script when you change the app. Unit testing. Test-driven development. Etc.

    Moving to components

    Yes, components and layers and objects and microservices are what experts are in favor of. Beware of experts! And above all, even though you may not give a talk at a conference about it, migrate your code to … yes, a monolithic architecture.

    Can’t you use queuing when it seems to make sense? Of course you can! But it should be something in-memory and simple like redis queues.
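
    As a sketch of how simple that can be, here is a toy producer/consumer using the common redis Python client (the queue name and task fields are invented, and a local Redis server is assumed to be running):

        import json
        import redis

        r = redis.Redis()   # assumes a Redis server on localhost

        # Producer: push a task onto an in-memory queue.
        r.lpush("work_queue", json.dumps({"order_id": 101, "action": "send_confirmation"}))

        # Consumer: block until a task arrives (or the timeout expires), then process it.
        item = r.brpop("work_queue", timeout=5)
        if item is not None:
            _queue_name, payload = item
            task = json.loads(payload)
            print("processing", task)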

    Active data migration

    It’s standard practice to store all user and transaction data in a DBMS, and to call on the data from the DBMS when the user wants to do something. Sadly, this standard approach has long since become obsolete, as I explain in this post from more than a decade ago. It’s a killer of application speed, flexibility and everything else. And there are proven in-memory databases like redis that can do the vast majority of jobs. If you’re dying to use a DBMS, they continue to be useful for archive and reporting!

    Doing this will also tremendously simplify your champion/challenger testing.

    The final push to speed of application change

    Doing the foregoing things to your application will help on multiple dimensions. The final push, which can be step by step, is the one that will do the most to enable unanticipated changes to be made to your application. Conceptually, it’s pretty simple. What makes change hard? Finding all the places that need to be changed. What if the number of places you need to go to make a change shrinks, approaching over time a single place needing to be changed? You’re in the winner’s circle! Bonus: as you do this, the number of lines of code in your application will actually decrease! There’s lots to be said about this, but the core concept and value really are as simple as this.
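
    Here is a toy sketch of what “one place to change” can look like (plain Python, invented field names). The field definitions live in one piece of metadata, and validation and display are both generated from it, so adding a field is a one-line change instead of a hunt through the code base.

        # The single place that defines the application's fields (metadata, not code).
        FIELDS = {
            "email":    {"label": "Email address", "type": str, "required": True},
            "quantity": {"label": "Quantity",      "type": int, "required": True},
            "notes":    {"label": "Notes",         "type": str, "required": False},
            # Adding a field is one line here; validation and display follow automatically.
        }

        def validate(record):
            """Generic validation driven entirely by the metadata above."""
            errors = []
            for name, spec in FIELDS.items():
                value = record.get(name)
                if value is None:
                    if spec["required"]:
                        errors.append(spec["label"] + " is required")
                elif not isinstance(value, spec["type"]):
                    errors.append(spec["label"] + " must be " + spec["type"].__name__)
            return errors

        def display(record):
            """Generic display, driven by the same metadata."""
            return "\n".join(spec["label"] + ": " + str(record.get(name, ""))
                             for name, spec in FIELDS.items())

        print(validate({"email": "yogi@example.com", "quantity": "three"}))
        # ['Quantity must be int']
        print(display({"email": "yogi@example.com", "quantity": 3}))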

    What are you waiting for? Get started! High quality, fast-to-change software is yours for the taking … err … fast for the building.

  • The Conspiracy to Prevent Innovation

    There is a conspiracy to prevent innovation. It’s widespread, strong and effective, more in some domains than others. Like all conspiracies, it goes to great lengths to conceal its actions and goals. The members of the conspiracy are desperate not to be revealed, because they would certainly be subjected at least to ridicule, if not total ostracism. More important to the active members of the conspiracy, they would lose power, if not their jobs.

    The anti-innovation conspiracy is at war with the transformative power of radical improvements in computer and networking technology. Computer evolution, which continues to follow Moore’s Law of exponential growth (something without precedent in human experience), largely powers innovation, directly or indirectly. The exploding growth of computer power and network speeds, with similarly amazing reductions in size and cost, is an overwhelming force, practically begging people and organizations to tap its power and make things better for everyone. See this:

    https://blackliszt.com/2013/12/fundamental-concepts-of-computing-speed-of-evolution.html

    As anyone can see with the transformation of rotary, wired phones into wireless, mobile computer phones, the forces of innovation are winning.

    Victories of the Conspiracy

    The entrenched anti-innovation forces refuse to concede defeat and give up. Decades into the computer revolution, the conspiracy lives on, showing few signs of weakening. It has established entrenched positions in a surprising array of places. Even when it can’t prevent innovation, the conspiracy manages to slow the pace of innovation to a crawl in places where conspirators maintain positions of power. They force innovators to slog through sodden, muddy paths while innovators in other industries whiz along on mag-lev trains. There are also widespread pockets of resistance, in which the conspirators manage to hold off the application of proven innovations to their industry, often by decades.

    Skeptical? An amazing fraction of what appear to be innovations are little more than taking advances that are proven in a narrow domain and applying them to a new one. Here is the story of an algorithm that was standard practice in oil refinery operation over 50 years ago and that, decade by decade, is still crawling its way into new domains.

    https://blackliszt.com/2019/08/the-slow-spread-of-linear-programming-illustrates-how-in-old-vation-in-software-evolution-works.html

    Here is an example of a truly beneficial innovation proven over 50 years ago that is still not used in medical imaging.

    https://blackliszt.com/2019/07/barriers-to-software-innovation-radiology-1.html

    https://blackliszt.com/2019/08/barriers-to-software-innovation-radiology-2.html

    There are many more examples of the effectiveness of the anti-innovation conspiracy.

    It's Secret!

    Among the greatest strengths of the anti-innovation conspiracy is that it has not been “outed.” No one talks about an “anti-innovation conspiracy,” or even about “the forces that slow or prevent innovation.” The fact that innovation takes place widely, touching all of our personal lives, leaves everyone in awe of what innovation has wrought. And rightly so! What is not discussed are the deep forces that prevent and/or slow innovation. Without the resistance, the innovation that would be unleashed would make today’s by-itself-amazing innovation look sluggish by comparison.

    The anti-innovation conspiracy has two major wings.

    The first wing is particularly pernicious, since it establishes positions inside the people and groups who innovate and implement innovation. In this wing, the conspiracy concedes that innovation will take place, but assures that it will be as expensive, slow and ineffective as possible. The first wing is essentially an effort to cripple and co-opt the “offense,” the efforts of the innovators.

    The second wing operates silently and secretly, often waving flags in support of innovation and innovation efforts. It operates inside the institutions that will be affected by innovation, and inside the organizations, such as regulatory bodies, that control the institutions targeted by innovation. The second wing is essentially an effort to bolster the “defense,” the ability of institutions that could be impacted by innovation to resist the efforts of innovators.

    One of the great strengths of the anti-innovation conspiracy is that the vast majority of people benefit to some extent from the innovation that does take place, and naturally compare what is available to them today with what was available ten or twenty years ago. Progress has happened, and it’s a good thing. Of course I agree with this.

    But meanwhile, the conspirators are rubbing their hands together and mumbling something like “heh, heh … still got ‘em.” The reason for the acclaim for the results of innovation and the near-complete lack of awareness of the conspiracy against it is simple: very few people are in a position to see the amazing innovations, true advances that would make everyone’s lives better, that have been and are still being prevented and/or drastically watered down as a direct result of the efforts of the anti-innovation conspiracy. If there were widespread knowledge of “what could have been,” people would see the results of innovation in a completely different light. It would be like a starving person in a room being given a bit of bread and water; they eagerly consume it and are glad to have it. But how would their attitude be different if they knew that, just through a door in the room, there was a sumptuous feast all laid out, ready to eat, and the person giving them the bread and water was not just preventing them from going through that door, but even keeping them from knowing it was there? Gratitude for the bread and water would suddenly be transformed into fury at the person keeping them from the food they so desperately need, even hiding the fact that it’s there! The vast majority of people are like the starving person in the room, grateful for the food they now have, completely unaware of the feast they can’t eat because the conspiracy successfully hides it from them.

    I know these are strong claims. But after decades of writing innovative software and more decades of looking deeply at innovative companies and the spread of their innovations across various companies and industries, this describes the patterns I have observed.

    Finally, let me make something clear: There is no cabal of anti-innovation conspirators who explicitly communicate among themselves that their goal is to prevent or slow innovation. If there were such a cabal, it would have been exposed and shamed long ago. The long-lasting conspiracy is fueled by a wide variety of goals that appear to have nothing to do with preventing innovation; the conspirators, as they deserve to be called, don’t say or even (for the most part) tell themselves that they are trying to prevent innovation. It’s always something else: they are trying to protect the public, prevent people from being harmed, assuring that things are built to proper standards, trying to maintain stability in a well-oiled operation, avoiding wasteful distractions, and on and on. Most of the people involved might sincerely claim they are promoting innovation everywhere they can. The trouble isn’t their thoughts or words; the trouble is their actions, which have the net effect of preventing or slowing innovation.

    I have already written about some of the issues involved in the anti-innovation conspiracy.

    https://blackliszt.com/2023/07/summary-software-innovation.html

    There is more to be written, for example to shame those who promote themselves and their institutions as being leaders in innovation. I hope others will take up this long-neglected issue.

  • Well-trained Managers Create Software Failure

    Many software projects fail, or in various ways don’t achieve the expected results. This is widely known. Loads of people offer cures, for example that the critic’s favorite software development method wasn’t used, or if it was used, wasn’t used correctly. Training! Consultants! More SCRUM masters! Not just stand-up meetings, but stand-on-your-tippy-toes meetings!

    There is a fatal flaw that often causes software projects to turn out badly. It’s rarely discussed. It’s sometimes called motivation, but it comes down to whether the people doing the actual work – you know, writing the lines of code – feel ownership in what they’re doing. If they don’t own the work, their eyes will be glazed over, and the most productive corners of their minds will be engaged elsewhere.

    If they’re engaged, excited and feel they own the project, however, they are likely to be chastised, boxed in, referred to HR, etc. Someone will sit down with them and utter phrases containing words like “team,” “supportive,” “respect other people,” and so on, all of which are generically good, I guess, but amount to something like this: “Sit down, shut up, keep your nasty opinions to yourself, stop putting other people down, and just do your work.” In other words, go through the motions.

    The people who say things like this are mostly not programmers – they wouldn’t know a line of code if they tripped on it – or are sometimes go-along type programmers who see that corralling wild programmers is the path to higher pay and increasing management responsibilities – in other words, more pay for less work.

    Dilbert has an on-point description of software management.

    [Dilbert cartoon on software management]

    Yes, some good programmers are mean, sometimes. But mostly they just sound mean when what’s really going on is that they’re reacting strongly to stupid ideas or proposals. Because they care. The targets of their wrath should grow up and consider the possibility that they’re doing the equivalent of proposing that the race should be run by people holding their hands behind their back, or hopping on one foot. Is it nice to mock such a brain-dead idea? No. Should the person proposing the brain-dead idea be made to understand that they’d better do a brain transplant and get a brain that’s at least moderately alive, or switch jobs to serving the coffee? Yes. The sooner the better. Perhaps the idiot could even try listening to the “mean programmer” and understand the substance of what they’re saying, instead of just the emotional baggage.

    That’s a nice fantasy, but it very rarely happens. Most often, it’s the unrestrained programmer who is put down. Top programmers are given a clear choice. Choice 1: leave. Choice 2: do what you’re told and stop caring.

    Dilbert boils it down: learn to lie.

    [Dilbert cartoon: learn to lie]

    Dedicated MBA/HR types will counter with stuff about the importance of teamwork, how the team succeeds together or fails together, the programs they run to inspire motivation, belonging and dedication to quality, and all sorts of good-sounding things. Good intentions! Sorry, but in the end, the only thing that matters is results. If your “team” has carefully included everyone and reached a consensus about the best path to get to the top of the mountain, and gets stuck part way up with nowhere to go except jumping off a cliff or retreating to the base of the mountain, well, they’ve failed together. I hope they’re happy. Meanwhile, some collection of misfits could be charging up the mountain in leaps and bounds, anticipating obstacles that your wonderful crowd of conformists and liars were blind to, and reacting creatively to obstacles they had failed to foresee, overcoming them quickly.

    This is one of the big reasons why large organizations can’t get software done: they are infected with large numbers of trained, credentialed managers and HR people who know nothing about the substance of software and are convinced that their generic management methods should reign supreme.


  • Excellence in Government IT: The Social Security Administration

    I got an email update from the SSA. It said:

    Your Social Security Statement is streamlined and easier to read than ever before. That is because we have redesigned the Statement to provide you the most useful information up front and at a glance.

    You can access your new Statement by signing into your account at socialsecurity.gov/reviewyourstatement.

    Now that you can access your Statement instantly and anytime online, we will not automatically send one by mail.

    We hope you find your new Statement useful and informative.

    I decided to check out this wonderful new statement that's available "instantly and anytime." Here's what happened when I clicked the link:

    [Screenshot: SSA error page saying the online statement is not available, with instructions to call Monday through Friday for immediate assistance]

    Is there any better evidence that government agencies are incapable of achieving the minimal level of software competence that the vast majority of software groups are able to achieve? Is it possible to be more arrogant than to claim that statements are available "instantly and anytime" and then not do it?

    I can only guess that the government buys special computers that need a minimum of 4 hours of rest a day, more on weekends. Because 24 by 7 availability of simple things like this has been easily achievable for at least 25 years.

    I particularly love the statement in the error message that says "If you need immediate assistance you may call us Monday through Friday 8:00am – 7:00pm." Is it possible that the government has a meaning for "immediate" that isn't what normal people think it means?
