Author: David B. Black

  • Twitter can improve software quality by losing most of its engineers

    Twitter has fired boatloads of software engineers and more are jumping ship. Most people predict that losing all those engineers will lead to software disaster. But then, most people don’t know much about software and don’t realize what a disaster Twitter software has been for years. With some intelligent leadership, Twitter's software could dramatically improve while dispensing with 90% of their engineers.

    Twitter Customer Losses

First, let's note that the situation has users really worried. Customers are abandoning ship:

    T1

    Here's some up-to-date analysis:

    T2

    While some say the massive customer losses are due to the new CEO, others are sure that the loss of so many excellent Twitter software engineers will seal the deal.

    Twitter Software Quality

    Those cool internet software companies – their top people wear hoodies or whatever they feel like. They must be fabulous programmers, right?

    Nope. Facebook, for example, produces amazingly bad software. See this and the included links for details on how bad it is – and how they hide as much as possible from their users.

    Facebook, Twitter and the rest aren’t better than “normal” software companies, except in how rich they made their founders. Internet software is a horror show, as I document here.

    I dove into Twitter in particular years ago to see just how bad it was. I found and documented inexcusable failures, which they went to great lengths to disguise.

I tested searches for tweets with Twitter’s own search engine, and documented random patterns of tweets being dropped from or included in search results just days apart. See this and this.

What this means, among other things, is that “trending on Twitter” is based on bad Twitter data – the kind produced by those wonderful, oh-so-cool Twitter engineers.

    Twitter hides the awfulness of its terrible software from you

    Note that the errors I documented were NOT associated with an error message. They were cases where the right answer (for example search results) could be determined and compared to what Twitter provided. That’s what I did, and found that Twitter blithely would state “here’s your answer,” and then provide an answer that was demonstrably false. It would have been more honest had they given an error message instead. But no — that would have been honest; Twitter engineers, while terrible at building software that actually works, have become masters of the simpler, nefarious job of masking bad software.

    Even with such errors, Twitter would still admit failure from time to time, showing this image:

    T3

    Since then Twitter has gotten better at hiding their errors. Following the pattern of bad search results, they give you your feed of tweets and you merrily scroll down, reading, retweeting, liking, and whatever. They have learned to almost never admit failure. The fail whale shown above is history. How do you know if you’re getting all the tweets from the people you follow, and in the right order? You don’t! In fact, there is loads of anecdotal evidence that part of how their engineers spend their time is figuring out how to manipulate your feed for a variety of reasons — including masking the results of their inability to do it the right way — instead of just giving you a common-sense, complete ordered feed.

    Fail whale? I haven’t seen it in years. They can have dozens of servers crash and burn and no one will know the difference. What Twitter does is just pull together your search results or feed based on whatever servers are still limping along with whatever the pathetic Twitter software will give them, and give you whatever crap it has. How will you ever know it’s incomplete, missing important things, etc.? Unless you’re an obsessive crazy person like me and run tests, you’ll never know.

    So what do all those thousands of software engineers do all day long except attend meetings while relaxing on comfy bean bags and digesting all the free food the company provides? Obviously not much in the way of useful programming.

    The alternative

Small teams of smart, motivated programmers typically out-perform teams of hundreds or thousands of people with “software” in their titles employed in big bureaucracies – including in “cool” companies. The concept is simple: most programming organizations are like people who build bridges in times of peace. Those bridges take thousands of people years to build. When you build a bridge in war, like the bridges over the Rhine in World War 2, you have to build it in a DAY, while under enemy fire. And it has to work. Small, under-resourced groups of programmers operate in wartime mode, getting things DONE while groups of thousands continue plodding away at requirements planning. See this. For more, see this.

I see this all the time in my work of evaluating small, innovative start-ups. The small companies don’t have enough time or money to do things the “right” way. They have to get things done. Fast! The ones that do, succeed.

    A great example from another field is the author of the first important dictionary of the English language, Samuel Johnson. He produced excellent work, arguably 200 times more productively than the big committee trying to do the same for the French language. See this for the story.

What Twitter does isn’t hard! And what it does hasn’t fundamentally changed for at least a decade! Those thousands of engineers couldn’t make it work well nine years ago when I ran tests, and all they’ve managed since then is to do a more polished job of hiding the errors and bad answers.

    Whether Elon Musk or someone like him is in charge, the best thing for Twitter and its customers would be for most of the programmers to be shown the door, for the remainder to get with the program of making things actually work and work well, or join their former colleagues as ex-employees. If they really get it going, some programmers who are actually good and, get ready, WANT to write code, GOOD code, may seriously consider joining a re-invented Twitter engineering group.

    It might actually happen, since Mr. Musk is violating the most sacrosanct rule of management: getting involved with the people on the front lines of actually doing the work! There's even evidence that he seems proud of his disgusting behavior.

    T4

    Who does he think he is??!!

  • How to Cure AMD Macular Degeneration

    AMD is the leading cause of visual impairment in the US. It hits many people as they age and causes vision to worsen to the point of blindness.

First the bad news: all I have is some evidence and common-sense logic that there MAY be a cure. Now the good news: there is a never-refuted major study that shows that people who take pills to reduce their blood pressure get AMD at more than twice the rate of those who don't, and there is anecdotal evidence (which needs more study to confirm) that stopping the pills causes the AMD to reverse (not stop or slow down — reverse) its progression. Do you think that's worth digging into? And at least letting everyone who takes BP medications know that there is a good chance that at least 10% of them will needlessly go blind with AMD?

    Blood Pressure Pills

All the medical authorities are united in the importance of fighting the “silent killer” of blood pressure that’s too high, i.e., hypertension. I’ve described in detail that what doctors call “essential hypertension” is NOT a disease.

Pills to lower blood pressure are the most widely prescribed pills in the US, with over 100 million people supposedly cursed by the “disease” of hypertension.

    I started taking them eight years ago as a small part of my fight against a cancer that I had.  Last year I started experiencing symptoms that could have been evidence for heart problems. I had extensive testing and did research on my own. My heart was good but I discovered that my symptoms were side effects of BP pills, widely reported by people but ignored by doctors. I stopped taking the pills and my symptoms faded away to nothing. I tell the story here.

    Blood Pressure Pills and AMD

Then I wondered if the pills could possibly have something to do with my AMD, which was first diagnosed about three years ago. I studied hard and came up with nothing. Big authorities said that having "uncontrolled" hypertension could cause it, along with "bad" diet. Then, in a story I tell here, I discovered a never-refuted medical study published by the American Academy of Ophthalmology and sponsored by the National Eye Institute (part of NIH) showing that taking those pills greatly increases the risk of going blind.

The study remains on-line but is shockingly difficult to find. I found only one eye doctor group that mentions it. You would kinda think they would recommend getting off of BP pills, wouldn't you?

My Drusen Before and After Stopping the Pills

I had detailed pictures of my macula clearly showing the drusen, the things in the eye that hurt vision in AMD. I recently went back to the same doctor, who took another set of careful pictures so we could compare them to the ones from a year ago, when I was still taking BP meds. I stopped taking the pills a bit more than six months ago.

Below are pictures my doctor took of a section of the left eye. (You may have to click the picture to see it all, particularly the wavy drusen on the right.) The top picture is from a year ago and the bottom from a couple of days ago. The drusen are the wavy parts of the two white curves on the right.

    Left

    As she said, the central drusen got larger but the ones in the periphery diminished significantly.

    Generally speaking, drusen either stay the same size or they grow. Mostly grow. That's why AMD is progressive and no one has found a way to make it stop or slow down, much less reverse course. Here's an example of drusen shrinking. Why?

If you smoke, your chances of getting lung cancer are high. If you stop smoking, the chance of getting cancer goes down. If you're an alcoholic, your chances of terrible liver damage go up the longer you drink. If you stop drinking, your liver usually stops getting worse and often gets better. Is it reasonable to think that if taking BP pills more than doubles your chances of getting AMD, as demonstrated by the Beaver Dam study, then stopping the pills would result in good things happening with AMD? We need a study to prove it, but it's a reasonable assumption.

    Conclusion

    I thought blood pressure pills were benign, something you probably had to take when you got older to lengthen your stay on earth. It's what the whole medical establishment and nearly every doctor says. What's a visit to the doctor without taking your BP?

    They're wrong. The evidence that they're wrong is available, but they're no more willing to change than they were in the cases of antiseptic surgery or blood-letting.

That the horrible side-effects of BP pills are universally denied by doctors is bad enough. But making you blind? If there were any conscience in the medical establishment, they would defy the pharma companies and immediately create studies to validate this. Serious mining of the real-world evidence (RWE) in centralized medical charts, which should show the relevant data points, would be a good start.


  • How to Improve Software Productivity and Quality: The Common Sense Approach

    I talked with a frustrated executive in a computer software company. I was about to visit their central development location for the first time, and he wanted to make sure I asked sufficiently penetrating questions so that I would find out what was “really” going on.

    He explained that while he had written software, it was only for a few years in the distant past, and things had changed a great deal since his day. His current job in product marketing didn’t really require him to get into any details of the development shop, and in fact he preferred to stay out of the details for several reasons: (1) he was completely out of date with current technology and methods; (2) he didn’t want his thinking constrained by what the programmers declared was possible; (3) it was none of his business.

    He had developed a keen interest in what was going on in the software group, however, because he realized that it had a dramatic effect on his ability to successfully market the product. His complaints were personal and based on his own experience, but they were fairly typical, which is why I’m recounting his tale of woe here.

    The Lament

    The layman’s lament was an interesting mish-mash of two basic themes:

    • I’m not getting the results I need. There are certain results that I really need for my business. My competitors seem to be able to get those results, and I can’t. Basically, I want more features in each release, more frequent releases, more control and visibility on new features, fewer bugs in new releases, and the ability to make simple-sounding changes quickly. Our larger competitors seem to be able to move more quickly than we do.
    • I think the way the developers’ work is old-fashioned, and if it were brought up-to-date, I would get the results I need. What they do seems to be “waterfall,” with lots of documentation that doesn’t say a lot. There must be something better, along the lines of what we used to call RAD (rapid application development). They only have manual testing, nothing automated, and they tell me it will be years before they can build automated testing! And shouldn’t they be using object-oriented methods? Wouldn’t that provide more re-use, so that things can be built and changed more quickly? They have three tiers, but when I want to change something, the code always seems to be in the wrong tier and takes forever. They’re talking about re-writing everything from scratch using the latest  technology, but I’m afraid it will take a long time and there won’t be anything that benefits me.

    Basically, he was saying that he wants more things, quicker, and better quality. He also advanced some theories for why he’s not getting those things and how they might be achieved, but of course he couldn’t push his theories too hard, because he lacked experience and in-depth knowledge of the newer methods. He even claimed, in classic “the grass is greener” style, that practically everyone accomplishes these things, and he was nearly alone in being deprived of them – not true!

    The usual dynamics of a technology group explaining itself to “outsiders” was also at work here – if you just listen to the technology managers, things are pretty good. The methods are modern and the operation is efficient and productive. There are all sorts of improvements that could be made with additional money for people and tools, of course, but for a group that’s been under continual pressure to build new features, support cranky customers and meet accelerated deadlines with fewer resources, they’re doing amazingly well. The non-technology executives tend to feel that this is all a front, and that results really could be better with more modern methods and tools. The technology managers, for their part, feel like they’re flying passenger planes listening to a bunch of desk-bound ignoramuses complain about their inability to deliver the passengers safely and on-time while upgrading the engine and cockpit systems at the same time. These people have no idea what building automated testing (for example) really takes, they’re thinking. The non-technology people don’t really want to talk about automated testing, of course – they’re the ones taking the direct heat from customers who get hurt by bugs in the new release, and aren’t even getting proposals from the technology management of how this noxious problem can be eliminated. Well, if you can’t tell me how to solve the problem (and you should be able to), how about this (automated testing, object-oriented, micro-services, etc.)??

It goes on and on. The business executives put a cap on it, sigh, maybe throw a tantrum or two, but basically try to live with a situation they know could be better than it is. Inexperienced executives refuse to put up with this crap, and bring in new management, consultants, outsourcing, etc. Their wrath is felt! Sadly, though, the result is typically a dramatic increase in costs, better-looking reporting, but basically the status quo in terms of results, with success being defined downwards to make everything look good. The inexperienced executive is now experienced, and reverts to plan A.

    The technology manager does his version of the same dance. The experienced manager tries to keep things low-key and leaves lots of room for coping with disasters and the unexpected. Inexperienced technology managers refuse to tolerate the tyranny of low expectations; they strive for real excellence, using modern tools and methods. Sadly, though, the result is typically a dramatic increase in costs, better-sounding reports, but basically the status quo in terms of tangible results. The new methods are great, but we’re still recovering from the learning curve; that was tense and risky, I’m lucky I survived, that’s the last time I’m trying something like that again!

    The Hope

    The non-technology executive is sure there’s an answer here, and it isn’t just that he’s dumb. He keeps finding reason to hope that higher productivity with high quality and rapid cycles can be achieved. In my experience, the most frequent (rational) basis for that hope is a loose understanding of the database concept of normalization, and the thought that it should enable wide-spread changes to be made quickly and easily. Suppose the executive looks at a set of functionally related screens and wants some button or style change to be applied to each screen. It makes sense that there should be one place to go to make that change, because surely all those functionally related screens are based on something in common, a template or pattern of some kind. What if the zip code needs to be expanded from five digits to nine? The executive can understand that you’d have to go to more than one place to make the change, because the zip code is displayed on screens, used in application code and stored in the database, but there should be less than a handful of places to change, not scores or hundreds!

But somehow, each project gets bogged down in a morass of detail. When frustration causes the executive to dive into “why can’t you…”, the eyes normally glaze over in the face of massive amounts of endless gobbledy-gook. What bugs some of the more inquisitive executives is how what should be one task ends up being lots and lots of tasks. With computers to do all the grunt work, there’s bound to be a way to make what sounds, feels and seems like one thing (adding a search function to all the screens) actually be one thing – surely there must be! And if everything you can think of is in just one place, surely you should be able to go to that one place and change it! Don’t they do something like that with databases?

    There is a realistic basis for hope

    I’ve spent more of my life on the programmers’ side of the table than the executives’, so I can go on, with passion and enthusiasm, about the ways that technology-ignorant executives reduce the productivity, effectiveness and quality of tech groups, not to mention the morale! The more technical detail they think they know, the worse it seems to be.

    That having been said, the executive’s lament is completely justified, and his hope for better days is actually reasonable (albeit not often realized).

What his hope needs in order to be realized is for there to be exactly one place in the code where each completely distinct entity is defined, and where all information about it is stated. For example, there should be exactly one place where we define what we mean by “city.” This is like having domains and normalization in database design, only extended further.

    The definition of “city” needs to have everything we know about cities in that one place. It needs to include information that we need to store it (for example, its data type and length), to process it (for example, the code that verifies that a new instance of city is valid) and to display it (for example, its label). The information needs to incorporate both data (e.g. display label) and code (e.g. the input edit check) if needed to get the job done. This is like an extended database schema; a variety of high-level software design environments have something similar to this.
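To make the principle concrete, here's a minimal sketch (in Python, with invented names) of a single-place definition of “city”: storage, validation and display information all live in one entry, and generic code reads whatever it needs from there.

```python
# A minimal sketch of "one entity, one definition" -- names are illustrative.
# Everything the application knows about "city" lives in this single entry:
# how to store it, how to validate it, and how to display it.

ENTITIES = {
    "city": {
        "type": str,
        "max_length": 50,
        "label": "City",
        "validate": lambda value: bool(value.strip()) and len(value) <= 50,
    },
    "state": {
        "type": str,
        "max_length": 2,
        "label": "State",
        "validate": lambda value: len(value) == 2 and value.isalpha(),
    },
}

def validate(entity_name: str, value) -> bool:
    """Generic validation: every screen, API and batch job calls this one routine."""
    entity = ENTITIES[entity_name]
    return isinstance(value, entity["type"]) and entity["validate"](value)

def label(entity_name: str) -> str:
    """Generic display label: change it here and every screen changes."""
    return ENTITIES[entity_name]["label"]

print(validate("city", "Springfield"))  # True
print(label("state"))                   # "State"
```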

It must be possible to create composite entities in this way as well, for example address. A single composite entity would typically include references to other entities (for example, city), relationships among those other entities, and unique properties of the composite entity (for example, that it’s called an “address”). This composite-making ability should extend to any number of levels. If there are composites that are similar, the similarity should be captured, so that only what makes each entity unique is expressed in the entity itself. A common example of this is home address and business address.
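A composite like “address” can be expressed the same way: it references existing entities rather than redefining them, and the home/business variants record only what makes them different. Another hedged sketch with made-up structure:

```python
# Composites reference existing entities instead of redefining them (illustrative).
COMPOSITES = {
    "address": {
        "label": "Address",
        "fields": ["street", "city", "state", "zip"],  # references, not copies
    },
    # Variants capture only their differences from the shared base.
    "home_address": {
        "based_on": "address",
        "label": "Home Address",
    },
    "business_address": {
        "based_on": "address",
        "label": "Business Address",
        "extra_fields": ["company_name"],
    },
}

def resolve(name: str) -> dict:
    """Expand a composite by merging it onto whatever it is based on."""
    entry = COMPOSITES[name]
    if "based_on" in entry:
        base = resolve(entry["based_on"])
        merged = dict(base)
        merged.update({k: v for k, v in entry.items() if k != "based_on"})
        merged["fields"] = base["fields"] + entry.get("extra_fields", [])
        return merged
    return dict(entry)

print(resolve("business_address")["fields"])
# ['street', 'city', 'state', 'zip', 'company_name']
```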

    Sometimes entities need to be related to each other in detailed ways. For example, when checking for city, you might have a list of cities, and for each the state it’s in, and maybe even the county, which may have its own state-related lists.

    The same principle should apply to entities buried deep in the code. For example, a sort routine probably has no existence in terms of display or storage, but there should usually be just one sort routine. Again, if there are multiple entities that are similar, it is essential that the similarities be placed in one entity and the unique parts in another. Simple parameterization is an approach that does this.
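A tiny illustration of that parameterization idea: one sort routine holds the similarity, and whatever differs between callers is passed in rather than copied into near-duplicate routines.

```python
# One sort routine; the pieces that vary are parameters, not copies (illustrative).
def sort_records(records, key_field, descending=False):
    """The single place sorting is defined; every caller passes only what differs."""
    return sorted(records, key=lambda r: r[key_field], reverse=descending)

customers = [{"name": "Acme", "balance": 120}, {"name": "Zenith", "balance": 45}]
print(sort_records(customers, key_field="name"))
print(sort_records(customers, key_field="balance", descending=True))
```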

Some of these entities will need to cross typical software structure boundaries in order to maintain our prime principle here of having everything in exactly one place. For example, data entities like city and state need to have display labels, but there needs to be one single place where the code to display an entity’s label is defined. Suppose you want a multi-lingual application? This means that the single place where labels are displayed needs to know that all labels are potentially multi-lingual, needs to know what the current language is, and needs to be able to display the current language’s label for the current entity. It also means that wherever we define a label, we need to be able to make entries for each defined language. This may sound a bit complicated at first reading, but it actually makes sense, and has the wonderful effect of making an application completely multi-lingual.
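Here's roughly how that might look in code (an illustrative sketch, not any particular framework): every entity carries its labels per language, and exactly one routine knows how to pick the current language's label.

```python
# One place where labels are displayed; every entity carries per-language labels.
# Names and structure are illustrative.

ENTITIES = {
    "city":  {"labels": {"en": "City",  "fr": "Ville"}},
    "state": {"labels": {"en": "State", "fr": "État"}},
}

CURRENT_LANGUAGE = "fr"

def display_label(entity_name: str) -> str:
    """The single routine that turns an entity into a label on any screen."""
    labels = ENTITIES[entity_name]["labels"]
    return labels.get(CURRENT_LANGUAGE, labels["en"])  # fall back to English

print(display_label("city"))   # "Ville"
print(display_label("state"))  # "État"
```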

    In order to keep to the principle of each entity defined once, we need the ability to make relationships between entities. The general concept of inheritance, more general than found in object-oriented languages, is what we need here. It’s like customizing a standard-model car, where you want to leave some things off, add some things and change some things.
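One way to picture that more general inheritance, as a sketch with invented helpers: a derived definition starts from its base and records only what it adds, overrides, or leaves off — just like speccing a custom version of a standard car model.

```python
# General-purpose inheritance over definitions: add, override, or remove pieces.
# This is an illustrative helper, not a claim about any particular tool.

def derive(base: dict, add=None, override=None, remove=None) -> dict:
    derived = dict(base)
    for name in (remove or []):
        derived.pop(name, None)        # leave some things off
    derived.update(override or {})     # change some things
    derived.update(add or {})          # add some things
    return derived

standard_car = {"engine": "2.0L", "seats": 5, "roof": "fixed", "stereo": "basic"}
convertible  = derive(standard_car,
                      override={"roof": "folding"},
                      add={"wind_deflector": True},
                      remove=["stereo"])

print(convertible)
# {'engine': '2.0L', 'seats': 5, 'roof': 'folding', 'wind_deflector': True}
```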

    There’s lots more detail we could go into, but for present purposes I just want to illustrate the principle of “each entity defined in one place,” and to illustrate that “entity” means anything that goes into a program at any level. By defining an entity in one place, we can group things, reference things, and abstract their commonality wherever it is found, not just in a simple hierarchy, and not limited to functions or data definitions or anything else.

    While this is a layman’s description, it should be possible to see that IF programs could be constructed in this way, the layman’s hope would be fulfilled. What the layman wants is pretty simple, and actually would be if programs were written in the way he assumes. The layman assumes that there’s one way to get to the database. He assumes that if you have a search function on a screen, it’s no big deal to put a search function on every screen. He assumes that if he wants a new function that has a great deal in common with an existing function, the effort to create the new function is little more than the effort to define the differences. He assumes that look and feel is defined centrally, and is surprised when the eleventh of anything feels, looks or acts differently than the prior ten.

    Because he has these assumptions in his mind, he’s surprised when a change in one place breaks something that he doesn’t think has been changed (the infamous side-effect), because he assumes you haven’t been anywhere near that other place. He really doesn’t understand regression testing, in which you test all the stuff that you didn’t think you touched, to make sure it still works. Are these programmers such careless fools that, like children in a trinket shop, they break things while walking down the aisle to somewhere else, and you have to do a complete inventory of the store when the children leave?

    Programs are definitely not generally written in conformance with the layman’s assumptions; that’s why there’s such a horrible disconnect between the layman and the techies. The techies have a way of building code, generally a way that they’ve received from those who came before them, that can be made to work, albeit with considerable effort. They may try to normalize their database schemas and apply object principles to their code, but in the vast majority of cases, the layman’s assumption of a single, central definition of every “thing,” and the ability to change that thing and have the side-effects ripple silently and effectively through the application, does not exist, is not articulated, not thought about, and is in no way a goal of the software organization. It’s not even something they’ve heard talked about in some book they keep meaning to get to. It’s just not there.

    I assert that it is possible to write programs in a way that realizes the layman’s hope.

I’ve done it myself and I’ve seen others do it. The results are amazing. It’s harder than it should be because there is little existing infrastructure to support this style of writing, but in spite of that, it’s not hard to do. Moreover, once the initial investment in structure has been made, the ability to make changes quickly and with high quality soon pays back the investment.

The main obstacle for everyone is that there is tremendous inertia, and the techniques that provide a basis for the hope, while reasonable and achievable, are far out of the mainstream of software thinking. I have seen people who have good resumes but are stupid or lazy look at projects that have been constructed according to the “one entity – one definition” principle and simply declare them dead on arrival, complete re-write required. But I have also encountered projects in domain areas where there is no tradition at all of building things this way, in which the people invented the principles completely on their own.

    The “principle of non-redundancy” has far-reaching technical consequences and ends up being pretty sophisticated, but at its heart is simple: things are hard to do when you have to go many places or touch many things to get them done. When the redundancy in program representation (ignoring for the moment differences between code, program data and meta-data) is eliminated, making changes or additions to programs is optimally agile. In other words, with program representation of this type, it is as easy and quick as it can possibly be to make changes to the program. In general, this will be far quicker than most programs in their current highly redundant form.

    The layman’s hope that improvements can be made in software productivity, quality and cycle is realistic, and based on creating a technical reality behind the often-discussed concepts of “components” and “building blocks” that is quite different from the usual embodiment.

    I have no idea why this approach to building software, which is little but common sense, isn't taught in schools and widely practiced. For those who know and practice it, the approach of "Occamality" (define everything in exactly one place) gives HUGE competitive advantages.

  • How to Improve Software Productivity and Quality: Code and Metadata

    In the long-fought war to improve software programming productivity, there have been offensives on many fronts, but precious little genuine progress made. We give our programmers the fanciest, most high-tech equipment imaginable – and it is orders of magnitude faster and more powerful than the equipment available to earlier generations – but this new equipment has made only marginal difference. While relieving programmers of the burden of punch cards helps, the latest generation of programmers are not getting the job done much better than their comparatively low-tech predecessors.

    Form and Content

    As usual, most efforts to improve the situation focus on one of two general approaches: form or content.

    People who focus on form tend to think that the process of getting software written is what’s important. They talk about how one “methodology” is better than another. They think about how people are selected, trained, organized and motivated. As you might imagine, the spectrum of methodologies is a broad one. On one end of the spectrum is the linear approach, which starts from the generation of business requirements for software, and ends with testing, installation and ongoing maintenance. On the other end of the spectrum is the circular, interactive approach, in which a small working program is built right away, and gradually enhanced by programmers who interact closely with the eventual end-users of the program. There are any number of methodologies between these two extremes, each of which claims an ideal combination of predictable linearity with creative interactivity.

    People who focus on content tend to think that the language in which programs are written and the structure and organization of programs are what’s important. Naturally, they like design and programming tools that give specific support to their preferred language and/or architecture. There tends to be a broad consensus of support at the leading edge around just a couple of languages and structures, while the majority of programmers struggle with enhancing  and maintaining programs originally written according to some earlier generation’s candidate for “best language” or “best architecture.” At the same time, many of those programmers make valiant attempts to renovate older programs so that they more closely resemble the latest design precepts, frequently creating messes. Regardless of the generation, programmers quickly get the idea that making changes to programs is their most time-consuming activity (apart, of course, from never-ending meetings), and so they focus on ways to organize programs to minimize the cost of change. This leads to a desire to build “components” that can be “assembled” into working programs, and naturally to standards for program components to “talk” with each other.

    Lots of effort, not much progress

    The net effect of all these well-intentioned efforts, on both the form and content sides, has been a little bit of progress and a great deal of churn. Having some agreed-on methodology leads to better predictability and generally better results than having none; it seems that having a beat to which everyone marches in unison, even if it’s not the best beat, leads to better results than having a great beat that many people don’t know or simply refuse to march to. What is the best methodology in any one case depends a great deal on both how much is already known about the problem to be solved and how smart and broadly skilled the participants in the project are. The more qualified the people and the less known about the target, the more appropriate it is to be on the “interactive” end of the spectrum; think highly qualified and trained special forces going after a well-defended target in enemy territory – creativity, teamwork and extraordinary skills are what you need. The more ordinary the participants and with better-understood objectives, the more appropriate being somewhere towards the “linear” end of the spectrum; think of a large army pressing an offensive against a broad front – with so many people, they can’t all be extraordinary, and you want coordinated, linear planning, because too much local initiative will lead to chaos.

    As to content, while it is clear that the latest programming languages encourage common-sense concepts like modular organization and exposing clear interfaces, good programmers did that and more decades ago in whatever language they were writing, including assembler, and bad programmers can always find a way to make messes. And even if you use the best tools and interface methods, incredible churn is created by frequent shifts in what is considered to be the best architecture, practically obsoleting prior generations. Probably the single biggest movement over the last several decades has been the gradual effort to take important things about programs that originally could be discovered only by inspecting the source code of the program and making those things available for discovery and use by people and programs, without access to the source code. The first major wave of this movement led to exposure of much of a program’s persistent data, in the form of DBMS schemas; the other major wave of this movement (in many embodiments, from SAA to SOAP to microservices) exposes a program’s interfaces, in the form of callable functions and their parameters. This has been done in the belief that it will enable us to make important changes to some programs without access to or impact on others, and thus approach the dream of programming by assembling fixed components, like building blocks or legos, into completed programs or buildings. The belief in this dream has been affected very little by decades of evidence that legos don’t build things that adults want to live in.

    I believe that while considerations of form are incredibly important, and when done inappropriately can drag down or even sink any programming project, there is little theoretical headway to be made by improvements in methodology. I think most of the relevant ideas for good methodology have already been explored from multiple angles, with the exception of how to match methodology to people and project requirements – no one methodology is the best for each project and each collection of people.  But even when you’ve picked the best methodology, the NBA players will always beat the high school team, as long as the NBA players execute well on the motivational and teamwork aspect that any good methodology incorporates. With methodology we’re more in need of good execution and somehow assembling a talented and experienced team than we are of fresh new ideas.

    Ptolemy, Copernicus, Kepler, Newton

    Content, however, is another story altogether. I think our current best languages and architectures will look positively medieval from the perspective of a better approach to content. I think there is a possible revolution here, one that can bring about dramatic improvements in productivity, but which requires an entirely new mind-set, as different as Copernicus and Ptolemy, as different as Einstein and Newton.

    Having said that, let me also say that the “new” ideas are by no means completely new. Many existing products and projects have exploited important parts of these ideas. What is mostly new here is not any particular programming technique, but an overriding vision and approach that ties various isolated programming techniques into a unified, consistent whole. Kepler’s equations described the motion of planets with accuracy equal to Newton’s – in fact, you can derive one set of equations mathematically from the other; but Newton provided the vision (gravity) and the tools (calculus) that transformed some practical techniques (Kepler’s equations) into the basis of a new view of the world. That’s why physics up to the end of the nineteenth century was quite justifiably characterized as “Newtonian” and not “Keplerian.”

    Ptolemy and Newton looked at the same set of objects; Ptolemy picked the closest one, the earth, to serve as the center of thinking, while still incorporating the rest.

    1

    His main goal was to describe the motions of what you could see, focused on the matter. Copernicus noted that things got simpler and more accurate if you picked one farther away, the Sun, to serve as the center of things.

    2

Kepler made things better by noticing the elliptical curves the planets actually trace. Newton then took the crucial step: instead of focusing on matter, he focused on energy (gravity in this case), and wrote an equation describing how gravity works,

F = G·m₁·m₂ / r²

    which creates changes in the location and velocity of matter. In programming, we have clear equivalents of matter and energy: matter is data (whether or not it is persistent), and energy is instructions (lines of code, regardless of the language it’s in). In COBOL, for example, this division is made explicit in the data division and the procedure division. In modern languages the two are more intermixed, but it remains clear whether any particular line describes data or describes an action to take (an if statement, assignment statement, etc.).

    Now, in spite of what you may have been taught in school, Ptolemy’s method works. In fact, it would be possible (though not particularly desirable) to update his approach with modern observations and methods, and have it produce results that are nearly identical to those possible today; in fact, only when you get to relativity does Ptolemy’s method break down altogether — but remember that the effects of relativity are so small, it takes twentieth century instruments to detect them. But no one bothers to do this, because the energy-centered approach developed by Newton and refined by his successors is so much simpler, cleaner and efficient.

    Similarly, there is no doubt that the instructions-centric approach to programming works. But what comes out of it is complicated, ugly and inefficient. The Newtonian breakthrough in programming is replacing writing and organizing instructions (and oh, by the way, there’s some data too) as the center of what we do with defining and describing data (and oh, by the way, there are some instructions too). The instruction-centered approach yields large numbers of instructions with a small amount of associated data definitions; the data-centered approach yields large numbers of data definitions and descriptions operated on by a significant body of unchanging, standard instructions and small numbers of instructions specifically written for a particular program. In the instruction-centered approach, we naturally worry about how to organize collections of instructions so that we can write fewer of them, and arrive at concepts like objects, components and class inheritance (a subclass inherits and can override its parent’s methods (instructions)). In the data-centered approach, we naturally worry about how to organize collections of data definitions so that we can have the minimal set of them, and arrive at concepts like meta-data, standard operations (e.g. pre-written meta-data-driven instructions enabling create, query, select, update and delete of a given collection of data) and data inheritance (a child data definition, individual or group, inherits and can override its parent’s definitional attributes). In short, we separate out everything we observe into a small, unchanging core (like Newton's gravity) that produces ever-changing results in a diverse landscape.
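As a deliberately simplified illustration of the data-centered style (all names invented): the metadata below describes two record types, and one small body of unchanging code validates and creates records by reading that metadata. Adding a field, or a whole new record type, means adding data definitions rather than writing new instructions.

```python
# Data-centered sketch: metadata describes the records; a small, unchanging
# core interprets it. Names are illustrative, not any particular product.

METADATA = {
    "customer": {
        "name":  {"type": str, "required": True,  "max_length": 80},
        "email": {"type": str, "required": True,  "max_length": 120},
        "tier":  {"type": str, "required": False, "default": "standard"},
    },
    # A new record type is a new block of metadata -- no new instructions.
    "invoice": {
        "number": {"type": str,   "required": True, "max_length": 20},
        "amount": {"type": float, "required": True},
    },
}

def create(record_type: str, **values) -> dict:
    """Standard, metadata-driven 'create': the same code serves every record type."""
    fields = METADATA[record_type]
    record = {}
    for name, spec in fields.items():
        value = values.get(name, spec.get("default"))
        if value is None:
            if spec["required"]:
                raise ValueError(f"{record_type}.{name} is required")
            continue
        if not isinstance(value, spec["type"]):
            raise TypeError(f"{record_type}.{name} must be {spec['type'].__name__}")
        if "max_length" in spec and len(value) > spec["max_length"]:
            raise ValueError(f"{record_type}.{name} is too long")
        record[name] = value
    return record

print(create("customer", name="Acme Corp", email="ops@acme.example"))
# {'name': 'Acme Corp', 'email': 'ops@acme.example', 'tier': 'standard'}
```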

    We shift our perspective in this way not because it enables us to accomplish something that can’t be accomplished by the current perspective, but because it proves to be cleaner, simpler, more efficient, easier to change, etc.

    Instructions, data and maps

    Only by understanding the details of this approach can it really be appreciated, but the metaphor of driving instructions and maps is appropriate and should make the core idea clear.

    Suppose your job is to drive between two locations, and the source and destination location are always changing. There are two general approaches for giving you directions:

    1. Turn-by-turn directions (the instruction-driven, action-oriented approach)
    2. A map, with source and destination marked (the data-driven, matter-oriented approach)

    The advantage of directions is that they make things easy for the driver (the driver is like the computer in this case). You pick up one step, drive it, then pick up the next, drive it, and so on until the end. You don’t have to think in advance. All you have to do is follow directions, which tell you explicitly what to do, in action-oriented terms (turn here, etc.).

The problem with directions comes in a couple of circumstances:

    • You have to have a huge number of directions to cover all possible starting points and destinations, and there is a great deal of overlap between sets of directions
    • If there is a problem not anticipated by the directions, such as an accident or road construction, you have to guess and get lucky to get around the problem and get back on track.

A map, on the other hand, gives you most of the information you need to generate your own directions between any two given points. The map provides the information; you generate the actions from that information. Given some parameters, like whether to use toll roads, a generic direction-generating program can produce directions on the fly. With a program like Waze, directions can even be regenerated in real time as traffic information is updated.

The Waze program itself isn't updated very often — what changes is mostly the map and the road conditions. Same thing with a program written using this approach: the core capabilities are written in instructions, while the details of the input, storage, processing and output are all described in a "map" of the data and what is to be done with it.
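To translate the metaphor into code: the “map” is pure data, and one small generic routine derives directions from it; re-routing around a closed road means changing the data, not the routine. A minimal sketch with an invented road network:

```python
from collections import deque

# The "map" is data: intersections and which intersections they connect to.
# Closing a road or adding one is a data change; the code below never changes.
ROAD_MAP = {
    "home":    ["elm_st", "oak_ave"],
    "elm_st":  ["home", "main_st"],
    "oak_ave": ["home", "main_st"],
    "main_st": ["elm_st", "oak_ave", "office"],
    "office":  ["main_st"],
}

def directions(start: str, destination: str, road_map: dict) -> list:
    """Generic direction generator: breadth-first search over the map data."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        route = queue.popleft()
        here = route[-1]
        if here == destination:
            return route
        for next_stop in road_map.get(here, []):
            if next_stop not in visited:
                visited.add(next_stop)
                queue.append(route + [next_stop])
    return []  # no route found

print(directions("home", "office", ROAD_MAP))
# ['home', 'elm_st', 'main_st', 'office']

# "Road construction" on elm_st? Change the data, re-run the same code:
closed = {k: [n for n in v if n != "elm_st"]
          for k, v in ROAD_MAP.items() if k != "elm_st"}
print(directions("home", "office", closed))
# ['home', 'oak_ave', 'main_st', 'office']
```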

    Conclusion

It's pretty simple: programming today largely follows the method of Ptolemy, resulting in an explosion of software epicycles to get anything done. Attempts to keep things easily changeable sound promising but never work out in practice.

The way forward is to focus instead on what there is and what is to be done with it (data and metadata), with a small amount of Waze-like code to take any user or data stream from its starting point to its destination, with easy changes as needed.

  • Cryptocurrency and Crime

    Cryptocurrency (Bitcoin, Ethereum and the rest) is fueling a new kind of crime wave. Computers and networking are the lawless continent on which criminals go wherever they want, going into factories, stores and homes, stealing data in massive amounts to sell and use to enable more crime. That crime continues to grow. Bitcoin, the software built on computers and networks, has added the element of anonymous payments to and between criminals. Criminals world-wide have been inspired by this near-instant, secret way to pay and accept money to ratchet up existing crimes and invent new ones.

Why do big, important people continue to deny a problem exists? As discussed in a recent WSJ article, this crime-enabling menace needs to be confronted head-on.

    Burglary

    Burglary is when a criminal steals something without a confrontation with the owner, for example breaking into your house when you’re away and taking your valuables. A great deal of cyber-crime has been burglary, things like hacking your computer system and stealing data. But then how do you sell the data? More important, how will you collect your money from the criminals who buy it?

    Enter Bitcoin. The buyer can be anywhere in the world. They can be of any nationality, used to using any currency. Once an agreement has been made, payment is simple, fast and untraceable. The buyer and seller don’t need any direct contact. Any currency can be converted to Bitcoin to send, and converted to any currency on receipt. Or left in Bitcoin to use in other criminal enterprises. Bitcoin hasn’t transformed the huge field of criminal data, but it sure has greased the wheels.

    Robbery/Ransomware

Robbery is worse than burglary. It’s when a criminal confronts you on the street, points a gun at you and says something like “your wallet and jewels or your life.” Most people do what the robber says and hope to live another day. The new wave of cybercrime is robbery, a.k.a. ransomware: not just sneaking into your computer but encrypting everything and “tying your computer up” until you pay the ransom.

    Ransom attacks on computers have always existed, but they were fairly rare, because there was no way the robber could collect the victim’s money without revealing himself. Then Bitcoin came along. Bitcoin enables anyone to buy it from an exchange like Coinbase and then send it to the criminal’s anonymous Bitcoin address. The criminal, who could be anywhere, then has your money and may, if he feels like it, release your computers from their electronic shackles.

    There wasn't much ransomware a decade ago. Then came Bitcoin.

    “eCrime – a broad category of malicious activity that includes all types of cybercrime attacks, including malware, banking trojans, ransomware, mineware (cryptojacking) and crimeware – seized the monetization opportunity that Bitcoin created. This resulted in a substantial proliferation of ransomware beginning in 2012…

    Bitcoin exchanges provided adversaries the means of receiving instant payments while maintaining anonymity, all transacted outside the strictures of traditional financial institutions.”

Then came a new generation of locking technology, the 2048-bit private key. This led to a shift away from spraying malware at millions of little computers to infecting, locking and ransoming big institutions: Big Game Hunting.

    The criminals evolve quickly. They are generations ahead of the largely inept bureaucrats with huge budgets following security regulations that are typically obsolete by the time they are issued.

    As a result, ransomware attacks were everywhere in 2021 and continue growing.

    Double-extortion ransomware attacks rise: On average, a new organization becomes a victim of ransomware every 10 seconds worldwide.

    Here is more and a recent example.

    From suitcases of cash to Venmo for Criminals

    Illegal national and international weapons trafficking has always existed. So has human trafficking. Likewise importing and selling addictive drugs like heroin. These are all human horrors.

    For some strange reason, the people who import and sell innocent young girls want to be paid in cash. Lots of it. Same thing with fentanyl. It’s inconvenient and dangerous, carrying around huge stacks of hundred dollar bills! Bitcoin changes the game. Bitcoin is like Venmo for the criminal class only better. No records. No annoying banking regulations and reports sent by banks to snoopy government agencies. Computer-to-computer transfer. Yes there’s a record that a transfer of Bitcoin took place – but ZERO record of from whom or to whom.

    On the other hand…

Cryptocurrency utilization is exploding, most of it unrelated to criminal activity. It is certainly true that crypto-related crime has grown; one respected vendor reports it nearly doubled from 2020 to 2021, reaching an all-time high of $14 billion. That same vendor reports even more dramatic growth of overall cryptocurrency transactions, which grew more than five-fold in the same period. As the vendor says: “Transactions involving illicit addresses represented just 0.15% of cryptocurrency transaction volume in 2021 despite the raw value of illicit transaction volume reaching its highest level ever. As always, we have to caveat this figure and say that it is likely to rise as Chainalysis identifies more addresses associated with illicit activity and incorporates their transaction activity into our historical volumes. For instance, we found in our last Crypto Crime Report that 0.34% of 2020’s cryptocurrency transaction volume was associated with illicit activity — we’ve now raised that figure to 0.62%.”

Supporters of crypto are also quick to point out that fiat currency is also used by criminals, so no one should be surprised that crypto is used by them.

    Conclusion

    Cryptocurrencies are widely discussed. "Bitcoin Billionaires" are in the news; hosts of ordinary people hope to be like them. The crypto industry sponsors reports and generally promotes the idea that the criminal use of crypto is minimal and going down. Which it is, as a share of all crypto transactions. As we know from the growth of ransomware attacks, the use of crypto by criminals is in fact increasing.

    It should be illegal for any regulated exchange to enable sending to or receiving from any address that fails to have full KYC and other identity disclosure with it. There are lots of exchanges that operate internationally for the criminals to continue using, as they will.

    Cryptocurrencies are an amazing technical achievement. Computers and networking already provide rich ground for criminal activity; Bitcoin added a safe-for-criminals international payment method that has fueled computer-based crime.

    Note: this was originally published at Forbes.

  • NNT for Benefits and for Harms

    In a previous post, I described the difference between relative risk (efficacy), absolute risk and the related concept of NNT (number needed to treat). In that post I focused on the NNT to get the benefit of the treatment. In this post I will focus on the essential other half of NNT: the NNT to be harmed.

I will mostly focus on the direct harms of the treatment itself. However, in some cases, there are harms that come from other actions taken to treat or avoid a medical problem. Sometimes those harms can be large. The study of these indirect harms is not as advanced in the scientific literature as that of the direct harms, but given how large the scale of the indirect harms can be, studying them should be made standard practice.

    NNT for Harms

NNT is a simple way to understand how likely a given outcome is in absolute terms.
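For readers who want the arithmetic: NNT is the reciprocal of the absolute risk reduction (the difference in event rates between untreated and treated groups), and the NNT to harm is computed the same way from the difference in harm rates. A small illustrative calculation with made-up rates:

```python
# NNT = 1 / absolute risk reduction; NNH = 1 / absolute risk increase.
# The event rates below are invented purely to show the arithmetic.

def nnt(rate_untreated: float, rate_treated: float) -> float:
    """Number needed to treat for one additional person to benefit."""
    return 1.0 / (rate_untreated - rate_treated)

def nnh(harm_rate_treated: float, harm_rate_untreated: float) -> float:
    """Number needed to treat for one additional person to be harmed."""
    return 1.0 / (harm_rate_treated - harm_rate_untreated)

# Hypothetical drug: events fall from 4% to 2%, but a side effect rises from 1% to 3%.
print(round(nnt(0.04, 0.02)))  # 50 people treated per person helped
print(round(nnh(0.03, 0.01)))  # 50 people treated per person harmed
```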

    Sometimes there aren't any harms, as in this meta-analysis of over 240,000 patients in 18 studies.

    11

    What's important to note is that the researchers looked not only for the benefit of fever reduction, but also for the harms that had been suspected for one of the treatments.

    Here is one where the NNT for harms is crucially important — because the treatment that is supposed to prevent heart attacks caused more of them than it prevented!

    22

    The case above illustrates an important aspect of NNT: it should cover (if appropriate) multiple possible benefits and multiple types of harms.

Just because NNT harms outweigh benefits for a treatment doesn't mean that medical practice responds appropriately. For a long time, high blood cholesterol was thought to cause heart attacks. Statins became widely prescribed to lower the number. But now it is scientifically proven that blood cholesterol should not be lowered and therefore statins should not be taken. In spite of the fact that the NNT numbers show strong harms with no benefits, it remains standard practice for doctors to prescribe statins to lower cholesterol levels to meet now-disproven standards.

    Sadly, this raises the issue of conflicts of interest and transparency in scientific research, and the readiness of the medical profession to update practices when the science demonstrates that it should. It's even trickier when a pharmaceutical company conducts studies to prove that a drug it developed has important benefits and minimal harms.

    NNT Harms for covid vaccination

The FDA's EUA (Emergency Use Authorization) issued in December 2020 for Pfizer's covid drug claimed 95% effectiveness, and listed minor side effects which lasted just a couple of days. The FDA gave full approval for the drug in August 2021.

The full approval document stated that "the vaccine was 91% effective in preventing COVID-19 disease." No explanation was given for the reduced effectiveness. Unlike the EUA document, it did not disclose the absolute numbers of infections, giving a highly misleading impression of how likely any person who got the vaccine would be to be helped by it: the implication is that 90% of the vaxed would be protected, versus the actual absolute benefit of under 1%.
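To see how a 90-plus percent “effectiveness” figure and an under-1% absolute benefit can both be true, here is the arithmetic with round numbers of the same general shape as the trial data (roughly 20,000 people per arm, a handful of cases in the vaccine arm versus a couple of hundred on placebo); treat the specific figures as illustrative only:

```python
# Relative vs. absolute effect, with round illustrative numbers (not the trial's
# exact figures): 20,000 people per arm, 160 cases on placebo, 10 on vaccine.

placebo_n, placebo_cases = 20_000, 160
vaccine_n, vaccine_cases = 20_000, 10

placebo_rate = placebo_cases / placebo_n          # 0.80%
vaccine_rate = vaccine_cases / vaccine_n          # 0.05%

relative_risk_reduction = 1 - vaccine_rate / placebo_rate
absolute_risk_reduction = placebo_rate - vaccine_rate

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # ~94%
print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")   # ~0.75%
print(f"NNT: {1 / absolute_risk_reduction:.0f}")                   # ~133
```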

    For harms, most of the minor harms of the EUA were repeated. However, they disclosed that myocarditis and pericarditis were suffered by young males: "Available data from short-term follow-up suggest that most individuals have had resolution of symptoms. However, some individuals required intensive care support. Information is not yet available about potential long-term health outcomes." Sadly, they provided no data, no NNT for Harm.

    I have yet to find good numbers for NNT Harms for covid. This should be easy, but as it turns out, the vast majority of the relevant data is secret. Yes, secret by approval of the FDA.

However, I've dug into a couple of issues based 100% on published scientific data. For example, I found a paper published in April 2021 in the New England Journal of Medicine on Vaccine Safety in Pregnant Persons. The paper concluded that the mRNA vaccines were safe for pregnant people to receive. Here is Table 4 from the original paper, showing that there were 104 spontaneous abortions out of 827 vaccine recipients, about 12%, which is within a normal range.

    Table 4

Here is the footnote to the last column, about the numbers of people involved:

    Foot 1

    A correction was published in October 2021 in the same journal, after the FDA's full approval had been issued. A casual reading of the correction, including the summary and abstract, makes it seem as though nothing significant was changed.

    Here is Table 4 in the corrected paper:

    Table 4a
    The number of spontaneous abortions remained at 104, but the totals and percentages were dropped. The explanation is found in the footnote:

    Foot 2

    The footnote leaves the impression that nothing can be concluded. However, returning to the footnote in the original paper, we read "…based on 827 participants … who received a Covid-19 vaccine … A total of 700 participants (84.6%) received their first eligible dose in the third trimester…" So 700 participants could not have had spontaneous abortions, since all those took place in the first 20 weeks of pregnancy.

The arithmetic leads us to 827 − 700 = 127 participants who were vaccinated earlier, and 104 of those participants had spontaneous abortions. The vast majority. This is clearly something that the authors should have pointed out and explained. Maybe my logic here is wrong.
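Spelling out that arithmetic, using only the counts reported in the original paper and its footnote:

```python
# Counts reported in the original paper and its footnote.
total_vaccinated  = 827
third_trimester   = 700   # vaccinated too late for a first-20-week outcome
spontaneous_abort = 104

at_risk = total_vaccinated - third_trimester      # 127 vaccinated early enough
rate = spontaneous_abort / at_risk

print(at_risk)            # 127
print(f"{rate:.0%}")      # ~82%, versus the ~12% the original table implied
```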

This leads us to wonder what should happen:

    What should happen

    First of all, the authors should have made clear the implications of their correction. If indeed the data shows that spontaneous abortions were excessive, they should have said so, and promised further study to confirm.

    Second, data about medical treatments of all kinds, including drugs, should be fully open source, the way some software is. That way, others could do the job that the authors of the study failed to do. The developer of the drug should open its data to the public, just like the source code to software like Linux is 100% open for copying, testing and use. This by itself will solve many problems. It will also enable problems to be surfaced quickly, so that a minimum of people are hurt by the problems. If drug makers were truly interested in safety and effectiveness, they would welcome the additional scrutiny.

    Conclusion

    NNT is an essential measure for treatment effectiveness. Every time a treatment is proposed to a patient, NNT should be part of the discussion. Certainly NNT for benefits is important — that's the whole point of the treatment. But NNT for harms is regularly left out of the discussion. Instead, it should be brought to the forefront.

  • Better Software and Happier Customers with Post-hoc Design

    What can you possibly mean by "post-hoc design?" Yes, I know it means "after-the-fact design," using normal English. It's nonsense! First you design something. Then you build it. Period.

    Got your attention, have I? I agree that "post-hoc design" sounds like nonsense. I never heard of it or considered it for decades. But then I did. Before long I saw that great programmers used it to create effective high-quality, loved-by-customers software very quickly.

    The usual way to build software: design then build

    The way to build good software is obviously to think about it first. Who does anything important without having a plan? Start by getting requirements from the best possible source, as detailed as possible. Then consider scale and volume. Then start with architecture and drill down to design.

    When experienced people do architecture and design, they know that requirements often "evolve." So it's important to generalize the design anticipating the changes and likely future requirements. Then you make plans and can start building. Test and verify as you drive towards alpha then beta testing. You know the drill. Anything but this general approach is pure amateur-hour.

    I did this over and over. Things kept screwing up. The main issue was requirements "evolution," which is something I knew would happen! Some of the changes seemed to come from left field, and meant that my generalized architecture not only failed to anticipate them, but actually made it harder to meet them! Meanwhile, things that I anticipated might happen, and wove into the design, never happened. Not only had I wasted the time designing and building those parts, but the unneeded parts of the design often made it harder for me to build the new things that came along that I had failed to anticipate.

    I assumed that the problem was that I didn't spend enough time doing the architecture and design thinking, and I hadn't been smart enough about it. Next time I would work harder and smarter and things would go more smoothly. Never happened. How about requirements? Same thing. The people defining the requirements did the best they could, but were also surprised when things came along, and embarrassed when things they were sure would be important weren't.

    After a long time — decades! — I finally figured out that the problem was in principle unsolvable. You can't plan for the future in software, because you can't perfectly predict the future! What you are sure will happen doesn't, and what you never thought about happens. Time spent on anything but doing and learning as you go along is wasted time.

    The winning way to build software: Build then Design

    Build first. Then and only then do the design for the software you've already built. Sounds totally stupid. That's part of why I throw in some Latin to make it sound exotic: "Post-hoc design," i.e., after-the-fact design.

    When you design before you build, you can't possibly know what you're doing. You spend a bunch of time doing things that turn out to be wrong, and making the build harder and longer than it needs to be. When you build in small increments with customer/user input and feedback at each step, keeping the code as simple as possible, you keep everything short and direct. You might even build a whole solution for a customer this way — purposely NOT thinking about what other customers might need, but driving with lots of hard-coding to exactly what THIS customer needs. Result: the customer watches their solution grow, each step (hopefully) doing something useful, guides it as needed, and gets exactly what they need in the shortest possible time. What's bad about a happy customer?

    Of course, if you've got the typical crew of Design-first-then-build programmers, they're going to complain about the demeaning, unprofessional approach they're being forced to take. They might cram in O-O classes and inheritance as a sop to their pride; if they do, they should be caught and chastised! They will grumble about the enormous mountain of "technical debt" being created. Shut up and code! Exactly and only what's needed to make this customer happy!

    When the code is shown to another customer, they might love some things, not need some other things and point out some crucial things they need aren't there. Response: the nearly-mutinous programmers grab a copy of the code and start hacking at it, neutering what isn't needed, changing here and adding there. They are NOT permitted to "enhance" the original code, but hack a copy of it to meet the new customer's need. At this point, some of the programmers might discover that they like the feeling of making a customer happy more quickly than ever before.

    After doing this a couple of times (exactly when is a matter of judgment), it will be time to do the "design" on the software that's already been built. Cynics might call this "paying off tech debt," except it's not. You change the code so that it exactly and only meets the requirements of the design you would have made to build these and only these bodies of code. You take the several separate bodies of code (remember, you did evil copy-and-modify) and create from them a single body of code that can do what any of the versions can do.

    When you do this, it's essential that you NOT anticipate future variations — which will lead to the usual problems of design-first. The pattern for accomplishing this is the elimination of redundancy, i.e., Occamality. When you see copy/modify versions of code, you replace them with a single body of code with the variations handled in the simplest way possible — for example, putting the variations into a metadata table.
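    To make the metadata idea concrete, here is a minimal sketch (an invented example, not code from any system mentioned in this post): two copy/modify versions of a fee calculation collapse into one body of code, with the customer-specific variations moved into a table.

        # Before the post-hoc design pass: two copy/modify versions of the same logic.
        def late_fee_customer_a(balance):
            return max(25.00, balance * 0.020)

        def late_fee_customer_b(balance):
            return max(35.00, balance * 0.015)

        # After the pass: one body of code, with the variations held as metadata.
        LATE_FEE_RULES = {
            "customer_a": {"minimum": 25.00, "rate": 0.020},
            "customer_b": {"minimum": 35.00, "rate": 0.015},
        }

        def late_fee(customer, balance):
            rule = LATE_FEE_RULES[customer]
            return max(rule["minimum"], balance * rule["rate"])

        # A new customer becomes a table entry, not another copy of the code.
        LATE_FEE_RULES["customer_c"] = {"minimum": 30.00, "rate": 0.018}

    Everything is now defined in exactly one place, so the next variation is a matter of finding and changing one entry.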

    This isn't something that's done just once. You throw in a post-hoc design cycle whenever it makes sense, usually when you have an unwieldy number of similar copies.

    As time goes on, an ever-growing fraction of a new user's needs can be met by simple parameter and table settings of the main code line, and an ever-shrinking fraction met by new code.

    Post-Hoc Design

    Ignoring the pretentious name, post-hoc design is the simplest and most efficient way to build software that makes customers happy while minimizing the overall programming effort. The difference is a great reduction in wasted time designing and building, and in the time to customer satisfaction. Instead of a long requirements gathering and up-front design trying valiantly to get it right for once, resulting in lots of useless code that makes it harder to build what it turns out is actually needed, you hard-code direct to working solutions, and then periodically perform code unification whose purpose is to further shorten the time to satisfaction of new customers. To the extent that a "design" is a structure for code that enables a single body of code to be easily configured to meet diverse needs, doing the design post-hoc assures zero waste and error.

    What is the purpose of architecture and design anyway? It is to create a single body of code (with associated parameters and control tables) that meets the needs of many customers with zero changes to the code itself. The usual method is outside-in: gaze into the future. Post-hoc design is inside-out: study what you built to make a few customers happy, and reduce the number of separate source code copies to one while reducing the lines of code to a minimum. The goal of post-hoc design is to minimize the time and effort to satisfy the next customer, and that's achieved by making the code Occamal, i.e., eliminating redundancies of all kinds. After all, what makes code hard to change? Finding all the places where something is defined. If everything is defined in exactly one place, once you've found it, change is easy.

    Post-hoc design is a process that should continue through the whole life of a body of code. It prioritizes satisfaction of the customer in front of your face. It breaks the usual model of doing one thing to build code and another to modify it. In the early days of what would normally be called a code "build," the code works, but only does a subset of what it is likely to end up doing. When customers see subsets of this kind, it's amazing how it impacts their view of their requirements! "I love that. I could start using it today if only this and that were added!" It's called "grow the baby," an amazing way to achieve both speed and quality.

    New name for an old idea

    All I'm doing with "Post-hoc design" is putting a name and some system around a practice that, while scorned by academia and banned by professional managers, has a long history of producing best-in-class results. I'm far from the first person who has noticed the key elements of post-hoc design.

    Linus Torvalds (key author of Linux, the world's leading operating system) is clearly down on the whole idea of up-front design:

    Don’t ever make the mistake [of thinking] that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence much too much credit.

    Gall's Law is a clear statement of the incremental approach:

    A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

    The great computer scientist Donald Knuth, author of the multi-volume Art of Computer Programming, was a master of shifting between assembler language programming and abstract algorithms and back, the key activities of the speed-to-solution and post-hoc abstraction phases of the method I've described here.

    People who discover the power and beauty of high-level, abstract ideas often make the mistake of believing that concrete ideas at lower levels are worthless and might as well be forgotten. On the contrary, the best computer scientists are thoroughly grounded in basic concepts of how computers actually work. The essence of computer science is an ability to understand many levels of abstraction simultaneously.

    Thanks to Daniel Lemire for alerting me to these quotes.

    Conclusion

    Post-hoc design is based on the idea that software is only "built" once, and after that always changed. So why not apply the optimal process of changing software from day one? And then alternate as-fast-as-possible driving to the next milestone with periodic clean-ups that make the drive to the goal after that just as fast? Post-hoc design is a cornerstone of the process of creating happy customers and optimal code. It also happens to conform to the goals of software architecture. Post-hoc design is like first fighting a battle, and then, once the battle is over and you've won, cleaning and repairing everything, incorporating what you learned from the battle just past so that everything is ready for the next battle. Post-hoc design is the way to win.

     

  • Does Vaccine Efficacy of 95% mean I won’t get sick?

    The Moderna and Pfizer Covid vaccines have 90-95% efficacy, but the studies submitted for their approval showed they helped only about 1% of the people who took them. This is news to most people. How can this be?

    We are constantly told that vaccines are safe and highly effective, for example by the CDC. Numbers like 90% efficacy are thrown around, which most people understand to mean that getting vaccinated means there's only a 1 chance in 10 that you'll get sick. You're really protected!

    What the CDC and major authorities fail to disclose is that standard statistical methods applied to the vax vendors' own data show that only about one in a hundred people who get the jab would be protected from getting covid! The tests did indeed show 90% or better "efficacy" (relative risk improvement), but what's more relevant is the absolute risk reduction (ARR), which their own data showed was around 1%.

    Read on to understand these industry-standard measures that are mostly ignored; if widely understood and acted on, they would transform not just vaccines, but pharma and public health in general.

    Winter Coats and Vaccines

    Winter coats are a standard solution to protect people from getting cold when the weather outside is cold. Kind of like when the air is suffused with invisible virus particles, you want to help your body defend itself.

    There are a wide variety of coats available to protect against the cold. What would happen to a new coat vendor that promoted its coats as highly effective against the cold, protecting most people who wear them, when the maker and seller knew that 99% of the people who wear them on a cold winter day wouldn't be helped by them? Word would get out quickly and the coat maker's reputation would be in the cellar.

    What would happen if major authorities had subsidized the coat making, regulated their testing, and then promoted them as "safe and effective?" And then what would happen if all the authorities demanded that you buy and wear the coats, to the point of refusing to let you enter a football stadium on a cold day unless you were wearing one of the approved coats? There would be mass revolt. Which is what would have happened with covid if people knew the facts that were so carefully concealed from them.

    When locations like restaurants and performance halls opened, authorities in places like New York City declared that only people with proof of vaccination would be admitted. People were eager to eat out and be entertained, so this was another reason to get the jab. Vaccination cards were checked on entry so that everyone could be "safe."

    Vax covid D card no birth

    While covid is the most current example of this grotesque propaganda/misinformation, it is all too common in healthcare and pharma, as I have shown for example here for saturated fat, here for cholesterol and here for hypertension. What's new in covid is the level of coercion involved.

    Relative risk, absolute risk and Number Needed to Treat (NNT)

    The widely used number for a vaccine called "efficacy" is technically "relative risk" (RR). In scientific papers, it's typically a number like .05, which means that for every 100 people who got sick without the vax, just 5 of the vaxed got sick. This is translated into saying that the vax prevented 95% of the sickness seen among the unvaxed. While technically true, it is NOT about your chances of getting sick or staying well. It is a relative measure: how much better the vaxed group did compared to the unvaxed group that got sick, independent of the number of people in the study.

    Let's go back to winter coats. When people go out in the cold, they put something on to keep warm. Sometimes the coat doesn't keep some of them warm enough. Suppose the august health authorities got real worried about people dying of the cold without adequate protection. Huge amounts of time and money were spent developing what the developers thought was a great winter coat. Never mind that, for various reasons, the vast majority of people weren't getting cold. They went to a northern football stadium near the end of play-off season (winter). They got everyone entering at half the entrance gates to wear their wonderful coat and everyone who entered at the other half to wear a fake, ineffective version of the coat (the placebo) on top of whatever they were already wearing. At the end of the game, they briefly interviewed and temperature-measured everyone who left, noting which version of the coat they wore.

    Let's suppose that 20,000 people went to the football game, with 10,000 getting fancy new coats and the other 10,000 getting fake coats. Suppose 10 people wearing the fancy new coat got cold, while 100 people in the fake coat group got cold.

    First let's calculate the number everyone talks about, efficacy, technically known as Relative Risk (RR). RR in this case is (100 minus 10) divided by 100 = 90% efficacy. The wonderful coat did much better when added to what people were already wearing, about ten times better than the fake coat (placebo)! This is the number everyone thinks means that 90% of the people who take the vax won't get sick. Except it doesn't mean that. The key to understanding that is that RR has NOTHING to do with the size of the group, the number of people getting poked.

    So let's calculate Absolute Risk (AR). In this case, of the 10,000 in the fake coat (placebo) group, 100 got cold, which is 1 in 100, for an AR of 1.0%. Your chances of avoiding getting cold without the fancy coat were excellent — 99 out of 100! For the 10,000 people in the fancy coat group, just 10 got cold, which is 1 in 1,000, an AR of 0.1%. The relative difference between the fake and real coats was truly big — ten times! But the absolute difference means that 10,000 people had to get the fancy coat in order to avoid just 90 of them getting cold. The reduction in absolute risk was 1.0% – 0.1% = 0.9%.

    How many people have to get the fancy coat in order for one to benefit? Scientists have a name for this. It's NNT: Number Needed to Treat, sometimes called NNTV (Number Needed To Vaccinate) when a vax is involved. While "efficacy" focuses on "relative" risk, NNT turns the absolute risk (AR) into a more relevant number — of those getting the treatment, how many will benefit? In this case, all 10,000 football fans would have to wear the fancy coat so that about 100 wouldn't be cold, ignoring the 10 who got cold anyway. In other words, in order for one person to benefit, 100 people have to get the treatment, an NNT of 100. For the other 99, the fancy coat made no difference — they would have been warm without it.
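    For readers who want to check the arithmetic, here is the whole coat example as a short sketch. The exact NNT comes out to about 111, which the round numbers above treat as roughly 100.

        # Coat example: 10,000 people per group; 10 got cold with the fancy
        # coat, 100 got cold with the fake (placebo) coat.
        n_per_group = 10_000
        cold_fancy, cold_fake = 10, 100

        ar_fancy = cold_fancy / n_per_group        # absolute risk with the coat: 0.1%
        ar_fake = cold_fake / n_per_group          # absolute risk without it: 1.0%

        efficacy = (ar_fake - ar_fancy) / ar_fake  # relative risk reduction: 90%
        arr = ar_fake - ar_fancy                   # absolute risk reduction: 0.9%
        nnt = 1 / arr                              # number needed to treat: ~111

        print(f"Efficacy: {efficacy:.0%}, ARR: {arr:.1%}, NNT: {nnt:.0f}")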

    Getting back to reality, this means that the coats most people choose to wear protect them from getting cold remarkably well. Anyone surprised? What's the normal reaction to being in the stands and getting cold? Doing something to warm up! Jump up and down. Wave your arms. Drink a cup of hot cocoa. Get hugged. Sit on someone's lap, get wrapped in their coat. If worse comes to worst, leave for someplace warm. There are "treatments" that work just fine.

    Why would anyone bother accepting and wearing the authorized coat on top of what they already have? In the vast majority of cases, they'll be fine without it, and there are things they can do if they start to feel cold. Not to mention the risk of side effects of the fancy new thing. Here and here are more detailed explanations with examples.

    ARR and NNT for Covid

    I used round numbers above to make sure the concept was clear. But the whole point is the real world. There is a wonderful scientific website that provides NNT's for many treatments, based completely on scientific studies. For example, here is their article on cholesterol-reducing statins, which makes it clear that no one should be taking these widely used but destructive drugs.

    Let's turn to the NNT for covid. What's amazing about this is that the information about NNT for covid is hidden in plain sight. Let's look at the FDA's announcement of their EUA (Emergency Use Authorization) for the Pfizer covid vaccination. The FDA states:

    The FDA has determined that Pfizer-BioNTech COVID-19 Vaccine has met the statutory criteria for issuance of an EUA. The totality of the available data provides clear evidence that Pfizer-BioNTech COVID-19 Vaccine may be effective in preventing COVID-19. The data also support that the known and potential benefits outweigh the known and potential risks, supporting the vaccine’s use in millions of people 16 years of age and older, including healthy individuals.

    Later in the same announcement, the FDA gives the details about how good the vaccine is. Here is the start of the key paragraph:

    FDA Evaluation of Available Effectiveness Data 

    The effectiveness data to support the EUA include an analysis of 36,523 participants in the ongoing randomized, placebo-controlled international study, the majority of whom are U.S. participants, who did not have evidence of SARS-CoV-2 infection through seven days after the second dose. Among these participants, 18,198 received the vaccine and 18,325 received placebo. The vaccine was 95% effective in preventing COVID-19 disease among these clinical trial participants …

    This gives the key point of (relative) effectiveness: it's 95% effective! Hooray, we've got it! See what happens when you keep reading:

    … with eight COVID-19 cases in the vaccine group and 162 in the placebo group. Of these 170 COVID-19 cases, one in the vaccine group and three in the placebo group were classified as severe. At this time, data are not available to make a determination about how long the vaccine will provide protection, nor is there evidence that the vaccine prevents transmission of SARS-CoV-2 from person to person. 

    First, let's look at the chance of getting covid without getting vaccinated; it's 162/18,325 = 1 in 113. Fewer than 1% of the placebo group got covid! And of those 162 cases, just 3 were classified as severe, so just 1 in over 6,100 unvaxed people got severe covid. The numbers to achieve the benefit of vaccination aren't much different. The NNT works out to well over 110: more than 110 people had to take the vaccine for one person to avoid getting covid! Yes, the relative benefit is huge, but in absolute terms, less than 1% of people are actually helped by getting jabbed.
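    The same calculation, applied to the FDA's numbers quoted above (8 cases among 18,198 vaccinated, 162 cases among 18,325 on placebo), gives an NNT of about 119, consistent with "over 110":

        # Numbers from the FDA's EUA announcement quoted above.
        vax_cases, vax_n = 8, 18_198
        placebo_cases, placebo_n = 162, 18_325

        ar_vax = vax_cases / vax_n                       # about 0.04%
        ar_placebo = placebo_cases / placebo_n           # about 0.9%, i.e., 1 in 113

        efficacy = (ar_placebo - ar_vax) / ar_placebo    # about 95%
        arr = ar_placebo - ar_vax                        # about 0.84%
        nnt = 1 / arr                                    # about 119

        print(f"Placebo risk: 1 in {1 / ar_placebo:.0f}")
        print(f"Efficacy: {efficacy:.0%}, ARR: {arr:.2%}, NNT: {nnt:.0f}")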

    Note also that there was zero evidence that the vaccine prevents an infected person spreading the infection.

    Is this the case only for Pfizer? A group of French scientists calculated ARR and NNT for the leading Covid drugs, based solely on the published studies of the trials of those drugs. Here is a summary and here is the study published in a scientific journal. It deserves much more attention than it seems to have gotten because of its focus on NNT.

    Let's jump right to the key table.

    NNT Covid

    The first drug, Pfizer, has a terrific efficacy (RR), listed there as 0.05, but normally reported as 95%. Everyone (including me, when I first saw it) thinks that means that taking the Pfizer vax means there's only a 5% chance of getting covid, right? It works great! Now look at the NNT, 141. That means that for each 141 people who are vaxed, just one benefits by not getting covid!! It makes common sense: there were 21,728 people in the control group (people who got shots that were placebos), and only 162 of them got covid, so the overwhelming majority would have been fine with or without the vax.

    You might think that relative and absolute risk are related, but the third drug, AstraZeneca, makes clear that they're not. AstraZeneca had efficacy (RR) of 0.30, normally reported as 70%, which is dramatically worse than Pfizer's — why would anyone choose it? But AstraZeneca has an NNT of 83, which means that your chances of the AstraZeneca vax helping prevent covid were much better than the Pfizer vax. But even with the better NNT, chances are extremely high that you wouldn't get covid, with or without the vax.
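    The reason a lower-efficacy vaccine can have a better NNT is that NNT depends on how common the disease was in that trial's placebo group, not just on the relative improvement. Here is a minimal sketch of the relationship (with illustrative attack rates, not the exact figures from the paper):

        # NNT = 1 / (placebo attack rate x relative risk reduction).
        def nnt(placebo_attack_rate, efficacy):
            return 1 / (placebo_attack_rate * efficacy)

        # Illustrative only: 95% efficacy in a trial where 0.75% of the placebo
        # group got sick, versus 70% efficacy in a trial where 1.7% got sick.
        print(round(nnt(0.0075, 0.95)))   # about 140
        print(round(nnt(0.0170, 0.70)))   # about 84

    The more common the disease was in the trial population, the more people stand to benefit, so the NNT drops even when the relative efficacy is worse.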

    The issues I describe here are not radical or new. The paper above was notable only in that it covered all the major covid vaccines; other doctors and scientists have publicly pointed out the same facts. For example, here is a note by a doctor published in the BMJ shortly after the trial results were first published.

    Conclusion

    After learning about efficacy, absolute risk and NNT, your understanding of what it means for a treatment to be "effective" changes radically. Absolute risk and NNT are at least as important. Authorities should discuss all these numbers prominently.

     

  • The Medical Treatment of Obesity

    In a prior post I asked why there is no search for the origins of the widely-acknowledged obesity epidemic that harms so many people. I suggested that the data shows that there is an obvious cause: the government nutrition recommendations that pervade our society and are prominently stated on packaged food. The overweight/obesity numbers started their steady growth shortly after these were promulgated and people followed the recommendations.

    Ignoring overwhelming evidence, the authorities continue the health-destroying drumbeat of bad eating advice. Now, the medical people who are charged with dishing out this destructive nonsense are being criticized for making the people who follow their advice feel bad. When will it end??

    Smoking

    When I was growing up, I saw advertisements and commercials for smoking.

    The-Marlboro-Men

    The Marlboro Man was particularly memorable. The ad campaign generated billions of dollars of sales.

    The tobacco industry was always concerned about their image; throat irritation from smoking was a well-known side effect, not to mention the growing number of deaths by lung cancer. So ads were created and widely shown claiming the support of the medical profession for smoking, for example:

    Camel_MoreDoctors_RedOnCall_1946-1

    We know today that smoking causes lung cancer. It wasn't until 1964 that the Surgeon General declared it the cause, and many years passed before other measures were taken. For example, United Airlines was the first to create a non-smoking section of the plane, in 1971. It took until 1990 for smoking to be banned on domestic flights in the US, and later for international flights.

    Obesity

    So where do we stand with obesity compared to smoking? I would estimate we're at about 1960. The government is hard at work revising the nutritional guidelines most recently updated in 2017, and the drafts that have come out are the nutritional equivalent of saying, about smoking: "smoking unfiltered cigarettes is just fine, but don't smoke too many a day, and make sure you practice breathing exercises regularly to keep your throat and lungs healthy."

    As a reminder, the science is solidly behind consuming whole-fat dairy, eggs and meat, while minimizing sugar and carbs. Here is an example of the current version of nutritional insanity:

    7670e352-4820-482b-a92a-b53226bdbd33_1252x1352

    Sugar-loaded Frosted Mini Wheats and Lucky Charms are better than a whole egg, and ice cream with nuts is better than ground beef. Sure! I wonder what the role of the processed food industry has been in all this…?

    Doctors and obesity

    Doctors are required to dish out their profession's broken nutritional recommendations to one and all. They are particularly supposed to give good advice to the obese people those recommendations continue to harm. But now there's a new twist — doctors are being blamed for the on-going troubles of their obese patients!

    Obese people are often “weight-shamed” by doctors and nurses — worsening their problem and causing them to wrongfully blame themselves for the condition, according to a new study.

    Fat-shaming by medical professionals leads patients to feel humiliated and anxious about appointments — making them more likely to overeat, according to research from the University of London.

    Researchers examined 25 previous studies centered on 3,554 health professionals and found evidence of “strong weight bias” — including that doctors and nurses tend to assume overweight people are lazy, according to the report, published in the journal of Obesity Reviews.

    “[They] believe their patients are lazy, lack self-control, overindulge, are hostile, dishonest, have poor hygiene and do not follow guidance,” Dr. Anastasia Kalea, who authored the study, told the UK Guardian.

    So what should physicians do?

    The study concludes that medical professionals should be trained in “non-stigmatizing weight-related communication.”

    Tam Fry, the chairman of the National Obesity Forum, said doctors and nurses should take responsibility for the role they play in the UK’s obesity epidemic.

    “It is shameful that the condition continues to be regarded by health professionals as being solely a personal problem, little to do with them and it’s disgraceful that they stigmatize patients for being overweight,” said Fry, who was not involved in the study. 

    “This is the last thing a patient wants to hear from professionals who they trust will help them.”

    It's clear that physicians are stuck between a rock and a hard place. If they dish out their profession's nutritional advice, the obese person will stay overweight. If they dish out the limit-calories-exercise-more stuff, most people just can't keep it up — as we know from the obesity numbers. And if they bend over backwards to make sure to avoid giving obese people the slightest impression that their own actions might, just maybe, have something to do with their condition, then they've really blown it! They can't talk about the problem, and what they're told to say doesn't work.

    Conclusion

    Remember what happened with smoking — the decades it took for the cancer-causing truth about it to come out and be proven, and the further decades it took for that truth to be acted on. We're still in the early innings here with nutrition in general, and saturated fat in particular. We can only hope that sanity and science can move more quickly this time.

  • Samuel Johnson’s Dictionary and Writing Software

    Samuel Johnson's famous Dictionary of English was written hundreds of years ago. Nonetheless, it has a great deal to teach us about software.

    English is a language, of course, and we know there are numerous computer languages. Each kind of language has words, and it's essential to spell the words correctly so that the reader of what you write knows what you mean to say. Similarly with grammar and usage. When explaining the use of words, it's helpful to give examples. Of course there are more words in human languages than in computer languages, so the book defining a human language tends to be longer, as you can easily see by picking up a copy of Kernighan & Ritchie's The C Programming Language and any English dictionary.

    The Dictionary

    In 1755, Samuel Johnson published "A Dictionary of the English Language."

    JohnsonDictionary

    His was far from the first such dictionary — there were already dozens in existence. But his became the most influential dictionary of English for over 150 years.

    Much like a top modern programmer, he furiously wrote away with great energy and concentration. He originally claimed he would finish it in 3 years, but it ended up taking him seven. He did everything himself, with just a small amount of clerical help.

    Meanwhile, L'Academie Francaise had forty scholars working for forty years to do a similar job for French. Johnson is said to have commented on this:

    This is the proportion. Let me see; forty times forty is sixteen hundred. As three to sixteen hundred, so is the proportion of an Englishman to a Frenchman.

    In software, there is sometimes talk of programmers who are 10X more productive than the average programmer. Johnson claimed to be more than 500X, when he was "really" only a bit over 200X. Such an exaggerator… Of course in programming, there are people and small teams that perform 100X better than the ancient, lumbering giants "competing" against them.
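    The arithmetic behind the joke, in round numbers:

        # Forty scholars for forty years versus one man for three (claimed)
        # or seven (actual) years.
        academie_person_years = 40 * 40      # 1,600

        print(academie_person_years / 3)     # about 533: "more than 500X"
        print(academie_person_years / 7)     # about 229: "only over 200X"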

    With such a vast amount of work, it's not surprising that some humor managed to find its way into the work. For example:

    Lexicographer: A writer of dictionaries; a harmless drudge, that busies himself in tracing the original, and detailing the signification of words.

    Oats: a grain which in England is generally given to horses, but in Scotland supports the people.

    Software

    If you're a high-productivity programmer, I suspect the parallels have leaped out at you. Here are a couple:

    • Programmers have strong opinions, and sometimes express them with wonderful humor. Here are dictionary examples and language opinions.
    • The productivity gap is huge: a typical top-down, big-company, MBA-driven software project tends to take 10 to 100's of times more effort than a similar project done by a great programmer or small team. See this and this.

    The lesson here is clear: when it comes to writing software, be more like Samuel Johnson and less like the French Academy.

  • Software Programming Language Evolution: Credit Card Software Examples 3

    Credit card systems are among the earlier major enterprise software systems written. The early systems were written in assembler language or COBOL. If programming languages really did get more powerful and advanced, you would think that a wave of re-writes would have transformed the industry as card systems written in creaky old languages were streamlined and turbo-charged by being written in more modern languages. Generations of industry executives and technical experts and leaders have thought exactly this.

    The earlier posts in this series have described two major such efforts that ended in face-plants, and other efforts that illustrate the ongoing power and productivity of supposedly decrepit approaches. In this post, I'll describe a couple of not-famous examples of advances that actually took place.

    Clarity Payment Solutions and TSYS: the first version in Java

    In the late 1950's a group inside a local bank, Columbus Bank and Trust in Georgia, started building a system for processing credit cards. The division went public and eventually became known as TSYS, Total Systems, which is now one of the world's major card processing companies.

    During the early 2000's a special kind of card with limited functionality called a pre-paid debit card started to be used. Unlike a credit card, which you use to make charges and then later pay it off or use revolving credit (called "pay later"), a prepaid debit card is just what it sounds like: you first pay in some money and then can make charges using the card until the amount you put in runs out (called "pay before"). This kind of card is vastly easier to implement in software than a credit card, but is still complicated because of all the bank and card interfaces. See this for more.
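    As an aside, here is a minimal sketch of why the "pay before" core looks so simple (illustrative only; the real work is in the bank and card-network interfaces, settlement, disputes, and the rest):

        # The deceptively simple heart of a prepaid debit card:
        # load money first, then approve charges only while funds remain.
        class PrepaidCard:
            def __init__(self):
                self.balance = 0.0

            def load(self, amount):          # "pay before"
                self.balance += amount

            def authorize(self, amount):     # approve a charge if funds remain
                if amount <= self.balance:
                    self.balance -= amount
                    return True
                return False

        card = PrepaidCard()
        card.load(100.00)
        print(card.authorize(30.00))   # True, balance now 70.00
        print(card.authorize(80.00))   # False, insufficient funds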

    Meanwhile a small company called Clarity Payment Solutions had created a working prepaid card system. The technical founder of the company had bought into the rhetoric around Java, the enterprise object-oriented language, and had constructed the code using it. What everyone believed was that basing a program on objects would make it tremendously more flexible and easier to change than using traditional languages. Objects were thought of as being like Lego blocks with super powers, enabling you to pick out the ones you like and piece them together easily. A feature called inheritance promised the ability to make minor changes without much effort and no side effects.

    The executives at TSYS needed to get into the rapidly growing market for prepaid debit cards. They put some effort into having their staff modify their internal systems to meet the need but weren't getting anywhere fast. When they encountered Clarity Payments they were pretty happy — it's what we need! It's already working in the market! And best of all, it's written in Java, so we'll be able to make changes easily without all the trouble of systems written in prehistoric languages like the one we're stuck with! They bought the company.

    The technical leader of Clarity was sobered by his experience of writing the software for prepaid debit. It was a lot harder than he thought it would be, and the ease of making changes because of the object orientation of Java proved to be little but hollow rhetoric. He had proven to himself that it was all b.s. through hard personal experience, learning what others have learned. He now had years of practical experience building a production system to make clear to him what the real obstacles were. He was glad to sell off the company and not have to struggle with the code any more. TSYS was welcome to it!

    TxVia and Google: the second version in Java with help

    The technical founder of Clarity set about building the software infrastructure he would have liked to have had when building the Clarity code in the first place. Java by itself didn't solve the problems. It needed lots of help, and he was going to build the system that would give it help. This is a response that some smart programmers have when they get their noses rubbed in the broken promises of some new programming fad. It led this fellow to build a system that took a significant step up the hierarchy of abstraction, as I describe here. Java would remain the core language, but given a huge practical boost by having the ability to make changes built in at critical points using a kind of workbench approach with cleverly chosen "user exits" to enable safe customization. Eventually he turned the power of his new system to building prepaid debit card software, whose requirements he understood so well. Reconnecting with his old business partners, they went into the same market again and started getting some real business.

    Meanwhile a team at Google had been working on the same problem. They wanted a Google implementation of prepaid debit card functionality for the new Google Wallet. The leaders took a look at prepaid debit card functionality and immediately felt it was no big deal. It's nothing but putting money into an account and checking withdrawals to make sure there was enough money. Adding and subtracting and a few interfaces. No biggie. But just to be safe they assembled a crack team of nearly 100 Google-level geniuses and put them on the job. They used languages and tools that were generations ahead of standard Java.

    A year later they still had nothing working. It turned out to be harder than they had thought, even with the astounding power and flexibility of Google software resources. When one of the leaders heard about TxVia, he insisted on giving them a look. A group of Googlers came to the TxVia offices and threw down the specs of what they were trying to build on the table. They sneered, we heard you guys were real smart; our managers tell us you've gotten a lot further than we have. Sure. If you're so smart, prove it by making a system like what's in this document work.

    Shortly after, the TxVia team came back with a system that met Google's requirements, the same requirements the Google team had spent a year failing to meet. Skipping over the emotions of everyone involved, which pretty much covered the gamut from embarrassed to ashamed to denial to exultant to you-can-imagine, Google bought the company. It became a key part of Google Wallet.

    This was a clear demonstration that Java or other modern languages are NOT the determining factor in programmer productivity and software effectiveness.

    The Paysys Corecard system and Apple

    I told the story of my time as CTO at Paysys in the late 1990's here, including the sale of its millions of lines of COBOL code to First Data. I gave details about how Paysys became a powerful player in the market for credit card software by increasing their breadth of automation here.

    While I was CTO, after studying the COBOL code in detail with lots of help from the people who had written it, I came up with a way to re-create the system's functionality. The idea was to implement a core set of concepts written in a small amount of abstract code and then build extensive metadata as needed to support the product’s existing functionality and more. I wasn’t fully aware of it at the time, but I used the method described here and in the linked posts. It was written in C++ (mostly the C subset) and ran on a network of servers. One of the big national consulting groups ran tests that verified it could handle tens of millions of cards with linear growth. In addition a team at First Data in Omaha ran the code and modified the metadata to make it match the functionality of their existing system written in assembler language. The trouble they had modifying the assembler language to handle a variety of requirements already met by the Paysys COBOL code was the main reason they were buying Paysys. They decided they would really like to have the new code as well.

    While his team urged the CEO to include the new code in the purchase, he decided he didn’t need it, and kept it out of the deal. The COBOL code solved immediate problems like supporting cards in Japan, and who cared what a bunch of nameless programmers babbled about the speed of making future changes?

    Thus it happened that when First Data bought the Paysys VisionPLUS COBOL code in the year 2001, the new metadata-based system was left out of the deal and stayed with the remainder company, now called Corecard.

    Years went by. Some leading people began to notice Corecard because they could make it do unanticipated things much more quickly and easily than with normal procedural systems. Then Apple decided to get into the credit card issuing business. Not the cheap and easy pre-paid kind, the full-featured, tough credit card kind. They took their requirements to the usual suspects, who gave them the usual lengthy go-live times with the usual astronomical custom programming fees. Somehow they talked with Goldman Sachs, which had connected with Corecard through a small, adventurous group. They could get the job done, quickly and efficiently, when no one else in the industry could come close. A deal got done and the Apple card came out quickly, doing everything Apple wanted. And it scaled quickly.

    Normal parameters could not have accomplished this. The TxVia workbench approach didn't have even a fraction of the functionality needed. Only a software system that went far beyond the capabilities of procedural languages of any generation could have met the challenge. In the end, languages of ANY generation can only do so much. They're like prop planes. If you want to go REALLY fast you need a rocket engine, and that's what meta-data-based systems are.

    One of the sobering lessons here is the very basic human one: No one will want a rocket engine unless they're trying to build a rocket. If you build a rocket engine, however powerful it may be, people will look at it, scratch their heads, express mild amazement, but walk away — they don't need it. And won't until they decide they want to build a rocket.

    Conclusion

    These real-life examples demonstrate the limits of normal procedural languages, no matter how modern and fancy. They demonstrate how taking even small steps up the ladder of abstraction can yield amazing gains, as they did for Google, at least after they bought TxVia. And finally they demonstrate that there's a whole quantum leap further you can go to meet software requirements beyond what procedural languages alone can handle — but no one will want to buy them until they have a problem that nothing else can solve.

  • Blood Pressure Pills can make you Blind

    As a direct result of ridiculous, anti-scientific standards, pills to lower blood pressure are the most widely prescribed pills in the US, with over 100 million people supposedly cursed by the “disease” of hypertension. Did you know that there’s a never-refuted medical study published by the American Academy of Ophthalmology and sponsored by the National Eye Institute (part of NIH) showing that taking those pills greatly increases the risk of going blind? I didn’t think so.

    AMD: Age-related Macular Degeneration

    More than 11 million people in the US have this disease. It mostly affects people 60 and older. The most common variety of it – dry AMD – is progressive and has no cure. Eventually it leads to complete loss of vision. Here is the NEI description of the disease, its causes, prevention and non-cures. You will notice that there is NO mention of blood pressure medication.

    I have described the largely suppressed side effects of blood pressure medication, and my path to freedom, with the result that I'm not taking the pills and I'm healthier. Two years ago I was diagnosed with early stage AMD. After resolving the issue with harmful blood pressure pills, I decided to see if the pills also impacted AMD. While it wasn't too hard to find out about the side effects of blood pressure medications, including the ones related to heart health I experienced, I hadn't seen anything about vision in general, much less AMD. I decided to look harder.

    I mostly found things like this from the Cedars-Sinai website:

    111

    In other words, they don't really know. And they clearly state that "uncontrolled high blood pressure" — in other words, failure to take blood pressure medicine when you "should" — is a cause.

    OK, let's go to the professionals: the American Academy of Ophthalmology. What do they say about blood pressure drugs and AMD?

    11

    This blows me away. The very first risk factor they list is the garbage about saturated fat. Totally wrong. This is the cornerstone of the explosion of obesity that harms so many and has nothing to do with AMD. I'm suspicious. Scanning down the list, I see one of the causes they list is "have hypertension (high blood pressure)." Not "treating" it or "taking blood pressure medications," but simply "have" it. In the linked article about high blood pressure, they simply declare that it can lead to big trouble, and "can cause permanent vision loss." OMG! I'd better start taking pills to get my blood pressure under control!

    I guess it's clear. Whatever the cause of my AMD, it can't be the blood pressure pills I took for eight years.

    The Beaver Dam Eye Study

    Stubborn guy that I am, I kept looking. I found a little eye group in the DC area that promotes its services. I found them because my search engine surfaced two closely related blog entries on the site, one of them titled "The Link between Blood Pressure Drugs and AMD," a close match to my search string. Score! The second sentence of the post is: "If you take medication to lower your blood pressure, it’s important to know that you could be increasing your risk of developing AMD, or age-related macular degeneration." The bold was in the original!

    Both blog posts give a reference to the 2014 study and extract some details, all of which I have verified. Here is the attention-grabbing sentence from the blog post: "For residents who were not taking blood pressure drugs, only 8.2 percent of them developed early AMD. For residents who took medication for high blood pressure, nearly 20 percent of them developed AMD."

    The chances of getting AMD were more than doubled by taking the drugs.

    Here are the highlights of the study.

    Screenshot 2022-07-20 173158 T

    In short, thousands of people in a Wisconsin town were followed over 20 years, tracking their use of blood pressure medication and the incidence of AMD. Here is the conclusion at the top of the paper:

    Conclusions: Use of vasodilators is associated with a 72% increase in the hazard of incidence of early AMD, and use of oral β-blockers is associated with a 71% increase in the hazard of incident exudative AMD. If these findings are replicated, it may have implications for care of older adults because vasodilators and oral β-blockers are drugs that are used commonly by older persons. Ophthalmology 2014;121:1604-1611 © 2014 by the American Academy of Ophthalmology.

    Whatever the chances of you getting leads-to-blindness AMD are, you increase them by about three quarters by taking widely-prescribed blood pressure pills. Still think lowering your blood pressure is worth it, particularly considering the proven facts I describe here?

    So where are the headlines? Where are the cautions about the vision-killing side effects of blood pressure drugs? Where are the follow-up studies? Where are they on the websites of major public and private healthcare organizations? Nowhere, that's where they are. Nowhere!!

    It's clear that this isn't just ignorance. It's suppression. Just above I showed how there's no hint of a problem with blood pressure pills on the official AAO website. When I did a full search on Google for "AMD blood pressure," instead it showed me results for "And blood pressure." I corrected it and mostly found propaganda, but did find a reference to the Beaver Dam study. When I used my favorite non-Google search engine, which I like because they don't have thousands of engineers hard at work adding bias to the results, the very first result was a direct link into … the AAO website! … to a news item about the Beaver Dam study! The Expert-fueled AAO organization put a brief post on their site about the study, but failed to mention it anywhere else! Not only that, when you use their embedded (Google) search facility on the site, their own post fails to appear in the results!

    Why do you suppose that is? Pharma money? What about the ethics of the healing profession, not to mention their self-respect? Given the near-total suppression of the information, I suppose simple ignorance could explain the actions of most providers, along with "standards of care" that demand regular taking of blood pressure and prescribing medications according to standards. Which are wrong, not to mention destructive.

    I paid to get a copy of the full study. It had important information not included in the brief summaries. Look at this extract from Table 4 near the end of the paper:

    111

    The first line is the one often quoted. Let me show the math. Of the 2714 people in the study, 295 of them (more than 10%) got AMD because they were taking the BP pills.

    I took two pills for eight years. One was Amlodipine, a calcium channel blocker, which in the study nearly doubled the chance of getting AMD. I also took Losartan, an ARB, for which the study showed zero percent AMD — not because it was innocent, but because, as shown in an earlier table, almost none of the participants took it. It could be awful, but the study was too small to know.

    An earlier table also showed the incredible extent of BP medication use. About a third of the participants in the youngest age group (under 64 years) took medications, while over two thirds of those over 85 were taking them. Most of whom shouldn't have been taking them at all! I wonder, just wonder, if this could have something to do with the increasing incidence of AMD with age — you think that's a possibility that should be studied?

    Conclusion

    I used to think that the pharma and industrial food industries make mistakes, like any industry, and you have to take the good with the bad. There is certainly some good. But the more I learn, the more I discover the all-too-widespread shameless self-dealing of these industries, strongly supported by government agencies and professional authorities. They force through regulations that put misinformation on our food and into our diets in hospitals, and they make billions of dollars selling pills, prescribed as standard preventative care, that instead of keeping us healthy actively make us sick — even to the point of making us blind — along with numerous other problems I have briefly touched on in prior posts.

     

  • The Dimension of Software Automation Breadth: Examples

    One of the major ways software evolves is by increasing along the dimension of automation breadth. A domain can be dominated by products at a given breadth of automation, and suddenly an existing or new competitor starts winning by increasing its breadth of automation, offering its customers more value for less effort and money. It's a classic move and a good way for new entrants to disrupt a market.

    One of the most frequently given pieces of advice, including by me, is to “focus,” i.e. basically solve fewer problems, try to satisfy a narrower range of customers, etc. While this advice is applicable more often than not, the natural and recurring progression of products through the spectrum of “automation breadth” makes it clear that, sometimes, when the conditions are right, the winning strategy is to be among the first to increase the breadth of automation that you incorporate into your product.

    Example: Athena Health

    A clear example of this is shown by the story of AthenaHealth. At the time the founders started the company, a wide variety of products were already available to run physician offices, from small single-office practices to extended medical groups. These products generally ran on inexpensive machines that the practice would keep in some back room, and would support multiple users via a LAN or terminals. Most of the products were sold by license, so that the office had to pay only a modest price for the license, and then annual maintenance.

    Along comes little Athenahealth, with a better way of doing things. Athena had a cool new practice management system (PMS). Unlike all PMS’s at the time, it was built using internet technologies, so that it could be operated as a service, with people at the office accessing the system using machines with browsers and internet connections. Athena took care of the computers, relieving the medical office of a burden it basically didn’t want.     

    But they ran into a little problem: the people in charge of medical practices are doctors, and doctors really don’t care about PMS’s – they care about patients and medicine. A PMS is a necessary evil, something you should buy for as little as you can get away with and ignore until things get so bad you are forced to buy a new one. Money spent on the PMS is just money out of the doctors’ pockets, as far as they’re concerned. Oh, you have a “better” one, do you, whatever that means? Stop wasting my time.

    The folks at Athena noticed that one little thing the PMS does is produce bills and claims, the purpose of which is to get patients and insurance companies to send them money. No claim, no money. Unfortunately, merely producing the claims rarely proves to be sufficient to get the money flowing. People are required to do special things to the claims, provide additional information, harass the payers, etc. This is so specialized and time-consuming that it either consumes the time of a number of people at the office, or is outsourced to a “billing service.”

    The Athena folks went on to notice additional important things: (1) the chances of getting paid are a direct reflection of the quality and appropriateness of the information on the claim; (2) the PMS and how it is used is the main source of this information; (3) by actually performing the billing service, you can learn how to produce a better PMS that produces better claims, increasing the effectiveness of the billing service while reducing the cost of running it at the same time. Finally, they found out that there is something the doctors who run medical practices care about other than medicine – surprise, surprise, that something is money.

    So Athena introduced an outsourced billing service, but required practices that use it to also use their practice management system – at no additional cost! And they got so good at collecting the money that doctors could essentially get more money and a really cool, state-of-the-art PMS (like they cared…) for free!

    This is a nice story for Athena, but the point of telling it here is that it illustrates the principle of product automation breadth evolution. While products are evolving within a “level” of automation breadth (i.e., how many of an organization’s functions it automates), it is normally a good idea to maintain discipline, avoid distractions, and concentrate on automating that function. But at a certain point in the evolution of each product category, pretty much everyone in a space has automated everything within that function, and everyone is reduced to concentrating on sales strategies and niggling little details. At that point, and PMS’s were at that point when Athena came along, it makes sense to do what you’re normally supposed to avoid – look for another function inside the organization to automate, particularly if there are synergies in implementing the two functions within a single framework, as there certainly were in this case.

    Example: Bank and Retail Credit Cards

    In 1983 a small company called CCS (Credit Card Software, later called Paysys) released a body of COBOL code that would enable a bank to process credit cards. A number of small and regional banks bought copies of the code and ran it successfully. The code was enhanced over the years.

    A major retailer, Michael's Jewelers, approached CCS and asked if they could make a version of the bank software that could handle purchases from their stores, including a variety of payment plans and financing options offered by the store that were not supported by bank card software.

    The company's programmers quickly gave up on the idea of modifying the bank code to handle the problem. Many aspects of bank card processing, such as the difference between issuing and acquiring, were irrelevant to retail. In addition, the many financing options supported by retailers went far beyond anything banks did. So they borrowed from the bank code to the extent that it helped and ended up creating a separate body of software called Vision 21. Once it was available, it proved to be a big success in the market, and was quickly enhanced by customer demand to include all the options desired by retailers. Before long it supported the needs of retailers in countries as diverse as Japan and South Africa.

    Finally, there was a very large processor, Household International, that was running multiple copies of both products, kept separate because they had been customized for a variety of reasons, for example to support methods of credit that were unique to a market (such as “hire-purchase” in South Africa). While CCS, now called Paysys, had failed to create a generic bank/retail product when confronted with an example of the generic problem, unifying multiple bodies of related code into a single, highly parameterized code base proved to be a far more tractable problem, particularly with a single important customer who insisted that these variations were the only ones to worry about.

    The industry quickly rallied to this new product, called Vision PLUS, that could be directed at so many different problems with such relative ease. For example, it enabled retailers to issue co-branded cards that worked like regular bank cards, except when used in the issuer's retail stores, where they acted like a classic store card, with features like "90 days same as cash" that bank cards don't support. While “parameterized product” may sound like an abstract concept, it translates directly into business advantage compared to more primitive product types, by enabling the product to be customized, installed, upgraded and maintained with less labor, less time and lower risk of error.
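    To make “parameterized product” a little more concrete, here is a minimal sketch in Python (the real products were written in COBOL and were vastly richer; the plan names and fields below are invented for illustration). The point is only the shape of the idea: each financing plan is a row of parameters that a small piece of generic code interprets, so supporting a plan like "90 days same as cash" means adding a record, not writing new logic.

        from dataclasses import dataclass

        # Hypothetical illustration: each financing plan is data, not code.
        @dataclass
        class Plan:
            name: str
            interest_free_days: int   # grace period before interest accrues
            annual_rate: float        # rate applied after the grace period

        PLANS = {
            "REVOLVING":       Plan("Standard revolving credit", 25, 0.18),
            "SAME_AS_CASH_90": Plan("90 days same as cash",      90, 0.22),
        }

        def monthly_interest(plan: Plan, balance: float, days_since_purchase: int) -> float:
            """Generic engine code: works unchanged for any plan, old or new."""
            if days_since_purchase <= plan.interest_free_days:
                return 0.0
            return balance * plan.annual_rate / 12

        # A co-branded card can simply point at different plans depending on where it is used.
        print(monthly_interest(PLANS["SAME_AS_CASH_90"], 500.00, 60))  # 0.0
        print(monthly_interest(PLANS["REVOLVING"], 500.00, 60))        # 7.5

    Adding a new market-specific credit method then becomes largely a matter of defining its parameters, which is why a single parameterized code base could serve banks and retailers in many different countries.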

    The company that built Vision PLUS was bought by First Data, a major card processor. The reason is interesting: in spite of having thousands of programmers (Paysys had only dozens), First Data was unable to modify their US-centric code base to handle processing in Japan. At the time of the sale, Vision PLUS was processing about 150 million cards world-wide. The code lives on and currently runs over 600 million cards.

    Conclusion

    These are two examples of companies growing by broadening their focus — in a strategic way, driven by just a couple of representative customers. In neither case did they address a whole new market at the beginning, though that was the eventual goal. They took their existing software along with a cooperative customer and met that customer's needs. Athena started with just one specialty in one state with a single payer. Paysys started with a single existing customer. In each case they broadened their focus a step at a time, making each customer happy as they went. As they grew, word got around in the industry, and they shifted to saying "no" to the vast majority of inquiries in order to maintain the step-wise customer success they were building.

    This is a classic pattern of focus broadening that can bring transformative success to companies when handled well.

     

  • The Destructive Treatment of Hypertension

    I’ve talked about how all the medical authorities are united in the importance of fighting the “silent killer” of blood pressure that’s too high, i.e., hypertension. I’ve described in detail that what doctors call “essential hypertension” is NOT a disease. Fighting the non-disease of hypertension is an ongoing bonanza for doctors and the drug companies while leading to serious problems for patients.

    In this post I’ll describe my personal experiences that led me to these observations. What happened to me was not unusual, and others have had it worse than I have.

    Getting Cancer and Hypertension

    Eight years ago I developed a rare form of cancer, desmoid tumor, of which there are about a thousand cases a year. I was treated with drug infusions. The drugs sometimes have bad effects on the heart, so I received tests and a consultation with a NYC cardiologist. She told me I had high blood pressure that must be treated immediately. I was surprised since my reading had always been low, but complied, ending up taking daily doses of Amlodipine and Losartan.

    This was unusual for me, because I normally dive in and check for myself everything that’s important to me. I didn’t in this case. I was consumed with my study of my rare cancer and the ineffective early advice I got. I found the one doctor in the country who knew how to treat it. The blood pressure seemed like a bump in the road at the time. My bad.

    I kept up with the daily drugs after that, with new prescriptions issued by my primary care doctor with minor adjustments. Not once did any doctor mention anything about side effects. I felt OK and did no research.

    About a year ago I started monitoring my blood pressure myself because I began experiencing symptoms it was hard to put a finger on. I knew the drugs I was taking were generics and had discovered the widespread corruption of generic drug makers and the ineffectiveness of the quality monitoring conducted by the FDA. I asked my primary care doctor for prescriptions for the branded versions of the drugs, which I hoped were more carefully monitored. I discussed it in detail with my local CVS pharmacist, who ultimately was unable to get the drugs.

    Symptoms of Heart Trouble

    The symptoms increased. In March 2022 I had a tough time driving with symptoms that included being light-headed and a heart pulse rate that was high for me, as though I were exercising. I went to my primary doctor who gave me some tests including an EKG. With inconclusive results, she referred me to a cardiologist. The cardio guy gave me lots of tests, including an electrocardiogram, a nuclear stress test, and a week-long Holter monitor. This all took a few weeks.

    Meanwhile, I did what I should have done eight years ago – dove in and studied heart function and blood pressure. It didn’t take long for me to discover – surprise! – that the symptoms I experienced were the same as side effects of the drugs I was taking, and were widely reported by patients online. I tried to get FDA data on them and discovered the great lengths the FDA goes to in order to keep drug adverse reactions as secret as possible – kind of like the way medical offices say you have full access to your medical records, except that they prevent it, as I have described in detail.

    I brought up the subject on my next visit with the cardiologist. He immediately dismissed the possibility. He refused to discuss it or take seriously the possibility that my symptoms were due to the drugs that he and all the other members of his profession profusely prescribe.

    After that I took matters into my own hands. I stopped taking the drugs after the last test was conducted, in order to avoid confusing the results. I continued daily blood pressure readings, sometimes more often.

    I finally got the results of all the tests. Nothing was wrong with me – except of course when you monitor for seven days straight, sometimes your heart beats fast. It’s scary! It’s called supraventricular tachycardia (SVT). Once or twice a day, for a dozen beats at a time and on the low end of the "fast" scale. Call 911! My valves are fine, no blockages, no Afib, etc. etc. His recommendation? Consult one of his friends to get either a pacemaker or a six-hour operation to zap random bits of my heart in hopes that the scary SVT would go from 0.001% to zero. Maybe. NFW, thanks anyway, esteemed board-certified cardiologist.

    My first step after stopping the drugs was to start taking a well-reviewed natural heart-health additive based on L-Arginine. After 3 weeks I was better, but not satisfied. So I just stopped messing with my body and its extremely complex mechanisms. After my body cleared out, I was much better.

    The blood pressure numbers are interesting:

    Average         Systolic    Diastolic
    with drugs      137         64
    L-Arg only      157         74
    nothing         144         65

    Taking no blood pressure or other drugs left the diastolic numbers essentially unchanged and the systolic numbers 7 points higher, well within a healthy range, though not according to current cardiologist drug-pushing fashion.

    Side effects of blood pressure control drugs

    There are lots of non-government places to learn about the side effects of the awful blood pressure drugs — thanks, internet! No thanks at all, cardiologists! — and even published studies that show 10% of participants dropping out due to the intolerable side effects.

    Here are a few samples of problems with Amlodipine from a data-rich site.

    fast, irregular, pounding, or racing heartbeat or pulse

    Common (1% to 10%): Palpitations, ankle edema

    Amlodipine has an average rating of 3.7 out of 10 from a total of 571 ratings for the treatment of High Blood Pressure. 20% of reviewers reported a positive experience, while 61% reported a negative experience.

    41% gave it 1 star out of 10

    Common in reviews below: “feeling lightheaded, heart palpitations and arrhythmia”

    After 35 years of taking it, “I took myself off of 2 years ago, but could not get through the withdrawals. It caused my heart to feel like it was beating out of my chest”

    After starting “suddenly I felt dizziness I went to ER they admitted me to the heart hospital. I was told I needed a pacemaker. I declined.”

    After strong heart beat “I attended a pre-operative assessment where I was given a routine ECG and this confirmed that I had become tachycardic while taking amlodipine. My pulse was racing at over 100 BPM”

    After stopping “Five weeks later I still have the tinnitus”

    After 6 years, “worse side effects have been; heart pounding/palpitations, fatigue, and increased anxiety…. I stopped taking the Amlodipine Besylate 10 mg. over 3 weeks ago and have noticed that my energy level has increased, anxiety lessened and heart pounding decreased.”

    After 18 months “… anxious. Couldn't sleep, couldn't concentrate. I noticed muscle tics all the time, heart palpitations, more joint pains, memory loss and more. …I had myself convinced I had contracted some fatal condition (ALS, MS, etc…). After every specialist I could find, we decided it was anxiety. Then one day I read someone's account of anxiety and amlodipine. …try a switch. Today I am back to my old self.”

    Conclusion

    I have re-learned one of life’s most important lessons: if you want to be healthy, take charge of your own health. It’s your health, no one else’s. You own it, you have to live with it. There are experts and authorities all over the place who are lined up to tell you what to do. They want you to pay them, take drugs and undergo invasive procedures. Most of these people are highly trained and well-meaning. They sometimes know things that are worth knowing. They can be of great help. It’s worth listening. But it is not worth mindlessly following their orders, because their profession’s best, standard advice is all too often wrong. WRONG. And not just wrong – actively harmful.

    Once I took my health into my own hands eight years ago, I found a truly expert doctor who brought my nasty rare cancer into remission, a place where I hope it’s happy. On the path, I stupidly and without examination followed doctor’s orders about blood pressure, following advice given to nearly half the population of the US. How could it possibly be bad? Easy. The same way the nutrition advice given to ALL the US population was and remains highly destructive, leading to the ongoing obesity epidemic and widespread avoidable suffering. The same way bad science about blood cholesterol has led to the most profitable drug in pharma history, treatment which shortens lives and makes patients less healthy.

    The good news is that you’re not alone. There are dedicated people devoted to discovering and putting out the facts so that diligent, self-reliant people can find out what’s best for their health, most importantly for those cases where the medical profession stubbornly clings to destructive error, as it has so often in the past. It’s your health. Own it!

  • How to Integrate AI and ML with Production Software

    Most enterprises that build software are proudly flying the flag of AI/ML. "We're technology leaders!" their leaders crow in annual reports and at conferences. At the same time, any objective observer usually sees a lack of common sense in the operation of the company's systems. It often appears that, far from needing beyond-human artificial intelligence, they could use some insect-level instincts that simply get things done. What's going on? Can it be fixed?

    The Industry-standard way to fix the problem

    The usual fix is to publicly ignore the fact that there's a problem at all, while following something like these proven strategies:

    • Brag, loudly and often, about your and your organization's commitment to AI/ML. The commitment is serious; it's deep and it's broad!
    • Talk about the initiatives you've funded and the top experts you've hired.
    • Talk about the promising things you've got in the works.
    • Use extra phrases to demonstrate your seriousness, things like "1-to-1 personalization" and "adaptive processes" and "digital-first transformation."
    • Put your top executives with fancy titles out there to follow the same strategy, using their own words.

    I've given a detailed example of how a top healthcare insurance company follows this strategy while operating at a sophistication level that is best described as "hey, this electronic mail stuff sounds neat, let's give it a try."

    Sometimes one of these organizations puts something into practice that works. It typically takes a great deal of time and effort to find and modify the relevant production systems. The efforts that are most likely to make it into production are those that can be done with the least amount of modification. For example, minimal-effort success can sometimes be achieved by extracting data from production systems, subjecting it to AI/ML magic, and then either feeding a new system or feeding results back into existing systems at just a couple of insertion points.

    The Obstacles to AI/ML Success

    The obstacles to AI/ML success have two major aspects:

    • The typical practice of leap-frogging over all the simpler predecessors of AI/ML straight to maximum sophistication.
    • The extensive, incompatible existing production systems into which AI/ML power has to somehow be inserted.

    A good way to understand these obstacles is to imagine that you're in a world in which boats are by far the most important means of bulk transportation. In other words, the world in which we all lived at the start of the 1800's. Suppose by some miracle a small group has invented nuclear power and has decided it would be a great way to provide locomotion to large boats instead of the sails and wind power then in use. What prevents the amazing new technology from being used?

    Easy: the boats were designed for sails (with masts and all that) and have no good place to put a nuclear engine, and no way to harness its power to make the boat move. The strong steel and other materials required to make a turbine and propellers don't exist. You can demonstrate the potential of your engine in isolation, but making it work in the boats available at the time won't happen. You can spend as much time as you like blaming the boats, but what's the point?

    The solution becomes clear by studying the history of boat locomotion: there were incremental advances in boat materials and design, and in the systems used for powering them. Paddle wheelers have been around for over a thousand years. Here's a medieval representation of a Roman ox-powered paddle wheel boat.

    De_Rebus_Bellicis _XVth_Century_Miniature

    For serious ocean travel, the choice became the large sail boat, as in this painting of boats near a Dutch fortified town:

    2022-04-06 15.58.34

    Suppose you had a nuclear engine of some kind and were somehow able to make it with materials that were generally not available in the 1600's. How would you use it to power the sail boat? The very thought is ridiculous. The problem is that the boats have no way to accept or utilize the nuclear engine.

    How to overcome the obstacles to AI/ML

    What would a sensible person do? Exactly what real-life people did in history: incrementally make boats suitable for more powerful means of locomotion, and make more powerful means of locomotion that would make boats go more quickly. Practically. You know, in real life.

    That means, among other things, once steam power was created, gradually make it suitable for powering ships with sails — using the sails to conserve coal when the wind was strong, and using coal to power paddles when the wind wasn't blowing. Then, after materials advanced, invent the screw propeller — which didn't happen until the mid-1800's — to make things even better. Eventually, the engine and the ship would converge and be suitable for the introduction of nuclear power.

    This is an excellent model for understanding how to overcome the obstacles to powering existing enterprise applications with AI/ML:

    • The AI/ML can only be jammed into existing systems with great effort and by making serious compromises.
      • With a few exceptions, simpler methods that can make real-life improvements should be devised and introduced first, with the portion of AI/ML gradually increasing.
    • The existing enterprise applications are like wooden sailing ships, into which generation-skipping advanced locomotion simply can't be jammed.
      • Evolve the applications with automated decision-making in mind, first putting in simple methods that will produce quick returns.
      • The key to AI-friendly evolution is to center the application architecture on metadata in general, and in particular with metadata for workflow.

    The important thing is this: increase the "intelligence" of your applications step by step, concentrating on simple changes for big returns. Who cares whether and to what extent AI/ML is used to make improvements? All that matters is that you make frequent changes to improve the effectiveness, appropriateness and personalization of your applications. Experience shows that relatively simple changes tend to make the greatest impact. See this series of posts for more detail.
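    As a minimal sketch of what "simple methods first, AI/ML portion gradually increasing" can look like in practice (Python, with invented names and thresholds, not a recipe from any particular system), the application asks a named decision point for an answer; whether that answer comes from a hand-written rule today or a trained model later is hidden behind the same interface, so the surrounding application never needs to change:

        from typing import Callable, Dict

        # Hypothetical sketch: decision points are named and pluggable.
        DecisionFn = Callable[[dict], str]
        DECISIONS: Dict[str, DecisionFn] = {}

        def decision(name: str):
            """Register a function as the current implementation of a named decision point."""
            def register(fn: DecisionFn) -> DecisionFn:
                DECISIONS[name] = fn
                return fn
            return register

        @decision("next_contact_channel")
        def simple_rule(customer: dict) -> str:
            # Step one: a common-sense rule that already improves on doing nothing.
            return "phone" if customer.get("age", 0) >= 70 else "email"

        def decide(name: str, context: dict) -> str:
            return DECISIONS[name](context)

        # The calling application is unaffected when the rule is later replaced by an ML model:
        print(decide("next_contact_channel", {"age": 74}))  # phone
        print(decide("next_contact_channel", {"age": 35}))  # email

    The design choice that matters is the seam: the application depends on the name of the decision, not on how it is made, which is what lets the "intelligence" behind it grow step by step.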

     

  • The Facts are Clear: Hypertension is not a Disease

    The medical community, organizations and government agencies couldn't be clearer: hypertension (high blood pressure) is a silent killer. You may not feel anything wrong, but if you've got it, your risk of strokes and heart failure goes way up. Therefore it's essential to monitor and treat this deadly condition.

    They're all wrong. Hypertension is not a disease that needs to be cured. It may be a symptom of a problem, but not a problem itself, just like fever is a symptom, not the underlying problem. By treating it as a disease and giving drugs to lower blood pressure, the medical establishment makes patients less healthy and raises costs substantially. With a few exceptions, we would all be better off ignoring blood pressure and most of the associated advice.

    Drugs for "Curing" Hypertension

    The single most prescribed drug in the US is for lowering cholesterol. But taken as a group, more prescriptions are written to reduce blood pressure than for any other condition.

    Screenshot 2022-04-23 152522

    Here's the story with blood pressure pills.

    In fact, a majority of the most prescribed drugs in the U.S. are used to treat high blood pressure or symptoms of it. That’s because 108 million or nearly half of adults in the U.S. have hypertension or high blood pressure.

    Is Hypertension a Disease?

    There is no doubt that blood pressure can be measured and that it varies greatly. What is hypertension? As I describe here, currently it's a systolic pressure reading above 120 (until 2017 it was above 140). There are lots of things you can measure about people. What makes this measurement bad?

    There's a clue buried deep in Doctor-language, a clue that is nearly always missed — but it's one that doctors with a basic education should know. The official name for high blood pressure is essential hypertension. What's that? Let's ask Dr. Malcolm Kendrick, a physician with decades of experience:

    At medical school we were always taught – and this has not changed as far as I know – that an underlying cause for high blood pressure will not be found in ninety per cent of patients.

    Ninety per cent… In truth, I think it is more than this. I have come across a patient with an absolute, clearly defined cause for their high blood pressure about five times, in total, and I must have seen ten thousand people with high blood pressure. I must admit I am guessing at both figures and may be exaggerating for dramatic effect.

    Whatever the exact figures, it is very rare to find a clear, specific cause. The medical profession solved this problem by calling high blood pressure, with no identified cause, “essential hypertension”. The exact definition of essential hypertension is ‘raised blood pressure of no known cause.’ I must admit that essential hypertension certainly sounds more professional than announcing, ‘oh my God, your blood pressure is high, and we do not have the faintest idea why.’ But it means the same thing.

    Hypertension = your blood pressure number is high. Kind of like having a high temperature, which we call a "fever," right? Wrong. When you get a fever, doctors first make an effort to determine the cause of the fever! What an idea! The fever is a clue that something is wrong, not the problem itself! Here's the real, bottom-line clue: when you treat fever, you treat the underlying cause, e.g. a bacterial infection, NOT the fever itself! If we treated fever the way we treat hypertension, we would give drugs whose sole purpose was to lower the body temperature, ignoring the underlying bacterial infection that caused the fever. It wouldn't do any good! Maybe we'd sweat less, but the bacteria would rage away inside our bodies. But high blood pressure? Doctors ignore the cause and "treat" the symptom, which can often do more harm than good — except of course for the drug makers, who make out just fine.

    Makes me sick.

    Causes of hypertension

    From Kendrick:

    So, why does the blood pressure rise in some people, and not in others. It is an interesting question. You would think that, by now, someone would have an answer, but they don’t. Or at least no answer that explains anything much.

    Just as fever is caused by an infection (or something else), could it be possible that hypertension results from some underlying problem? Kendrick again:

    Looking at this from the other direction, could it be that cardiovascular disease causes high blood pressure. Well, this would still explain why the two things are clearly associated, although the causal pathway may not be a → b. It could well be b → a.

    I must admit that I like this idea better, because it makes some sense. If we think of cardiovascular disease as the development of atherosclerotic plaques, leading to thickening and narrowing of the arteries then we can see CVD is going to reduce blood flow to vital organs, such as the brain, the kidneys, the liver, the heart itself.

    These organs would then protest, leading to the heart pumping harder to increase the blood flow and keep the oxygen supply up. The only way to increase blood flow through a narrower pipe, is to increase the pressure. Which is what then happens.

    Over time, as the heart is forced to pump harder, and harder, the muscle in the left ventricle will get bigger and bigger, causing hypertrophy. Hypertrophy means ‘enlargement.’ So, in people with long term, raised blood pressure, we would expect to see left ventricular hypertrophy (LVH). Which is exactly what we do see.

    He goes on to give lots of detail about how this takes place, if you're interested.

    Correlation and Causation

    There's a little problem that everyone who knows about science and statistics is supposed to know. It's the difference between correlation and causation. Two things seem to happen at the same time. They are correlated. No problem. But does one of them cause the other? That's a whole other thing, and it's super-important. At McDonald's, burgers and fries are often seen together. They're correlated. Did the burger cause the fries? Fries cause the burgers? Nope. They're just listed together on the menu and lots of people like them together.

    How about knife cuts and bleeding? Definitely correlated. Causation? By looking at repeated cases of knives making cuts, you can determine that putting a knife into someone's skin nearly always causes bleeding.

    This is the problem at the heart of hypertension: except perhaps in extreme cases, hypertension is correlated with heart attacks and strokes, but it can't be shown to cause them in the vast majority of cases.

    The range of blood pressure

    The authorities don't like to talk about this, but blood pressure varies HUGELY not just from person to person, but also by age and for a single person during the day!

    Here's something to give you the idea from a scientific paper:

    Screenshot 2022-05-26 154740

    The range of pressure for a single person can be even larger. I just took my pressure this morning. The systolic was 126. In the previous days the readings were 159 and 139. I have taken my pressure with different devices over a year, and that variation is not unusual. It can vary that much in a couple hours, depending on my activity level.

    It is well-known in the medical community that blood pressure varies naturally with age, generally rising as you get older. Has anyone documented this statistically? If they have, I can't find it. Generally, what is considered normal is roughly 100 plus your age, so a 50-year-old man would have 150, with roughly 10 less for women. Here is an interesting description of the age factor from a former NASA astronaut and doctor.

    The assumed causation fails to hold

    A surprising amount of modern medical misinformation goes back to the diet-heart hypothesis put forward by Ancel Keys and supported by the seven countries study. It's what led to the obesity-causing fat-is-bad diet recommendations and the ongoing harm of reducing blood cholesterol using statins. Out of the same witch's brew came the notion that high blood pressure causes heart disease. This notion was supposedly locked down by the famous Framingham study, which continues to this day.

    In the year 2000, the edifice crashed when a careful review was published in the journal of the European Society of Cardiology: "There is a non-linear relationship between mortality and blood pressure." It includes references to the original Keys study and many following journal articles.

    The article is prefaced by a quote that is so appropriate, I can't help but share it with you:

    "For every complicated problem there is a solution that is simple, direct, understandable, and wrong." H. L. Mencken

    The authors start by explaining the current paradigm:

    "the relation of SBP (systolic blood pressure) to risk of death is continuous, graded and strong…" The formulation of this "lower is better" principle … forms the foundation for the current guidelines for hypertension.

    They point out that Ancel Keys himself concluded that "the relationship of overall and coronary heart disease death to blood pressure was unjustified."

    They went on to examine the detailed Framingham study data.

    Shockingly, we have found that the Framingham data in no way supported the current paradigm to which they gave birth.

    Systolic blood pressure increases at a constant rate with age. In sharp contrast to the current paradigm, we find that this increase does not incur additional risk. More specifically, all persons in the lower 70% of pressures for their age and sex have equivalent risk.

    Dr. Kendrick, in his recent book Doctoring Data, points out

    Has this paper ever been refuted? No, it has not. Sadly, it was given the worst possible treatment that can be dished out by the medical establishment. It was completely ignored.

    The benefits of blood-pressure lowering, whatever the level, became so widely accepted years ago that it has not been possible, ethically,[viii] to do a placebo-controlled study for a long time. I am not aware of any placebo-controlled trials that have been done in the last twenty years, or so.

    A bit of sanity

    In the same year (2017) that the AHA and cardiologists were lowering the target blood pressure for everyone from 140 to 120, a group representing family physicians published an official guideline for treating hypertension in adults age 60 and over. Their method was rigorous, taking into account all available studies. Here is their core recommendation:

    ACP and AAFP recommend that clinicians initiate treatment in adults aged 60 years or older with systolic blood pressure persistently at or above 150 mm Hg to achieve a target systolic blood pressure of less than 150 mm Hg to reduce the risk for mortality, stroke, and cardiac events. (Grade: strong recommendation, high-quality evidence).

    What a breath of fresh air! And completely in line with this data-driven review, which showed that of the large number of people taking anti-hypertensive drugs, just 1 in 125 were helped (prevented death), while 1 in 10 were harmed by side effects. Also in line with this careful study of people with elevated blood pressure in the range of 140-160; the study showed that none were helped by drugs, while 1 in 12 were harmed.

    BTW, if you're not familiar with the concept of NNT (number needed to treat), you should learn about it. It's crucial.

    Hypertension Drugs can hurt you

    Doctors dish out hypertension drugs like candy. It's often the case that two different kinds of drugs will be required to get your blood pressure to "safe" levels. For reasons that don't seem to be studied, it's rare indeed for doctors to mention side effects; yet in repeated studies, the generally data-suppressing researchers can't help but mention that the side effects are so bad that roughly 10% of study participants drop out of the study! (See above for references.)

    There are good lists of side effects at Drugs.com. Here's some information about Amlodipine:

    Side effects requiring immediate medical attention

    Along with its needed effects, amlodipine may cause some unwanted effects. Although not all of these side effects may occur, if they do occur they may need medical attention.

    Check with your doctor immediately if any of the following side effects occur while taking amlodipine:

    More common

    • Swelling of the ankles or feet

    Less common

    • Chest tightness
    • difficult or labored breathing
    • dizziness
    • fast, irregular, pounding, or racing heartbeat or pulse
    • feeling of warmth
    • redness of the face, neck, arms, and occasionally, upper chest

    Rare

    • Black, tarry stools
    • bleeding gums
    • blistering, peeling, or loosening of the skin
    • blood in the urine or stools
    • blurred vision
    • burning, crawling, itching, numbness, prickling, "pins and needles", or tingling feelings
    • chest pain or discomfort
    • chills
    • cold and clammy skin
    • cold sweats
    • confusion
    • cough
    • dark yellow urine
    • diarrhea
    • dilated neck veins
    • dizziness or lightheadedness when getting up from a lying or sitting position
    • extra heartbeats
    • fainting
    • fever
    • itching of the skin
    • joint or muscle pain
    • large, hive-like swelling on the face, eyelids, lips, tongue, throat, hands, legs, feet, or sex organs
    • numbness and tingling of the face, fingers, or toes
    • pain in the arms, legs, or lower back, especially pain in the calves or heels upon exertion
    • painful or difficult urination
    • pale, bluish-colored, or cold hands or feet
    • pinpoint red or purple spots on the skin
    • red, irritated eyes
    • redness of the face, neck, arms, and occasionally, upper chest
    • redness, soreness or itching skin
    • shakiness in the legs, arms, hands, or feet
    • slow or irregular heartbeat
    • sore throat
    • sores, ulcers, or white spots on the lips or in the mouth
    • sores, welting, or blisters
    • sudden sweating
    • sweating
    • swelling of the face, fingers, feet, or lower legs
    • swollen glands
    • trembling or shaking of the hands or feet
    • unsteadiness or awkwardness
    • unusual bleeding or bruising
    • unusual tiredness or weakness
    • weak or absent pulses in the legs
    • weakness in the arms, hands, legs, or feet
    • weight gain
    • yellow eyes or skin

    Then there are the ones judged to be less severe:

    Side effects not requiring immediate medical attention

    Some side effects of amlodipine may occur that usually do not need medical attention. These side effects may go away during treatment as your body adjusts to the medicine. Also, your health care professional may be able to tell you about ways to prevent or reduce some of these side effects.

    Check with your health care professional if any of the following side effects continue or are bothersome or if you have any questions about them:

    Less common

    • Acid or sour stomach
    • belching
    • feeling of warmth
    • heartburn
    • indigestion
    • lack or loss of strength
    • muscle cramps
    • redness of the face, neck, arms, and occasionally, upper chest
    • sleepiness or unusual drowsiness
    • stomach discomfort, upset, or pain

    Those are the issues with just one of the many hypertension drugs, one of the most widely prescribed!

    Conclusion

    Blood pressure varies greatly, reflecting the human body's amazing self-regulation systems. In the vast majority of cases, blood pressure goes up with age. Lowering it by drugs does more harm than good. Except perhaps in extreme cases, high blood pressure does not cause disease. When pressure is extremely high, a search for the cause should be made. The ongoing focus on hypertension as a disease reflects nothing but the stubborn refusal of the medical establishment to admit that they were wrong, and of the pharma companies to give up a lucrative market.

  • Flowcharts and Workflow in Software

    The concept of workflow has been around in software from the beginning. It is the core of a great deal of what software does, including business process automation. Workflow is implicitly implemented in most bodies of software, usually in a hard-coded, ad-hoc way that makes it laborious and error-prone to implement, understand, modify and optimize. Expressing it instead as editable declarative metadata that is executed by a small body of generic, application-independent code yields a huge increase in productivity and responsiveness. It also enables painless integration of ML and AI. There are organizations that have done exactly this; they benefit from massive competitive advantage as a result.

    Let’s start with some basics about flowcharts and workflow.

    Flowcharts

    Flowcharts pre-date computers. The concept is simple enough, as shown by this example from Wikipedia:

    Fix lamp

    The very earliest computer programs were designed using flowcharts, illustrated for example in a document written by John von Neumann in 1947. The symbols and methods became standardized. By the 1960’s software designers used templates like this from IBM

    Flowchart

    to produce clean flowcharts in standardized ways.

    Flowcharts and Workflow

    Flowcharts as a way to express workflows have been around for at least a century. Workflows are all about repeatable processes, for example in a manufacturing plant. People would systematize a process in terms of workflow in order to understand and analyze it. They would create variations to test to see if the process could be improved. The starting motivation would often be consistency and quality. Then it would often shift to process optimization – reducing the time and cost and improving the quality of the results. Some of the early work in Operations Research was done to optimize processes.

    Workflow is a natural way to express and help understand nearly any repeatable process, from manufacturing products to taking and delivering orders in a restaurant. What else is a repeatable process? A computer program is by definition a repeatable process. Bingo! Writing the program may take considerable time and effort, just like designing and building a manufacturing plant. But once written, a computer program is a repeatable process. That’s why it made sense for the very earliest computer people like John von Neumann to create flowcharts to define the process they wanted the computer to perform repeatedly.

    What’s in a Flowchart?

    There are different representations, but the basic work steps are common sense:

    • Get data from somewhere (another program, storage, a user)
    • Do something to the data
    • Test the data, and branch to different steps depending on the results of the test
    • Put the data somewhere (another program, storage, a user)
    • Lots of these work steps are connected in a flow of control

    This sounds like a regular computer software program, right? It is! When charted at the right level of detail, the translation from a flowchart to a body of code is largely mechanical. But humans perform this largely mechanical task, and get all wrapped up in the fine details of writing the code – just like pre-industrial craftsmen did.

    Hey, that's not just a metaphor — it is literally true! The vast, vast majority of software programming is done in a way that appears from the outside to be highly structured, but in fact is designing and crafting yet another fine wood/upholstery chair (each one unique!) or, for advanced programmers, goblets and plates made out of silver for rich customers.
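    To make the "largely mechanical" claim concrete, here is a minimal sketch (in Python, with invented step names; this is an illustration of the approach, not any particular product) of the same flowchart elements expressed as declarative metadata, executed by a few lines of generic, application-independent code. Changing the process means editing the chart data, not rewriting the program:

        # Hypothetical sketch: each node of a flowchart becomes a row of metadata.
        # The generic engine below knows nothing about orders; it just follows the chart.

        WORKFLOW = {
            "start":       {"action": "get_order",       "next": "check_stock"},
            "check_stock": {"action": "is_in_stock",     "yes": "ship", "no": "backorder"},
            "ship":        {"action": "ship_order",      "next": None},
            "backorder":   {"action": "notify_customer", "next": None},
        }

        def run(workflow, actions, state):
            """Generic engine: executes any workflow expressed in the metadata format above."""
            step = "start"
            while step is not None:
                node = workflow[step]
                result = actions[node["action"]](state)
                if "next" in node:
                    step = node["next"]
                else:
                    step = node["yes"] if result else node["no"]
            return state

        # Application-specific actions are small, isolated functions.
        actions = {
            "get_order":       lambda s: s.update(order="widget") or True,
            "is_in_stock":     lambda s: s["order"] == "widget",
            "ship_order":      lambda s: s.update(status="shipped") or True,
            "notify_customer": lambda s: s.update(status="backordered") or True,
        }

        print(run(WORKFLOW, actions, {}))   # {'order': 'widget', 'status': 'shipped'}

    The flowchart boxes (get data, do something, test and branch, put data) become rows of metadata, and the flow of control lives in the data rather than being hand-crafted into yet another unique body of code.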

    Workflow

    In the software world, workflow in general has been a subject of varying interest from the beginning. It can be applied to any level of detail. It has led to all sorts of names and even what amount to fashion trends. There is business process management. Business process engineering. And re-engineering. And business process automation. A specialized version of workflow is simulation software, which led early programmers to invent what came to be called "object-oriented programming." To see more about this on-going disaster that proved to be no better for simulating systems than it has been for software in general, see this.

    When document image processing became practical in the 1980’s, the related term workflow emerged to describe the business process an organization used to handle a document from its arrival through various departments and finally to its resolution and archiving. The company that popularized this kind of software, FileNet, was bought by IBM. I personally wrote the workflow software for a small vendor of document image processing software at that time.

    Workflow in practice

    There has been lots of noise about what amounts to workflow over the years, with books, movements and trends. A management professor in the 1980's talked about how business processes could be automated and improved using business process re-engineering. He said that each process should be re-thought from scratch — otherwise you would just be "paving the cow paths," instead of creating an optimal process. As usual, lots of talk and little action. Here's the story of my personal involvement in such a project in which the people in charge insisted they were doing great things, while in fact they were spending lots of money helping the cows move a bit faster than they had been.

    The Potential of Workflow

    The potential of workflow can be understood in terms of maps and driving from one place to another. I've explained the general idea here.

    Most software design starts with the equivalent of figuring out a map that shows where you are and where you want to get to. Then the craftsmanship begins. You end up with a hard-coded set of voluminous, low-level "directions" for driving two blocks, getting in the left lane, turning left, etc.

    When the hard-coded directions fail to work well and the complaints are loud enough, the code is "enhanced," i.e., made even more complex, voluminous and hard to figure out by adding conditions and alternative directions.

    Making the leap to an online, real time navigation system is way beyond the vast majority of software organizations. You know, one that takes account of changes, construction, feedback from other drivers on similar routes about congestion, whether your vehicle has a fee payment device installed, whether your vehicle is a truck, etc. Enhancements are regularly made to the metadata map and the ML/AI direction algorithms, which are independent of map details.

    When software stays at the level of craftsmanship, you're looking at a nightmare of spaghetti code. Your cow paths aren't just paved — they have foundations with top-grade rebar, concrete and curbs crafted of marble.

    Conclusion

    Metadata-driven workflow is the next step beyond schema enhancement for building automated systems to perform almost any job. It's a proven approach that many organizations have deployed — literally for decades. But all the leaders of computing, including Computer Science departments at leading universities, remain obsessed with subjects that are irrelevant to the realities of building software that works; instead they stay focused on the wonders of craftsman-level low-level software languages. It's a self-contained universe where prestige is clearly defined and has nothing to do with the eternal truths of how optimal software is built.

     

  • The Experts are Clear: Control your Blood Pressure

    Most of us have heard about high blood pressure. It's one of those conditions that afflict a large number of people. Nearly half of American adults are said by the AHA to have it! You may be able to control it by maintaining a healthy lifestyle, things like avoiding saturated fats, salt and alcohol, keeping your weight down and getting exercise. Fortunately, there are drugs that can help keep it under control.

    Why should anyone care? Strokes! Heart attacks! Premature death!

    Is this one of those things that floats in the air but isn't real? Let's take a look at what people who know what they're doing say about it.

    The American Heart Association (AHA)

    Blood pressure is all about the heart, right? So let's start with the medical association that's all about keeping our hearts healthy. They make it very clear why we should care:

    Health threats diagram

    Those folks at the AHA may be doctors who can't write legible prescriptions, but they were sure able to rope someone into producing a scary diagram! OK, you've got my attention. Here are the facts on blood pressure:

    HBP

    What can I do?? What if I maintain a good weight, eat a heart-healthy diet, cut back on salt and the rest and my BP is still scary? There are medications.

    How long will you have to take your medication? Perhaps for the rest of your life.

    OK, then. If that's what has to be done to avoid the things in the scary diagram above, then so be it.

    More American Heart Association (AHA)

    I decided to dig a bit deeper. When did they come to this conclusion?

    Here is a chart from the AHA as it was in May 2010:

    Screenshot 2022-04-15 150201

    Compare this to the same chart on the same site in April 2022, shown earlier.

    It appears some things have changed! Basically they've decided to crank up the alarm level on most of the numbers. You can observe the differences yourself; Stage 2 hypertension is a good example. In 2010 you had it if your numbers were more than 160/100, while now it's 140/90. In 2010, if your pressure was below 140, you didn't "have" hypertension — just "prehypertension." Now, stage 1 hypertension starts at 130.

    I did some research. The change happened in 2017. Here is the AHA's news release on the subject:

    High blood pressure should be treated earlier with lifestyle changes and in some patients with medication – at 130/80 mm Hg rather than 140/90 – according to the first comprehensive new high blood pressure guidelines in more than a decade. The guidelines are being published by the American Heart Association (AHA) and the American College of Cardiology (ACC) for detection, prevention, management and treatment of high blood pressure.

    The guidelines were presented today at the Association’s 2017 Scientific Sessions conference in Anaheim, the premier global cardiovascular science meeting for the exchange of the latest advances in cardiovascular science for researchers and clinicians.

    Rather than 1 in 3 U.S. adults having high blood pressure (32 percent) with the previous definition, the new guidelines will result in nearly half of the U.S. adult population (46 percent) having high blood pressure, or hypertension.

    A whole lot more people have high blood pressure! I sure hope they did their homework on this. Reading on we find:

    The new guidelines were developed by the American Heart Association, American College of Cardiology and nine other health professional organizations. They were written by a panel of 21 scientists and health experts who reviewed more than 900 published studies. The guidelines underwent a careful systematic review and approval process.

    OK, it looks like a whole team of experts was in on this one. 

    Harvard Medical School

    Better check with the people who train the best doctors. Let's make sure this is really up to date.

    Harvard

    Here's what they have to say:

    Arteries that are tensed, constricted, or rigid offer more resistance. This shows up as higher blood pressure, and it makes the heart work harder. This extra work can weaken the heart muscle over time. It can damage other organs, like the kidneys and the eyes. And the relentless pounding of blood against the walls of arteries causes them to become hard and narrow, potentially setting the stage for a heart attack or stroke.

    Most people with high blood pressure (known medically as hypertension) don't know they have it. Hypertension has no symptoms or warning signs. Yet it can be so dangerous to your health and well-being that it has earned the nickname "the silent killer." When high blood pressure is accompanied by high cholesterol and blood sugar levels, the damage to the arteries, kidneys, and heart accelerates exponentially.

    Sounds scary. Can I do anything about it?

    High blood pressure is preventable. Daily exercise, following a healthy diet, limiting your intake of alcohol and salt, reducing stress, and not smoking are keys to keeping blood pressure under control. When it creeps into the unhealthy range, lifestyle changes and medications can bring it down.

    They agree. There are pills I can take.

    Department of Health and Human Services (HHS)

    Let's make sure the government is on board. After some looking, it was very clear that HHS is in favor of keeping blood pressure under control. Finding out exactly what they think and what they're doing proved to be a bit of a challenge. Here are some of the things I learned our government is doing to help us:

    • They have published standards and require reports in which health providers specify the frequency of visits and the other things they are doing with their patient population to control blood pressure.
    • They sponsored the Million Hearts Risk Check Challenge, asking developers to create a new consumer app that informs consumers of their general heart risk, motivates them to obtain a more accurate risk assessment by entering their blood pressure and cholesterol values, and directs them to nearby community pharmacies (and other locations) offering affordable and convenient blood pressure and cholesterol screenings.
    • The Surgeon General issued a Call to Action to Control Hypertension. It's a major document issued in 2020. Sadly, the link to the document was broken, so I wasn't able to read this important initiative. But here's a helpful diagram about it:

    Hhs

    The fact that the document was issued is impressive. The section introducing it has a stirring ending: "We must act to preserve the nation’s cardiovascular health now and into the future. Together, we’ve got this!"

    Conclusion

    Governments and the big authorities in the field are united in the effort to keep us all more healthy by encouraging us all to address the "silent killer" of hypertension. They want us to address it first of all by lifestyle changes, but if that fails, medication is available to keep things under control. Even if we have to take a couple pills a day for the rest of our lives, that's a small price to pay for having a longer, healthier life.

     
    This is an issue that is similar in many ways to the goal of maintaining a heart-healthy diet that minimizes saturated fat in meat and dairy products, and to combating LDL, the "bad" cholesterol in our blood; they all contribute in their own ways to keeping us healthy.
     
    We should all have our blood pressure checked and do what we have to do to keep it under control. If, that is, we want to live a long, heart-healthy life. Naturally there are contrasting views on this seemingly settled topic, for example here.
     
  • Cartoons and Video games evolved into Bitcoin and NFT’s

    Bitcoin and other cryptocurrencies are in the news. NFT’s (non-fungible tokens) have exploded onto the scene, with people spending large amounts of money to acquire unique rights to digital images. The explosion of invention and innovation is amazing, isn’t it?

    Except that it's all just minor variations of things that were created decades ago, grew into huge markets with the participation of a good part of the world's population, and continue to grow today. Invention? Creativity? How about minor variations of proven ideas, giving them a new name and slightly different context, and getting super-rich?

    From Drawing to Cartoons to Video Games

    Drawing, sculpting and otherwise creating artificial images of the reality we experience has a long history.

    For example, here’s a painting of a bovine from a cave created by early humans over 40,000 years ago:

    Lubang_Jeriji_Saléh_cave_painting_of_Bull

    Drawings that suggest reality but are purposely different from real things are called cartoons, and go back hundreds of years, becoming more widespread in the 1800’s in print media.

    Then there was a breakthrough: animation. Leveraging early movie technology, artists worked enormously hard to create a fast-changing sequence of images to create the illusion of motion. Along with sound, you could now go to a theater and watch and hear a whole cartoon movie, filled with characters and actions that could never happen in real life. Characters like Mickey Mouse and Bugs Bunny became part of modern culture.

    The next big step took place after computers were invented and got video screens. Of course the computers transformed the process of creating animation. But animation was always like watching a movie: the human could only watch and listen. With computers, the possibility first arose for actions of the person to directly and immediately change what happened on the screen. The video game was born.

    The video game has gone through an extensive evolution from the primitive, simple Spacewar! to immersive MMORPG's (massively multiplayer online role-playing games), enabling players to interact with each other in evolving shared animated worlds, often with fighting but also including other activities.

    World of Warcraft (WoW) wasn't the first, but became the most popular of the MMORPG's.

    Similar to other MMORPGs, the game allows players to create a character avatar and explore an open game world in third- or first-person view, exploring the landscape, fighting various monsters, completing quests, and interacting with non-player characters (NPCs) or other players. The game encourages players to work together to complete quests, enter dungeons and engage in player versus player (PvP) combat, however the game can also be played solo without interacting with others. The game primarily focuses on character progression, in which players earn experience points to level up their character to make them more powerful and buy and sell items using in-game currency to acquire better equipment, among other game systems.

    World of Warcraft was a major critical and commercial success upon its original release in 2004 and quickly became the most popular MMORPG of all time, reaching a peak of 12 million subscribers in 2010. The game had over one hundred million registered accounts by 2014 and by 2017 had grossed over $9.23 billion in revenue, making it one of the highest-grossing video game franchises of all time. The game has been cited by gaming journalists as the greatest MMORPG of all time and one of the greatest video games of all time.

    The industry creating hardware and software for these artificial worlds has grown to be huge. In 2020 video gaming generated over $179 billion in global revenue, having surpassed the film industry years before.

    Video games aren’t just for kids. There are an estimated 3.24 billion gamers across the globe.

    In the US the numbers are huge. “Three out of every four, or 244 million, people in the U.S. play video games, an increase of 32 million people since 2018." Gamers spend lots of time on their games: “… gamers average 14 hours per week playing video games.”

    Game World and Virtual Economies

    Huge numbers of people go to a screen or put on a headset and "enter" the world of a video game, where they often spend hours at a time. While in that world, they can move from place to place as an observer, or as the controller of their personal avatar. They can interact with others, as shown by this scene from the virtual world of Second Life in 2003.

    Second_Life_11th_Birthday_Live_Drax_Files_Radio_Hour

    Long before Bitcoin was created, video games had virtual economies with digital currencies.

    The currency used in a game world can be called different things. For example in World of Warcraft it's called — big shock coming up here — Gold. Gold can be earned by players accomplishing things in the game world, and can be spent for skills or in-game objects. Players can buy and sell items among themselves using such currencies. Many games enable players to buy in-game currencies using real money. In some cases, in-game virtual "land" is also for sale.

    Long before Bitcoin, markets arose to enable in-game currencies to be traded (exchanged) for real-world currencies. It is now a multi-billion dollar industry. "In 2001, EverQuest players Brock Pierce and Alan Debonneville founded Internet Gaming Entertainment Ltd (IGE), a company that offered not only the virtual commodities in exchange for real money but also provided professional customer service." The company was the largest such on-line exchange and accounted for hundreds of millions of dollars of transactions.

    Video Games, Bitcoin and NFT's

    The first Bitcoin was sent in 2009. It wasn't much used or valued until 2013. Ethereum first went live in 2014. By this time there were already MMORPG's with many hundreds of millions of players earning, spending and exchanging digital currencies involving virtual objects in their game worlds.

    Let's see how the things used by literally billions of gamers compare to Bitcoin (and other crypto-currencies) and NFT's.

    • Games have digital currencies with no real-world value.
      • Sounds like Bitcoin and other crypto-currencies
    • In-game virtual objects can be bought and sold using in-game currencies
      • Sounds like buying crypto-world NFT's with Bitcoin
    • New units of the digital currency are created by the game software
      • New crypto is created by Bitcoin mining software
    • Game currencies can be used and exchanged among gamers
      • Same with Bitcoin
    • Game currencies can be exchanged for and bought with real-world money
      • Same with Bitcoin
    • There are exchanges outside the game that enable buying/selling
      • Same with Bitcoin
    • The exchange price can vary greatly
      • Same with Bitcoin
    • Teams create new games with currencies and virtual objects
      • Teams create new crypto-currencies and NFT's

    Still think there's no relationship between gaming and crypto? How about, as mentioned above, the fact that Brock Pierce and a partner founded the game currency exchange IGE in 2001, and the same Mr. Pierce was active in crypto-currency by 2013 and became a "Bitcoin billionaire" by 2018.

    Of course, the new worlds of crypto and NFT's are different in some important ways from the gaming worlds. Games, along with their objects and currencies, are created and managed by the game company. While there's more control than is generally recognized, crypto-currencies have a large degree of self-management with their built-in miners. Similarly, NFT's are created independently of any central game company.

    Conclusion

    First, Bitcoin came seemingly out of nowhere in 2009. A few years later, variations of Bitcoin appeared on the market. An astounding explosion of crypto followed, along with digital objects that "live" in the crypto world.

    Like many other "brand new" things, the worlds of crypto and NFT's have remarkably close relations to the world of gaming, from which they appear to have evolved. Compared to the gaming world, the number of people invested in crypto is truly tiny, hundredths of a percent. But crypto's price inflation, and the amount of real-world currency that has been converted to it, dwarf the amounts in the gaming world.

    As with many other tech trends, the history and evolution of its elements reward study.

    Note: this was originally published on Forbes.

  • How to Improve Software Productivity and Quality: Schema Enhancements

    Most efforts to improve programmer productivity and software quality fail to generate lasting gains. New languages, new project management methods and the rest have been decades-long disappointments – not that anyone admits failure, of course.

    The general approach of software abstraction, i.e., moving program definition from imperative code to declarative metadata, has decades of success to prove its viability. It’s a peculiar fact of software history and Computer Science that the approach is not mainstream. So much the more competitive advantage for hungry teams that want to fight the entrenched software armies and win!

    The first step – and it’s a big one! – on the journey to building better software more quickly is to migrate application functionality from lines of code to attributes in central schema (data) definitions.

    Data Definitions and Schemas

    Every software language has two kinds of statements: statements that define and name data, and statements that act on that data by getting, processing and storing it. Definitions are like a map of what exists. Action statements are like sets of directions for going between places on the map. The map/directions metaphor is key here.

    In practice, programmers tend to first create the data definitions and then proceed to spend the vast majority of their time and effort creating and evolving the action statements. If you look at most programs, the vast majority of the lines are “action” lines.

    The action lines are endlessly complex, needing books to describe all the kinds of statements, the grammar, the available libraries and frameworks, etc. The data definitions are extremely simple. They first and foremost name a piece of data, and then (usually) give its type, which is one of a small selection of things like integer, character, and floating point (a number that has decimal digits). There are often some grouping and array options that allow you to put data items into a block (like address with street, town and state) and sets (like an array for days in a year).
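
    To make the map/directions distinction concrete, here's a minimal sketch in Python; the class and field names are invented for illustration. The definition lines are the map; the function is a set of directions.

    ```python
    from dataclasses import dataclass

    # Data definitions: the "map". Each line just names a piece of data and gives its
    # type, with a grouping (the class) collecting related items like an address block.
    @dataclass
    class Customer:
        name: str
        street: str
        town: str
        state: str
        balance: float

    # Action statements: the "directions". In most programs, lines like these vastly
    # outnumber the definition lines above.
    def apply_payment(customer: Customer, amount: float) -> Customer:
        customer.balance -= amount
        return customer
    ```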

    One of the peculiar elements of software language evolution is whether the data used in a program is defined in a single place or multiple places. You would think – correctly! – that the sensible choice is a single definition. That was the case for the early batch-oriented languages like COBOL, which has a shared copybook library of data definitions. A single definition was a key aspect of the 4-GL languages that fueled their high productivity.

    Then the DBMS grew into a standard part of the software toolkit; each DBMS has its own set of data definitions, called a “schema.” Schemas enable each piece of data to have a name and a data type and to be part of a grouping (a table). That’s pretty much it! Then software began to be developed in layers, like UI, server and database, each with its own data/schema definitions and language. Next came services and distributed applications, each with its own data definitions and often written in different languages. Each of these pieces needs to “talk” with the others, passing and getting back data, with further definitions for the interfaces.

    The result of all this was an explosion of data definitions, with what amounts to the same data being defined multiple times in multiple languages and locations in a program.
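
    As an illustration (the table and field names here are invented), the same piece of data routinely ends up defined separately in the database, the server code and the UI:

    ```python
    # One piece of data -- a customer's email address -- defined three separate times.

    # 1. In the database schema (SQL DDL, held here as a string):
    CREATE_CUSTOMER_TABLE = """
    CREATE TABLE customer (
        email VARCHAR(254) NOT NULL
    );
    """

    # 2. In the server-side code:
    class Customer:
        def __init__(self, email: str):
            self.email = email

    # 3. In the UI layer, for the form that edits it:
    CUSTOMER_FORM = {
        "email": {"type": "text", "max_length": 254, "required": True},
    }

    # Change the field in one place and forget another, and you have a bug.
    ```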

    In terms of maps and directions, this is very much like having many different collections of directions, each of which has exactly and only the parts of the map those directions traverse. Insane!

    The BIG First Step towards Productivity and Quality

    The first big step towards sanity, with the nice side effect of productivity and quality, is to centralize all of a program’s data definitions in a single place. Eliminate the redundancy!

    Yes, it may take a bit of work. The central schema would be stored in a multi-part file in a standardized format, with selectors and generators for each program that shares the schema. Each sub-program (like a UI or service) would generally use only some of the program’s data, and would name the part it uses in a header. A translator/generator would then grab the relevant subset of definitions and generate them in the format required by the language of the program – generally not a hard task, and one that in the future should be provided as a widely-available toolset.
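
    Here's a minimal sketch of what such a generator might look like, assuming an invented central-schema format; a real toolset would read a standardized multi-part file and support many target languages.

    ```python
    # Central schema: every field defined exactly once, with a name and a type.
    CENTRAL_SCHEMA = {
        "customer": {
            "email":    {"type": "string", "max_length": 254},
            "zip_code": {"type": "string", "max_length": 5},
        }
    }

    SQL_TYPES = {"string": "VARCHAR"}

    def generate_sql(table: str) -> str:
        """Emit the database DDL for one table from the central schema."""
        cols = ",\n    ".join(
            f"{name} {SQL_TYPES[field['type']]}({field['max_length']})"
            for name, field in CENTRAL_SCHEMA[table].items()
        )
        return f"CREATE TABLE {table} (\n    {cols}\n);"

    def generate_dataclass(table: str) -> str:
        """Emit server-side Python source for the same table."""
        fields = "\n    ".join(f"{name}: str" for name in CENTRAL_SCHEMA[table])
        return f"@dataclass\nclass {table.capitalize()}:\n    {fields}"

    print(generate_sql("customer"))
    print(generate_dataclass("customer"))
    ```

    Re-run the generators whenever the central schema changes, and every layer's definitions come from the same single source.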

    Why bother? Make your change in ONE place, and with no further work it’s deployed in ALL relevant places. Quality (no errors, no missing a place to change) and productivity (less work). You just have to bend your head around the "radical" thought that data can be defined outside of a program.

    If you're scratching your head and thinking that this approach doesn't fit into the object-oriented paradigm in which data definitions are an integral part of the code that works with them, i.e. a Class, you're right. Only by breaking this death-grip can we eliminate the horrible cancer of redundant data definitions that make bodies of O-O code so hard to write and change. That is the single biggest reason why O-O is bad — but there are more!

    The BIG Next Step towards Productivity and Quality

    Depending on your situation, this can be your first step.

    Data definitions, as you may know, are pretty sparse. There is a huge amount of information we know about data that we normally express in various languages, often in many places. When we put a field on a screen, we may:

    • Set permissions to make it not visible, read-only or editable.
    • If the field can be entered, it may be required or optional
    • Display a label for the field
    • Control the size and format of the field to handle things like selecting from a list of choices or entering a date
    • Check the input to make sure it’s valid, and display an error message if it isn’t
    • Fields may be grouped for display and be given a label, like an address

    Here's the core move: each one of the above bullet items — and more! — should be defined as attributes of the data/schema definition. In other words, these things shouldn't be arguments of functions or otherwise part of procedural code. They should be attributes of the data definition, just as its Type is.

    This is just in the UI layer. Why not take what’s defined there and apply it as required at the server and database layers – surely you want the same error checking there as well, right?
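
    Here's a sketch of what that looks like. The attribute names (label, required, pattern, error) are invented for illustration, not a standard; the point is that a field definition carries its UI and validation behavior as metadata, and one generic routine can apply it in the UI layer and again on the server.

    ```python
    import re

    # One field's definition, with UI and validation behavior expressed as attributes
    # of the data definition rather than as procedural code. All names are illustrative.
    EMAIL_FIELD = {
        "type": "string",
        "label": "Email address",
        "visible": True,
        "editable": True,
        "required": True,
        "pattern": r"^[^@\s]+@[^@\s]+\.[^@\s]+$",
        "error": "Please enter a valid email address.",
    }

    def validate(field: dict, value: str):
        """Generic check driven entirely by the field's metadata.
        The same function can run in the UI and again at the server layer."""
        if field["required"] and not value:
            return field["error"]
        if value and not re.match(field["pattern"], value):
            return field["error"]
        return None

    print(validate(EMAIL_FIELD, "not-an-email"))         # prints the error message
    print(validate(EMAIL_FIELD, "someone@example.com"))  # prints None
    ```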

    Another GIANT step forward

    Now we get to some fun stuff. You know all that rhetoric about “inheritance” you hear about in the object-oriented world? The stuff that sounds good but never much pans out? In schemas and data definitions, inheritance is simple and … it’s effective! It’s been implemented for a long time in the DBMS concept of domains, but it makes sense to greatly extend it and make it multi-level and multi-parent.

    You’ve gone to the trouble of defining address as a multi-field group. There may be variations that have lots in common, like billing and shipping addresses. Why define each kind of address from scratch? Why not define the common parts once and then say what’s unique about shipping and billing?

    Once you’re in the world of inheritance, you start getting some killer quality and productivity. Suppose it’s decades ago and the USPS has decided to add another 4 digits to the zip code. Bummer. If you’re in the enhanced schema world, you just go into the master definition, make the change, and voila! Every use of zip code is now updated.
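
    Here's a minimal sketch of that kind of schema inheritance, assuming an invented "inherits" convention: widen zip_code in the address domain and every address variant that descends from it picks up the change.

    ```python
    # Domains with single-parent inheritance; a real system could allow multiple
    # parents and multiple levels. All names and the format are invented.
    DOMAINS = {
        "address": {
            "street":   {"type": "string", "max_length": 60},
            "town":     {"type": "string", "max_length": 40},
            "zip_code": {"type": "string", "max_length": 5},   # change to 10 for ZIP+4
        },
        "shipping_address": {
            "inherits": "address",
            "delivery_notes": {"type": "string", "max_length": 200},
        },
        "billing_address": {
            "inherits": "address",
        },
    }

    def resolve(domain_name: str) -> dict:
        """Flatten a domain: pull in its parent's fields, then its own additions."""
        definition = dict(DOMAINS[domain_name])
        parent = definition.pop("inherits", None)
        fields = resolve(parent) if parent else {}
        fields.update(definition)
        return fields

    print(resolve("shipping_address")["zip_code"])  # inherited from the address domain
    ```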

    Schema updating with databases

    Every step you take down the road of centralized schema takes some work but delivers serious benefits. So let’s turn to database schema updates.

    Everyone who works with a database knows that updating the database schema is a process. Generally you try to make updates backwards compatible. It’s nearly always the case that the database schema change has to be applied to the test version of the database first. Then you update the programs that depend on the new or changed schema elements and test with the database. When it’s OK, you do the same to the production system, updating the production database first before releasing the code that uses it.

    Having a centralized schema that encompasses all programs and databases doesn’t change this process, but makes it easier – fewer steps with fewer mistakes. First you make the change in the centralized schema. Then it’s a matter of generating the data definitions first for the test systems (database and programs) and then for the production system. You may have made just a couple of changes to the centralized schema, but because of inheritance and all the data definitions that are generated, you might end up with dozens of changes across your overall system – UI pages, back-end services, API calls and definitions, and the database schema. Doing those dozens of changes by hand and making an omission or mistake on just one of them means a bug that has to be found and fixed; generating them from the central schema keeps them all consistent.
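
    As a sketch of how that generation step might look for the database layer (the formats are invented for illustration): diff the old and new definitions of a table as produced from the central schema and emit the ALTER statements, which you would apply to the test database first and to production later.

    ```python
    # Old and new versions of one table's definitions, as generated from the central schema.
    OLD = {"customer": {"email": "VARCHAR(254)", "zip_code": "VARCHAR(5)"}}
    NEW = {"customer": {"email": "VARCHAR(254)", "zip_code": "VARCHAR(10)",
                        "phone": "VARCHAR(20)"}}

    def migration(table: str, old: dict, new: dict) -> list:
        """Emit ALTER statements for columns that were added or changed."""
        statements = []
        for column, column_type in new[table].items():
            if column not in old[table]:
                statements.append(f"ALTER TABLE {table} ADD COLUMN {column} {column_type};")
            elif old[table][column] != column_type:
                statements.append(f"ALTER TABLE {table} ALTER COLUMN {column} TYPE {column_type};")
        return statements

    for statement in migration("customer", OLD, NEW):
        print(statement)
    ```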

    Conclusion

    I’ve only scratched the surface of a huge subject in this post. But in practice, it’s a hill you can climb. Each step yields benefits, and successive steps deliver increasingly large results in terms of productivity and quality. The overall picture should be clear: you are taking a wide variety of data definitions expressed in code in different languages and parts of a system and step by step, collapsing them into a small number of declarative, meta-data attributes of a centralized schema. A simple generator (compile-time or run-time) can turn the centralized information into what’s needed to make the system work.

    In doing this, you have removed a great deal of redundancy from your system. You’ve made it easier to change. Non-redundancy is rarely looked on as a key thing to strive for, but since the vast majority of what we do to software is change it, non-redundancy is the most important measure of goodness that software can have.

    What I've described here are just the first steps up the mountain. Near the mountain's top, most of a program's functionality is defined by metadata!

    FWIW, the concept I'm explaining here is an OLD one. It's been around and been implemented to varying extents in many successful production systems. It's the core of climbing the tree of abstraction. When and to the extent it's been implemented, the productivity and quality gains have in fact been achieved. Ever hear of the Rails framework in Ruby, which implements the DRY (Don't Repeat Yourself) concept? That's a limited version of the same idea. Apple's credit card runs on a system built on these principles today. This approach is practical and proven. But it's orthogonal to what's generally taught in Computer Science and practiced in mainstream organizations.

    This means that it's a super-power that software ninjas can use to program circles around the lumbering armies of mainstream software development organizations.
