Author: David B. Black

  • Adventures with Health Insurance Software: Email and Primary Care

    Giant organizations have trouble building effective software that works and gets the job done. I have gone into depth on this subject, giving examples of the problems. But there’s something about being a large organization that seems to prevent even being aware that there’s a problem, much less being able to fix it.

    I recently had occasion to dive into my health insurance company’s website, enticed by an email to do so. What I experienced was a travesty. If this company were run like a company should be run, heads should have rolled. It’s as bad as a trucking company having a large fraction of their trucks wander around getting lost, and another fraction driving off the road and crashing.

    Unfortunately, this story is not about an unfortunate bug or two that somehow snuck into otherwise fine software, which is what any self-respecting manager would start by trying to claim. This story is about software that is broken in concept and in execution – even when it “works,” it’s simply awful!

    What I’m saying here flies in the face of what nearly everyone says and appears to think – including all the managers at all the places that preside over this nightmare of dysfunction. You also don’t hear any lofty academics decrying the “crisis in software,” as they should. So I’m going to lay out the facts, point by point; this is NOT fake news.

    This is the first of three blog posts on this subject. This first one is pretty mild.

    I got an email from my insurance company. Here it is:

    [Screenshot: the email from Anthem]

    I have a new message – and it’s not sales or promotional! Nothing about what the message could be about. It must be too secret and confidential to put it in regular email. Maybe it’s something about my health? I’d better check. So I click.

    [Screenshot: the login page]

    Oh, yuck. I’ve got to log in.

    Now I have to decide how badly I want to read this non-sales email. They seem to have decided that giving me an intelligence test combined with an endurance test was the best way to determine whether I was worthy to read this non-promotional, possibly health-related message. I persisted. I dug out my user name and password for this site I rarely use, and logged in.

    Or rather, I attempted to log in. Here’s what I got after successfully entering my user name and password:

    [Screenshot: the extra verification prompt]

    My user name and password weren’t good enough! This is clearly an incredibly confidential message! Even though I was using a computer I use all the time, including when accessing Anthem. I picked email, and then got this screen:

    [Screenshot: the 6-digit code entry screen]

    I entered the 6 digit code.

    This is classic 2-factor authentication. The security “experts” at Anthem probably felt pretty good about how they increased the security at Anthem, particularly after their past embarrassments. But it’s all GARBAGE! Nothing but security Kabuki Theater! Think about it: I got to the login screen by clicking on an email that Anthem sent me!! It’s trivial to include in the email link’s URL information identifying the email. So when the request comes in … Anthem knows it’s coming from the email they sent! A simple check would tell them it is also coming from a computer associated with that email. By going through the send-email-enter-6-digit-code b.s., all they’re doing is wasting my time, because they already have proof that it’s my email.
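    The check described here can be sketched in a few lines: embed a signed token in the email link’s URL, then verify it when the click comes in. Everything below (the secret, the URL, the field names) is hypothetical, a minimal sketch of the idea rather than anything Anthem actually does:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # hypothetical; lives only on the server

def make_email_link(member_id: str, message_id: str) -> str:
    """Build an email link whose URL carries a signed token identifying the email."""
    issued = str(int(time.time()))
    payload = f"{member_id}.{message_id}.{issued}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/messages?token={payload}.{sig}"

def verify_token(token: str, max_age_seconds: int = 7 * 24 * 3600) -> bool:
    """A valid, unexpired signature proves the click came from the email we sent."""
    try:
        member_id, message_id, issued, sig = token.split(".")
    except ValueError:
        return False
    payload = f"{member_id}.{message_id}.{issued}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return time.time() - int(issued) <= max_age_seconds

link = make_email_link("member-123", "msg-456")
token = link.split("token=", 1)[1]
print(verify_token(token))  # True
```

    With a token like this in hand, the server already knows which member clicked which email; demanding a 6-digit code on top of it adds friction without adding proof.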

    Next, there’s the remarkable screen telling me how hard Anthem is working on my behalf:

    [Screenshot: the “working on it” interstitial]

    All this hard work will surely result in displaying the information that the email I clicked on long ago was enticing me to click for, right? Well, no.

    [Screenshot: the generic welcome page]

    A completely generic welcome page!

    This is a problem. A big one. You’re supposed to click to read an important message. In every system I know, a “click-me” email is a “deep link,” i.e., it doesn’t go to the home page of the site; it goes “deep” into the site, to the place the email wants me to see. You’ve experienced this. When Facebook or LinkedIn sends you an email about something, when you click, it always deep-links you to the place referenced by the email. My blankity-blank BANK does this. Even confidential document stores that need to be highly secure do the same – once you’ve verified yourself, you go right to the document. Makes sense.

    Except to Anthem. Anthem’s email link brings me to the generic welcome page of Anthem, exactly the same thing I’d see if I’d gone to the site directly.

    I can barely remember how I got here, it was such an annoyingly long time ago. Oh, yeah, the email – I’ve got an important message! Now, where might that be? I look at the screen. Why don’t you check it out too – do you see anything that says “messages?” Me neither. Clearly this page, the front splash page of the Anthem patient website, has received the best vetting that the skilled professionals at Anthem can muster. And the vetting somehow failed to notice that they were going to send me to a page looking for a “message” without those seven wonderful letters appearing anywhere on the page.

    Again, a combined test of intelligence and endurance. Let’s see if I can pass. Taking a closer look at that generic landing page, look at where I've put the big red arrow…

    [Screenshot: the landing page, with a big red arrow added by me]

    Aha!  I wonder if, by any remote chance, that red shape means messages (in the secret Anthem language), and I have 10 messages that have piled up? Let’s try clicking.

    [Screenshot: the message list]

    Score!

    The endurance test continues. Click again. Finally, the important message in question:

    [Screenshot: the important message]

    At this point, all I can say is OMG.

    1. I have a primary care doctor, Anthem. You know it because you pay insurance bills for that doctor covering suspiciously primary-care items like “wellness visit.”
    2. The primary care doctor you’ve selected for me is indeed in the same state as me. But “close?” Not even in the same county. Sorry. No chance.

    I’m so glad I endured the obstacle course and endurance test, making my way past the elaborate privacy protections to read this important message with spot-on recommendation, so cleverly refined with accurate GPS data. I can’t put into words what this has done for my admiration of the excellent insurance company that orchestrated this software ballet.

  • Using Advanced Software Techniques in Business: Fashion or Real Value?

    Using advanced software techniques can make a dramatic positive impact on business. It’s important for everyone to ensure that your software efforts aren’t stuck in outmoded, last-generation tools and techniques.

    Nearly everyone, including me, agrees with this simple statement. Nearly everyone also agrees on at least the top members of a list of reasonable candidates of “advanced software techniques” that are not “out-moded” or “last-generation.” That last statement is where the best technical people part ways with the crowd.

    No, I’m not talking about hard-to-understand weird-o’s babbling about esoterica in some corner. The best technical people understand and support using the best and most appropriate techniques for solving a given problem, regardless of the recency of the technique or its prevalence. Sadly, it is often the case that the most-talked-about hot trends in software should not be used in any business that actually wants to spend money wisely and get stuff done, quickly and well.

    This is a BIG subject. It’s important. It’s deep. And it’s extensive. So let’s start with a big, fat, juicy example, one that was hot, hot, HOT but is now fading away, so it’s possible to talk about it somewhat more rationally. Maybe. I hope.

    Big Data and Hadoop

    There is little doubt that Big Data is a huge trend in software, though talk of it under that name appears to be undergoing a typical slow fade. Here is a review I wrote more than 5 years ago of the Big Data fashion trend as it existed at that time. It was everywhere you looked! Magazine covers! Ads! Conferences! Books! If you weren't somehow doing Big Data you were nothing and nobody.

    I've been working with data my entire professional life. The data has been small, medium, large, big, huge, totally awesomely huge and even gi-normous. Since I've been faced with space and time constraints, I have long since settled on a fundamental concept of computing, simple but rarely done: Count the data! It sounds ridiculous, but it's almost a secret weapon, and appears to be rarely done. Here is some analysis of data sizes in the context of the big data trend. Here is a more detailed example of a big data set that Harvard bragged about. Hint: the data isn't very big.

    I shouldn't have to say this, but here it is: For anything that people say is "big data," the very first step should be to … count the data. Sounds simple, but apparently it's not. It's also not common to dig into the data a bit. I guess that's because when you actually count data that starts out looking big (counting it at all is itself rare), you usually find that most of the data is just not needed or not relevant. Which makes it not big anymore. Which means you don't need Hadoop!
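    As a sketch of what "count the data" means in practice (all names and numbers below are made up for illustration), one streaming pass tells you both how big the data really is and how much of it actually matters:

```python
import csv
import io

def count_and_filter(rows, is_relevant):
    """First step for any 'big data' claim: count the records, then count what's relevant."""
    total = relevant = 0
    for row in rows:
        total += 1
        if is_relevant(row):
            relevant += 1
    return total, relevant

# Toy stand-in for a dataset someone might call "big": 10,000 events,
# of which only the purchases matter for the analysis at hand.
raw = io.StringIO("user,action\n" + "\n".join(
    f"u{i},{'purchase' if i % 100 == 0 else 'page_view'}" for i in range(10_000)
))
total, relevant = count_and_filter(csv.DictReader(raw), lambda r: r["action"] == "purchase")
print(total, relevant)  # 10000 100
```

    When 99% of the records turn out to be irrelevant, the "big" data shrinks to something any single machine handles comfortably.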

    Hadoop

    I know I'm being silly here. Hey, we're talking Big Data! Surely we've got some somewhere. We've got to get in an expert and crunch away so we can get those virtuous, business-enhancing juices flowing through our company.

    In this situation, at least until recently, what that meant was that you dove into the next level of detail and found out that the go-to tool was Hadoop. Dig in some more, and it sounds great. It's scalable without limit. You build your Hadoop cluster, script up some calculations, tap into the ocean of data you've got somewhere, and hear about how Hadoop spins those computers up and down and crunches all the data, using the computers that are available, and even working around ones that fail without anyone having to respond to some old-style beeper or something. No wonder Hadoop is the go-to tool for Big Data!

    [Image: Yahoo 1]

    In the vast majority of situations, it's "decision made" time at this point. You get in your experts, they build their Hadoop cluster and away you go, climbing the Hadoop stairway to Big Data Heaven, with a glow of virtue surrounding everyone involved.

    Very few people seem to dive in and understand what Hadoop and its main programming paradigm MapReduce are all about. The Hadoop "experts" don't seem to know what the reasonable alternatives are, and when they might be applicable.

    Here's an example. In 2011, one of our large web companies had a huge problem caused by Google's move to a new search algorithm. The CTO grabbed a massive web log file, wrote some code to boil the terabytes (Big Data for sure!) of data down to the key data elements, and then loaded them into the 512GB of DRAM of his powerful laptop computer and ran some advanced machine learning against it. You can see the CTO doing the work here. A few days later he had figured out Google's algorithm, reflected it in the company's website family, and traffic increased back to nearly the pre-change norm. If he had taken the Hadoop path, he would have worked for months, spent huge amounts of money, and found that the cluster and Hadoop thing would have basically been irrelevant to the problem.
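    The boil-it-down-then-analyze approach can be sketched like this. The log format and field names below are hypothetical stand-ins, not the actual data the CTO worked with: one streaming pass keeps only the few fields that matter and aggregates them into a summary that fits easily in memory.

```python
from collections import Counter

def boil_down(log_lines):
    """One streaming pass over a huge log: keep only (referrer, landing_page) pairs."""
    counts = Counter()
    for line in log_lines:
        fields = line.rstrip("\n").split("\t")  # hypothetical tab-separated log format
        if len(fields) < 3:
            continue  # skip malformed lines rather than crash mid-run
        _, referrer, landing_page = fields[:3]
        counts[(referrer, landing_page)] += 1
    return counts

# Terabytes of raw log reduce to a summary small enough for in-memory analysis:
log = [
    "t1\tgoogle.com\t/products",
    "t2\tgoogle.com\t/products",
    "t3\tbing.com\t/home",
]
summary = boil_down(log)
print(summary[("google.com", "/products")])  # 2
```

    The point of the sketch: the expensive part is one sequential read of the raw file, after which everything interesting happens at memory speed on one machine.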

    Here are a couple things to consider:

    • Hadoop, by definition, spreads its computing over the many machines available to it in the cluster, using HDFS (the Hadoop file system) for reading and writing data.
      • It is literally thousands of times faster to get data from local memory than it is to get it from a disk-based file system. The fewer file reads and writes needed to perform a computation, the faster it will be. Hadoop doesn't care.
      • Using more computers in a cluster means that there will be more I/O than using fewer. Many important calculations can be performed on a single properly-configured machine!
    • MapReduce, the key processing engine of Hadoop, is one of those cool-sounding ideas whose job can be done perfectly well with normal code, which can do way more, vastly more efficiently.
    • Why would anyone consider such an insanely wasteful approach? Once you know the origins, it makes sense.
      • If you're a big search engine company, you have to have loads of servers, enough to hold all the data and handle all the search queries at peak traffic times.
      • As is typical in situations like this, loads of servers will be under-used a large fraction of the day. Why not write some code that sucks up these "free" cycles and puts them to work? Why not build a framework so you can just specify what you want done, without worrying about which resources on which machines are used? Who cares if it's inefficient? It gets stuff done with the computers I already have. Brilliant!
      • Now it makes sense that Hadoop started and grew at Yahoo, copying some ideas about a narrowly-applicable (MapReduce) system and framework built at Google.
      • Except that at Yahoo, they somehow decided to make the Hadoop machines dedicated! Last I heard, they were up to, get ready … 40,000 servers. Wow.

    [Image: Yahoo 2]

    • With such an investment in getting value out of Big Data, Yahoo must be booming, just sky-rocketing with all the juice that has come out of the investment. Not. Why would anyone want to use an expensive, strange tool that generated no value for its originator? One word: fashion.
    • Yes, there are some narrow situations in which Hadoop might be applicable. But in the vast majority of cases, you'll spend too much time getting way too many computers to do too little processing on not all that much data, and taking way too much time to get it done.
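    To make the MapReduce point above concrete: the canonical MapReduce demo is word count, and ordinary single-machine code does the same job with no cluster, no HDFS, and no framework. A minimal sketch:

```python
from collections import Counter

def word_count(docs):
    """The canonical MapReduce demo as plain in-memory code: map (split into words)
    and reduce (sum the counts) collapse into one loop over the documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

docs = ["big data big hype", "big clusters small results"]
print(word_count(docs)["big"])  # 3
```

    Unless the input genuinely exceeds what one well-configured machine can stream through, the framework adds coordination overhead without adding capability.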

    Conclusion

    There is no doubt — none! — that you should use advanced software techniques in your business, because it will give you a competitive edge over everyone else.

    The trouble is telling the difference between (1) value-adding advanced software techniques and (2) hype and software fashion. Even most software people have trouble telling the difference! In fact, software people who insist that there is a difference between value-adding software techniques and the latest thing that everyone is talking about run the serious risk of being marginalized, and categorized as being old farts who are unable or unwilling to do the work to learn the new methods.

    In sharp contrast to the general thinking, software is a pre-scientific, fashion-driven field that resists holding new ideas to reasonable standards of proof and evidence. This makes it tough for business executives to know what to do. There is only one approach that works: roll up your sleeves, put ego and pride to the side, and figure it out using evidence and common sense.

  • Here’s what we can learn from the shift to smart credit card terminals

    I’ve been involved in computer software for decades. Lots has changed over that time. One thing that hasn’t changed is the question people most like to ask me. It’s this: “What do you see that’s new and interesting?” It’s a perfectly reasonable question, though one for which I rarely have a ready-made answer.

    A question I never hear goes something like this: “What do you see that’s touted as the newest new thing, but is mostly old stuff, and was completely predictable?” Now that’s an interesting question. And the un-helpful but honest answer is “Practically everything that’s touted as a new thing is mostly old stuff, with a little bit of ‘obvious next step’ thrown in for variety.”

    Still, there are some unpredictable aspects of the fancy new things: it’s really hard to know WHEN the new thing will happen and WHO will make it happen.

    A case in point is … [I’m not sorry about the pun] … the new smart card terminal company Poynt. (Disclosure: my VC fund, Oak HC/FT, is an investor.) I can see eager marketing people at Poynt are raising their hands in the back at this … ahem … point, all anxious to point (groan…) out that Poynt is a pioneer in the market, arguably the inventor of the smart terminal, an amazing device that not only takes card payments, but also rings up items just like a POS terminal and hosts endless numbers of third-party apps. True! I happily concede the point. But I hasten to point out that there are robust competitors in the market, notably including Square and Clover.

    The smart terminal is a new thing, and the market is glad to have it, but it’s hardly a NEW new thing, or something where you’d knock your head and say “now who’d-a thought-a that!?”

    It’s natural for consumers of technology to look at the new devices and appreciate them for what they are. That’s like being a tourist, driving on a road through the countryside, appreciating the nice new views. That’s nice for the tourist, but are there patterns here, patterns that would enable an educated person to expect something like a smart terminal to appear, and to wonder when it would happen?

    Yes there are. The main pattern at work here is the rate of change of the underlying hardware. Today’s hardware is something like 1,000 times faster than the much larger, more expensive hardware of the turn of the last century, less than 20 years ago. That number may not seem like much, but think of this: the average human walking speed is about 3 mph. The speed of a commercial jet while flying is less than 600 mph, about 200 times faster. Now imagine a human evolving so quickly that the human could walk at the speed of a jet. The increase in computer speed is 5 times greater than that, in less than 20 years!
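    The arithmetic behind that comparison, spelled out using the rough round numbers from the text:

```python
walking_mph = 3
jet_mph = 600
hardware_speedup = 1_000  # rough figure for ~20 years of hardware progress

jet_vs_walk = jet_mph / walking_mph        # the walk-to-jet jump: 200x
ratio = hardware_speedup / jet_vs_walk     # hardware gain is 5x that jump
print(jet_vs_walk, ratio)  # 200.0 5.0
```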

    What’s the point? Or Poynt? Here it is: there are underlying geology-like forces in the world of computing that make it highly likely that something very much like Poynt would be invented – though as I said, predicting who will do it and when they’ll do it is a whole other thing.

    The first step in creating a technology solution to a problem is often building problem-specific hardware. Then the technology evolves, getting faster, cheaper and more capable. Then there’s a tipping point, at which the purpose-specific hardware is replaced by general-purpose hardware, and most of the specific features of the device are implemented in software that runs on the general-purpose hardware. Then a new era begins. The general pattern is that special-purpose devices are supplanted by general-purpose ones.

    In the case of card processing technology, first we see imprints of cards made on paper, with the physical paper being sent to a central place for processing. Then the big jump to computer technology and networking: a series of increasingly-better charge terminals, specifically made for processing card charges. The terminals evolved from dial-up networking to the internet, and from stand-alone to connected to a point-of-sale system. Wonderful devices!

    Now think about cell phones. If you’ve been around for a little while, you remember big phones getting better and smaller and finally evolving into flip phones. Great phones, … but they’re phones. Then came the big shift, to a next generation of phones that were really small, portable, general-purpose computers that can run a myriad of applications … with cell phone hardware and software built in. Yes, it’s a phone. But it runs Facebook, email, and any of thousands of applications available in the app store. It’s a “smart phone!”

    You know this. The reason I’m reminding you of that history is that it’s exactly the transition that card-charging “terminals” are going through right now – as they become “smart terminals,” i.e., small, portable, general-purpose computers that can run a myriad of applications … with card charging hardware and software built in. Yes, it’s a terminal, but a smart one.

    How often do you see “I’m just a phone” devices? Flip phones? Yup! The old card-charge terminal will become just as rare a sight in the next couple of years.

    So are the new “smart terminals” new? Yes! But hardly unexpected, at least to those who see the clearly repeating patterns of the underlying technology.

    A less Poynted version of this post was previously published at Forbes.

  • Computer Security Breach Response Excellence

    Here's what the experts do for computer security:

    • Hire security experts to implement best-in-class security.
    • Follow all the regulations.
    • Pass all the audits.
    • Spend lots of money.

    Then, of course, you get breached, because in spite of doing the above, you have no idea what you're doing…

    Here's how you respond:

    • Get more experts to find what happened.
    • Establish a carefully-thought-out strategy to recover from the breach and minimize damage to your reputation.
    • Alert the public and your users about the event and your concerned, respectful response.

    Then, of course, you change your website, put lots of money into attractive graphics, while making it hard for users to log in or reset their passwords.

    The share-your-expertise website Quora is surely in the running for best-in-class when it comes to computer security; they have followed the above plan with true excellence.

    The Quora Story

    I got this email from Quora, of which I'm an occasional user, on December 3, 2018:

    [Screenshot: the email from Quora]

    Dear David B. Black,

    We are writing to let you know that we recently discovered that some user data was compromised as a result of unauthorized access to our systems by a malicious third party. We are very sorry for any concern or inconvenience this may cause. We are working rapidly to investigate the situation further and take the appropriate steps to prevent such incidents in the future.

    What Happened

    On Friday we discovered that some user data was compromised by a third party who gained unauthorized access to our systems. We're still investigating the precise causes and in addition to the work being conducted by our internal security teams, we have retained a leading digital forensics and security firm to assist us. We have also notified law enforcement officials.

    While the investigation is still ongoing, we have already taken steps to contain the incident, and our efforts to protect our users and prevent this type of incident from happening in the future are our top priority as a company.

    What information was involved

    The following information of yours may have been compromised:

    • Account and user information, e.g. name, email, IP, user ID, encrypted password, user account settings, personalization data
    • Public actions and content including drafts, e.g. questions, answers, comments, blog posts, upvotes
    • Data imported from linked networks when authorized by you, e.g. contacts, demographic information, interests, access tokens (now invalidated)
    • Non-public actions, e.g. answer requests, downvotes, thanks

    Questions and answers that were written anonymously are not affected by this breach as we do not store the identities of people who post anonymous content.

    What we are doing

    While our investigation continues, we're taking additional steps to improve our security:

    • We’re in the process of notifying users whose data has been compromised.
    • Out of an abundance of caution, we are logging out all Quora users who may have been affected, and, if they use a password as their authentication method, we are invalidating their passwords.
    • We believe we’ve identified the root cause and taken steps to address the issue, although our investigation is ongoing and we’ll continue to make security improvements.

    We will continue to work both internally and with our outside experts to gain a full understanding of what happened and take any further action as needed.

    What you can do

    We’ve included more detailed information about more specific questions you may have in our help center, which you can find here.

    While the passwords were encrypted (hashed with a salt that varies for each user), it is generally a best practice not to reuse the same password across multiple services, and we recommend that people change their passwords if they are doing so.

    Conclusion

    It is our responsibility to make sure things like this don’t happen, and we failed to meet that responsibility. We recognize that in order to maintain user trust, we need to work very hard to make sure this does not happen again. There’s little hope of sharing and growing the world’s knowledge if those doing so cannot feel safe and secure, and cannot trust that their information will remain private. We are continuing to work very hard to remedy the situation, and we hope over time to prove that we are worthy of your trust.

    The Quora Team

     

    What a bunch of careful, responsible people, those folks at Quora are! So appropriate for a share-your-expertise site!

    After this notice, I kept getting the occasional teaser email from Quora, tempting me to click and answer a question or see an answer someone else gave. For example I got this one a couple weeks before the breach:

    [Screenshot: the teaser email]

    I know, it's not click-bait for the general public, but definitely a good one for me.

    Yesterday I got the first teaser I'd gotten since the breach email reproduced above. Here's the lead:

    [Screenshot: the new teaser email]

    Not a killer issue, but I clicked out of mild curiosity about the answer, and also to see whether Quora was up and running normally. What I got was a lesson in how to respond to a security breach by driving your customers off. It's true, after all, that if there aren't any users, there won't be any meaningful security breaches — problem solved!!

    Here's the landing page — a new thing in itself, because clicking on an email used to be enough to identify you.

    [Screenshot: the new landing page]

    The cute graphics are all new. I put in my password and got the box in red above, telling me I had to reset the password by responding to the email they sent. OK.

    I got a typical password reset email:

    [Screenshot: the password reset email]

    I clicked on the link. I got to see even more wonderful new graphics! These guys are really trying! Then I put in my old password, because I wanted to; it's my password, I should be able to pick any one I want, unless they tell me there are rules.

    [Screenshot: the reset page rejecting my old password]

    Can't use my old password, huh? If you're so sensitive and caring, you could just possibly have warned me about that up front. Oh well. Here's a new one:

    [Screenshot: entering the new password]

    I put it in. It's new. They match. I click on the Reset Password button. Nothing. I change the password and click again. Nothing. Again. Nothing again.

    They just don't want me, it's clear. If I were a normal user, it would have been game over. But I'm not, so I went back to the password reset email and clicked again. This time I put in a brand-new password. Then, clicking worked — it got me to the login page, where I had to enter my email and new password yet again.

    Quora has a big, fat, ugly, super-obvious BUG in their "we're taking responsibility for this breach and hoping to win back the trust of our users" new entry door to their site, not having bothered to perform super-elementary QA on one of the main pathways of the new code. Not some obscure condition. Software QA 101.
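    For the record, "Software QA 101" for a password-reset path really is this small. Here's a sketch against a toy in-memory account store; everything here is hypothetical, and a real system would use salted, slow password hashes behind a web front end:

```python
import hashlib

class PasswordStore:
    """Toy stand-in for an account backend, just enough to test the reset path."""
    def __init__(self):
        self._hashes = {}

    def _hash(self, pw: str) -> str:
        # Plain SHA-256 for the sketch only; real systems use salted, slow hashes.
        return hashlib.sha256(pw.encode()).hexdigest()

    def set_password(self, user: str, pw: str) -> None:
        self._hashes[user] = self._hash(pw)

    def reset_password(self, user: str, new_pw: str) -> bool:
        if self._hashes.get(user) == self._hash(new_pw):
            return False  # reject reuse of the old password, with a clear signal
        self._hashes[user] = self._hash(new_pw)
        return True

    def check(self, user: str, pw: str) -> bool:
        return self._hashes.get(user) == self._hash(pw)

# QA 101 for the reset path: reuse refused, fresh password works end to end.
store = PasswordStore()
store.set_password("david", "old-secret")
assert not store.reset_password("david", "old-secret")  # reuse must be refused
assert store.reset_password("david", "new-secret")      # a fresh password must succeed
assert store.check("david", "new-secret")               # ...and actually log you in
print("reset flow ok")
```

    Three assertions, covering exactly the path a breached company knows every user will walk through. That's the elementary QA the post says was skipped.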

    So just who are these geniuses at Quora? Are they the super-smart, rich, cool kids that have such a track record of excellence at other tech sites? Like Facebook and Twitter and the rest? It takes a bit of looking, but the simple answer is: yes. Super-smart. Beyond cool. Rich. And still can't get the most elementary details right!

    Business as usual in software. Whether it's government, big corporation or cool young hip tech company, the story is the same: getting stuff to actually, you know, old-fashioned WORK is beneath, beyond, above or whatever for whoever's involved. Not to mention making software that protects customer data.

     

  • Software Planning is Impossible

    What else is new? Everyone knows that software planning is, well, just impossible. Live with it!

    I mean something else entirely. I mean that software planning, in the usual sense of the word, is literally impossible to do. Planning for a new house? OK. Planning a new road or intersection? You may not like the cost, disruption or time, but it can be done. "Planning" in the sense that everyone means it and uses it for everything else, is literally impossible for software.

    Here's why, in a nutshell: a "software plan" that is the exact equivalent of an architectural plan for a building, including the materials list, is the software itself. Anything less is a vague sketch of what part of the building might look like, the kind of thing about which a builder asked to quote would say, "I'll be glad to give you a quote when you give me a plan. Which this is not."

    Let me explain.

    Building plans

    Most people understand the basics of planning a building, say a house.

    You start by answering some very basic questions, like How Big, and Where. This location and size information determine much of the eventual cost of the building.

    You then get into architecture, i.e., the guts of building planning. Starting with size and place, you then go into the general shape and style of the house, things like How Many of What Kind of rooms and finishes. You may interact with the architect, creating several rounds of plans. As you agree to one level of generality, you dig deeper, ending with things like colors, appliances, and materials (wood clapboard vs. Hardie board, etc.). If the architect is modern, he'll show you 3-D renderings and give you 3-D walk-throughs of fully furnished rooms.

    Here's an overview that I built using common software, two snapshots from different directions:

    [Rendering: exterior view, first angle]

    [Rendering: exterior view, second angle]

    Notice the landscape, the terrain, the bushes, the shadows — these are just snapshots from a complete 360 degree view. You can pick the angle, distance, height and even the position of the sun. Those are images of real clapboards, doors and windows that are commercially available, everything.

    Here's part of a floor plan I built with the same software:

    [Image: floor plan]

    And here's a variation of that plan with a bunch of furniture added and some other changes:


    [Image: floor plan with furniture]

    Here's a sample 3D interior view from the software's website:

    [Image: sample 3D interior view]

    The software enables you to not just have pictures, but to do a full 3-D live walk-through, just like in a video game.

    If the architect uses modern software, the software will not only assure that the building is structurally sound, the software will fill in all the structural elements, including electrical and mechanical, and produce an item-level materials list.

    That's the plan! You can give it to builders for time and money quotes, and to the local building department for permits. Once you make an agreement with the builder, off starts the construction project. Hopefully it finishes about when promised, and you end up paying the agreed-upon amount. Done!

    When it's done, unless the builder has screwed up, it will look in physical life just like the drawings and renderings, with all the selected materials.

    Software planning

    Software planning is basically the same as building planning, right? Except even better! With buildings, you're digging holes, pouring concrete, putting up framing, and all sorts of time-consuming, physical things that are costly to buy and install. By contrast, software is just writing code, a very non-physical activity — any change you want or mistake you make is easily fixed, without ripping out building materials or breaking up concrete. Moreover, building planning is like old-style "waterfall" project management. Agile, the modern trend in software, isn't possible with buildings.

    Why is it, then, that everyone with practical experience knows that software planning is a mess, a total nightmare, pretty much no matter what planning regime you follow? Why do you think the industry keeps coming up with new ways of planning things? For building planning, the methods were always effective, and with the increased use of software, the methods have stayed the same, but the ability to render the results to clients and automate error-prone architecture work has gotten better and better. For software planning, we experience a never-ending sequence of planning "revolutions," each of which promises to eliminate the perennial problems. Except the benefits are always in the future, and the future somehow never happens. What's going on here?

    At root, the cause of the problem is simple: the analogy of building planning and software planning doesn't work! But we keep trying.

    Here's why: a plan for a building at the level of detail I described above is truly comprehensive. The ability to render detail in 3D makes that clear, and the ability to generate the structural elements and item-level material list makes it crystal clear. But while that's incredibly comprehensive for a building, no software plan comes close to that level of detail for building software!

    The reason is simple:

    A building (or road or bridge) is passive, built by workers and machines from a carefully assembled inventory of passive parts and materials, things like lumber, wiring, pipes, siding, concrete, etc. A program is active, a carefully organized set of instructions and data (mostly instructions, i.e., software) that do things in response to active inputs and stored data, as a result of which stored data is added or altered and new data and/or actions are created in the outside world (e.g., new screen displays, messages sent to distant servers).

    This difference is a BIG DEAL. It's EVERYTHING!

    The Building metaphor for software is bogus

    Think back to the beautiful building plan I showed above, complete with its ability to enable a person to do a virtual walk-through of the building. That's like a software plan, right? It's kind of like what people call wire frames, mock-ups of the UI, right?

    Now imagine that the software plan we're building is for a combat-oriented, multi-player video game. How close is that ready-to-build physical building plan to a plan for the video game? Ahhh, maybe 5%? The easiest 5%, for sure.

    The detailed building plan we can walk through is like the "world" that video games create. Except the world is always changing. No building plan can deal with a player racing through the building, crashing through a window and firing a blast that blows a hole in a wall — a hole that is customized to the location and angle of the shooter, the type of wall and the kind of blaster used. The wall explosion could further impact another game player on either side of the wall, in real time. This is the hard part, and where the vast majority of effort of building a video game goes. It's the action part.

    Is building commercial transaction software like building a video game? No, of course not. There are some elements of video games that are unique. In other ways, building commercial software is harder than building video games! Video games are completely self-contained "worlds." For better or worse, commercial software is very much part of an existing, amazingly extensive world — of other software! Existing software that is always changing, has bugs, is extremely elaborate, and whose actions depend not only on what you ask of it, but what's been asked in the past — context. Commercial software is a complex series of actions, many of which involve interactions with other software that is no less complex.

    What's important in software, and takes the VAST majority of the work, is the action part. A building is passive. It just sits there. Even the "passive" parts of software are created, on the fly, by the action part. For example, when you first go to a web page, it might be fairly passive. But the second you click on something, action starts. Once you login, the action gets fierce, and is created, on the fly, just for you: with data that's been stored for you and things on the screen that are customized for you.

    This is the difference between walking into one of those Amazon retail book stores, which are what they are, and going to the Amazon website as a logged-in visitor. In the store, personalization is performed by a worker who walks up to you, who of course doesn't know your complete history of interactions with Amazon, which the software does and reflects in how it interacts with you. This goes all the way down to simple things; for example, if I go to the web page of a book I bought years ago but forgot, the software helpfully tells me I already own the book, would I like to get it again?
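    The "you already own it" touch is a tiny example of the action part at work. Here's a minimal sketch of that kind of lookup, with entirely hypothetical names and data (this is not Amazon's code): the "passive" product page becomes active the instant stored per-user history shapes what gets displayed.

```python
# Illustrative sketch only -- hypothetical names and data, not Amazon's code.
# A static page is the same for everyone; the moment stored per-user data
# is consulted, what you see is generated, on the fly, just for you.

def render_buy_box(user_orders: dict[str, list[str]], user_id: str, book_id: str) -> str:
    """Return the buy-box message, personalized from stored order history."""
    if book_id in user_orders.get(user_id, []):
        return "You purchased this item. Buy it again?"
    return "Buy now"

orders = {"david": ["book-123"]}  # stored history: david bought this book years ago
print(render_buy_box(orders, "david", "book-123"))  # personalized message
print(render_buy_box(orders, "alice", "book-123"))  # generic message
```

A worker in the physical store can't do this, because the worker doesn't carry your complete interaction history; the software does.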

    Conclusion

    As usual, metaphors lead us astray — even the universally-accepted metaphors for software. This is really sad, and the fact that it's gone on for decades is even sadder, with no signs of changing for the better. I've gone into the sources of the bogus building planning metaphor in great detail in my book on Software Project management, along with more detail on why the metaphor doesn't work.

    All this history and logic is beside the point, in a way: the important thing is that, even today, the bogus building metaphor for software enjoys near-universal acceptance, in spite of its persisting monumental failures. Instead, the "leading minds" of the industry yammer on about irrelevancies like Agile and Scrum that may make people feel better, but don't change the underlying problem.

    You wish for a better world? Be one of the renegades who actually gets stuff done; try Wartime software. Here's the background.

  • Patient Incentives in Healthcare: Case Study

    We all know that incentives work. That's why you always read about the "low prices" and the "sale" about to start, or the "limited-time offer." They're incentives to buy this or that. A server at a restaurant is incented to provide good service to get a good tip, and a salesperson is incented to sell by getting a commission. What a good idea it must be to apply this idea to healthcare, right?

    Maybe the idea of incentives applies to healthcare. I take no position on that subject. But I do know that when the sprawling bureaucracy of a health insurance company tries to apply the idea, it turns into yet another costly bit of overhead that yields no benefits beyond allowing top executives who float blissfully above the facts and reality to claim that they're modern and innovative. Right.

    Incentives in business

    A business has strong incentives — to get incentives right! A giant commercial or government bureaucracy has NO incentive to get them right. The business is aware that it's spending money to get customers, money that could be spent on an endless number of good things, from advertising to improving the product/service, to improving customer service to get more repeat customers, and on and on.

    Here's the key thing: incentives are old news in businesses that have customers and need to make a profit. There is a long history of giving incentives, measuring how effective they are, and adjusting accordingly.

    For example, in retail there were specialists who carefully controlled each season's products and set the incentives based on the experience of measuring the results. Already in the 1990's software products began to emerge that would take line-item POS (point-of-sale terminal) data from the last few years, and predict what would be the best time to start a sale, on which products, in which exact stores, and how deep the cuts should be. After side-by-side testing in multiple retail chains, the math-driven sales proved to be more effective than the ones generated by the best, most experienced people. So the industry transitioned to the algorithmic approach — it's now malpractice if you don't use algorithmic sale incentives in retail.
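    To make the idea concrete, here's a toy sketch of algorithmic markdown timing, with hypothetical data and a made-up threshold rule; the real systems built on years of line-item POS data are vastly more sophisticated, but the core question is the same: will the current sales pace clear the inventory before the season ends?

```python
# Toy illustration of algorithmic markdown timing -- the threshold rule and
# numbers are hypothetical, nowhere near the sophistication of real systems.

def should_start_sale(weekly_units_sold: list[int], units_on_hand: int,
                      weeks_left_in_season: int) -> bool:
    """Trigger a markdown when the recent sales pace won't clear inventory."""
    if not weekly_units_sold or weeks_left_in_season <= 0:
        return True  # no sales history or season over: mark it down
    # Estimate pace from up to the last four weeks of POS data.
    recent_pace = sum(weekly_units_sold[-4:]) / min(len(weekly_units_sold), 4)
    projected_sales = recent_pace * weeks_left_in_season
    return projected_sales < units_on_hand

# Pacing to sell ~40 more units against 120 on hand: start the sale now.
print(should_start_sale([12, 10, 9, 9], units_on_hand=120, weeks_left_in_season=4))
```

The real products answered this per product, per store, and also chose how deep the cut should be; the point is that it's a measurable prediction problem, not a gut call.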

    Retail has gotten amazingly effective with incentives, following the classic path to excellence in AI/ML as I described in this series of posts.

    Incentives by health insurers

    Given this background, when the health insurance giants finally decided to apply incentives to their insured population, naturally they carefully studied incentives in other fields and adapted state-of-the-art techniques to their situation, right? WRONG!! In every case I've seen, they've done the dumbest things possible, not just starting from Incentives 101, but screwing up so badly that any internal measurement system (which of course was NOT in place) should have resulted in the prompt cancellation of the project and demoting everyone concerned to a starting position opening mail. None of which happened, of course.

    Incentive Case Study: Anthem

    Anthem is an excellent health insurance company. It and its managers strive to be industry-best and provide great service to the employers and individual customers it serves. I am using Anthem for the case study here NOT because they're the worst — far from it! I'm featuring them simply because I'm a customer, and so get a ground-level view of how things work there.

    Sadly, as a giant, heavily-regulated bureaucracy, Anthem lumbers along and, along with its peers, gets really important things wrong. I wrote about the mess they made a couple years ago, first by allowing themselves to be hacked, and second by responding to the hack somewhat, ahem, ineptly.

    Some time ago Anthem decided that Wellness and Health Incentives were something they should dive into. They now appear to be thoroughly committed to it, as they say loudly and clearly on their website: 11

    Some of these programs may be truly wonderful. I make no comment on them. But I doubt that the one that was pushed onto me was exceptional, so let's dive in.

    I decided it had been too long since I'd had a general health check, so I signed up for one and got it. After a while, I got a packet of stuff in the mail. Here's the top page: 12

    Anthem clearly had gotten the claim for my visit and auto-enrolled me in their incentive program to get me to do the stuff they say, including getting such check-ups regularly. Wow, the incentive-program-babies who designed this program thought, we'll pay him some money for getting a check-up and maybe he'll do it again next year, hoping to get another card in the mail — though of course we won't breathe a word about that.

    Let's check out the rest of the package. Next page I get a wonderful, inspiring picture  of how exercise makes me healthy: 13

    Next page, reality starts to hit. No more nice pictures and color. Just the facts, ma'am (the card itself was glued onto this page): 14

    It's a gift card kind of thing! Except it's "pre-paid," and is accepted where Visa debit cards are accepted, but when you use it, you've got to lie and say it's "credit." Hmmm. And NOWHERE does it say how much money is on the card! I have to call or go online to find out. And of course the card doesn't say Anthem, it says SVM. Who are they?

    Maybe I'll find out. Let's keep on. Next page: 15

    Great. When I get a regular credit card, I call a number to make it live, and then away I go. For this one I have to read something and then check somehow what's on it. I wonder how much IS on it? I hope eventually they'll tell me. First I better read the rest. 16

    Man, this print is getting mighty small! I wonder if I have to pass a test before I'll be allowed to use the card that may contain some secret amount of money that no one will tell me. Minus whatever fees and other stuff that they jam into it when they feel like.

    Reading carefully, I find out that, even though it's a debit card, like for your bank, I can't just get the cash out of the card! What the &*&&$&*()$ is THAT about?? It's an incentive, darn it! An incentive that's a money incentive — and I can't get the cash and spend it?? What is this, if I go to Burger King and order something they don't approve of the card won't work?? Who knows??!!

    Reading more, I can't use it at an automated gas pump. Or at most restaurants. And there's a PIN, which I have to call to set — in TINY print in the middle of the TINY PRINT page. I thought I was supposed to select CREDIT, and credit cards don't have PINs. What's going on?

    Maybe the next page will help: 17

    Or maybe it won't. Or the next 3 after that, which are more of the same, and which I refuse to read.

    Finally, last page, bigger print:

    18

    This appears to be the cheat sheet. This I can read. It appears I really do have to go online and enter a bunch of stuff, and remember all the conditions on using it, including the ones they don't repeat on this sheet. Maybe I can even find out how much money they're giving me!

    If I were a regular person with some mild interest in whatever this incentive is, I would have dropped out by now. But I'm a fanatic and want to see how this story ends, so I'm going on to the next step. On-line we go! First step:

    SVM 0

     

    Get my card, copy the numbers in. Next step:

     

    SVM 1

     

     

    Put some more numbers in. Next step:

     

    SVM 2

     

    I appear to be in, but not really. I have to go back, enter the security code again and also enter the hard-to-read code they put in to stop robots. Wow — all I can think is, this incentive must be huge. Why else would they be making it so hard??

    Finally, I get logged in:

     

    SVM 4

    I think I've said "wow" a few too many times, but I should have held off for now. Tucked away in the upper right corner is the size of the golden goose I've spent all this time and effort seeking: $50! Maybe. Sort of. Except not in cash. And minus whatever fees. And not at restaurants or gas stations unless you follow the rules. And maybe there's a PIN, check the amount before each time you use it because you might have been dinged a fee, and…

    And before I can touch any of it anywhere, I've got lots more information to enter. I'm outa here!

    Conclusion

    This incentive card program is one of the more bone-headed, dysfunctional things I've encountered in a while. Lots of lawyers, bureaucrats, managers and even publicity/image people contributed to it, but did anyone with, you know, real knowledge of how things like this work ever get a shot at it? The number of steps to get rewarded and all the uncertainty and conditions are guaranteed to produce maximum drop-out. In the reward card business this is called "breakage," and unethical rewards people try to maximize breakage; Anthem should be up for "rookie of the year" in the breakage stat.
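    For the record, the breakage arithmetic itself is trivial (the numbers below are hypothetical, purely for illustration):

```python
# "Breakage": the share of issued reward value that is never redeemed.
# Hypothetical numbers -- I have no inside data on Anthem's program.

def breakage_rate(value_issued: float, value_redeemed: float) -> float:
    """Fraction of issued reward value that went unused."""
    return (value_issued - value_redeemed) / value_issued

# Say 1,000 of those $50 cards are issued, and the hurdles mean
# cardholders only ever manage to spend $12,500 of the $50,000:
print(f"{breakage_rate(50_000.0, 12_500.0):.0%}")  # prints 75%
```

Every person who gives up in the maze pushes that number higher, which is exactly why an ethically-run rewards program measures it and drives it down.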

    It's one thing to encourage good behavior by giving an incentive. It's another to dangle the promise of an incentive of unknown value and force people through a horrific maze that most won't make it through, with the primary result of strengthening their basic impression of the health insurance company as incompetent, wasteful, and something to ignore whenever possible.

    Is anyone out there listening?

  • What are Software Fashions?

    “Fashion” is a word we associate with clothes. Software is hard, it’s objective, it’s taught in schools as “computer science.” Software can’t have anything to do with “fashion” if it’s a “science,” can it?

    Sadly, software is infected by fashion trends and styles at least as much as clothes. Fashion has a huge impact on how software is built. Understanding this, along with other key concepts like those involved in Wartime Software, can contribute greatly to building great software that powers a business to great success.

    Fashion

    We all know what fashion is, exemplified by fashion shows like this one:

    Lady models

     with impossibly thin female models strutting down cat walks wearing clothes that no one is likely to wear in real life.

    Not as often, but guys too: Male models

    Far more important than models wearing extreme clothes is everyday fashion. I grew up looking at men dressed like this: Suit

    It's what men wore to church and to aptly-named "white-collar" businesses all the time. But there's nothing special about a suit and tie. Here's a look at Dutch fashion in the early 1600's:

    11

    Fashion is arbitrary! It's just what people wear. And everyone judges you by what you wear or don't wear, according to prevailing fashions.

    Of course, "fashion" goes way beyond how people dress. It's how you act, how you speak, the accent you use, the interests you express, just about everything. If you don't think it matters, just try wearing NY Yankee regalia into a South Boston sports bar and start spouting trash about the Red Sox. Lots of people who think they're the kind of people who are "above" fashion are driven by it nonetheless — just look how they respond to people walking into a room, and if there's any doubt, seconds after the newcomer opens his mouth.

    Fashion is about people. It's about belonging, status, fitting in or "making a statement." We live in a fashion-dominated world, like it or not.

    Fashion in Software

    I was one of those people who was convinced I was "above" fashion. Being above it made me superior, in my mind, to those who were slaves to it. During and beyond college, I bought the few clothes I wore from a local used clothing store and wore hiking boots most of the time. Once, in my first post-college programming job, I was called out of the cubicle where I spent most of my time, heads-down, programming away, and asked to come into a meeting in the front of the building. I walked into a meeting populated entirely by men in suits. One of the men I didn't know glanced at me and immediately exclaimed, "Finally! We get to talk with someone who knows things!"

    I had been called into a sales meeting, and one of the visitors had software questions no one knew the answer to, so "the suits" had called in the guy who knew the answers. How I dressed and acted in fact made a statement to the visitors — I dressed the way a programmer dressed, the kind of programmer who wanted to program, not one who aspired to management. So, like it or not, I was making a "fashion statement," while fooling myself into thinking that I was "above" fashion. The hard fact is, no one is "above" fashion. The way we dress, act and talk, the choices we make, say loads about us. Those unavoidable choices clearly establish our place in various groups and in social and status hierarchies.

    Software Fashions

    Given how fashion-driven our lives are, it would be shocking if programmers weren't fashion-driven in their shared activity of software. In fact, they are! Sadly, the vast majority don't think their choices are fashion-driven. They believe they're modern, with-it software professionals who are using the proven, advanced methods for doing software. The trouble is, few of them take the trouble to cast a knowledgeable, cold, hard eye on the arguments, experience and facts concerning their chosen methods and tools. They've made their choices so they can be with whatever software social group they identify with and/or aspire to. It's all about relationships and status. If you're ambitious, you may want to be with the "cool kids," members of an elect social group, yes, in software. And it works! If it didn't "work" (elevate their software group status), they wouldn't do it.

    We like to think of people who wear fashion-forward clothes as empty-headed, shallow people. Surely programmers aren't that! But to the extent that they adopt fashion-forward software, that's exactly what they're doing — only worse! They're lying to themselves, deceiving others, and making believe they're pushing some trendy software thing because it's advanced technology, yielding results superior to the obsolete stuff that used to be the standard.

    Just like with clothes, software fashions evolve. Fads start and may become hot. The fad may evolve into a fashion as it spreads, with people who haven't adopted it taking notice. The fashion may further evolve into standard practice, with eyebrows being raised for anyone who dares to question it. More often, the fashion simply fades away.

    It's rare for any fad or fashion to be explicitly repudiated — oh, that was a terrible idea, people are turning away from it for good reason and here's why. No one says that! In the "advanced clothing" area, everyone knows that fashion is "just fashion." In software, fashions aren't considered "fashions;" they are considered "advances," emerging modern techniques that are objectively better than what came before, like a new drug or operation that has emerged from clinical trials and now saves lives that used to be lost! Saying that a widely adopted software fashion was never proven, was always a bad idea, but got widely used and promoted anyway would expose the game. So when software fashions die, they fade slowly away and simply stop getting used and talked about.

    After a fashion fades away, it's generally forgotten by nearly everyone, usually except for a band of true believers. Some of the more intellectually heavy-weight fashions retreat to academia, where they live on, always with "exciting futures."

    Some of these flourished-but-died fashions rise to live renewed lives. In one pattern, the fashion was so broadly accepted but such a failure (though rarely discussed as such) that when it becomes fashionable again, it has a new name. No one ever refers to last time, why the older fashion didn't work out, and why this slightly altered version of the same thing will. In another pattern, the fashion baldly re-emerges with exactly the same name, and nearly the same blazing-bright future as the last time. Sometimes there are even some successes. But it remains a fashion and therefore has disappointing results to anyone who cares to look, which is essentially no one — such is the social power of fashion!

    Not all fashions die. Some fashions have such powerful support that they become locked in as part of modern mainstream practice, sometimes even becoming part of so-called Computer Science, or at least IT Management. I don't fully understand how and why this happens, but I know that in part, the fashions that become standards address some widely felt need in the people involved in software. When this happens, there is often a series of waves of renewal or reform — while the reformers refuse to acknowledge fundamental problems with the fashion-enshrined-as-best-practice, they latch onto some minor tweak or addition and promote it, usually with a new name, as the best way to get results with standard-practice X.

    What are these software fashions exactly?

    I have already talked about a few important toxic software fashions. I have gone into huge detail for a couple of them, with multiple blog posts and even books. I'm gradually starting to understand this bizarre phenomenon in terms of powerful social fashions with bad results, masquerading as "advances." I'm seeing the resistance to seeing the Emperor's New Clothes for what they are because of the self-delusional conception of software as a science/math-based STEM field, rather than as the pre-scientific collective group-think that it largely is.

    I have already challenged a couple of the modern hot fashions, for example in my series of posts on AI/ML — a classic example of a once-hot fashion that has died away and been re-born multiple times. This particular fashion is distinguished from some of the others because there are some truly excellent algorithms at the heart of it that can be applied to great benefit — and this has been true for decades! But because of widespread fashion-itis, the money and effort spent on them is mostly wasted.

    In future posts and at least one future book (in process), I will continue to dive into and expose specific software fashions for what they are. I do this in part to strengthen the resolve of those special people and groups (some of them ones we've invested in) with the understanding that the "wrong" or "uncool" things they're doing give them a fundamental business/technical advantage, and they should stick to their guns and ride their truly effective methods to success, to the benefit of all concerned.

    Further reading:

    Resistance to treating scurvy compared to software disease treatments.

    The modern AI/ML fashion.

    The story of how I discovered the fashion vs. what-really-works issue.

    Deconstructing project management.

    The story of fashions-becoming-standard-practice.

    The Cloud fashion.

    Big Data fashion. Big Data bubble.

    Evidence-based software methods don't exist.

    The recurring fashion of data definition location.

     

  • Medicine as a Business: Medical Testing 5: The Results

    I've gone through quite a bit to get the results of my MRI. See here for the previous installment, and here for the start of the saga. I glanced at the report and it looked good. In this post, I'll describe the unsettling things I found when digging deeper. In sum: the whole baroque nightmare of scheduling, performing and delivering the results of medical tests is not only inefficient and riddled with needless cost and waste; more important, there are serious quality problems leading not just to delay and waste but to bad results.

    I fully acknowledge that what has happened to me pales beside the waste, incompetence and fraud that pervades the worst medical systems. My point is that, even in the best healthcare systems, bad things are happening.

    The Results

    I glanced at my hard-won test results and felt OK, mostly because I had been told that radiation therapy took a looooong time to show results, and I shouldn't expect anything to change at the first MRI. There was no OMG or IT'S GROWN LIKE CRAZY in the notes when I glanced at them, so I let it rest. I was tired out from all the effort of getting the darn thing.

    Then I looked more carefully. I'm still OK for my personal case, but on close examination, I realized that the final report of the MRI was consistent with the crazy things that led up to it: scheduling the test, taking it, and getting the results. It's all part of a bizarre system that has GLARING flaws that seem like they should be easily fixed, but nothing much happens.

    Here is the key paragraph from the earlier of the scans, the one that led to the radiation treatment: 11

    Here is the corresponding paragraph from the second scan, the one I struggled to get: 12

    The first thing that jumped out at me was the simple observation that there are no standards! Radiologists are incredibly smart, well-educated people. College degrees. Super scores on MCAT tests. Degrees from incredibly-hard-to-get-into medical schools that have TINY numbers of students. Then more years getting further training to become medical imaging specialists — usually five more years, on top of the four years of college and the four of medical school!

    If I showed you the whole report, you'd immediately see that even the paragraph and subject-matter organization was different. About the most glaring thing to me was that the second report gave actual dimensions of the tumor, while the first did not! Don't you think that when tumors were involved, specifying the actual size would be the standard?

    There's lots more that could be said, but I'll leave it with these simple observations:

    • There is no system in place to record and assure that the required location is being imaged. The key thing can be missed because it wasn't imaged.
    • There is no consistency of exactly what is reported on and how it is reported. It is difficult to compare reports and assure that what you need is there.
    • For tumors, there is no consistent positioning and measurement of size. You could miss a tumor altogether, and easily miss size/location changes.
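    None of this would be hard to fix. As a purely hypothetical sketch (the field names, measurement convention and anatomical site are invented for illustration, not any real radiology standard), even a minimal structured report format would make two scans of the same patient directly comparable:

```python
# Hypothetical sketch of a minimal structured finding -- invented fields,
# not a real radiology standard -- showing how a fixed schema makes two
# scans of the same patient directly comparable.
from dataclasses import dataclass

@dataclass
class TumorFinding:
    location: str  # anatomical site, recorded explicitly so nothing is missed
    size_mm: tuple[float, float, float]  # always measured in the same axis order

def size_change(prev: TumorFinding, curr: TumorFinding) -> tuple[float, float, float]:
    """Per-axis change in mm between two comparable scans."""
    return tuple(c - p for p, c in zip(prev.size_mm, curr.size_mm))

before = TumorFinding("site X", (21.0, 15.0, 18.0))
after = TumorFinding("site X", (21.0, 14.0, 18.0))
print(size_change(before, after))  # growth or shrinkage, at a glance
```

With free-text reports that each omit different facts, this comparison has to be done by a human squinting at two differently-organized documents, if it can be done at all.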

    Conclusion

    I'm in remarkably good shape, having a scary diagnosis of an extremely rare cancer. I received great treatment from highly skilled professionals at every step of the way. I received chemo that had only 25% chance of working, but it shrank a rapidly growing tumor. Then, when it started growing again, I got right into radiation, which has at least prevented further growth, and should finally stamp it out. I have nothing to complain about, and a great deal to be thankful for, including all the professionals who treated me. 

    I've written this series of posts about Medical Testing NOT as an indictment of the individuals who have treated me, but as a serious indictment of the system in which they work. Here is a summary list of the things that "could be improved;" details are in the prior posts of this series:

    • Scheduling and getting a pre-auth for a test can be a labyrinth and delay-filled nightmare.
    • There are multiple issues with the wasteful, expensive and time-consuming blood test.
    • There are multiple issues with specifying and following an exact procedure for the location and mechanism of the scan itself. Instead of being in the system, the nurse has to get information from the patient and guess about other things!
    • The equipment and software is built in a regulation-protected bubble, which results in 10X or greater cost and trailing-edge technology.
    • Getting results that have already been created by the radiologist can be an obstacle-filled maze, even if you try to use a “patient portal” that is supposed to make things easy and transparent.
    • The patient portal is mostly a sales pitch about how the hospital is wonderful – getting the information is a big problem, and then important information is wrong or just plain not there.
    • Finally, the results produced by super-highly-trained doctors based on these expensive and questionable inputs don’t meet any modern standard for content.

    Why doesn't anyone in management seem to care? I've often wondered this, and speculated about why several times. What's clear is that there's a hierarchy of prestige in every society, including ours, and that the top of the hierarchy is populated by people who focus on strategy, policy, direction and messaging. They are, for the most part, "above" getting "lost in the weeds." Sorry guys; the action is on the ground, where real things happen to real people. That's where you discover what's wrong, and when you "fix" something, that's where it's got to change.

  • Medicine as a Business: Medical Testing 4: Getting the Results 2

    I've done everything I can to use the Mount Sinai patient access portal to access my test results, without result. (See here for the start of this saga, and here for the previous post.)

    Now it's time for desperate measures. I finally take the radical step of picking up the phone and calling for help. Surely the results are there!

    Here's what happened.

    • I called.
    • I was put on hold.
    • I explained the situation.
    • I was put on hold while the CSR checked something out.
    • More questions. More holding. Rinse and repeat several times.
    • Hold while I check with my supervisor.
    • Rinse and repeat several times.
    • Final result: we can't help you, call your doctor and have them help.
    • But what can they do that you, the specialist can't??
    • They have a number they can call to get help.

    More than half an hour on the phone, and I get to ask my doctor to call someone who won't be able to help either. And I'm sure my doctor would jump at the chance to fix this problem, since he looooooves the EMR so much!!

    Desperate and out of options, I call the doctor's office.

    • I got transferred to a 5 minute wait before getting a dial tone.
    • I got transferred to voice mail.
    • I got transferred to nowhere again.
    • Again.

    Finally, someone picked up whose voice I recognized — the office receptionist. I explain the problem, and he tells me that the Mount Sinai Radiation Center uses a different EMR than the rest of Mount Sinai!! Apparently one that doesn't send patient data to MyChart.

    He promises to get me into the Radiation Center EMR patient portal AND send me the results. "What's your fax number?" he asks. "Umm, can you send it by email?" Pause… "Sure, I can figure out how to do that. What's your email?" I gave him the information, and five minutes later, I got an email with a PDF document attached. The document had the test results and instructions on how to get into the patient portal. Thank you!

    Problem solved! I read the report, and the news was good. The thing that had been growing had stopped growing. But self-sacrificing guy that I am, I didn't stop there. What would have happened had I not persisted in my calling, and connected with a helpful and knowledgeable receptionist? After all, the report was supposed to be in the patient portal.

    So I persisted. I decided to get into this special patient portal and finally see that the test results were actually posted there and available to me.

    The Radiation-only EMR and patient portal

    Leaving out all the details, I followed the procedure and after only a moderately odious amount of work (I had an access code!), I got into the portal: My c

     

    Then I went to the test results, where my report should be: My d

    It's not there, of course. Why am I not surprised?

    The test results report should have been in My Mount Sinai Chart. It was not there, as confirmed by multiple levels of customer support people. It should also have been in the Radiation Oncology patient portal. It was not there, as you can see above. Given that an insider was able to access the report quickly and send it to me, the report was certainly in both EMRs. It was in the normal Mt. Sinai EMR, because that's where the doctor who wrote the report put it. It was also in the Radiation Oncology EMR, because that's the EMR of the doctor who requested the test — and as I learned early in the process, it was easy for people in the radiation center to put orders into the "main" system.

    Here's the key point:

    Neither of the two EMRs at Mount Sinai that were involved with my test put a copy of the report into the relevant patient portal so that I could see it. While I managed to avoid the usual doctor's appointment to find out the results, it's not clear how much time and frustration I saved in the end. Here's what was promised: My z

    What was the reality?

    • The test results were not available in MyChart. Is Mt. Sinai management unaware of this? Are they just lying and hoping to avoid embarrassment, as they do with other important "low-level" things? See this for a juicy example, and this for context. Either choice is unacceptable.
    • The customer support service, when finally available, was unable to help.
    • The original doctor's office was unavailable.
    • The SURPRISE! special, different EMR used by my Mount Sinai department also didn't have the report.
    • I only got the report because of repeated calling and a chance encounter with a kind receptionist.

    Yeah, yeah. I'm a computer and math guy, and I know statistics, and I know this is just one example. But can you really imagine that what I went through was a giant, almost-never-happens, tiny blip in a uniform fabric of excellence? Right. Wanna buy a bridge? I've got one real cheap for ya…

    The E-mail!

    Wait! There's more! After I drafted the saga of getting my greedy hands on the MRI results, something happened.

    About a week later I got an email: 1 new result email

    WHAT!!?? This test result was supposed to be on the radiation center's portal!

    What's more, the only reason I got the email telling me the result was available on the Mount Sinai patient portal was because I was previously a patient and had signed up for it. If I had come into the Radiation Center directly, without having a history at the broader hospital system, I'd still be waiting.

    On July 24 I'm told that the result was posted and available to me. A result from a test that was posted to Mount Sinai's system on July 3, 3 weeks earlier. It's a good thing we've got computers — if it took 3 weeks to make a copy of a short document from one place in the Mount Sinai computer system and store the copy in another place in a related program in the same computer system, imagine how long it would have taken to do it manually! Years, probably!

    I'm writing this on July 31, 2018, so by now the result surely will be posted on the patient portal for my doctor, nearly a month after the test was taken, right? Let's check: 2018 07 31 Radiation center tests

    Nothing is available. So much for the Radiation Center's patient portal.

    Now I'm curious. Is the test really there, even though on the wrong portal? Here's the results list: 1 mychart 7-24 tests

    Yes, it's there, top of the list.

    MyChart also provides a convenient to-do list, things I'm supposed to do, and there's something on the list. Better check it out, even though no one's told me there's something for me to do; this subject is important to me, to put it mildly, and I wouldn't want anything to slip through the cracks. Here's the to-do: 1 -mychart todo 7-24

    Oops. The MRI that was expected to be taken on June 11, actually taken on July 2 because of my initiative, and posted to the portal on July 24 is listed as a to-do item. The EMR evidently failed to connect the work order with the fact that the work ordered was performed and the results delivered.

    This sounds benign, but it's actually scary. Deeply scary. The system doesn't match orders placed with results delivered, which means that orders could hang in space, ignored, with patient-essential work undone, unless a concerned and involved patient tracks it. In my case, there was a concerned and involved, not to mention detail-oriented, patient. What about the normal case? How many important things just hang out on a to-do list, undone, until they are "cleaned up?"
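
    For the software-minded: the matching the EMR evidently fails to do is, conceptually, a few lines of code. Here is a minimal sketch in Python — a hypothetical data model invented for illustration, not Epic's actual schema — of the reconciliation that should close a to-do item once its result is posted:

    ```python
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Order:
        order_id: str
        test_type: str   # e.g. "MRI"
        expected: date   # when the test was expected to be performed

    @dataclass
    class Result:
        order_id: str    # link back to the originating order
        test_type: str
        posted: date     # when the result was posted

    def reconcile(orders, results, today):
        """Split orders into still-open to-dos and overdue items.
        An order whose result has been posted is closed and should
        drop off the patient's to-do list automatically."""
        completed = {r.order_id for r in results}
        todo = [o for o in orders if o.order_id not in completed]
        overdue = [o for o in todo if o.expected < today]
        return todo, overdue

    # The MRI expected June 11, with its result posted July 24:
    orders = [Order("A1", "MRI", date(2018, 6, 11))]
    results = [Result("A1", "MRI", date(2018, 7, 24))]
    todo, overdue = reconcile(orders, results, date(2018, 7, 31))
    print(todo)  # → [] : with the result posted, no to-do should remain
    ```

    The point of the linkage is exactly the failure mode described above: an order with no matching result surfaces as overdue instead of silently hanging on a to-do list until someone "cleans it up."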

    There's more trouble coming. When I glanced at the results, I got the impression things were OK. But when you dive in, … see the next and final post in this series.

  • Blockchain is like the Wizard of Oz

    If you haven't already seen the classic movie "The Wizard of Oz," I highly recommend it. It's entertaining and instructive. Its lessons remain applicable today — they can even teach us about the amazing Blockchain technology that is poised to transform so many industries, solve so many long-intractable problems, and that is attracting such massive attention and investment.

    The Movie

    Dorothy, along with her dog Toto, you may recall, is swept up from her home in Kansas by a tornado, and eventually comes to earth in the land of Oz. (Credit.)

    270px-The_Wizard_of_Oz_Judy_Garland_Terry_1939

    It's quite a place, populated by witches and munchkins, among others. The good witch of the North, Glinda, tells Dorothy that the wonderful wizard of Oz may be able to help her get home, so she sets out on the yellow brick road to the Emerald City, where he presides. Along the way she meets the Scarecrow who needs a brain, the Tin Woodman who desires a heart and the Cowardly Lion who needs courage. The four of them join forces to ask the wonderful Wizard's help.

    The Wizard promises to help them all — but only if they bring him the broomstick of the Wicked Witch of the West. So off they go and confront the Witch.

    270px-The_Wizard_of_Oz_Margaret_Hamilton_Judy_Garland_1939

    Eventually they defeat the Witch and bring her Broomstick back to the Wizard's palace. They walk down the intimidating hall 

    Oz walking hall

    until they reach the Wizard's throne.

    Oz enter big hall

    Gathering up their courage, they present the broomstick to the terrifying Wizard…

    OZ big

    and ask that he fulfill his promise to them. 

    Oz group scared

    The Wizard stalls. Meanwhile, Toto the dog, noticing something, pulls aside a curtain and reveals a man talking into a microphone. It's the real Wizard: the Great and Awful Wizard of Oz is just an ordinary man!

    Oz revealed

    Dorothy confronts him. He admits he's just an ordinary man, and a humbug at that.  

    Oz and dorothy

    He gives the Scarecrow a diploma, the Lion a medal and the Tin Man a heart-shaped ticking watch, helping them see that the attributes they sought were already within them. He offers to take Dorothy and Toto home in his hot air balloon. Then there's a mishap, and he leaves without her!

    The story ends happily, because the good witch intervenes, and shows Dorothy how to return home under her own power, repeating three times "There's no place like home."

    Blockchain

    What can the Wizard of Oz possibly have to do with the marvelous emerging technology of Blockchain, which is set to transform so many domains that are badly in need of help?

    The movie has amazing lessons for us. I can't spell them all out in a single blog post. Here's a start:

    Dorothy is stranded in a strange place and doesn't know how to get home. People who run financial systems have problems like lengthy settlement times that aren't getting solved.

    Dorothy meets other people in the strange place who also have serious problems. People in other domains, like healthcare, have long-standing problems like EMR interchange that aren't getting solved.

    The Good Witch tells Dorothy that the Wizard of Oz can help her get home. Authoritative people tell us that Blockchain can solve those problems.

    Dorothy travels a long way to the Emerald City with her friends to ask the Wizard's help. After lots of work, people commit to the money and effort of a Blockchain trial.

    The Wizard tells Dorothy that she has to bring the Wicked Witch's broomstick before he'll help them. Blockchain experts explain all the work we have to go through to get a test that has a reasonable chance of success.

    Dorothy and her friends go through battles to get the broomstick, finally killing the Wicked Witch to get it. After lots of money and time and experts, a trial is finally underway.

    Dorothy and her friends approach the Wizard and ask him to do what he promised. The sponsors of the blockchain project insist on results.

    Toto pulls back the curtain, and reveals that, far from having amazing powers, the great and awful Wizard is just an ordinary man, and a humbug at that. The sponsors finally see that Blockchain solves no problems and is worse in every way than a normal DBMS.

    The Wizard offers kind words that make her friends feel better, and after promising to solve Dorothy's problem, abandons her. Blockchain can't do much of anything outside the context of Bitcoin, and when it appears to "work," the results are awful.

    Glinda the Witch tells Dorothy to close her eyes, tap her heels and say the words three times. She wakes up in her bed in Kansas. Her relatives think she's had a dream. The Blockchain executives quietly let the project fade away. They do their best to calm their minds, refuse to admit defeat, and go back to their normal lives.

    Conclusion

    The world of Blockchain is indeed like the Wizard of Oz. While you're "in" the movie, you're convinced it's real, and so is everyone around you. When you wake up, you're back in normal life and understandably reluctant to think the amazing experiences you've had were "just a dream." But everyone else knows that's all it was. A dream that seemed good at the time, but turned out to be, yes, a bad dream. See this for a fact-based dissection of the bad dream.


  • Social Media Has a Long History

    It seems like the whole world is in an uproar about social media, with frequent revelations of awfulness and malfeasance. The uproar is about social media such as Facebook, Twitter and the rest. The trouble is these issues have existed in one form or another in social media going back … hundreds of years. What we are seeing is ignorance of the present combined with ignorance of the past. In other words, business as usual.

    The Core Drivers of Social Media

    Most people care about their place in society — their status. They care about how they're perceived, who knows who they are and how others relate to them. While this drive seems to come to a kind of perverse peak in middle and high school, it persists for most people's lives.

    Intimately related to caring what others think is the drive to express what you think and what you've done. While this is related to influencing others' thoughts, it seems to be a kind of innate drive as well.

    In short, you want to tell people what you've done, what you think, and you want to hear about other people, particularly those you somehow are involved with or even just similar to in some way.

    Closely related to this are the core concepts of status and fashion. It's a basic urge to want to see your status reflected publicly if it's high, and many people have strong interest in what high-status people do and how they do it. Particularly as fashions of various kinds wax and wane, from clothing to activity to speech, people who want to increase their status have an intense interest in learning what the new things are.

    Key Characteristics of Social Media

    What makes something social media, as opposed to some random other kind of media? It's pretty simple: social media mostly consists of media (words and pictures) that is about a person and either written/created by that person or sourced from them. It's about what a person says, thinks or does.

    Now let's get to the other key characteristic: money. Who pays for social media? After all, it costs quite a bit to produce it, and that money has to come from somewhere. Historically, the people who consume the media pay a little, while … get ready … advertisers pay a lot. Today, the incremental cost of delivering social media to the person who consumes it is so little that no one bothers to charge for it — the whole cost is borne by advertisers.

    Social media is an amazing phenomenon, deeply rooted in human drives and emotions. People produce the content for it for free — they are glad to have things about themselves distributed at little effort of their own to those they may like to know about it. And they read about themselves and the people to whom they are socially connected, paying to do so if necessary. They can't help but know that the ads that are intermixed with the "content" are going a long way (in the electronic world, all the way) to paying for their reading pleasure, but it rarely bothers them. They also know that ads are targeted to particular groups of readers. It makes common sense, after all. No big deal — if I were an advertiser, of course I'd want to show my ads to people who are likely to buy what I'm selling!

    Earlier versions of Social Media

    People like to imagine that social media are strictly electronics-age things. Mark Zuckerberg invented it, didn't he? No, sorry! Social media have been around for a looooong time. I could go all the way to ancient Sumer and Egypt, but I think the point will be clear enough with more recent examples.

    Here is a notice in the Pittston Pa Gazette from 1928 about a function attended by my grandmother, Agnes Black:

    1928 social notice

    Here is an ad that helped pay for that information to be printed:

    1928 ad

    More recently, here is a notice in the same paper from 1955 about a visit made by my parents:

    1955 news

    Here is one of the ads that helped pay for the notice.

    1955 ad 2

    The notice was actually fake news! My parents visited with their two children, David Bruce Black and Douglas John Black — not David and Bruce. The advertisers don't care a whit — they just want the eyeballs to persuade them to buy some hot new technology:

    1955 ad

    I could give examples from many other places and centuries. Things have evolved, but since the principles are rooted in human nature, not as much has changed in principle as you might think.

    Conclusion

    Everything is about people. A great deal about people is relationships and status. The experiences we had in middle school and high school didn't disappear into nothingness. They just evolved as each of us entered new groups of people, each with its own pecking order and rules for engagement. One of the most ironic things about modern social media is that certain groups of people are really upset about what gets published, and want to make sure that only the "truth" is published. They, of course, want to be in charge of defining what "truth" is. Sorry, guys, in the world of social relations and much else, "truth" is nothing but a pretty veneer on top of raw power. Yes, your grace, your honor. Why should it be different now that we're staring into little screens and swiping while we walk?


  • Medicine as a Business: Medical Testing 3: Getting the Results

    If you went to the time and trouble of a medical diagnostic procedure, chances are … you want to know the results. ASAP!

    It's a perfectly reasonable desire. In most areas of life, getting the results of something you paid for is pretty easy. If the results are information, most organizations just send it to you — by snail mail, email, text or whatever you've arranged. For example, think about the crucial tests you take that have so much influence on your schooling and career, things like the SAT, MCAT, LSAT, and professional certification tests. You take the test and they send you the results in a standard way.

    Not so in the wonderful world of medicine! In that world, you go to considerable trouble to arrange the test, and once it's been taken …the fun of getting the results begins!

    Getting medical test results

    The usual pattern of getting the results from a medical test appears to be based on the assumption that patients are both stupid and illiterate. No way can you just send the results! The patient has to make an appointment with a highly qualified medical person, who then patiently explains to the patient what the results were and what they mean. Plus, there's an office visit to be paid for.

    We are told, however, that there's a revolution going on with medical record transparency. In this wonderful new world, patients can access their medical records themselves!! The major EMR vendors now support a "patient portal" for making such results available online, and major hospital systems brag about it.

    Hmmm, I wonder if that's how I could get my results. Oh, I remember now, Mt. Sinai has a patient portal! I'm even signed up for it! Oh, good, this should be easy…

    Getting my results from the Patient Portal

    The test was ordered at Mt. Sinai. It was performed at Mt. Sinai. I have a MyChart patient portal account at Mt. Sinai. This should be a piece of cake. I pull up the main screen: Mychart 1

    Isn't it nice? The EMR software provider, Epic, has a patient portal module called MyChart, which Mt. Sinai has cleverly called My Mount Sinai Chart. All I have to do is login, and I'll surely be able to access my recent test result, just like they say!

    I login. I'll spare you the details, and keep it short: the MRI report is not there.

    How is this possible? What happened to "no more waiting for a phone call or letter — view your results…"??

    I have just one thought. Maybe the fact that my original doctor left Mt Sinai and that I signed up for the MRI with a new doctor at Mount Sinai confused the system. Maybe I was signed up under a different identity!?

    I poke around on MyChart a bit more. In reality, I visited the Mt. Sinai radiation center 30 times over about a six week period, and had separate consults with the doctor in charge of my radiation at least four times. NONE of these visits are listed. In fact, the last visit recorded was from 2016!

    MyChart is still a wonderful program, probably ready, willing and able to show me all my stuff, but probably human error resulted in me being entered as a new person. All I have to do is create a new account, and I'll find all my records.

    Signing up for the patient portal account

    I'll dive right in. Given how important this is, the portal is probably written to make this effective and efficient. Here goes! I click on set up new account and get to here: Mychart 2

    What's this activation code business? I look around and find this: My 3

    Odd. "Sign up online?" I thought that's what I was already doing! At least there's something relevant for me to click. I click it and get this: My 4

    That's more stuff to enter than I've seen in a while. There's a lot that could be said about this form and how it works, but I'll just point out one unique aspect of it:

    My 5
    When was the last time you had to enter your county? Even better, even if you've already entered the state, you get a list of all the counties in the whole USA!

    Once you get to this point in the form, you realize that the creative people who built this software have actually created an obstacle course, a long and challenging one, hoping that most people will drop out from exhaustion long before completing it. And we haven't gotten to the really good stuff yet.

    Establishing identity for the patient portal

    Apparently it's really, really, REALLY important to make absolutely SURE that only the person themselves signs up for chart access. After filling out the form you see above, I got my identity hammered at: My 6

    Next, where have I worked: My 7

    A home equity loan: My 8

    My bank: My 9

    My former home: My a
    Finally, after accurately answering all of these questions, and risking totally awful 100% identity theft if their system is compromised, I get this: My b


    At this point, a sensible person would have given up and tried to make an appointment with a doctor, so the doctor can access the results document and essentially read it to me. But convinced as I am of my ability to read documents (egotist that I am), I decided to plunge ahead and try another path to getting the document. The next post continues the story.

  • The Hierarchy of Software Status

    You might think that the hierarchy of status in software closely tracks the hierarchy of software skills, which I explained here. Hah! It is true that there is a small but important subset of people whose internal sense of status tracks the skills hierarchy reasonably well. But not for most programmers. The reason is simple: non-programmers' assessment of software status is unrelated to skill! It's a whole different hierarchy that has nothing to do with the ability to conceive and write good code that works!

    Here is an earlier attempt to explain this issue.

    This is NOT just an "academic" subject, BTW. It is absolutely crucial to getting good software done. See this for another angle at addressing the same subject.

    Status

    Status is part of human existence. There has always been a status hierarchy. Not long ago, we had lords and peasants in which the status differences were overt:

    MedievalLordAndPeasant

    Status itself has barely changed since those days, though the clothing and other means of expressing it have changed a great deal.

    Status is expressed in clothing, language, money, power relations, human interactions and in much of what we do. It would be shocking if generic human status relations did not apply to software. What's interesting is the translations that are made, which track medieval lord-and-peasant relationships quite well.

    Status in Software

    The easiest way to explain how status works in software is to look at how "close" you are (along multiple dimensions) to real people using code today. The closer you are to actual code and/or people using code, the lower your status; the farther away, the higher your status.

    • One dimension is time. The highest status of all is enjoyed by people who think great thoughts about how some software might be built by some undefined group at some undefined future.
    • One dimension is management layer. In any group, the "lowest" person in terms of management levels has the lowest status; the manager of that worker's manager has status two steps higher.
    • One dimension is closeness to a real user. Anyone working in any way on code that isn't yet used by anyone has greater status than anyone working on code that is in actual use.
    • One dimension is tied to the flow of interactions with users. If you work in any way on the flow of code or changes to code that is on its way to users (i.e., to production), you have higher status than anyone working in any way on the flow of information and communications that flow from users to the supplier or deliverer of the software. In other words, building software is higher status than customer service.
    • One dimension is trendiness. It's really important to identify yourself with tech trends that are exploding in fashionable talk.
      • If you're too early, you lose status. When someone hears you babble on about something, if they check it out on conferences, publications and the web and it's obscure, you seriously lose status.
      • If you're too late, you lose status — you're pegged as someone who's part of the crowd. Better late than never, of course. You also lose status if you keep bringing up something that's starting to fade away.
      • Maximum status is early-peak attention.
      • You can't actually know anything or do anything in the fashionable area and have any chance of maintaining status. You've got to direct, hire, establish, prioritize or strategize. Those all keep your hands safely clean and enhance status.
    • A dimension that becomes crucially important during the hiring process, and which often persists long after the hiring date, is the visibility and perceived success of the person's prior company. The more visible and successful, the higher the status.
      • No one knows whether the person had positive impact on the company's success or the opposite. The person from Google has a halo, regardless of what they did.
      • This is a subject I treat in considerable detail in the Software People book, along with other important hiring considerations.

    Naturally, there are nuances.

    • For the people who deal with the flow of data back from users:
      • There are people who respond to customer complaints and problems. Super-low status.
        • Some of those customer issues are real bugs in the software! The very lowest status goes to the people whose job is to patiently educate the stupid users whose inability to do the simplest things with such a wonderful application is obvious to everyone except themselves. The people who get called in to see if there really might be a bug have higher status.
        • The people who fix such bugs have even higher status, but still remarkably low.
      • There are people who analyze conversion rates and details of customer use of the application.
        • All these have higher status than anyone involved with customer complaints or bugs.
        • If they deal with things like conversion rates, that's related to high-status strategy and marketing and therefore higher status than anyone whose job is to tweak the application to make it better for users.

    There's another important aspect of status. I suppose I could jam this into the "dimensions" list, but I think it's simpler to say this other important thing about status this way: the more code you write, the lower your status. Writing no code is, with some important exceptions and subject to the general dimensions above, higher status than writing any amount of code.

    Yes, writing code requires highly specialized knowledge. But so do lots of other low-status things, like repairing garage door openers or installing lawn sprinkler systems. The people who manage the repair people are, of course, higher status than the people who get dirty or use their muscles doing physical work. Similarly, the people who metaphorically dirty their minds while doing demanding, hard mental work are lower status than those who direct them in any way.

    Another way of thinking about this is asking, what's your main focus? To the extent that it's a body of code, your status is lower. To the extent that it's people, your status is higher, with some exceptions (see customer service above).

    Like anything else, there are exceptions. I have met highly productive coders who are older and well-regarded. But they're rare, and usually they're tucked away in a corner if they're tolerated at all.

    Conclusion

    The way status is perceived in software is perverse, and accounts for a great deal of the dysfunction we see in the software world. It is sad. Even sadder is that, with all the fashion trends that sweep through the tech world, none of them addresses this issue. People with high software skills often perceive this and shake their heads, knowing there is nothing they can do about it.

    There are, of course, tiny groups of people where status is conferred as it should be: to the people who can conceive and then build their way to victory, rapidly and effectively. It's from those warriors of the abstract world that I learned the principles I discuss in my various books related to wartime software. There is more information and context on this and related subjects in my book on Software People.

  • Medicine as a Business: Medical Testing 2: Doing the Test

    This is the pinnacle post of the series on medical testing, which starts here.

    It's the pinnacle because I've finally climbed the mountain of scheduling, and I'm going to the radiation center for my test. Hooray! I'm at the top of the mountain! It will be easy after this, just getting the results and the bill.

    I've been to the imaging center before. I'm well aware of their attempts to hide behind misleading signage:

    11

    Its nearly-secret location is several floors deep in the basement — only those who really want to get there, and have the persistence to get there, make it.

    I arrive more than the half hour early they requested, to allow plenty of time for the front-office staff to do their work. What's there to do? I've been there before; how can there possibly be anything about me they don't already know?

    First of all, it's an iron-clad tradition to give entering patients a clip board full of paper that needs to be filled out, with lots of boxes to check. Have I been through this before? Yes. Every single time I visit. There's a simple explanation for this. Think back to cop shows you've seen where there's a witness or suspect the cops think might be lying or leaving something out. Or where there are two people who they think have concocted a story, and they interview them separately, trying to trip them up. The cop usually starts by saying, "I know you've been through this with my doughnut-eating colleague X, but I need you to take me through it again slowly, step by step." If it works for the cops, it should work even better for the medical staff, right? They carefully check every answer I give and cross-check it with all the previous answers I've given and analyze the differences. This way they can tell when a patient is lying, or when their memory is crashing because of whatever is wrong with them. Or simply to gauge the patient's intelligence and memory, to rank all the patients and do something wonderful with the results that only members of the doctors' cabal know about. All I know is that I have to waste time on each visit, only to have the staff glance at the first page, and file it.

    I end up waiting for about an hour. Finally someone calls my name, and I follow her out of the waiting room through the trackless maze of hallways. After a bit of walking, I'm introduced to a person who, she tells me, will take my blood for testing.

    The pre-MRI blood test

    I've been through this before, and nothing bad happened. It just caused, as usual, another delay in starting the MRI, because they wait for the results of the blood test. Which results (of course) no one gives to me, the person whose blood was tested.

    But there are a couple things to note about this practice.

    • The main purpose of the test is to see if I'm likely to have an adverse reaction to the extremely safe contrast material that will be injected for some of the images.
      • The main concern is with the few subjects who have abnormal kidney function.
      • I had an MRI with contrast just 3 months prior. How likely is it that my renal function went south during the interval?
      • Doing the test for everyone is just not needed. See this, for example.
      • Doing the test for me was a waste of time and money.
      • In any case, it's clear that there are no standards that are followed here!
    • Given that you're going to do a test, of course my blood needs to be drawn.
      • My blood was drawn by … an RN.
        • Registered Nurses are amazing people with years of training, often including an undergraduate degree and more.
      • My blood could just as well have been drawn by a phlebotomist.
        • You can train to become a phlebotomist by having a high school diploma, taking a month-long full time course, and taking a certification test. Boom, you're done.
      • I don't think I need to comment about the difference in cost.

    After more waiting in a special waiting room, I'm finally called into the MRI room.

    The MRI itself

    The MRI nurse/technician was courteous and professional, like everyone else I encountered during the testing process. But the process was inexcusably bad, wasting time and money and reducing quality.

    First, the nurse asked me where the tumor was that was to be imaged. This could have been good. It's classic checklist, the sort of thing you should do to avoid error. See this for details. Why wasn't it good here? She wasn't double-checking to make sure the computer-based instructions were correct — she was asking to find out!

    Imaging studies have been done on me of this area. Multiple times. Including at Mount Sinai. Mount Sinai has incredibly detailed information about exactly where the tumor is, more accurate by far than anything I know. Nonetheless, I was the nurse's primary source of information about exactly where the pictures should be taken! She placed pieces of tape on my body indicating the limits, and those pieces of tape were her only guide for where to take pictures.

    Next, I lay down on the MRI bed. The nurse had me slide my shoulder into a little compartment, something which had never happened on any prior MRI. Clearly the dial on my paranoia control was set way too low, because I just vaguely thought, hmm, this is different, well she must know what she's doing. After she adjusted me a couple of times, I got to enjoy the usual loud noises in a confined space during which I was to remain rock-solid motionless; this pleasure went on for 20 minutes or so. Then I got rolled out.

    The nurse tells me that the compartment my shoulder is in is a "camera." Unfortunately, the camera wasn't capturing all the area of the tumor, so she would have to use a different camera and do everything again. She gets out a thick, flexible plastic sheet and places it on my shoulder. I recognize it immediately, because it's exactly the same device that has been used on each MRI I've had, regardless of the imaging center that has done the work.

    Amazing. Frightening. When I go to a hair-cutting place, they record my visit and the choices and selections I made for getting a cut. When I go again, even if it's a different person, they'll ask something like "same as last time?" And then they'll normally do the checklist thing of confirming their understanding of what last time was. The point is: they know what I got last time. They recorded it. A hair-cut place. The only thing I can imagine is that the advanced technologies that hair salons use for keeping information about their customers haven't yet made it to the world of medicine. The plain fact was that Mount Sinai had either not recorded (probable) or not used (possible) key information about the image that was taken and how to take it. I would use the word "inexcusable" for this, but without a few choice 4-letter words, such a word would be far too mild to describe what went on here.

    About 3 hours after I arrived, the MRI had been taken and I was free to go.

    The MRI technology and equipment

    This isn't part of my test specifically, but it's on my mind every time I encounter medical equipment. I'm a computer guy since forever (see this for details of my background), and I know too much about the technologies and the companies that are used in this equipment, and the hardware and software processes that create it.

    The highly regulated companies using highly regulated processes to build this hardware and software are unique in technology. The regulation is supposed to protect the public and assure high quality. In fact what it does is assure that only a couple companies can supply the equipment in a government-protected monopoly, at absurdly high cost.

    The net result of this is that specialized equipment and software are built to meet the regulations, even when COTS (commercial off-the-shelf) equipment is widely available to do the job with high quality and great performance at a fraction of the price. A prime example of this is the PACS (Picture Archiving and Communication System) that all medical imaging systems include. This is basically a standard file storage system with a database that logs everything put in and enables access to images.
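    To make the point concrete, here's a toy sketch in Python (my own illustration, not any vendor's code) of the core function a PACS performs: plain file storage plus an index database that logs every image stored and lets you find it again. A real PACS adds DICOM handling, access control, and much more; the point is that the storage core is commodity technology.

    ```python
    import hashlib
    import os
    import sqlite3
    import tempfile

    class TinyArchive:
        """Toy sketch of the PACS core: files on disk, plus a database
        that logs everything put in and enables access to images."""

        def __init__(self, root):
            self.root = root
            os.makedirs(root, exist_ok=True)
            self.db = sqlite3.connect(os.path.join(root, "index.db"))
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS images "
                "(id TEXT PRIMARY KEY, patient TEXT, study TEXT, path TEXT)")

        def store(self, patient, study, data):
            # Content-addressed: the image's hash is its ID.
            image_id = hashlib.sha256(data).hexdigest()
            path = os.path.join(self.root, image_id)
            with open(path, "wb") as f:
                f.write(data)
            self.db.execute("INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?)",
                            (image_id, patient, study, path))
            self.db.commit()
            return image_id

        def fetch(self, image_id):
            path = self.db.execute("SELECT path FROM images WHERE id = ?",
                                   (image_id,)).fetchone()[0]
            with open(path, "rb") as f:
                return f.read()

    archive = TinyArchive(tempfile.mkdtemp())
    image_id = archive.store("patient-1", "shoulder-mri", b"fake image bytes")
    ```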

    At the heart of the MRI is a body of software that could be built, maintained and enhanced at a tiny fraction of today's cost — a 10X improvement is the minimum one could expect under a rational set of rules. Here is a detailed post with examples of the insanity and, just as important, a specific proposal for how to fix it.

    Conclusion

    I got the MRI. Nothing awful happened to me. I'm grateful that medical science/engineering has gotten to the point that something as truly amazing as an MRI is even possible. I can certainly imagine things being much worse than they were.

    That being said, the opportunities for improvement on multiple fronts are HUGE. The patient's time and inconvenience could be greatly improved. The operational cost of performing the MRI could be considerably reduced, and the quality and consistency improved. Finally, the capital cost and rate of innovation of imaging machines in general could be HUGELY enhanced by drastic changes to the regulations controlling the design and manufacturing of the devices.

    Even more good news: my saga was not yet over. I don't have the results yet! Wait until you read about what I went through to get them…

  • The Hierarchy of Software Skills

    Outsiders generally have no knowledge of the hierarchy that exists among programmers in terms of skills. While everyone can understand that a programmer has had a job in a particular industry or company, and everyone can understand what place they have in the management hierarchy, the place that a programmer has in the skills hierarchy is invisible to most outsiders; moreover, it's not something they're interested in.

    I've talked before about the difficulties of managing programmers, particularly when you're not a programmer yourself. I've talked about how strange this is, particularly in comparison to other fields like sports, music and journalism. My book on Software People addresses this subject. It turns out that understanding the skills hierarchy is important not just for understanding the differences among programmers, it has HUGE impact on innovation and the success of entrepreneurial companies.

    The usual view

    Most outsiders know there are software people, programmers. They know there are professors and that you can get degrees in Computer Science. They know that experience with software in banking is different than software in healthcare, but they're not sure how. They know there are levels of management. Many people even know some of the common buzzwords referring to languages and other things. But then the knowledge, such as it is, dribbles to nothing, other than things that apply to everyone: is s/he a "good" programmer? A team player? Etc.

    When new software fashions emerge, many people know the basic buzzwords. Everyone at this point knows about AI, and even Machine Learning. Terms like Big Data are fading in frequency now, while Data Science is on the rise. Self-respecting managers want to be on top of things, and want to assure their organizations are investing in certified AI experts, and leading-edge Data Scientists — otherwise, we'll lose out!

    Sadly, most of these decision-makers don't bother to learn the first thing about what they're spending money on. Or if they do, it's just details that sound impressive, but don't matter. This explains in part the billions of dollars wasted on software fashion, while true innovators working far from the fashion runways make real-life, practical advances. I've illustrated this phenomenon with AI in healthcare.

    The hierarchy

    The skills hierarchy in software is generally not fashion-driven, though programmers can get caught up in fancy trends like anyone else. So I don't say that programmers are immune to fashion, but that the skills hierarchy itself changes very slowly.

    A book could be written on this subject — it deserves a book, about the hierarchy itself, its evolution, and how it impacts everyone involved. Actually, a chapter in a book has already been written about it, in this book. This is a long way of saying … this blog post is just a brief introduction to an important subject that is broad and deep and consequential, but sadly largely ignored.

    The first thing there is to understand is the incredible depth and breadth of highly specialized knowledge that is required to get anything meaningful done in the world of software. The start of the chapter may give you a flavor that we're talking about WAY more here than "learning a language."

    A tremendous amount of specialized knowledge is required to work quickly and well in practical software environments. The knowledge begins with fluency in one of the many extensive, exacting and demanding computer languages. The language itself, however demanding it may be, is just the beginning. The associated “library” (depending on the times, it could be called the run-time library, the foundation class library, or various other things) has to be mastered as well, because it provides the functions that actually get a good deal of your work done. Just by comparing the bulk of documentation, you can easily deduce that the library is as large as the language itself. You can think of the library as being the set of idiomatic expressions, like “raining cats and dogs” that extend any natural language. Depending on the situation, you may need to know two or more complete languages and associated libraries in order to do anything. For example, if you are writing a database-oriented application, you may need to know the application language (e.g., Java), the language’s library (J2EE), the database’s query language (SQL), the database’s stored procedure language (e.g., PL/SQL), and the special library that connects the application language to the database (e.g., JDBC).
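    The layering described above can be sketched in miniature. This example uses Python and its built-in sqlite3 module as a stand-in for the Java/J2EE/JDBC stack named in the text; even in a dozen lines, three distinct "languages" are in play: the host language, its library, and SQL.

    ```python
    # Sketch of the language/library/SQL layering described above,
    # using Python's standard library in place of the Java/JDBC stack.
    import sqlite3

    conn = sqlite3.connect(":memory:")   # the connector layer (JDBC's role)

    # The query-language layer: SQL, a whole second language.
    conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO claims (amount) VALUES (?)",
                     [(125.0,), (75.5,)])

    # The host language plus its library: cursors, tuples, fetchone().
    total = conn.execute("SELECT SUM(amount) FROM claims").fetchone()[0]
    ```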

    Languages and libraries, however elaborate and challenging to master, are just the raw materials for a real program. Knowledge of these is like a carpenter knowing how to cut and nail lumber – he can work with the raw materials and perform the low-level operations on it. Using raw materials to build a wall of a house requires knowledge of the “design pattern” of walls, and the typical way of assembling them, with the special way of framing windows and doors. While the saws, hammers and nails enable you to build a wide variety of wall-like structures, there is just one way to build them that accommodates standard size windows, doors, sheet rock, etc. There’s a special design pattern for when two walls meet to form a corner, and another one when one wall meets another in a “T.” Once you know all that, you know how to build frame walls, and you get to start over learning how to frame a roof, which itself has many details and variations. Design patterns like this exist in software, unfortunately not as standardized as in home construction, but still important and real. You really don’t want someone building a transaction system who is thinking about the issue for the first time. Just as a person building a wall for the first time might forget about windows or not know that walls are typically built on the ground and then raised up, a person building a transaction system for the first time might not think about audit trails or know that paying attention to row-level locking is a typical concern.
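    One of the transaction-system patterns just mentioned, the audit trail, can be sketched in a few lines. This is my own minimal illustration using Python's sqlite3, not code from any real system: the audit record is written inside the same transaction as the change itself, so either both commit or neither does, and the two can never drift apart.

    ```python
    import sqlite3

    # Minimal sketch of the audit-trail pattern: the audit row commits
    # in the same transaction as the balance change.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.execute("CREATE TABLE audit (account INTEGER, delta REAL, note TEXT)")
    conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
    conn.commit()

    def post(conn, account, delta, note):
        # "with conn" opens one transaction; on any error it rolls
        # back, so the change and its audit record live or die together.
        with conn:
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (delta, account))
            conn.execute("INSERT INTO audit VALUES (?, ?, ?)",
                         (account, delta, note))

    post(conn, 1, -25.0, "copay")
    balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
    trail = conn.execute("SELECT COUNT(*) FROM audit").fetchone()[0]
    ```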

    Unfortunately, once you know all that, you are still just at the beginning. If you know them well, you are as well-qualified as a college graduate with a major in English would be to lead a corporation’s response to a lawsuit – while it is true that all the legal documents are written in English and all legal proceedings are conducted in English, there are structures and procedures and patterns of speaking, writing and interaction intrinsic to legal English, and unless you conform to them, you and your case will be summarily dismissed. While the law-challenged English major may think in terms of making a date with the opponent and a judge and talking things through, there are formal complaints, responses, interrogatories, motions for discovery, rules of evidence and procedure and on and on and on.

    The chapter goes on to cover things like all the tools you have to know to write, build, test and run your program, not to mention find and fix the bugs.

    Once you've done all that, you've mastered a single set of the myriad tools that are out there to build and test software, in a single application environment. When you move on to the second set of such tools, you begin to appreciate just how different those environments are — at least as different as classical Chinese is from modern American English (this blog post), or from medieval Celtic runes.

    Runes

    Having learned a couple of these amazingly different language systems and the wildly different worlds in which they "live," you're still pretty low on the skills hierarchy.

    One of the dimensions "up" from "simple" ability to work successfully within one of these language systems is the ability to find patterns in extensive bodies of code and invent ways to leverage the patterns to transform the code in valuable ways. One frequent transformation takes knowledge out of code and puts it into editable data of some kind, with the net result that there's less code and that changes can be made more quickly and safely by changing the data instead of the code.
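    The code-to-data transformation described here can be shown with a deliberately tiny (and invented) example: the same business knowledge first hard-coded as logic, then moved into a table that could live in an editable file, so changes mean editing data, not redeploying code.

    ```python
    # Before: the knowledge is trapped in code. Any change to the rules
    # means a code change, a review, a build, a deploy.
    def copay_hardcoded(visit_type):
        if visit_type == "primary":
            return 20
        elif visit_type == "specialist":
            return 40
        elif visit_type == "er":
            return 150
        raise ValueError(visit_type)

    # After: the same knowledge, moved into data. This table could be
    # loaded from a file an analyst edits, with no code change at all.
    COPAY_TABLE = {"primary": 20, "specialist": 40, "er": 150}

    def copay_datadriven(visit_type):
        return COPAY_TABLE[visit_type]
    ```

    The example rules are invented for illustration; the pattern is the point.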

    A closely related dimension to pattern-finding is working with the tools. If there's a problem with one of the tools you use, can you fix it? Can you build a better tool?

    This may not sound like a big deal, but there is a HUGE difference between a programmer who is able to write an application that uses a DBMS (database) and a programmer who is able to create a DBMS. Among programmers who work with the internals of a DBMS in any way there is an extensive hierarchy. At the bottom of that hierarchy are people who don't change source code but are nonetheless expert in the use of it, running through various steps up to those who can make strategic changes or even create a new DBMS. This is as dramatic a hierarchy as the one that starts with people who can drive a car, moves on to car mechanics who can tune up an engine, and goes up to engineers/scientists who can create a whole new kind of engine.

    Within tools, there's even a broad range. For example, someone who can work with a text editor tool is normally worlds away from someone who can build a compiler or a code generator. As different as calculus is from algebra.

    This brings us to a whole new dimension. There are people who are so skilled that, not only can they master the available tools to build effective, working software quickly, they can conceive, create and build tool-building engines to automate the process of using the tools to build software. I won't try to describe this here, but it's a kind of level-shift, much like algebra is a level above arithmetic, fully encompassing it while going way beyond it — while still "boiling down" to numbers in the end.

    With tools, we begin to dip our toes into the deep waters of "systems software." Things like the operating system or networking software or storage software or machine/device control software, each of which constitutes a whole world in itself. The people who are proficient in any of these things are truly rare, and near the top of the skills hierarchy.

    Just to take one small example, in the world of the internet there are levels of software and skills that are a little exposed to people who are administrators, but as with most of these things, the bulk of the iceberg is hidden below the surface. Something everyone uses, for example, is DNS, the software that translates the URL at the top of your browser into the low-level IP address that identifies the server which will deliver content to you. DNS operates in a way that is mysterious to the vast, vast majority of software people, even those who are deeply skilled in other areas. And you can be skilled in DNS and still know little about the devices and software that get the actual moving job done, the TCP/IP protocol engines (and extensive supporting software) that move data through the network.
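    To see how shallow most programmers' contact with DNS is, here is essentially the entire surface many of us ever touch: one standard-library call, hiding the whole resolver hierarchy beneath it. The example resolves "localhost" so it needs no network access.

    ```python
    import socket

    def resolve(hostname):
        # One call -- the resolvers, caching, root servers, and the
        # DNS protocol itself are all hidden beneath it.
        return socket.gethostbyname(hostname)

    addr = resolve("localhost")
    ```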

    Hey, you might say, isn't that esoteric stuff only relevant to the nerds who keep things running, kind of like the maintenance people in the basement of our office building? Yup, that's typical thinking. There are maintenance people, sure. And there are people who understand things so well that they can create a new heating and cooling system for your building that is silent, effective, and uses a fraction of the power of the old one.

    Conclusion

    There's a great deal more to be said about the multi-dimensional, multi-domain software skills hierarchy than I've said. I've only scratched the surface. But I hope you've gotten as a take-away that the depth of skills in software is much greater than most outsiders, and even most normally-skilled programmers, imagine. Understanding this skills hierarchy is essential to attracting and retaining the very best software people, and unleashing their considerable talents to transform software-using businesses in dramatic ways.

     

  • Medicine as a Business: Medical Testing 1: Scheduling

    Before you get a medical test, it has to be scheduled, right? Just like getting a reservation at a restaurant, both you and the place you're going have to agree on a time. Here's the real-life story of my recent MRI, starting with scheduling it. The point of this post isn't to whine about what happened to me, but to illustrate deep, widespread problems in the medical business by means of a concrete example.

    Background

    First some background. My doctor was at Mount Sinai. While he was treating me, I had several MRI's at Mount Sinai. Then he moved to another major hospital system in the NYC area, Northwell Health. Both are excellent places with modern, up-to-date systems. I told the story here of the amazing breakthrough in EMR electronic interchange that allowed me to get the MRI's that had been taken at Mount Sinai and get them to the doctors at Northwell so they could do their jobs. It was true breakthrough technology, since the interchange was almost completely electronic, with the truly minor annoyance of a dozen phone calls, a couple paper forms and faxes, and a couple packages carried by hand. I know it's hard to believe that such giant systems could be so modern and electronic, which is why I give all the facts and associated proof here.

    I felt my tumor growing again. I quickly got an MRI that showed that, unfortunately, I was right. My doctor, the MRI and the follow-on to it were top-notch. The subsequent billing events are an object lesson in how the business of medicine can be improved, see the posts here for details.

    My doctor, who had treated my tumor and greatly reduced it with chemotherapy, now felt it was small enough that radiation was the best approach for treatment. He recommended I go to a radiation specialist, who happened to work at Mount Sinai, my doctor's former employer, and where I had originally been treated. Naturally, I took his recommendation. I got radiated. 30 times.

    It is standard practice to check the results of radiation 3 months after the treatment. In my case, that meant getting an MRI.

    Scheduling the MRI

    Here's what I went through to get the appointment. Nothing horrible here; I'm pleased that something so effective as MRI technology is available and that I was able to get it. But the cost, time and convenience all reflect a broken business model. Fixing the model isn't hard in principle, but would require serious change. This is an example of how it works today.

    On my final visit with my radiation doctor post-radiation, he told me about the follow-up MRI I should have in 3 months. He told me his office would contact me and get it scheduled "shortly."

    Nothing happened. I waited for the 3 months to pass. No one contacted me. Fortunately, I kept track of the time, so I got on it myself.

    I started with my radiation doctor's office at Mount Sinai. Sure, they said, you can use the Northwell imaging center to get your MRI. It's convenient for you, so why not? We'll set it up for you.

    I wait a few days. Nothing. I call again. Oh, sorry, we'll set it up. I wait a few days. Nothing. I tried again. Nothing. Finally I called the Northwell imaging center and explained the situation. Sure, no problem, let's make the appointment. What about the pre-auth? Oh, don't worry, we'll take care of it. You've got all my information in your system from last time, right? Yes. Don't worry.

    Two days before the scheduled MRI, I get a call. Hey, about this MRI you're scheduled for, there is no pre-auth for it, so we're going to have to cancel the appointment, there's not enough time to get one. Great. I don't bother complaining, what good would it do?

    I realized that I had been totally wrong in my strategy about this. This medical stuff must be making me lose my mind. Or I'm getting colossally stupid. What was I thinking?? I was choosing where to get my MRI done based on what was best for me. WHAT an IDIOT I am!! Now I know that the smart thing to do is always to do what's best and easiest for the medical system. Duhhhhhh!!

    Having returned to sanity, I called the Mount Sinai radiation doctor's office. They said they'd set it up. The whole thing, pre-auth and appointment. A couple days pass. I call again. Same thing. A couple more days pass. I call again. Oh, sorry, excuse excuse excuse. I said, no problem can you just please send me the doctor's order? After more shenanigans, I get the order. Giving in completely to the world as it is, I call the Mount Sinai imaging center, where I've been a few times in the past, and make the appointment, making sure that they've got the doctor's order.

    Conclusion

    I got the appointment. That's the good news. I know that others have had it far worse. But I also know that this peculiar state of affairs, where there's no equivalent of, say, the OpenTable scheduling service for making MRI appointments, is a HUGE time-waster for everyone involved. Do you want a job doing what the people I talked with on the phone sort of try to do? Would you want to cope with constant hassle and frustration?

    My insurance company knows the treatments I've had. All the information is in the EMR's of the two major systems I've used. It wouldn't be hard to know that getting an MRI at my stage of treatment is something to authorize. The process could be completely automatic. And I could maybe even have made my MRI decision based on my needs rather than the peculiarities of a deeply flawed, broken system.
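    As a sketch of how automatic that authorization could be, here is a hypothetical rule, with field names, dates, and the 75-to-120-day window all invented for illustration: if the patient's history shows radiation ending roughly three months ago, the follow-up MRI is pre-authorized with no phone calls at all.

    ```python
    from datetime import date, timedelta

    # Hypothetical auto-pre-auth rule (invented for illustration):
    # a follow-up MRI is authorized if radiation ended roughly
    # three months (75 to 120 days) before the requested date.
    def auto_preauth(history, requested, today):
        if requested != "mri":
            return False
        for event in history:
            if event["treatment"] == "radiation":
                elapsed = today - event["ended"]
                if timedelta(days=75) <= elapsed <= timedelta(days=120):
                    return True
        return False

    history = [{"treatment": "radiation", "ended": date(2019, 1, 15)}]
    approved = auto_preauth(history, "mri", date(2019, 4, 15))
    ```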

    Next step: getting the MRI.

  • Getting Results from ML and AI 4: Healthcare Examples

    While the success patterns laid out in the prior posts in this series may seem clear in the abstract, applying them in practice can be hard, because nearly everyone who thinks or talks about AI (these sets overlap very little, sadly) takes a different approach.

    https://blackliszt.com/2018/03/getting-results-from-ml-and-ai-1.html

    https://blackliszt.com/2018/04/getting-results-from-ml-and-ai-2.html

    https://blackliszt.com/2018/04/getting-results-from-ml-and-ai-3-closed-loop.html

    So here are a couple examples in healthcare to illustrate the principles.

    The spectrum of problems

    One useful way of understanding the winning patterns in AI is to understand the range of problems to which it may be applied. It's not difficult to arrange the problems as a spectrum. While there are many ways to characterize the spectrum — here's a prior attempt of mine for healthcare — perhaps it's easiest to understand it in terms of the typical salary of the person whose work is being replaced or augmented by the AI technology.

    At one end of the spectrum are low-paid people performing relatively mundane, repetitive tasks. These people have relatively little education and minimal certifications compared to those higher on the spectrum. Think back-office clerical staff.

    At the other end of the spectrum are highly paid, educated and certified people performing what are understood to be highly skilled and consequential tasks. Think doctors.

    The very name "artificial intelligence" tells you at which end of the spectrum AI is normally applied. The popular image, supported by the marketing of the relevant vendors, is that AI is amazingly smart, smarter than the smartest person in the room, just like the way IBM's Watson beat the human champions playing "Jeopardy," and its predecessor Deep Blue beat the then-reigning world champion playing chess.

    To put it plainly, while these achievements of Deep Blue and Watson were amazing, they were victories playing games. They were not victories "playing" in the real world. Games are 100% artificial. The data is 100% clear and unambiguous. There are no giant seas of uncertainty, ignorance or unknowability — unlike the real world, which is chock full of them. Nonetheless, IBM and whole piles of people who self-identify as being "smart," and are widely perceived as being smart, jumped on the "AI does what smart people do" bandwagon.

    This was and is incredibly stupid and 100% bone-headed wrong. Not only is it bone-headed in terms of intelligent application of AI, it violates simple common sense. If you knew a talented high school kid who played a mean game of chess, would you drop them into a hospital and give them a white coat? Even after the kid claimed to have read and understood all the medical literature?

    The smart thing to do is to apply AI to tasks that are relatively simple for humans, at the "low" end of the spectrum, and see if you can get a win. If you can make it work, by all means graduate to the next more complicated thing. It turns out that replacing/augmenting human tasks that are mundane, "simple" and repetitive is amazingly challenging! Yes, even for super-advanced AI!

    IBM Watson in healthcare

    I know I've made some strong statements here. It's little old me vs. a multi-billion dollar effort by that world-wide leader in AI technology, IBM. Who's going to win that one? Turns out, it's easy. See this, for example.

    IBM claims to get many billions of dollars in revenue from Watson. But everything about getting it to do what doctors can do has proven to be vastly more challenging than anyone thought, and its advice rarely makes any difference, even when it's not wrong. And this, after years of work by top doctors at top institutions doing their best to help IBM "train" it!

    Here is a summary of the situation:

    Let's note: the Watson effort is built on the most famous "smart computer" technology ever, funded to the tune of billions of dollars, with technology acquisitions and expert help from all corners. The "disappointing" outcomes are not the result of having picked the wrong algorithm or something easily fixed. The failures are a direct result of not following the success patterns described in the earlier posts of this series, combined with applying AI to the wrong end of the job-complexity spectrum described earlier in this post.

    Olive in healthcare

    If IBM can't manage to pull off a win in healthcare, after years of applying the most advanced AI and spending billions of dollars with the best help that money can buy, I guess it's impossible, right?

    Wrong. IBM made a fatal strategic mistake. They used AI to attack the hardest problem of all, at the wrong end of the complexity spectrum. Has anyone done this the right way? Applied modern AI and related automation technology to the right end of the complexity spectrum? Yes! Olive has!

    Olive is making a positive difference today (please note the use of the present tense here) in many hospital systems by reducing costs, reducing error rates and getting patient information where it needs to go more quickly and efficiently, saving medical workers time and aggravation along the way. The money and time it saves in the back office may not seem glamorous or "leading edge," but every minute and dollar it saves is time and money that can go to making patients healthier, instead of disappearing down the "overhead" sink-hole.

    Getting a pre-auth for a key procedure so it can be performed. Submitting all the right information so a claim can be paid. Getting information to pharmacies so patients can get the life-saving drugs they need. Getting all the information from incompatible, hard-to-navigate EMR's so doctors have all the information they need to give patients the most appropriate care. These absolutely essential tasks are largely performed in windowless rooms far removed from patient settings by people who work hard at largely thankless jobs that aren't well-paid — but are absolutely essential to providing care to patients. And they're harder than they look! Anyone who's spent any time with a modern EMR can't help but think of the endless meetings attended by skilled professionals at the software vendor trying to find yet new ways to confuse and confound the users. And anyone who has dealt with getting insurance companies the information they demand can't help but think of a cranky three-year-old who lost emotional maturity when he grew up. Bottom line: this stuff is hard!

    Olive gets it done, using an exotic collection of works-today technology, silently learning from the people who do the work today. And gets it done without having to upgrade or replace existing computer systems. Amazing.

    The founders at Olive are doing AI the right way, attacking the right end of the complexity spectrum. They follow most of the rest of the success patterns laid out in the prior posts of this series, above all attention to data and detail, working from the bottom up in terms of algorithmic complexity, and using closed loop. It's a hard problem, and it was hard work to get it done. But they did it, without the massive armada IBM fruitlessly assembled.

    Disclosure: Olive is an investment of Oak HC/FT, the venture firm at which I'm tech partner.

     

  • Medicine as a Business: Billing 4: What’s Wrong

    Making fun of medical billing, as I have done with gusto in the previous series of posts, is way too easy. Everyone involved knows it's a problem. But it's not getting better. Money that should be spent helping people be healthy or get healthy is instead being spent in completely unproductive ways, annoying and harassing everyone along the way.

    It's amazing how many issues are illustrated by just two bills from one healthcare system. Sadly, this is not an isolated example: it illustrates business-as-usual in healthcare billing.

    I make no claims to be comprehensive, but fixing the medical billing issues I've illustrated would be plenty!

    Here are the prior posts:

    https://blackliszt.com/2018/07/medicine-as-a-business-billing-overview.html

    https://blackliszt.com/2018/07/medicine-as-a-business-billing-1.html

    https://blackliszt.com/2018/07/medicine-as-a-business-billing-2.html

    https://blackliszt.com/2018/07/medicine-as-a-business-billing-3-insurance.html

    Here are some of the highlights:

    • The first obvious issue that makes medical billing different from the rest of the world is that there are no price lists. You have no idea what you will have to pay. When you sit down at a restaurant, you get a menu with prices. Not in the medical office.
    • The next glaringly obvious issue: unlike most other services you can think of, the bill was not presented at check-out time! Fixing this would fix a whole host of problems!
    • A single health network has multiple billing systems, each amazingly different from the others, each with its own staff, software, costs, etc. It doesn't have to be this way.
    • The bills can arrive months after the service was rendered. What other service organization that you interact with lets billing slide for months? It sure sends a message that they're not serious about collecting.
    • When the bill arrives, the return address and the address to which you send the payment can bear totally different names and locations from the organization that served you.
    • When you get a bill, you sort of expect to know exactly what the bill is for: what service was rendered, when it was rendered, where and by whom it was rendered. Without those key facts, how can you be sure about the bill? Both bills were a strike-out on this subject. Why is it hard to provide this simple, common-sense information?
    • For many people, receiving bills and paying electronically is convenient. For many organizations, sending bills and receiving payment electronically is more efficient, and encouraged. As I've illustrated in these bills, the health system's electronic payment is like a Programming 101 course project — one that failed.
      • They didn't even try to have e-bills.
      • E-payment was offered on the paper bills, but the process was amazingly bad and error-prone.
      • In the end, e-payment simply did not work. Period. And of course, there was no electronic way to get help or even register a problem!
    • The second the insurance company is involved, things get really baroque in the bills, with confusing additional information that, in the end, makes no difference to the patient. And they can't even get the name of the insurance company right.

    Wow-za! Not that any self-respecting healthcare system manager will spend money on fixing billing instead of promoting innovation, AI and ML anytime soon! Why, if they stooped to merely making things better for patients while reducing costs, they would rapidly lose prestige among their peers in the industry!

     

  • Medicine as a Business: Billing 3: Insurance

    In the prior post in this series, I dove into detail of the bill I got from a doctor visit. The doctor was wonderful. On the other hand, the billing amounts to a deep well of opportunity for innovation, innovation of the kind that doesn't involve blockchain, machine learning, AI or even Big Data! Merely the kind of innovation that reduces costs and makes things better for everyone. That's all.

    In this post, we get to dive into a treacherous bay in the sea of healthcare billing the likes of which can be found nowhere else.

    The doctor visit bill

    Again, here’s the bill I got for a visit with the doctor:

    Maki 1

    The bill I got for the MRI was pretty discreet about the fact that an insurance company was involved. Here's what they said:

    11

    That's all. It flies by so fast that, even if you read it, you probably won't notice that, according to the bill (I'm not sure I believe it), they were paid just weeks after the service was rendered. No talk about what was billed, who was billed and what they paid. I'm just a patient, I have no "need to know." What I do need to know is that I owe them $85, and I'd better pay up.

    But this is billing for a doctor visit. Different department. Different software. Different bills. Different payment mechanisms. This bill makes clear that the insurance company is a major player here. Here's the first part:

    12

    Unlike the MRI bill, this bill tries to tell who was billed how much and for what. Who was billed? "BCBS OUT OF STATE." My insurance company is Anthem. Yes, I know the industry lingo that BCBS means Blue Cross Blue Shield, but the name of the insurer is Anthem. Sorry.

    What was the bill for? This:

    14

    Remember, we're dealing with a HUGE IT department here, stuffed to the gills with experienced professionals. But I guess that looking at the bills and making sure they make sense is low on the priority list. Do you know what a "comple" is? Because I spend WAY too much time on this stuff, I do know what it means. It's truncated from "complexity."

    This is our first glimmer of a fierce, take-no-prisoners war that's actually going on beneath the surface of these innocuous-seeming bills. What presents itself as a bill is in reality a communique from a war zone. The "high complexity" is a translation of the CPT code that Northwell put in the claim they sent to "BCBS OUT OF STATE." It's a rocket launched over the trenches to the Anthem side, trying to get Anthem to pay more for the 20 minutes the doctor spent with me telling me what I could have read from the radiology report, if the medical system had stooped to giving me the results of the reading of my images, which I paid for. But those trenches are already dug deep, and aren't going to change because a mild breeze of common sense wafts by.

    By inserting the code for "high complexity" in the claim, Northwell is trying to get the enemy … oops, sorry, the honorable insurance company … to pay 641.00 for that visit.

    An inquiring mind may wonder, what exactly does Northwell want, given that they're asking for:

    15

    Do they want Euros? Peruvian Pesos? Bitcoin? I suspect they want plain old US Dollars, but unlike any other bill you've ever seen, they can't be bothered to get it right.

    (You may wonder why I trouble my pretty little head about such "trivial" issues. Simple. I wrote software for 30 years, and led the effort for credit card billing software that now processes half a billion accounts world-wide. I know software in general and billing software in particular. In the same way that an editor has trouble taking seriously a writer who doesn't bother to spell correctly, and a conductor has trouble taking seriously a candidate musician who flubs lots of the notes, there is good reason to believe that a software group that lets obvious flaws like these appear on patient bills has far deeper problems, and that the "underground" parts of their software are probably nightmares. Which all the evidence shows that they are.)

    Now let's shift to the right column. Here's what we see:

    13

    More than 2 months after my visit, Northwell claims that "BLUE SHIELD," not BCBS and not Anthem, paid them 232.89 Ether, or whatever currency they ended up agreeing to. So the response to the HIGHCOMPLE rocket was a grenade that, when it exploded, screamed "I'll pay you 36% of what your rocket demanded. BOOOM!!"

    Northwell sadly reports to me how badly they lost the battle (they're used to losing), and cleverly inserts an "OK, we lost. Fine." line item of 358.11.

    What the &*()&*() is that about? How did they ever arrive at that amount??!

    This leads to our next juicy topic…

    Insurance Co-pays

    Medical systems have a myriad of ways of putting it. Some of them just say something like they did for the MRI bill: "This is what you owe. Really. Pay it. It's your responsibility." Others, like this branch of Northwell, handle it totally differently. They make a pathetic, flawed attempt to do the standard accounting/billing thing of "This is what you started owing, this is what you paid, and this is what's left. Please pay it." Except you haven't paid for a thing! The insurance company somehow decided to pay 36% of the bill, and then Northwell somehow decided to subtract an "adjustment," magically leaving the nice, round amount of 50.00 Yen, Bitcoin or whatever to be paid.

    Just to be helpful, they put a line item in there "Patient Payments    0.00." Duhhh. Like, you haven't billed me, man. This is the first bill you've sent me for this, a mere 3 months after my 20 minute visit. Of course I haven't paid. And it's in bold, no less. I guess I'm supposed to feel guilty? Or perhaps just hurry up and pay (via the doesn't-work online payment website) the 50.00?

    This whole thing is a fake, of course. As everyone who's dealt with insurance knows, way back around the time the Pope divided the New World between the Portuguese and the Spanish (which is why they speak sort-of Portuguese in Brazil and sort-of Spanish in the rest of South America), a group of genius-level experts, the kind of people who decide important things so that the world will work as it should, got together and invented the notion of "co-pay."

    "Co-pay" is one of those ideas that only true experts, people who see farther and deeper than us mere mortals can see, could come up with. The core idea is to give patients an incentive to care about the cost of their health care. If they have to pay something every time they "consume" health care, they'll exercise caution and not use too much of it! That's co-pay. Sheer genius! Even better, we'll make the co-pay something that they owe to their doctor. Genius again — it's the doctor who's providing services, so of course it's the doctor who should be paid. Insurance companies are hated enough as it is. By shoving the burden of billing and collecting onto the medical systems, maybe they can see what it feels like to be disliked. And get collectors involved. And see what substantial levels of double-digit payment defaults look like on the financials. It's all a good thing because we're influencing patients to be careful about what medical services they consume, and from whom! I really don't understand how this kind of galactic-level genius can sleep at night, quivering from the excitement and self-regard of being responsible for such a transformative idea.

    Now back to reality. Do co-pays "work?" I mean, do they influence patient behavior in the way intended? No, of course not. But now they're deeply dug into the trenches separating the payer and provider armies, and extricating them will take a real act of courage.

    In this example, suppose Northwell decided to bill 591 instead of 641. Suppose (humor me here) that BLUE SHIELD paid the same lousy 232.89. Suppose Northwell made the same 358.11 ADJUSTMENT. Net result: Bill paid. PAID IN FULL!!
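    To make the arithmetic concrete, here's a minimal sketch in Python. The `patient_balance` helper is mine, not Northwell's; the amounts come from the bill and the hypothetical above, and `Decimal` is used so the pennies come out exact rather than drifting in floating point:

    ```python
    from decimal import Decimal

    def patient_balance(billed: str, insurer_paid: str, adjustment: str) -> Decimal:
        """What's left for the patient after the insurer's payment and the write-off."""
        return Decimal(billed) - Decimal(insurer_paid) - Decimal(adjustment)

    # The actual bill: 641.00 billed, 232.89 paid by "BLUE SHIELD", 358.11 "ADJUSTMENT"
    print(patient_balance("641.00", "232.89", "358.11"))  # 50.00 -- the co-pay

    # The hypothetical: bill 591.00 with the same payment and the same adjustment
    print(patient_balance("591.00", "232.89", "358.11"))  # 0.00 -- paid in full
    ```

    Same payment, same write-off; only the opening number changes, and the suspiciously round 50.00 "balance" vanishes.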

    Now was that really so hard? Of course, there are some awful consequences of this. A truce would have to be called on a major part of the front. There are jobs and important bodies of software at stake here, on both sides of the war. And support people. And collection agencies. What would they do with all their time?

    Probably the worst consequence would be patient behavior — patients would start consuming healthcare services like crazy because there's no 50.00 co-pay! Not. The second people respond with the same amount of serotonin to the phrase "don't worry, this giant needle won't hurt a bit, just a pinch" as they do to the question "what kind of massage oil would you like me to use," we'll know we have a problem. Until then, I think we're OK.

    Conclusion

    This post was supposed to focus on the insurance aspect of medical billing, using an example bill. The bill I used was a typical, benign example; not the kind of extreme example you'd expect when reading something that dives into a problem. I said nothing about pre-auth, denials, deductibles, insurance company coverage notices, or any of the other all-too-common joys of the medical business. That was the point! The transaction described here, with its on-the-surface messes and below-the-surface nightmares, is business-as-usual!! And that's sad, for everyone concerned — which includes pretty much everyone, except those of us who are looking at a small patch of grass from the side of the grass where the roots are.

  • Medicine as a Business: Billing 2

    In the prior post in this series, I presented a couple of bills and dove into detail for one of them. Now it's time to see what pleasures there are in the second bill.

    The doctor visit bill

    Here’s the bill I got for a visit with the doctor who ordered the procedure:

    Maki 1

    This bill is a bit of a relief compared to the one for the MRI. While the return address (some PO Box in New York) and the address to which payment should be sent (NSLIJ at a PO Box in New York) are opaque and confusing, at least the box in the middle of the page names the doctor I saw and gives the date of the visit. I know what I'm being billed for: a visit with this doctor on that date. That's good!

    Let's look a little more closely.

    First, there's something interesting about the date. The visit with the doctor was Dec 11. Now look at the statement date: 3/13/18. Yes, that's right: the statement was dated a full 3 months after the visit! Wow. Northwell has clearly optimized their systems to march everything through so they can bill and receive payment promptly, right? Sadly, no.

    Second, I'd like to point out an important issue: paper vs. electronic. With all the noise, billions of dollars of federal subsidies, and the obvious fact that electronic is better than paper, you would think that a major NYC hospital system would be entirely electronic. You would be wrong. Here is a post about this. But back to this bill:

    I got the paper bill in the mail. They could have gotten my email from me at any time, but didn't.

    There is no opportunity to sign up for paperless billing, unlike even notoriously backwards bureaucracies like utility and phone companies, which constantly harass you to sign up.

    Two things on the bill are highlighted to make them stand out: The amount to pay and the URL to pay it:

    11

    I think it's fair to say they're trying to get me to pay online. So I tried. But what a pain! Copying that looooooong URL without error isn't trivial. Then once I entered it correctly, here's where I landed:

    12

    They re-directed me: that long string I copied could have been tiny, because it wasn't actually the place they wanted me to go!

    But the fun has just started. Now I have to fill out the form:

    13

    Once I filled it out, here is the result (with my DOB cut off for privacy):

    14

    Fail!

    Dutiful person that I am, I got out my ancient check book, revved up my hand-printing skills, and … yes, put the check in the mail.

    There is more joy and fulfillment to be found in this simple-seeming one-page bill, but that's enough for now. For the next installment, we can look forward to some only-in-healthcare wonders of billing.

     
