Category: Innovation

  • AI can automate what doctors do

    There has been a decades-long evolution towards creating an effective clinical diagnosis and treatment AI system, essentially automating the mental part of what doctors do. A solid basis for the content of the system has already been built in the form of medical text books, procedures, published databases, studies and clinical standards such as HEDIS.

    The major elements of a fully automated system have been built and put into use in a variety of medical practices. When a comprehensive system will be built and deployed is impossible to predict. No fundamentally new technology needs to be invented for this to happen; no “breakthroughs” in AI are required! It “just” needs to be applied.

    While having an AI-driven medical diagnosis and treatment system would be amazing, much more important than the AI aspect of it would be the fact that it would be data-driven instead of human-created-policy-driven. This means that the system would, over time, determine what actually works based on the data and results, rather than what human “experts” and self-interested institutions say works. In other words, it would support true evidence-based medicine, replacing the too-often corrupt practice of relying on studies published in medical journals. This is a huge subject.

    What do doctors do?

    They start with the patient’s complaint, why they’re seeking help.

    They then get from the patient and/or medical records a time sequence of conditions (like a cough), tests, observations, events (like falling down), related personal things (age, heritage), and finally diagnoses, treatments and outcomes.

    Based on this, they make further observations, tests and measurements. The tests may involve other people and equipment, for example a CAT scan. Depending on the expense and trouble of the test and the chances it will affect the outcome, further tests may be performed.

    The result is that the doctor recommends and/or performs treatments that will resolve the issue. The treatments can include drugs and operations. The results of all of this are stored in the patient’s EMR, partly coded data and partly written clinical notes.

    In order to do the above, doctors receive a great deal of training, both general and clinical. While in practice, they are guided by their knowledge and experience, and also by clinical guidelines and protocols, which evolve over time.

    Doctors are limited by a couple of things. First, missing information: they may not have access to and probably don’t have time to read all the patient’s medical history. Second, missing knowledge: there is a huge and ever-growing body of medical knowledge and treatments. It’s amazing that doctors have as much of this in their heads as they do, and not surprising that they sometimes forget or haven’t had time to read and absorb information that is new to them.

    Is all the technology required really available?

    The pattern of an innovation being proven and waiting sometimes for decades has been demonstrated many times. For example, an algorithm applied in production more than 50 years ago (!) for optimizing oil refinery operations has only recently been applied to optimizing some aspects of health care scheduling. Here’s a detailed example.
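    The algorithm in question (named in the linked post) is linear programming. As a rough illustration of the kind of optimization problem involved, here is a toy scheduling assignment. A real system would hand this to an LP solver; at this tiny size, exhaustive search over assignments shows the same idea. All names and costs below are invented for illustration.

```python
# Toy illustration: assign 3 clinicians to 3 clinic slots at minimum
# total cost. A linear-programming solver handles this at scale;
# exhaustive search is used here only to make the objective visible.
from itertools import permutations

# cost[i][j] = hypothetical "cost" (overtime, travel, etc.) of
# clinician i covering slot j
cost = [
    [4, 2, 8],
    [4, 3, 7],
    [3, 1, 6],
]

def best_assignment(cost):
    n = len(cost)
    best, best_total = None, float("inf")
    for perm in permutations(range(n)):  # perm[i] = slot given to clinician i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best, best_total = perm, total
    return best, best_total

assignment, total = best_assignment(cost)
print(assignment, total)  # -> (0, 2, 1) 12
```

The point of the example is that scheduling is an optimization problem with an explicit objective and constraints, exactly the shape that refinery operators were solving with LP half a century ago.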

    No new math or fancy algorithms are needed. The fancy new AI LLMs (large language models) that are getting attention these days don’t apply to this problem. The vast majority of the effort is in centralizing, codifying and standardizing the data entered into medical EMRs, which has already been done and is being refined. Even the tricky work of extracting value from doctor-written clinical notes is largely automated. Large databases of this kind are in use today by pharma companies to help them discover and refine targets for drugs.

    The path to automation

    The word “computer” was originally applied to people, mostly women, who spent hours and days bent over desks, often with calculators, computing the result of various mathematical formulas. For example:

    Barbara “Barby” Canright joined California’s Jet Propulsion Laboratory in 1939. As the first female “human computer,” her job was to calculate anything from how many rockets were needed to make a plane airborne to what kind of rocket propellants were needed to propel a spacecraft. These calculations were done by hand, with pencil and graph paper, often taking more than a week to complete and filling up six to eight notebooks with data and formulas.

    While not as precise, doctors are also human computers, in the sense that they confront a new case (problem), get inputs from the patient and the database of the patient’s history, make observations (like calling a data-gathering subroutine), and search their memory for a standard to see what to do next (if X and Y, then do a blood test to see if Z). Depending on the results of that test, there may be further branches (if-then-else) to see what other tests and procedures may be required. Finally, they reach a diagnosis and a treatment plan. The results of everything, including the diagnosis and plan, are recorded in the patient’s EMR to form the basis of future medical interactions.
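    The “human computer” analogy above can be sketched as ordinary control flow: input-gathering calls followed by if-then-else branches. The conditions, thresholds and the blood-test rule below are invented examples, not clinical guidance.

```python
# Sketch of an encounter as a program: gather inputs, branch on
# findings, call a data-gathering "subroutine" (a lab order), and
# return a diagnosis and plan. All rules here are made up.

def gather_history(patient):
    # Stand-in for pulling the time sequence of conditions, tests, events
    return patient.get("history", [])

def order_blood_test(patient):
    # Stand-in for a data-gathering subroutine (a lab order)
    return patient.get("lab_result")

def encounter(patient):
    history = gather_history(patient)
    # "if X and Y, then do a blood test to see if Z"
    if "persistent cough" in history and patient.get("fever"):
        result = order_blood_test(patient)
        if result == "elevated_wbc":
            return {"diagnosis": "bacterial infection (example)",
                    "plan": "antibiotics (example)"}
        return {"diagnosis": "viral infection (example)",
                "plan": "rest and fluids (example)"}
    return {"diagnosis": "undetermined", "plan": "further observation"}

record = encounter({"history": ["persistent cough"], "fever": True,
                    "lab_result": "elevated_wbc"})
print(record["diagnosis"])  # -> bacterial infection (example)
```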

    All of these things are in medical text books, treatment protocols, check lists, medical databases and academic papers. They are all pounded into doctors’ heads by clinical training and apprenticeships. Doctors are expected to remember everything.

    The path to automation isn’t fancy. It basically amounts to getting a computer to do what a doctor does: interacting with the patient (taking input and providing information), organizing and enhancing the records about the patient, standardizing and digitizing all the existing protocols, and creating digital channels for ordering tests, procedures and drugs. Most of these are already features of EMRs.

    Most of the elements of this automation are already in place! WebMD.com, for example, has a huge amount of information about symptoms, diseases and treatments online. It’s medically reviewed, and organized for access by patients. Major hospital systems have similar websites. The websites are just the visible tip of the iceberg, with vast underpinnings.

    The most obvious missing element is the ability to request tests and procedures – for those you have to go through a human. But the ability to enter requests for such things is already a feature of the EMRs used by most doctors. Making the connection from the EMR to software instead of a human is a minor task.
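    Order interfaces in modern EMRs are typically structured messages; HL7 FHIR’s ServiceRequest resource is one widely used shape. As a minimal sketch of what “software placing the order instead of a human” could look like, the snippet below builds such a message. The patient ID, test code and the idea of an order endpoint are invented for illustration, and the structure only loosely follows FHIR.

```python
# Hypothetical sketch: software builds the same structured lab order
# a human would enter into the EMR. Field layout loosely follows a
# FHIR ServiceRequest; IDs and codes are made up.
import json

def build_lab_order(patient_id, test_code, test_name):
    return {
        "resourceType": "ServiceRequest",
        "status": "active",
        "intent": "order",
        "subject": {"reference": f"Patient/{patient_id}"},
        "code": {"text": test_name, "coding": [{"code": test_code}]},
    }

order = build_lab_order("example-123", "CBC", "Complete blood count")
payload = json.dumps(order)  # what would be sent to the EMR's order interface
print(json.loads(payload)["intent"])  # -> order
```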

    Automating doctor decision-making is the heart of the job. It’s essential that this be done using an editable, extensible decision tree. This can be enhanced with probabilities and ever-increasing amounts of personalization. This should not be created by machine training of any kind; it must be human-editable and fully transparent, so that you can always know exactly how and on what basis every decision was made.
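    A minimal sketch of the “editable, transparent decision tree” requirement: the tree is plain data (it could live in a reviewed JSON file that clinicians can edit), and every evaluation records exactly which nodes fired and why. The node questions and recommendations below are invented examples.

```python
# The tree is ordinary data, not a trained model: anyone can read,
# review and edit it, and every decision leaves a full audit trail.
# Questions and outcomes are made-up examples.

TREE = {
    "question": "fever",
    "yes": {
        "question": "persistent_cough",
        "yes": {"recommend": "order chest X-ray (example)"},
        "no": {"recommend": "monitor at home (example)"},
    },
    "no": {"recommend": "no action (example)"},
}

def decide(node, findings, trace=None):
    trace = [] if trace is None else trace
    if "recommend" in node:          # leaf node: a recommendation
        return node["recommend"], trace
    answer = "yes" if findings.get(node["question"]) else "no"
    trace.append((node["question"], answer))  # audit trail of the path taken
    return decide(node[answer], findings, trace)

rec, trace = decide(TREE, {"fever": True, "persistent_cough": True})
print(rec)    # -> order chest X-ray (example)
print(trace)  # -> [('fever', 'yes'), ('persistent_cough', 'yes')]
```

The trace is the point: unlike a trained model, you can show exactly how and on what basis every recommendation was reached.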

    Among the biggest missing elements are the things that doctors learn during their clinical training, and personalization to the individual patient.

    Once all these elements are put together and working, you would enter a parallel production phase, in which the computer would get the same inputs a human doctor would and propose what to do next. This would be recorded and compared to what the human doctor did in classic champion/challenger fashion. The system wouldn’t have to be 100% complete to be put into live operation, so long as a good system for bailing out of the computer and shifting to a human doctor was in place. But since such a large number of patient visits are routine, the computer is likely to be able to handle a large fraction of cases from early on.
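    The champion/challenger phase described above can be sketched very simply: log what the human doctor (champion) did and what the system (challenger) proposed for the same inputs, then measure agreement and review the disagreements. The decisions below are invented examples.

```python
# Champion/challenger logging: compare the human doctor's action with
# the system's proposal on the same inputs. Data is made up.

def agreement_rate(paired_decisions):
    """paired_decisions: list of (human_action, system_proposal)."""
    if not paired_decisions:
        return 0.0
    matches = sum(1 for human, system in paired_decisions if human == system)
    return matches / len(paired_decisions)

log = [
    ("order strep test", "order strep test"),
    ("prescribe antibiotics", "prescribe antibiotics"),
    ("refer to specialist", "order imaging first"),   # disagreement: review it
    ("routine follow-up", "routine follow-up"),
]
print(agreement_rate(log))  # -> 0.75
```

Each disagreement is a case for human review: either the system needs a fix, or the data suggests the standard practice should change.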

    There is a huge amount more detail in the building of such a system. However, surprisingly little needs to be “invented” to make it work, given that large elements are already built and in production in limited ways.

    Related posts

    Doctors too often get the wrong answer. This is the kind of thing that makes some people hope that automation could do a better job:

    https://blackliszt.com/2016/12/what-can-cats-teach-us-about-healthcare.html

    Massive spending has gone into "cognitive computing" and healthcare, with nothing to show for it.

    https://blackliszt.com/2015/07/cognitive-computing-and-healthcare.html

    You don’t need AI or cognitive computing to discover or promulgate the new discoveries that humans make.

    https://blackliszt.com/2015/08/human-implemented-cognitive-computing-healthcare.html

    Health systems have trouble just making computers work. When they try to do something "fancy," the results are usually poor. But there are promising exceptions.

    https://blackliszt.com/2016/05/healthcare-innovation-can-big-data-and-cognitive-computing-deliver-it.html

    Healthcare systems spend huge amounts of money on things related to AI, but they don't know what they're doing and neglect to spend on simple things that could make an immediate difference.

    https://blackliszt.com/2016/09/healthcare-innovation-from-washing-hands-to-ai.html

    Avoiding error is hugely important.

    https://blackliszt.com/2017/06/how-to-avoid-cutting-off-breasts-by-mistake.html

    A major lesson from the above posts is this: while AI can certainly automate what doctors do, having the usual major corporations and medical systems be in charge of the effort guarantees failure, as billions in wasted spending to date demonstrates.

    The benefits of medical automation

    The potential benefits of automation are huge.

    Cost of medical care: As medical workers are replaced by software, costs will go down. Not just salaries, but also office space, etc.

    Medical care waiting times: The software doctor is available 24 by 7, no scheduling required.

    Accuracy of care: Medical people can’t be as consistent or up to date as data-driven software. Elaborate measures such as HEDIS for judging medical care after the fact will be applied as the care is delivered, assuring its accuracy.

    Transformation of care: Dramatically better health and lower costs will result once the system is in place and real-world evidence from it supplements, personalizes and replaces existing care practices.
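    The “accuracy of care” point can be made concrete: quality measures of the HEDIS kind, normally computed after the fact, can instead be checked during the visit. The rule below (diabetic patients should have an HbA1c result within the past year) is a simplified, illustrative version of that kind of measure, not the official HEDIS specification.

```python
# Sketch of applying a HEDIS-style quality measure at the point of
# care instead of after the fact. The rule is a simplified example.
from datetime import date, timedelta

def hba1c_overdue(patient, today):
    if "diabetes" not in patient["conditions"]:
        return False                  # measure doesn't apply
    last = patient.get("last_hba1c_date")
    return last is None or (today - last) > timedelta(days=365)

patient = {"conditions": ["diabetes"], "last_hba1c_date": date(2023, 1, 10)}
print(hba1c_overdue(patient, today=date(2024, 6, 1)))  # -> True: flag it during the visit
```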

    Automation of medical care isn’t without problems. The institutional obstacles are huge. Mountains of regulations and standard practices would have to be changed, with entrenched forces fighting every step of the way. The people whose jobs are threatened will resist. A large number of patients value interacting with a human doctor. Corporate forces will fight to have their interests supported in the rules and data of the automation. There will have to be a way to provide alternatives and avoid centralized government control, which will be a major struggle, and a danger I fear.

    Conclusion

    Automation of medical care has been underway for decades. All the technical elements to enable it are available. The benefits of automation are large, but so are the obstacles to implementation. Centralized control of medical diagnosis and practice is already strong, and automation would make it stronger and less visible. The path forward is likely to remain slow. While there are substantial potential benefits in cost, waiting times and accuracy, the largest benefits of huge cost reduction and improved patient health are threatened if the centralized control embedded in the current partly-mechanized system is simply translated to the automated one.

  • The Conspiracy to Prevent Innovation

    There is a conspiracy to prevent innovation. It’s widespread, strong and effective, more in some domains than others. Like all conspiracies, it goes to great lengths to conceal its actions and goals. The members of the conspiracy are desperate not to be revealed, because they would certainly be subjected at least to ridicule, if not outright ostracism. More important to the active members of the conspiracy, they would lose power, if not their jobs.

    The anti-innovation conspiracy is at war with the transformative power of radical improvements in computer and networking technology. Computer evolution, which continues to follow Moore’s Law of exponential growth without precedent in human experience, largely powers innovation, directly or indirectly. The exploding growth of computer power and network speeds, with similarly amazing reductions in size and cost, is an overwhelming force, practically begging people and organizations to tap its power and make things better for everyone. See this:

    https://blackliszt.com/2013/12/fundamental-concepts-of-computing-speed-of-evolution.html

    As anyone can see with the transformation of rotary, wired phones into wireless, mobile computer phones, the forces of innovation are winning.

    Victories of the Conspiracy

    The entrenched anti-innovation forces refuse to concede defeat and give up. Decades into the computer revolution, the conspiracy lives on, showing few signs of weakening. It has established entrenched positions in a surprising array of places. Even when it can’t prevent innovation, the conspiracy manages to slow the pace of innovation to a crawl in places where conspirators maintain positions of power. They force innovators to slog through sodden, muddy paths, while in other industries innovators are whizzing along on mag-lev trains. There are also widespread pockets of resistance, in which the conspirators manage to hold off the application of proven innovations to their industry, often by decades.

    Skeptical? An amazing fraction of what appears to be innovation is little more than taking advances that are proven in a narrow domain and applying them to a new one. Here is the story of an algorithm that was standard practice in oil refinery operation over 50 years ago and that, decade by decade, is still crawling its way into new domains.

    https://blackliszt.com/2019/08/the-slow-spread-of-linear-programming-illustrates-how-in-old-vation-in-software-evolution-works.html

    Here is an example of a truly beneficial innovation proven over 50 years ago that is still not used in medical imaging.

    https://blackliszt.com/2019/07/barriers-to-software-innovation-radiology-1.html

    https://blackliszt.com/2019/08/barriers-to-software-innovation-radiology-2.html

    There are many more examples of the effectiveness of the anti-innovation conspiracy.

    It's Secret!

    Among the greatest strengths of the anti-innovation conspiracy is that it has not been “outed.” No one talks about an “anti-innovation conspiracy,” or even about “the forces that slow or prevent innovation.” The fact that innovation takes place widely, touching all of our personal lives, leaves everyone in awe of what innovation has wrought. And rightly so! What is not discussed are the deep forces that prevent and/or slow innovation. Without the resistance, the innovation that would be unleashed would make today’s by-itself-amazing innovation look sluggish by comparison.

    The anti-innovation conspiracy has two major wings.

    The first wing is particularly pernicious, since it establishes positions inside the people and groups who innovate and implement innovation. In this wing, the conspiracy concedes that innovation will take place, but assures that it will be as expensive, slow and ineffective as possible. The first wing is essentially an effort to cripple and co-opt the “offense,” the efforts of the innovators.

    The second wing operates silently and secretly, often waving flags supporting innovation and supporting innovation efforts. It operates inside the institutions that will be affected by innovation, and inside the organizations such as regulatory bodies that control the institutions that are targeted by innovation. The second wing is essentially an effort to bolster the “defense,” the ability of institutions that could be impacted by innovation to resist the efforts of innovators.

    One of the great strengths of the anti-innovation conspiracy is that the vast majority of people benefit to some extent from the innovation that does take place, and naturally compare what is available to them today with what was available ten or twenty years ago. Progress has happened, and it’s a good thing. Of course I agree with this.

    But meanwhile, the conspirators are rubbing their hands together and mumbling something like “heh, heh … still got ‘em.” The reason for the acclaim for the results of innovation and the near-complete lack of awareness of the conspiracy against it is simple: very few people are in a position to see the amazing innovations, true advances that would make everyone’s lives better, that have been and are being prevented and/or drastically watered down as a direct result of the efforts of the anti-innovation conspiracy. If there were widespread knowledge of “what could have been,” people would see the results of innovation in a completely different light.

    It would be like a starving person in a room being given a bit of bread and water; they eagerly consume it and are glad to have it. But how would their attitude be different if they knew that, just through a door in the room, there was a sumptuous feast all laid out, ready to eat, and the person giving them the bread and water was preventing them not just from going through that door, but even from knowing it was there? Gratitude for the bread and water would suddenly be transformed into fury at the person keeping them from the food they so desperately need, even hiding the fact that it’s there!

    The vast majority of people are like the starving person in the room, grateful for the food they now have, completely unaware of the feast they can’t eat because the conspiracy successfully hides it from them.

    I know these are strong claims. But after decades of writing innovative software and more decades of looking deeply at innovative companies and the spread of innovation across companies and industries, this describes the patterns I have observed.

    Finally, let me make something clear: there is no cabal of anti-innovation conspirators who explicitly communicate among themselves that their goal is to prevent or slow innovation. If there were such a cabal, it would have been exposed and shamed long ago. The long-lasting conspiracy is fueled by a wide variety of goals that appear to have nothing to do with preventing innovation; the conspirators, as they deserve to be called, don’t say or even (for the most part) tell themselves that they are trying to prevent innovation. It’s always something else: they are trying to protect the public, prevent people from being harmed, assure that things are built to proper standards, maintain stability in a well-oiled operation, avoid wasteful distractions, and on and on. Most of the people involved might sincerely claim they are promoting innovation everywhere they can. The trouble isn’t their thoughts or words; the trouble is their actions, which have the net effect of preventing or slowing innovation.

    I have already written about some of the issues involved in the anti-innovation conspiracy.

    https://blackliszt.com/2023/07/summary-software-innovation.html

    There is more to be written, for example to shame those who promote themselves and their institutions as being leaders in innovation. I hope others will take up this long-neglected issue.

  • Summary: Software Innovation

    This is a summary with links to my posts on software innovation. It includes posts on barriers to innovation and how to grow a winner.

    Software is taking over the world. The pace and scope of the transformation of human activity has no precedent. People often assume that this is the result of fierce innovation in software. While brand-new software is built every day, the actual innovation is the result of computer innovation – which does indeed proceed at an unprecedented pace.

    https://blackliszt.com/2013/12/fundamental-concepts-of-computing-speed-of-evolution.html

    https://blackliszt.com/2014/03/innovation-with-computers-and-slow-things-1.html

    The fact is that innovation in software is incredibly simple. It rarely involves fancy stuff like AI, but mostly figuring out the best way to accomplish things and getting the computer to do as much of the work as possible.

    https://blackliszt.com/2014/07/innovation-made-simple.html

    Here are some of the most important patterns of the small groups that innovate their way to success in software.

    https://blackliszt.com/2017/01/the-science-of-innovation-success.html

    The general impression is that software is all about innovation and rapid evolution. Programmers are on the front lines, constantly making new things. Sadly, this general impression, which is shared by most programmers, doesn't hold up under examination.

    https://blackliszt.com/2023/07/does-software-evolve-rapidly.html

    People love to brag about the software innovations they’ve invented. The fact is, most of the fundamental innovations in software are proven and in place; they’re ignored by practically everyone, including the experts.

    https://blackliszt.com/2015/06/fundamental-innovations-in-software.html

    This holds true even when you look at a narrow field of application such as financial technology.

    https://blackliszt.com/2016/03/fintech-innovation-the-drivers.html

    An amazing fraction of what appears to be innovation is little more than taking advances that are proven in a narrow domain and applying them to a new one. Here is the story of an algorithm that was standard practice in oil refinery operation over 50 years ago and that, decade by decade, is still crawling its way into new domains.

    https://blackliszt.com/2019/08/the-slow-spread-of-linear-programming-illustrates-how-in-old-vation-in-software-evolution-works.html

    Here is an example of a truly beneficial innovation proven over 50 years ago that is still not used in medical imaging.

    https://blackliszt.com/2019/07/barriers-to-software-innovation-radiology-1.html

    https://blackliszt.com/2019/08/barriers-to-software-innovation-radiology-2.html

    It’s not just fancy algorithms that are proven and waiting for application in new domains. It’s simple things like production human data entry, where widely proven “heads down” methods are at least 5 times more efficient than what is normally done.

    https://blackliszt.com/2019/09/simple-data-entry-technology-illustrates-how-in-old-vation-in-software-evolution-works.html

    Here are details of the advantages of “heads down” data entry and how it was ignored at a huge project at Sallie Mae.

    https://blackliszt.com/2019/10/software-professionals-would-rather-be-fashionable-than-achieve-10x-productivity-gains.html

    One of the common ways to ignore a major innovation such as “heads down” data entry is to concentrate on a method that is highly fashionable, even though it doesn’t do much good.

    https://blackliszt.com/2019/09/laser-disks-and-workflow-illustrate-the-insane-fashion-driven-nature-of-software-evolution.html

    Given this, it’s all the more amazing that companies have Chief Innovation Officers whose job is usually to “foster innovation.” Heh.

    https://blackliszt.com/2016/04/the-innovation-bubble.html

    Barriers and Resistance to Innovation

    Software innovation faces huge barriers from people and established practice, like innovation in medicine, where a cure for scurvy was proven in practice for decades before the authorities grudgingly accepted it.

    https://blackliszt.com/2014/02/lessons-for-software-from-the-history-of-scurvy.html

    Innovation has been strongly resisted for a long time.

    https://blackliszt.com/2020/01/luddites.html

    https://blackliszt.com/2016/05/innovation-some-history.html

    Have you heard the story of Samuel Pierpont Langley, the renowned expert on manned flight? He’s a case study in how experts prevent innovation.

    https://blackliszt.com/2016/07/innovation-and-experts.html

    Major advisory institutions reflect common thinking and prevent their customers from "making mistakes" with innovative companies.

    https://blackliszt.com/2017/05/the-value-of-computer-industry-advisory-groups.html

    Here is the story of how the British military resisted a huge innovation for combating submarines in World War 2.

    https://blackliszt.com/2021/10/deep-seated-resistance-to-software-innovation.html

    Even the experts in entrepreneurship resist innovation. Amazing. Experts!

    https://blackliszt.com/2020/05/experts-vs-innovation-new-book.html

    Perhaps you think that the big tech companies are great at innovation? When you look at how they actually innovate, they don’t look so great.

    https://blackliszt.com/2016/05/organizing-for-successful-innovation-recent-history.html

    Often important innovations don’t require software at all, which doesn’t seem to stop people from spending loads of money on exotic software.

    https://blackliszt.com/2016/09/healthcare-innovation-from-washing-hands-to-ai.html

    https://blackliszt.com/2016/05/healthcare-innovation-can-big-data-and-cognitive-computing-deliver-it.html

    Healthcare is a rich source of examples of how to screw up and fail to take advantage of obvious, long-overdue “innovations” like electronic medical records.

    https://blackliszt.com/2016/06/healthcare-innovation-emrs-and-paper.html

    https://blackliszt.com/2016/05/healthcare-innovation-emr-procurement-is-broken.html

    https://blackliszt.com/2016/06/healthcare-innovation-emrs-and-data-quality.html

    While experts and big companies put up active resistance to innovation, regulations are an important source of passive resistance.

    https://blackliszt.com/2016/09/innovation-the-barriers.html

    The huge cost of medical imaging systems is a clear example of how regulations prevent innovation and keep costs high.

    https://blackliszt.com/2023/01/how-to-reduce-the-cost-of-medical-imaging-and-pacs.html

    When you look into sample sets of regulations, you see how onerous they are and how they hamstring innovation.

    https://blackliszt.com/2016/12/regulations-that-enable-innovation.html

    There’s a clear path to eliminating regulatory resistance while making the enforcement power of the regulations even stronger, by shifting from massive how-type regulations to simple but effective what-type ones.

    https://blackliszt.com/2011/12/regulations-goals-or-directions.html

    https://blackliszt.com/2012/03/lets-criminalize-our-regulations.html

    When you look at how innovation works in practice, it's hard to avoid thinking that there's a conspiracy to prevent innovation. It's mostly not a conscious effort on the part of the conspirators, but it has the same net effect.

    https://blackliszt.com/2024/06/the-conspiracy-to-prevent-innovation.html

    Growing the Innovation to Success

    Once you’ve got your software company up and running, there are strategic moves that will keep you on the track to success.

    https://blackliszt.com/2010/05/from-start-up-to-real-success.html

    https://blackliszt.com/2010/12/from-startup-to-success-costs.html

    Would you like to follow Facebook’s growth path? Their success wasn’t about great software development. It was due to a classic product/company growth strategy.

    https://blackliszt.com/2014/12/fb.html

    Your strategy may be good, but unless you build applications that can be changed quickly, you’ll lose the race.

    https://blackliszt.com/2020/02/how-to-build-applications-that-can-be-changed-quickly.html

    When you’re moving quickly, classically-trained programmers may start whining about the growing amount of “technical debt” and how important it is to pay it down. Here’s how to do it.

    https://blackliszt.com/2020/03/how-to-pay-down-technical-debt.html

    If you follow classic software development methods instead of fast ones, chances are you’ll be hit with a big surprise at the end of the project.

    https://blackliszt.com/2014/06/building-software-the-bad-old-way-and-the-good-new-way.html

    There is a way to avoid disasters of this kind. It's practicing wartime software development, optimizing your methods for speed instead of meeting expectations.

    https://blackliszt.com/2023/07/summary-wartime-software-to-win-the-war.html

    When you’ve built a successful narrow-focus software company, how can you grow it further?

    https://blackliszt.com/2022/07/the-dimension-of-software-automation-breadth-examples.html

    The Book

    Here are highlights of winning growth strategies and the book with more stories and details.

    https://blackliszt.com/2016/04/software-business-and-product-strategy-book.html

    https://blackliszt.com/2016/05/innovation-from-startup-to-success.html

    Here are highlights of the companies used as examples in the book.

    https://blackliszt.com/2016/07/innovation-stories.html

     

  • How to reduce the cost of medical imaging and PACS

    Medical imaging devices like MRI, CT and X-ray machines are extremely valuable. They're also extremely expensive. So expensive, in fact, that health insurance companies typically require a pre-authorization for an MRI scan to make sure that it's "medically justified." The market is currently estimated at about $40 billion a year, with more spent on proprietary PACS (Picture Archiving and Communication Systems) for storing and managing the images.

    The medical imaging market is highly regulated, with the design and construction of the devices subject to detailed requirements for how the hardware and software must be designed and built. The result of the regulations is that a small number of large companies control the market, effectively preventing innovation and keeping new companies out.

    There is a proven path towards opening the market to innovation and dramatic cost reductions, while improving quality. We should break the iron grip of monopolistic companies and harmful government control to enable a medical imaging revolution.

    The Software industry case

    Something similar happened in the software industry. IBM mainframe computers and software once owned the world. Everyone bought from IBM, and was then required to buy IBM software and applications. They worked, but were incredibly expensive. A government anti-trust suit broke some of IBM's monopolistic power, and new minicomputers changed the game. Then, with computers built on microprocessors, low cost, high quality and performance driven by intense competition came to rule the roost in the computer industry. Separate companies built each part of the new world; each competed to be the best.

    The crowning touch was that for important parts of the software such as operating systems, open source software emerged and became the norm. Even IBM acknowledged this by porting the Linux open source operating system to its IBM mainframe computers.

    What should happen in medical imaging

    Medical imaging machines are like specialized mainframe computers. In addition to the physical hardware that does the scanning, there are processors with operating systems and application software. Software controls each step of the scanning process, collects the data, stores it and displays it. Today, every bit of that computer hardware and software is built by the hardware supplier. Just like it was for IBM mainframes before the anti-trust suit.

    The big difference is that no government agency exercised control over the details of how the IBM software was designed and built. Sadly, ignorant bureaucrats at the FDA exercise total control over this process, as I detail here. They require the use of methods that are so old and bad that even giant corporations have long since moved on from them for their unregulated software.

    The argument is that this is about your health. Do you want imaging devices that don't work or give bad results? The FDA performs the essential function of guaranteeing quality and safety, they say.

    What they actually do is the equivalent of demanding that only hand saws be used for turning trees into lumber, and refusing to allow nails or hammers to be used in house construction. Of course it can be done. But people using modern tools get far better results, faster, at lower cost. There is a simple way the FDA can assure quality: shift from lengthy HOW style regulations to simple WHAT style regulations, as I explain here.

    The Result

    The result of this change will probably resemble what happened to IBM once their monopoly power was broken. IBM continues to this day to manufacture the successors of mainframe computers, now called the Z series. They support both their own operating system and a leading open source one. Applications that run on their systems are available from a wide variety of companies.

    Similarly, major vendors such as GE and Siemens will continue to do what they do, but all of the hardware and software will be open to competition by both new and existing vendors, and possibly also by open source efforts. It's likely that Linux would be ported.

    Storage systems for medical images continue to cost many billions of dollars a year. They don't do much more than what you could do with Dropbox or AWS S3 storage, for example. Each patient would have a cloud folder holding all their records and images. The system would store each new file in the cloud, which would securely keep it with full multi-site protection and backup. Sharing can be accomplished simply by creating and sending a link, something that can be done with a few lines of code or manually in seconds. The huge problem of medical imaging records storage and sharing that I demonstrated here would go away! Yes, you'd put some UI on top of the cloud storage to make it super-easy and not dependent on any one cloud storage vendor.

    Conclusion

    The essential and growing world of medical imaging and its supporting systems forms an indispensable part of modern medicine. It's long past time for the field to catch up to the transformation the computer industry went through four decades ago: dispense with harmful regulation and allow healthy competition to flourish. We would all benefit from the resulting increase in availability and dramatically lower costs. And yes, from better quality.

     

  • Revolutionize health by making medical data and studies open source

    Medical studies are essential to knowing what works and what doesn't work in medicine. There are a few problems, though. There aren't nearly enough studies, they are expensive and cumbersome, the funding is often by groups seeking an outcome, there isn't enough follow-up, most of the data is secret and they are rarely crafted for personalization. Among other things. What can we do?

    Often the cure for a problem isn't isolated genius, but finding a field that had a similar problem that got solved and adapting the solution. I propose that the problem of building software (expensive, cumbersome, takes too long, etc.) is similar to that of medical studies, and the solution of making software open source can be adapted to the problem of medical studies. If medical research and data were open source, most of the problems I listed could be solved.

    Open Source Software

    The open source software movement has revolutionized the industry. Operating system software, for example, was the proprietary crown jewel of computer manufacturers. IBM's System/360 mainframe operating system took over 1,000 person-years to build. A well-known book by one of its leaders, Fred Brooks's "The Mythical Man-Month," explains the nightmare in detail.

    There's been a revolution since then. The Linux operating system completely dominates the operating system market; for example, it runs on over 95% of the top million web servers. This isn't new news — Linux was started over 30 years ago! Since then, even major profit-making software companies such as Google (Android, Chrome, Kubernetes) and Facebook (React) sometimes open-source valuable software they've built internally.

    Much (not all) open source software is built by volunteers, and the resulting software is freely available. Sometimes company employees work on open source that is valuable to their employers. There are hybrid models such as Red Hat, which charges for the services it offers to companies that want to use open source software. After early years of resistance and skepticism from traditional programmers and managers, open source software is broadly accepted as a fact of life (and a good fact!) in the software world.

    Open Source Medical Data

    The data from a research study is incredibly important to the people whose disease or condition is studied, to the medical professionals who treat it and to the device or pharma company that creates the new device or drug. The results of the study cause the patients (guided by medical providers) to take drugs, change their behavior or undergo procedures that can have a major impact on their lives. Shouldn't that data be freely available to anyone who cares to study it? Just as open source software hugely benefits by having large numbers of volunteers pore over the code looking for errors, limitations and omissions, so would open source medical test data benefit by having large numbers of people who are even more motivated than software contributors comb through the data — in software, we're talking about annoying bugs, while in medical data we're talking about life and death.

    Anyone with software experience knows that no amount of software testing in a lab environment can match what happens to the software when it's widely distributed. When things go wrong with open source software in the field, open source contributors have a real-life test case of error and have a reasonable shot of finding and fixing the problem, contributing their fix to the central source code. With thousands upon thousands of copies of the software working all over the world and motivated engineers responding to issues and pooling their solutions, open source software achieves a quality that can't be matched by dedicated groups of employees working for a company. Much less a government agency.

    The equivalent of this for medical testing is to start with opening all the test data to volunteer analyzers, withholding nothing. Releasing all the data that is now kept secret would be a big step forward.

    [Image: Dilbert comic on trials]

    But that's the equivalent of lab testing. The huge value in open source data will come from extending it to more people than were included in the study, and from including much more data about them, from before the formal start of the study through continuous aggregation of data over time. Among other things, this will enable surfacing factors that weren't considered by the original study designers, both from patient history and from medical events that take place after the formal end of the study. For example, this kind of extended data could surface the facts about the relationship between blood pressure pills and going blind, as I describe here and here.

    Open Source Medical Studies

    There is no reason why paid medical researchers couldn't continue to define and run medical studies in much the same way as they do today, much as for-profit tech companies create software that they then open source. However, they would have to make 100% of their data open source and fully available to anyone who wants to investigate it.

    The "open source" version would be first to expand the selected participants in the study far beyond what would normally be done with volunteers, and second to extend the data collected to everything that is knowable about the participants, both before the start of the study and continuing long after what would normally be its conclusion. I don't claim to know how best to accomplish this, but I know that today the cost of running study sites, qualifying participants and so on is high. A way would have to be found to enable participants to volunteer remotely, and to enable local volunteers to perform whatever actions like drug injection that have to be performed locally and physically.

    This process really kicks in when the new drug or procedure gets past the test environment and becomes more widely deployed. It would be good to emulate the open source software practice of having a careful staged roll-out of a new release instead of the current medical practice of unlimited distribution after approval. This would enable reports from the field, enhancing the open source data, to surface problems that weren't clear in the earlier, more limited testing of the new drug or procedure.

    Once the distribution gets very broad, there still needs to be a way to surface and report issues. For example, here is a message from Google to enable broad data reporting about one of their products:

    [Image: Google's data-reporting permission prompt]

    Why shouldn't such permission be added to patient medical records, so that as those records are updated for any reason, the updates are added to any relevant open source data collections? This would make longitudinal tracking automatic and painless to everyone involved.

    Conclusion

    Medical studies and associated data strongly resemble the proprietary operating systems of computer vendors in the 1960's and 70's. Each body of code was created at great expense by employees of the companies. The code (like medical data) was considered a trade secret, never to be revealed to an outsider. Problems usually surfaced after the code was shipped, just as many problems with approved drugs only surface after they are distributed. Manufacturers kept spending more time and money to make their software bug-free in the lab before shipment, but never got it right — just as drug makers jump through endless FDA hoops prior to approval, and there are still problems. Makers of proprietary software have huge quality problems to this day, as I have documented, which the "free" open source software largely avoids.

    Applying open source software concepts to medical drug and procedure testing and tracking could greatly enhance the safety and effectiveness of the toolkit available to patients who have medical issues. As it became understood and widely used, patients would have reason to place confidence and trust in the medical profession far beyond what many of them have today. Instead of being constantly hammered about how some drug is "safe and effective," which kinda tells many patients that it probably isn't, the open source method would create a level of transparency and openness that would let people draw their own conclusions.

    I have been thinking of this issue for a long time; a discussion with Jonathan Bush at the recent HLTH conference inspired me to write it up.

     

  • Deep-Seated Resistance to Software Innovation

    Everyone says they're in favor of innovation. Some organizations promote innovation with lots of publicity. Many organizations even have CIOs (Chief Innovation Officers) to make sure it gets done. But the reality is that resistance to innovation runs strong and deep in organizations; the larger the organization, the greater the resistance usually is. The reason is simple: innovation threatens the power and position of the people who hold them. They feel they have nothing to gain and much to lose.

    It's not just psychology. Innovation resistance throws up barriers that are thick and high. See this for examples.

    A good way to understand the resistance is to look at sound technologies that have been proven in practice and could be more widely applied, but are ignored and/or actively resisted by the organizations that could benefit from them. I have called these In-Old-vations. Here is an innovation that is still waiting for its time in the sun, and here's one that's over 50 years old and still being rolled out VERY slowly.

    In this post I will illustrate the resistance to technology innovation with a little-known, extreme example: the people in charge of Britain's war effort resisted innovations that could help them win the war. They were literally at war and losing, and decided, in effect, that they'd rather lose. Sounds ridiculous, I know, but this is normal behavior of people in organizations of all kinds.

    The Battle of Britain

    Britain was at war, facing Hitler's much larger, better-prepared military, which had already rolled over its adversaries. Life and death. Literally. The established departments did all they could to defend against attacks. The so-called Battle of Britain is well known. What is not as widely known is the battle at sea. German submarines were sinking British ships at an alarming rate. The Navy had no answer other than to keep doing what they were already doing, harder.

    The situation was desperate. If there was ever a time to "think outside the box" it would seem this was it. The response of the Navy to new things? NO WAY. Amazing new weapons developed by uncertified people outside the normal departmental structures? NO WAY. Once those weapons are built and proven, use them to stop the submarines that were destroying boats and killing men by the thousands? NO WAY!!

    Of course, you might think that someone would have known that the fairly recent great innovation in flying machines was achieved by "amateurs" flying in the face of the establishment and the acknowledged expert in flying as I describe here. You might think that Navy men would remember that perhaps the greatest innovation in naval history was invented by Navy-related people. But no. Protecting our power and the authority of our experts is FAR more important than a little thing like losing a war!

    The story of the new way to fight submarines is told in this book:

    [Image: book cover]

    Someone who was not part of the Navy establishment invented a whole new approach to fighting submarines. The person wasn't a certified, official expert. He was rejected by all relevant authorities and experts. Fortunately for the survival of England, Churchill made sure the concept was implemented and tested. The new devices were delivered to a ship.

    This all took time and it was not until the spring of 1943 that the first Hedgehogs were being installed on Royal Navy vessels. When Commander Reginald Whinney took command of the HMS Wanderer, he was told to expect the arrival of a highly secret piece of equipment. ‘At more or less the last minute, the bits and pieces for an ahead-throwing anti-submarine mortar codenamed “hedgehog” arrived.’ As Whinney watched it being unpacked on the Devonport quayside, he was struck by its bizarre shape. ‘How does this thing work, sir?’ he asked, ‘and when are we supposed to use it?’ He was met with a shrug. ‘You’ll get full instructions.' Whinney glanced over the Hedgehog’s twenty-four mortars and was ‘mildly suspicious’ of this contraption that had been delivered in an unmarked van coming from an anonymous country house in Buckinghamshire. He was not alone in his scepticism. Many Royal Navy captains were ‘used to weapons which fired with a resounding bang’, as one put it, and were ‘not readily impressed with the performance of a contact bomb which exploded only on striking an unseen target’. They preferred to stick with the tried and tested depth charge when attacking U-boats, even though it had a hit rate of less than one in ten. Jefferis’s technology was too smart to be believed.

    Here's what the new mortars looked like:

    [Image: Hedgehog anti-submarine mortar]

    What happened? It was transformative:

    Over the course of the next twelve days, Williamson achieved a record unbeaten in the history of naval warfare. He and his men sank a further five submarines, each time destroyed by Hedgehogs.

    If resistance to true technological innovation is so strong when you're desperate, literally at death's door, how do you think it's going to be in everyday life? The rhetoric is that we all love innovation! The reality is that anything that threatens anybody or anything about the status quo is to be ignored, shoved to the side and left to die. Anyone who makes noise about it obviously isn't a team player and should find someplace to work where they'll be happier. And so on.

    Conclusion

    Innovation happens. Often nothing "new" needs to be invented — "just" a pathway through the resistance to make it happen. Here is a description of the main patterns followed by successful innovations. If you have an innovation or want to innovate, you should be aware of the deep-seated resistance to innovation and rather than meeting it head-on, craft a way to make it happen without head-on war. Go for it!

  • Experts vs Innovation New Book

    Experts and anointed authorities of various kinds, both academic and commercial, have been the front lines of resistance to innovation for centuries, up to the present. They are the firewall keeping what they consider to be rogue ideas outside the protected environments they oversee, protecting them from bad influences that their naïve but innocent charges might inadvertently adopt. It’s a good thing they’re on the job – otherwise things would be chaos and nothing would get done!

    This pattern is raised to a new level when the subject isn’t some specific business domain like healthcare, but the process of innovation itself. As you may have noticed, many organizations now have the expensive modern equivalent of “suggestion boxes,” departments devoted to fostering innovation in the organization, led by a Chief Innovation Officer. Government officials have gotten into the game, establishing centers for innovation, and “incubators” for startups. Eager not to be left behind, academia has jumped into the game, with august professors doing what they do best: pronouncing truths and activities designed to promulgate them.

    There was a time in my relatively innocent past when I was willing to give the experts a pass. Hey, how can you know everything? I know I don’t!  They’re probably just trying to keep their organizations from being taken down by harebrained ideas, and sometimes they fail to recognize a true innovation when it appears! I no longer believe that pleasant, forgiving fiction. They’re also not evil geniuses immediately recognizing a juicy innovation when it sniffs around the door, and stamping it out before it can start making changes. The truth is far worse: the vast, vast majority of them wouldn’t know a practical, effective innovation if it came up to them and slapped them in the face! See this and this for more.

    The bulk of front-line experts act this way to “protect” their organizations against scary change. They go to great lengths on multiple dimensions to assure that nothing upsets the status quo. Here is detail about the measures they take to prevent innovation, and simple methods to overcome the measures – which the experts are universally ignoring.

    But the elite of the experts are the experts in entrepreneurship – people who are “expert” in enabling those who want to create and lead innovation to make it happen. These experts on innovation are like experts on being expert. We are still in the middle of an “innovation” bubble, with everyone acting like it’s a new thing – though these days it looks less like a bubble and more like something that’s going to stick around. Here’s some information about the bubble and how we got here.

    I’ve seen so much of this over so many years, and been so disgusted with the useless wisdom of experts, that I’ve put together some preliminary thoughts, drawn from experience and observation, about how innovation happens. See this.

    Imagine my surprise when I read an article about entrepreneurs that actually made sense! I'd never heard of the guy, Carl Schramm. It appears from the article that he knows a bunch of stuff about entrepreneurs and innovation that matches up pretty well with my observations, but is actually backed by … real data! OMG! One of my initial shocks was learning that he was a real professor at a real university, even with a PhD, but that … he had worked as an entrepreneur! How is that possible? How did he slip by the super-strict requirements that prevent anyone who actually knows something from experience from becoming a professor?? Maybe it helped that he had been the head of the Kauffman Foundation, the largest foundation devoted to the study of entrepreneurship and innovation, and that instead of just doling out grants, he did real studies to gather real data. What an idea. You think maybe the people who run Departments of Computer Science could get inspired? Sorry, forgive me, you caught me dreaming the impossible dream there…

    I’m not finished reading the book yet, but here it is:

    [Image: cover of Burn the Business Plan]

    As you might guess from the title, he talks a lot in the beginning about that near-universal requirement for getting innovation funded, the dread business plan. He trashes it. Vigorously and effectively. No, he doesn’t trash this or that business plan, he trashes the very idea that business plans are both essential and good. He’s right! For exactly the same reason (he doesn’t say this, I’m saying it) that project management in software development is not just brain-dead, but a positive impediment to getting good software done!

    If you are ready to learn about a different and better way to be an entrepreneur, check out this book.

  • Simple Data Entry Technology Illustrates How In-Old-Vation in Software Evolution Works

    Since when is “data entry” (entering data into a computer) a pivotal, innovative technology? When the difference between doing it the normal way and doing it with advanced technologies is a … ten-to-one productivity difference, that’s when.

    I’ve described how the Operations Research algorithm of Linear Programming is fifty years into an agonizingly slow roll-out through different applications, from scheduling oil refineries in the 1960’s to scheduling retail sales in the 1990’s, and now to scheduling medical infusion centers and operating rooms in the late 2010’s. In each case, laborious and error-prone human scheduling was replaced by the algorithm, with improvements ranging from no less than 10% to over 50%. This is major! Why did such an innovation wait decades to be applied, and why, for many applications, is it still waiting?? This is the mystery of how what’s called “innovation” works in reality, and why it should be called “in-old-vation” instead.

    You may think that part of the cause is that LP is an exotic algorithm – even though it’s a standard part of the Operations Research engineering curriculum, most so-called normal people haven’t heard about it. While it appears that even the hosts of people who wax eloquent about AI and ML are clueless about LP, it’s not exactly a secret. So let’s see if obscurity is the reason why LP remains a “future innovation” in many potential applications, by examining the super-plain, ordinary, completely-understandable-by-normal-people case of data entry.
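    LP really is no secret: a solver ships in ordinary scientific libraries. Here is a toy scheduling sketch using SciPy's off-the-shelf linprog solver; the scenario and every number in it are invented for illustration. An infusion center with limited chair-hours and nurse-hours chooses how many short and long appointments to book so as to maximize total value:

```python
from scipy.optimize import linprog

# Toy infusion-center day (all numbers invented for illustration):
#   short visit: 1 chair-hour, 0.5 nurse-hour, value 1
#   long visit:  3 chair-hours, 1 nurse-hour,  value 2.5
# Capacity: 40 chair-hours, 16 nurse-hours.

c = [-1.0, -2.5]        # linprog minimizes, so negate the values
A_ub = [[1.0, 3.0],     # chair-hours consumed per visit type
        [0.5, 1.0]]     # nurse-hours consumed per visit type
b_ub = [40.0, 16.0]     # available chair-hours, nurse-hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
short_visits, long_visits = res.x  # optimal mix: 16 short, 8 long
```

    A human scheduler juggling these trade-offs by hand tends to land on a corner like "all short visits" (value 32); the solver finds the mix worth 36. A real scheduler would use integer programming so appointments come out whole; in this toy case the LP optimum happens to be integral anyway.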

    Data Entry

    Just as process optimization is done by hand or stupid methods for decades until some genius comes up with the brilliant idea of applying tried-and-tested LP to the problem and dramatically improves it, so is Data Entry widely performed by primitive methods until some “innovator” comes along and applies “Heads-Down Data Entry” (HDDE) methods to the process – and typically gets improvements of 3 to 10X! The only difference is that while LP is taught in Engineering departments and studied by math nerds, Heads-Down-Data-Entry is “just” a collection of common-sense techniques that require no math and no professors to understand or implement. It’s so “common” that it doesn’t even have a generally accepted name – though it’s been implemented in many places and been thoroughly proven in practice. It’s far too humble to merit an academic department – and yet, when applied, has delivered truly massive gains, far higher proportionately than exotic Linear Programming has!

    The methods of HDDE were first implemented in places that had huge volumes of repetitive data to enter into computers. Banks were early users for check entry, and so were the credit card companies, which at the start had huge volumes of paper charge slips to process. Simple ideas like minimizing keystrokes and eye movement were implemented first; then, taking advantage of the eye-to-fingers pipeline, people noticed that showing clerks the next item to be entered before the current one was complete led to a big jump in speed. Other methods like double-blind techniques were invented, so that entry clerks just entered – whether their work was original or used to check someone else’s work was entirely handled by the Data Entry system.
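    The double-blind idea is simple enough to sketch in a few lines. This is a toy illustration, not any particular vendor's system: two clerks key the same fields independently, the system accepts matches, and disagreements are routed to a third pass.

```python
def double_blind(entries_a, entries_b):
    """Compare two independent keyings of the same fields.

    The clerks just type; the system decides which keying is the
    original and which is the check, and flags any disagreement.
    """
    accepted, disputed = {}, []
    for field in entries_a:
        if entries_a[field] == entries_b[field]:
            accepted[field] = entries_a[field]
        else:
            disputed.append(field)  # route to a third pass / adjudicator
    return accepted, disputed

# Two clerks key the same charge slip without seeing each other's work.
clerk_a = {"amount": "120.00", "account": "4417"}
clerk_b = {"amount": "120.00", "account": "4477"}
accepted, disputed = double_blind(clerk_a, clerk_b)
# "amount" matches and is accepted; "account" disagrees and is disputed.
```

    The point of the design is that neither clerk slows down to verify anything; all the checking logic lives in the system.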

    As soon as scanning and image display became practical, HDDE adopted them. That led to another jump in productivity, enabling large, complex forms to be broken up into pieces, so that a clerk would see the image on a screen of the same piece of data from a whole set of forms instead of entering a whole form from start to finish. No HDDE shop would even consider having the entry clerk think about anything on the form, stuff like “if this field is missing, do this instead of that,” because it would just slow them down.

    Finally, there’s ICR (Intelligent Character Recognition), which is having the computer “read” the image instead of a human. This technology has existed for decades. Once you’ve got HDDE in place, phasing in ICR is a natural: the proportion of entry done by humans gradually decreases as the effectiveness of ICR increases.
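    The phase-in can be sketched as a simple confidence threshold. This is a toy illustration with invented field names and confidence scores: reads the ICR engine is sure of pass straight through, and everything else drops into the clerks' heads-down queue.

```python
def route_fields(icr_results, threshold=0.98):
    """Accept high-confidence ICR reads; queue the rest for HDDE clerks."""
    auto, human_queue = {}, []
    for field, (text, confidence) in icr_results.items():
        if confidence >= threshold:
            auto[field] = text          # no human touches this field
        else:
            human_queue.append(field)   # a clerk keys this one heads-down
    return auto, human_queue

# Invented example: one clean field, one the engine struggled with.
scan = {
    "zip": ("94107", 0.995),
    "name": ("J0hn Smlth", 0.62),
}
auto, queue = route_fields(scan)
```

    Raising the threshold over time, as the engine improves, is exactly the gradual shift from HDDE to ICR+HDDE described above.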

    Remember, applying LP to scheduling might result in a 30% improvement, which in most cases is major to the point of being revolutionary. What about HDDE? Entering data from a paper form into a computer using primitive methods might get between 1,000 and 1,500 KPH (keystrokes per hour). There are lots of stages of improvement, including things I’ve mentioned like eliminating the thinking and breaking up the form, but levels of 10,000 to 15,000 KPH are widely achieved in a professional environment – with superior quality. That’s a minimum of 5X! Typically much more. Of course, as you incorporate ICR into the process, it gets even better, gradually reducing the human factor so that most fields are entered with no human involvement. At this point, the technology is probably best called ICR+HDDE, though there is no generally accepted term.

    Given all this, it would be insane to handle computer data entry by anything other than HDDE methods, right? Welcome to the software industry, where insanity of this kind is the accepted state of affairs. And where almost no one practices the most simple and basic of computer fundamentals, such as counting.

    How HDDE gets implemented

    I described how Linear Programming went from problem domain to domain, each time acting like an innovation, as indeed it was in that area of application. Once it gets established, it tends to stay. The case of HDDE is different, I think because it’s not a recognized “thing” in the halls of academia, or among the poo-bahs of big business. It’s the kind of thing that no self-respecting Professor of Computer Science would stoop to consider, assuming he ever encountered such a low-status thing – you know, the kind of thing that “merely” makes common sense and, well, works.

    HDDE has appeared in competitive, high-volume service businesses, where it has a major role to play in delivering results for the customers of the business. There have been software products that directly support HDDE, so that all you have to do is buy and implement them. It’s neither obscure nor hidden. But it’s never been talked about at conferences as the “coming thing.”

    Case Study: HDDE rejected

    In the early 1990’s, when document imaging and workflow technology were hot and something people talked about the way they talk about AI/ML and innovation today, the government-backed student loan organization, Sallie Mae, decided to apply the technology to improve the operations at the handful of processing centers they had at the time, employing many thousands of people and processing millions of documents a year. The popular thing to do at the time was to scan documents on receipt, and then send them to the same places the paper was sent, so that workers could process the images of the documents displayed on new big screens instead of paper. The job was basically to type the data into the right places of the software application they used.

    Everyone at the time said that converting the documents was important ONLY because it enabled wonderful workflow, the elimination of inboxes and outboxes for paper. And the bits of other stuff you could do, like having a group of people taking from a common inbox instead of each having their own. The common “wisdom” was that you could gain 30% productivity improvement by implementing this marvelous new technology.

    I got involved, since I was a recognized expert on document imaging technology at the time, and had personally coded one of the early workflow systems. I figured out and showed in detail that by canning the workflow and implementing HDDE techniques, they could gain a minimum of 5X productivity improvement. No one disputed my thoughts or detailed plan. They just ignored it, and proceeded to implement the standard stuff. I strongly suspect that after considerable time and expense, there were walk-throughs of the Sallie Mae sites showing visitors the big screens and absence of paper – what a big success the project was!

    Case Study: ICR-HDDE applied with success

    There are two current cases I know of where ICR-HDDE is being applied and winning. Each is a classic, narrow service business where converting forms to data is the key value of the business, and where the companies buying the service just want fast, accurate data delivery; they don’t care how it’s done. Disclosure: each of these is an Oak HC/FT investment.

    At Groundspeed, insurance forms and reports are captured and the relevant data is extracted from them by the most effective relevant means, often involving forms of ICR-HDDE. There is lots of forms recognition from documents and images that are often computer output, with the relevant data appearing at varying places on a page. Nonetheless, Groundspeed is able to deliver the data stream the customer needs, quickly and accurately. The results are so powerful that new levels of analytics are enabled by the newly available stream of structured data.

    At Ocrolus, financial documents of all kinds, including bank statements and pay stubs, are converted to data in a standard format to enable fast and effective operations like making loans for business and personal use, along with a growing list of other operations that also need good data. An effective combination of ICR-HDDE techniques is applied to get results for companies that need accurate data to make fast decisions.

    Conclusion

    HDDE is a collection of methods that have been proven in practice for many decades. The technology that is behind it continues to deepen and reduce human effort even more, with the addition of ICR. But it remains a niche technology, ignored by the numerous places that could benefit from it, even more than LP.

    The big difference between LP and HDDE is that LP is a formal piece of magic that’s in academia. HDDE is nowhere. In fact, it’s really just an example of classic industrial engineering applied to computer software and the people who use it. Which makes it all the more mysterious that it's largely ignored.

    In-old-vation is real. Most “innovations” are minor variations on things long-since proven and demonstrated in practice, but are unimplemented in the many situations that would benefit from them until some mysterious combination of circumstances arises to let them explode into practical reality.

  • Barriers to Software Innovation: Radiology 2

    Value-creating innovations are rarely the result of a bright new A-HA moment, though an individual may have that experience. A shocking number of innovations are completely predictable, partly because they've already been implemented, only to be put back in the vast reservoir of ready-to-use innovations, or implemented in some other domain. This fact is one of the most important patterns of software evolution.

    Sometimes the innovation is created, proven and fully deployed in production, like the optimization method Linear Programming, which I describe here. In other cases, like this one, the innovation is built as a functioning prototype with the cooperation of major industry players — but not deployed.

    In a prior post I described how I went to the San Francisco bay area in the summer of 1971 to help a couple of my friends implement a system that would generate a radiology report from a marked-up mark-sense form. We got the system working to the point where it could generate a customizable radiologist's report from one of the form types, the one for the hand. Making it work for all the types of reports would have been easy — we demonstrated working software, and wrote a comprehensive proposal for building the whole system. It was never built.

    True to the nature of software evolution, the idea probably pounded on many doors over the years, always ignored. But about 10 years ago, a pioneering radiologist in Cleveland came up with essentially the same idea. Of course, instead of paper mark-sense forms, the radiologist would click on choices on a screen, usually while looking at the medical image on the same screen. This enabled the further benefit of reducing the work, and letting doctors easily read images that were taken in various physical locations. Tests showed that doctors using the system were much more productive than those who worked in the traditional way.

    Finally, they decided that mimicking each radiologist's normal writing style was a negative, and that the field would be improved by having all reports follow a similar format, with content expressed in the same order in the same way. This was actually a detail, because the core semantic observations would be recorded and stored in any case, enabling a leap to a new level of data analytics. It also, by the way, made the report generation system much easier to build than the working prototype we had built decades earlier, which had enabled easy customization to mimic each radiologist's style of writing.

    The founding radiologist was a doctor, of course, and knew little about software. He did his best to get the software written, got funding, and got the system working. Professional management was hired. My VC group made an investment. Many people saw the potential of the system; it was adopted by a famous hospital system in 2015. But in the end, the company was sold off in pieces.

    Nearly 50 years after software was first written that was able to produce medical imaging diagnostic reports quickly and reliably while also populating a coded EMR to enable analytics, the system is sitting in the vast reservoir of un-deployed innovations. It can be built. It saves time. It auto-populates an EMR.

    Many people have opined on why this particular venture failed to flourish. It's a classic example of the realities of software innovation and evolution. The reasons for failure were inside the company and outside the company. For the inside reasons, let's just say that the work methods of experienced, professional managers in the software development industry lead to consistently expensive, mediocre results. Nonetheless, the software worked and was in wide production use, delivering the advertised benefits. For the outside reasons, let's say that, well, the conditions weren't quite right just yet for such a transformation of the way doctors work to take place.

    The conditions that weren't right just yet for this and uncountable other innovations add up to the walls, high and thick, behind which a reservoir of transformative innovation and "new" software awaits favorable conditions. In other words, the reservoir of innovations waits for that magic combination of software builders who actually know how to build software that works, with a business/social nexus that accepts the innovation instead of mounting the standard no-holds-barred resistance.

    Corporations promote what they call innovation. They are busily hiring Chief Innovation Officers, creating innovation incubation centers, hanging posters about the wonders of innovation, etc. etc. They continue to believe the standard-issue garbage that innovation needs to be invented fresh and new.

    The reality is that there is a vast reservoir of in-old-vations that are proven and frequently deployed in other domains. All that's needed is to select and implement the best ones. HOWEVER, a Chief Innovation Officer is STILL needed — to perform the necessary function of identifying and breaking down the human and institutional barriers that have prevented the in-old-vations from being deployed, in many cases preventing roll-out for — literally! — decades!!

  • The Slow Spread of Linear Programming Illustrates How In-old-vation in Software Evolution Works

    There is loads of talk about “innovation.” Lots of people want to do it, lots of people think they’re doing it, consultants run courses in how to be innovative, and large organizations claim to promote innovation and be innovative. The assumption behind most of this “innovation” talk is that a wonderful bright idea that will change the world (or at least your organization or startup) can pop into anyone’s head. It’s new! It’s brilliant! We’re going to win big with this great new idea! See this for example.

    When you study software evolution, you get an entirely different picture of software-based “innovation.” Software evolution shows you that new ideas that work are extremely rare. Oh sure, there’s a flood of new ideas popping into people’s heads all the time. Mostly, they’re not new, and the new ones rarely work. The software concepts that make it big are, in the vast majority of cases, clear examples of existing patterns of software evolution, and have in most cases already been implemented in a different context.

    I first encountered the mystery of software evolution while in my first job programming software.

    I started programming in a course I took starting in 1966 in high school, taught by a math teacher who couldn’t himself program, but had convinced the school to let him teach the course, and had convinced a local company, a pioneering rocket engine company called Reaction Motors, to give us computer time on Saturdays. I had a textbook about FORTRAN, a steady stream of programming assignments and a computer on which to test my programs. It was great! I continued programming the following summer, as part of an NSF math camp I was able to attend. As I was nearing high school graduation the following year, I got lucky; Diane, a high school friend, talked about me with her father, who got me connected with a nearby company, and I landed a job there for the summer before starting at Harvard College in the fall of 1968.

    The company was EMSI, Esso Mathematics and Systems, Inc. in Florham Park NJ.

    [Photo: my EMSI badge, June 1969]

    They were a service company, one of about 100 units of the Standard Oil of NJ (Esso) companies, devoted to applying math and computers to improving every aspect of the company. I was immediately thrown into the group that was developing optimization models for oil refinery operation. Our focus was on the giant refinery in Venezuela.

    What we had running was an implementation of a classic OR (Operations Research) algorithm called LP (linear programming), solved via the simplex algorithm first devised by George Dantzig in 1947. In this kind of model, there is a goal equation and a set of constraints. The goal equation calculated profits, using hundreds of contributing variables, including the prices you could sell things for and the costs of various inputs. The constraints were greater-than/less-than inequalities, each essentially describing some tiny aspect of how the refinery worked. What the algorithm did was find the values of the variables that maximized the goal equation (profits) while satisfying all the constraint equations.
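    The structure described above — a profit objective plus a set of constraint inequalities — can be sketched in a few lines. This is a toy model with made-up numbers (two products, two constraints), not the Esso refinery model, using scipy's `linprog` solver as a stand-in for the original simplex implementation:

```python
# Toy linear program in the spirit of the refinery model (illustrative
# numbers only): maximize profit = 40*fuel + 30*heating_oil, subject to
# limits on available crude and processing hours.
from scipy.optimize import linprog

# linprog minimizes, so negate the profit coefficients to maximize.
c = [-40, -30]                  # profit per unit of fuel, heating oil
A_ub = [[1, 1],                 # crude consumed per unit of each product
        [2, 1]]                 # processing hours per unit of each product
b_ub = [100, 160]               # available crude, available hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
fuel, heating_oil = res.x
print(f"fuel={fuel:.1f}, heating_oil={heating_oil:.1f}, profit={-res.fun:.1f}")
```

    The solver turns the "control knob" question into arithmetic: it finds the product mix (here, 60 units of fuel and 40 of heating oil) that maximizes profit while honoring every constraint.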

    The model was constantly being modified to make it more precise and applicable to actual refinery operations. I had a variety of jobs, including writing new code, fixing bugs, etc.

    Since prices were such a key part of the LP model, we had a separate program to calculate what they were likely to be in the future, a Monte Carlo model. I also made enhancements and fixed bugs in this body of code.
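    A Monte Carlo price model of the kind mentioned above can be sketched as repeated random price paths whose endpoints are averaged. This is a minimal illustration with invented parameters, not the actual Esso model:

```python
# Minimal Monte Carlo sketch: simulate many random-walk price paths and
# average the endpoints to estimate a future price. Parameters invented.
import random

random.seed(42)  # deterministic for the example

def simulate_price(start, drift, volatility, months):
    """One random-walk price path; returns the price after `months` steps."""
    price = start
    for _ in range(months):
        price *= 1 + drift + random.gauss(0, volatility)
    return price

paths = [simulate_price(start=3.0, drift=0.005, volatility=0.03, months=12)
         for _ in range(10_000)]
expected = sum(paths) / len(paths)
print(f"expected price in 12 months: {expected:.2f}")
```

    The averaged endpoint (and, in a fuller model, its spread) is what feeds the price coefficients of the LP model.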

    I was fortunate to be able to hitch a ride to work and back with a PhD who worked there and lived in my town. During the ride I would tap his deep knowledge of things. He put me onto the various journals in which advances in various forms of OR were described, which I dove into. The math was often above my head, but I was motivated to teach it to myself on a rolling, as-needed basis.

    I thought this was a really cool way of running things. There were all sorts of controls in the refinery, controls that let you create more airline fuel and less heating oil, or any number of other trade-offs. It was amazing that you could compute the setting of all the control knobs that would produce the best mix of products that the market needed. Why would you ever run any operation any other way?? It would be simply ignorant and stupid.

    I finished school, learned more about the way the world worked, and searched high and low for implementations of LP. Anywhere! If they were out there, they were doing a great job of hiding.

    I was confused. How could this be? LP was math, doggone it. It yielded a provably optimal way of running a business. It was proven in real-life production at Esso. Any other way of operating a business was clearly seat-of-the-pants, wet-finger-in-the-wind amateur hour; anyone whose operation was sizable enough to justify the effort would have to use this method if they weren’t plain-and-simple incompetent. But no one seemed to be using it! What’s going on here??

    This mystery was on my mind while I participated in one of the periodic AI crazes that have swept the world of with-it people. While I was still in college, Winograd published his MIT SHRDLU research, in which an “intelligent” robot would converse with a human in English about a world consisting entirely and solely of blocks. You could ask SHRDLU to “put the red block on top of the blue block” and it would do it; you could ask it questions about block world, and it would answer. Amazing! Super-practical! While this was happening, I wrote and submitted a thesis about how to structure knowledge inside of an intelligent robot. All of it useless compared to LP and the associated OR techniques.

    The craze was AI, and the mania lasted a few more years, generating a steady stream of "promising" results, none of them in the same universe of practicality and benefit as LP or any other OR optimization technique, which continued to be used only in "secret" little islands of astounding efficiency and productivity.

    Later in the 1970’s I first applied for a home mortgage. A key part of getting the mortgage was the interview with the loan officer. You had to pass muster to get the mortgage! Another 10 years passed before OR-type models started to be used for credit. When I next applied for a mortgage in 1981, I was interviewed by a loan officer. By my next mortgage in 1987 it was at least partly automated.

    When I got into venture capital in the 1990’s, I discovered that high-value repair parts had recently been optimized in terms of inventory levels and locations. I looked in detail at the pioneering company ProfitLogic, which did inventory and sales optimization for retail stores, answering questions like when should which products be put on what kind of sale, questions that had traditionally been answered by the local marketing “expert,” just as oil refineries had previously been run exclusively by experienced experts.

    Only in the late 2010’s did exactly the same LP models start being applied to medical scheduling, to optimize the use of things like infusion centers and operating rooms. Just as oil refineries produced much more value from exactly the same crude oil inputs in 1968 as a result of LP models, so are infusion centers handling 30% more throughput using exactly the same capital and human resources using the very same LP models.

    Here’s the mystery: why did it take so blankety-blank long for LP models to be applied??? There has been no theoretical break-through. Yes, the computers are less expensive, but given the scale of the opportunity, that was never the obstacle. WHY??!! The answer is simple: there is no good answer. Except of course for the ever-relevant one of human ignorance, stupidity and sloth.

    The example of LP optimization and its agonizingly long roll-out through different applications and industries over more than 50 years – a roll-out that is far from complete! – is a prime example of the reality of computer/software evolution. Among other things, it illustrates the point that many of the most impactful "innovations" are really "in-old-vations," things that are just sitting there, proven in production and waiting for someone to apply them to one of the many domains that would benefit from them.

    Here are a few cornerstones of computer software evolution:

    • Software evolution resembles biological evolution only a little. Not-very-fit software species thrive in broad areas of application, unchallenged for years or decades, while vastly superior ones rule the roost not far away. The only reason why the superior software species don’t migrate to the attractive new place appears to be human ignorance and inertia.
    • Software evolution resembles the much-derided theory of “intelligent design” quite a bit, if you make a slight edit and call it “un-intelligent, un-educated design.” A “superior” (HA!) being does indeed do the designing of the software, in the form of highly paid software professionals.
    • When software appears in a new “land” (platform, business domain), it most often starts evolution all over again, first appearing in classic primitive forms, and then slowly re-evolving through stages already traversed in the past in other “lands.” This persistent phenomenon supports the “unintelligent design with blinders on" theory of software evolution.

    I will explain and illustrate these points in future posts and a forthcoming book.

  • Barriers to Software Innovation: Radiology 1

    There is a general impression that software innovation, in one of its many forms, e.g. “Digital Transformation,” is marching ahead at full steam. There are courses, consultants, posters hanging in common spaces and newly-created Chief Innovation Officer positions. What’s new? What’s the latest in software?

    The reality is that there are large reservoirs of proven, tested and working software innovations ready to be rolled out, but these riches are kept behind the solid walls of dams, with armies of alert guardians ready to leap in and patch any holes through which these valuable innovations may start leaking into practice. Almost no one is aware of the treasure-trove of proven innovations kept dammed up from being piped to the many places that could benefit from them; even the guardians are rarely fully conscious of what they’re doing.

    If anyone really wanted to know what was coming in software, all they would have to do is find the dams and peer into the waters they hold back.  In spite of the mighty dams, it sometimes happens that the software finds its way into practice, normally in a flood that blankets a small neighborhood. Sometimes the flood has been held back for decades. There are cases I know of where an innovation was proven 50 years ago, and is still not close to being rolled out.

    The dams are built in many ways with many materials. The raw materials appear to include aspects of human nature: ignorance, sloth, greed — you know, the usual. The really high, solid dams have broad institutional support, in which “everyone” is fine with things as they are, and won’t so much as give the time of day to an amazing innovation that would change many things for the better – except of course for a key interest group.

    Here is one of the examples I personally know about. It was one of my introductions to what innovation is all about, and the sad fact that creating a valuable innovation is generally the easy part – the hard part is usually overcoming the human and institutional barriers to deploying it.

    Automating Medical Image Reading and Reporting

    When a radiologist gets an X-ray, there are two phases of work. The first is to “read” the X-ray and observe anything non-typical that it shows, anything from a broken bone to a tumor. The second is to generate a report of the findings. Most radiologists, then and now, dictate their findings; then someone transcribes the dictated report and sends it as needed. The details of the report can vary depending on the purpose of the X-ray and the kind of person for whom it’s intended.

    Technology first tested decades ago appears to show that software is capable of “reading” an X-ray at least as accurately as a human radiologist. I will ignore that work for now, and focus on what should be the less threatening technology: translating the doctor's observations into an appropriate report.

    While I was in college, I worked on the early ARPAnet with an amazing group of people, one of whom was an MIT student from the San Francisco area who later went on to fame making major advances in integrated circuits, among other things. The summer after we did most of our ARPAnet work, he got involved with a new initiative to transform the way radiologist reports of X-rays were created. He knew that some of my skills in automated language parsing and generation were relevant, so he invited me out to pitch in. I went.

    GE, then and now, was a major maker of medical imaging systems. They were seriously experimenting with ways of enhancing their systems to make it easier for radiologists to produce reports of their findings. They created a set of mark sense forms, of the kind widely used at the time for recording the answers to tests, to enable a radiologist to quickly mark his observations of the part of the body in question. Here is the form for a person's guts: X form

    Here is part of the form for the hand, showing how you can mark your observations: X observe

    Here is part of the form for the spine, showing how you can customize the report output as needed: X type

    My friends had gotten most of the system together — all I had to do was build the software that would create the radiologist's report. Because of uncertainty about radiologists accepting the results, I had to make the report generator easily customizable, so that each radiologist's typical style of writing could be reproduced.

    Leaving out the details, in a few weeks I created a domain-specific language resembling a generative grammar and rules engine to do the job — along with the necessary interpreter, all written in PDP-8 assembler language, which was new to me. My friends wrote a clear and compelling report describing our work and included an example of our working software in it. Here was the sample filled-out form: X ex pic

    And here was part of the report that was generated by the software we wrote from the input of that form: X ex rep

    The software worked! And yes, the date on the report, 1971, is the date we did the work.
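    The grammar-and-rules idea described above can be sketched today in a few lines. This is my loose reconstruction in Python, with invented finding names and templates, not the original PDP-8 assembler: coded observations from the form are expanded through sentence templates, with a customization hook for each radiologist's style.

```python
# Minimal sketch of a rules-driven report generator: coded observations
# marked on a form expand through sentence templates into narrative text.
# Finding names and templates are invented for illustration.
FINDINGS = {
    "fracture": "There is a {severity} fracture of the {bone}.",
    "normal":   "The {region} appears radiographically normal.",
}

def generate_report(observations, style=lambda s: s):
    """Expand coded observations into sentences; `style` is the hook that
    lets each radiologist customize the final wording."""
    sentences = [FINDINGS[obs["finding"]].format(**obs) for obs in observations]
    return style(" ".join(sentences))

# A "filled-out form" for the hand, as coded data:
marked_form = [
    {"finding": "fracture", "severity": "hairline", "bone": "fifth metacarpal"},
    {"finding": "normal", "region": "remainder of the hand"},
]
print(generate_report(marked_form))
```

    Note the side effect the post emphasizes: the input to the generator is already fully coded semantic data, ready for an EMR, with the narrative report derived from it rather than the other way around.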

    A major company, prominent in the field, had taken the initiative to design mark-sense forms, incorporating input from many radiologists. A few college kids, in contact with one of GE's leading partners, created a working prototype of customizable report-generating software, along with a proposal to bring the project to production.

    Just as a side effect, this project would have done something transformative: capture the diagnostic observations of radiologists into fully coded semantic form. This is a form of electronic medical record (EMR) that still doesn't exist, even today! For all the billions of dollars that have been spent on EMR's, supposedly to capture the data that will fuel the insights that will improve medical care, a great deal of essential medical observation is still recorded only in un-coded, narrative form — including medical imaging reports!

    The bottom line is that this project never got off the ground. Not because the software couldn't be written, but because … well, you tell me.

    See the next post for the continuation of this sad but typical story.

  • Use Advanced Software Methods to Speed Drug Discovery

    Drug discovery is like the worst imaginable, old-style software development process, guaranteed to take forever, cost endless amounts of money, and far under-achieve its potential. There are methods that the most advanced software people use to build effective software that works in the real world, quickly and inexpensively. The small groups that use these methods invent all the new things in software, and then get bought by the big companies.

    Can these fast, agile, effective methods be applied to invent and test new, life-saving drugs and get them to the patients who are dying without them? Yes. The obstacles are the usual ones: the giant regulatory bureaucracies and the incumbents who would be disrupted. Yes, the very people who claim to keep you healthy and cure your ills are the very ones standing between us and speedy drug discovery.

    Drug Discovery and Software

    While I'm not an expert in drug discovery, I've learned more than I wish to know about the regulations through the software providers to the industry. And like many other people, I've learned from being a patient with a disease that could be addressed by drugs that I am not allowed to take, because they are deep in the labyrinth of the years-long approval process.

    I've explained elsewhere how a revolution in medical device innovation could be enabled by transforming the applicable regulations from complex, old-style software prescriptions to simple, goal-oriented ones.

    A similar concept can be applied to the process of drug discovery itself.

    Old-style Software is Like the FDA's New Drug Regulations

    The classic software development process is a long, expensive agony. It's an agony that sometimes ends in failure, and sometimes ends in disaster. It most resembles carefully constructing Frankenstein's monster. It starts with requirements and goes on to various levels of design, planning and estimation. Finally the build takes place. But wait — we can't "release" the software until we know that its quality is top-notch. And that it meets all the requirements. It's gotta work! So let's make absolutely sure that it's up to snuff before inflicting it on the innocent users. Here are details.

    Yes, those innocent users — who are, by the way, chomping at the bit to get at the long-awaited new software whose requirements they signed off on years ago, and that they actually need to get their jobs done.

    So is software development like drug discovery? Let's see.

    • Development that's a long, expensive agony. Check.
    • Don't release it until its adequacy is PROVEN. Check.
    • People who are just dying to use it. Check.

    But here's the difference: for software, usually one company both builds it and decides whether and when to release it. That means the business leaders of the company can balance the tension between adequacy and getting it out there. In the case of drugs, it is adversarial: the FDA declares how each step of drug discovery and testing has to be done, and has armies of people to impose its will on the companies that do the work.

    The FDA Nightmare

    The FDA nightmare has two main parts.

    The first nightmare assures that development and testing are performed in what is claimed to be the "safest" way possible — it's all about protecting patient health! In practice, this means incredibly slow and incredibly expensive. The overhead is far more burdensome than the work itself, which really tells you something. There is a multi-billion-dollar company, Documentum, that got its start as, and still is, the leading provider of software to the pharmaceutical industry for handling the documents required by the FDA. Right away, this expense and overhead burden assures that no small group of brilliant people will create a start-up and deliver a new cure for a disease.

    The second nightmare is that the process is incredibly high risk. The FDA can kill your new drug at any time, including near the end, after all the time and money is gone. This again reduces the number of groups performing new drug development to a tiny number of rich, giant, risk-averse corporations.

    This is like big-corporate software development — only far worse.

    Wartime Methods for Drug Discovery

    I've written a lot about wartime software development. A good way to understand it is to look at bridges in peace and war. In wartime, we build effective bridges while under fire in a tiny fraction of the time needed in peace. And the bridges work.

    The methods translate well to software. They are practical. They work. They are in regular use by groups that are driven to innovate and get stuff done. There are details in my book on the subject, with lots of examples and supporting material in my other books.

    It's very clear that the methods also apply to the FDA's regulation of software. Here is an example. There is no reason other than the usual obstacles to innovation that the principles couldn't be applied to drug discovery in general.

    Wartime Drug Development

    What we should try is Wartime Software Development morphed into Wartime Drug Development. Here are the principles:

    • Grow the baby.

    Instead of going through a whole long process and supposedly coming out with perfection at the end, you start with something that sort of works, try it (on volunteers), see how it goes, make changes and iterate.

    • Principles of e-commerce and social media

    When you think of buying a product, do you just walk into a store and trust the salesperson? If so, you're probably in your 100's and hope to get a computer someday. Everyone else goes on-line, checks reviews, and above all checks comments from real users. The sheer number of comments tells you how popular something is. Of course, you don't blindly believe everyone, and of course you translate what people say to your own situation. There could be awful risks and side effects, but if it sometimes works and your alternative is misery shortly followed by death, you might decide it's worth the risk.

    It's a decision that should be in your hands, informed by full sharing and disclosure, not decided on your behalf by a bunch of bureaucrats sitting in offices.

    • Open source and full disclosure.

    Of the top million servers on the internet, over 95% run linux, an open source operating system. Linux was created by an interesting nerd, and developed by an evolving band of distributed volunteers. It is superior to any commercial operating system. And operating systems are complex; linux contains more than 12 million lines of code! Why shouldn't we make drug discovery open to a similar process? With open source, everything about a drug and its results so far would be open and available for anyone, including patients, to see. Patients and researchers would all be active participants in the open discussions.

    • Continuous release

    The most advanced sites first bring up their software in extremely limited, volunteer-only releases. Everything is tracked. If things go well, more people can be invited in. Incredible tracking, lots of feedback, explicit and implicit. As software goes into wider release, a new version of it may be made available to a combination of new and existing users. Its use may be expanded, or it may be withdrawn. The process is continuous and iterative. It's called continuous improvement. We use it in lots of domains, ever since its use was formalized by W Edwards Deming in car manufacturing. It's not exactly weird or marginal. We simply refuse to apply its proven principles to drug discovery.
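    The staged-release loop described above can be sketched as a simple cohort gate. This is an illustrative sketch, not any particular site's system: each user hashes to a stable bucket, and a release is widened only as tracking at each stage looks good.

```python
# Minimal sketch of staged (continuous) release: each user hashes to a
# stable bucket in [0, 100), and a release is visible to a user only when
# the current rollout percentage covers that bucket. Names illustrative.
import hashlib

def bucket(user_id: str) -> int:
    """Stable bucket 0-99, so a user's cohort never changes between runs."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def in_rollout(user_id: str, percent: int) -> bool:
    return bucket(user_id) < percent

# Start with 1% of volunteers; widen only if tracking looks good.
stage = 1
for user in ["alice", "bob", "carol"]:
    print(user, in_rollout(user, stage))
```

    The same gate supports withdrawal: shrinking `percent` pulls the release back from the most recently added cohorts, which is what makes the process safely iterative.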

    Conclusion

    The FDA says its mission is to keep us safe. The gigantic bureaucratic monolith in practice assures that new drug development is performed by a tiny number of elite corporations at great expense, and rarely. Let's at least try a better way of doing things!

  • The Science of Innovation Success

    Most of what you read about how to innovate and how to achieve success as an entrepreneur is irrelevant at best, and a cargo cult at worst. The real success patterns are not well known. They work. If you want to be seen by the world as doing the right thing, keep doing what "everyone" says you should do. But if you want to … win … you may want to consider learning from the patterns that actually work.

    Patterns that work in health and fitness

    Let's look at a clear winning pattern in health and see if it can be applied to learning how to innovate. (Hint: it can't.)

    Sometimes you're struck down by an illness that no action of yours could have prevented. HOWEVER, there are proven patterns of behavior that greatly improve your health and resistance to disease, and related patterns that clearly result in your being able to run faster, jump higher and lift more weight. While the specific advice to achieve these things varies, the principles as understood by mainstream experts are largely valid.

    It's pretty simple: eat a variety of mostly un-pre-packaged foods with a minimum of additives and things like fructose, and balance exercise and eating so that you maintain a moderate weight for your body type. For fitness, it's exercise and practice.

    In addition to these common-sense patterns, there are other things people do that make sense. If you see someone who has achieved what you want to achieve in terms of health and fitness, it makes sense to find out how they did it and emulate their actions. In addition, it's broadly known that motivation is a key factor, along with attitude and consistent behavior.

    In other words, study what the fit, healthy people do, and do it yourself. Pretty simple, at least in concept.

    Applying the Observe-and-Emulate Pattern to Innovation

    When you learn or achieve most things, you are doing again, or doing for yourself, something that has already been done, typically by millions of people. That's what education is all about, for example. When you get educated, you are walking down a well-trod road. What about science education? Same thing. You have to learn the facts, the concepts, the math.

    What about innovation? Is it just another thing you can get educated in and learn from the teachers, who learn from the experts? No! Innovation is different. Innovation is some combination of creation, discovery and adapting. It's being the first. It's creating something that wasn't there before. It's taking something that worked in a particular time and place, and making the substantial changes required to work in entirely new circumstances.

    Imagine taking a course in exploring new lands in Europe in 1480. Who were the experts? What did they teach? Who could you study and emulate? Of course, there were lots of self-styled, widely revered "experts" who knew all about it. Sure. Columbus had to do it without any help that was actually, you know, helpful!

    Innovation is not like health, fitness and most everything else. It's different.

    Winning Innovation Patterns

    I truly hope someone will figure out if there are winning patterns for innovation and make a science out of it. Until then, from years of observation of people trying hard to innovate, I've noticed a couple of things.

    • Pattern: Expert-phobia

    Successful innovators ignore the experts. They ignore (1) the experts in their field of innovation, and they ignore (2) experts on innovation itself.

    Sometimes experts in a given field, even widely-acknowledged ones, are actually good. While the vast majority simply assert and defend the common view, occasionally an unusual expert will innovate or be helpful to an innovator. But this is the exception. The invention of powered flight is a great example of what usually happens: the "expert" approach never got off the ground, and the hard-working unknowns made the key innovations. Here's my description.

    Successful innovators just don't have time to waste on people who claim to be experts in "innovation" itself. They know that real knowledge is all that matters.

    With all the noise about "innovation" in the air, it may seem to make sense to dive in to the "innovation" pond. I've noticed that the people who actually end up innovating with success don't go there. They've got better things to do. If they largely ignore experts in their field, why would they pay attention to experts in generic innovation?

    • Pattern: Dive Deep, be the Best

    The people who create innovations that work started by diving real deep into some particular area of experience or knowledge. They became real-life, on-the-ground, go-to experts in something. Not famous. Not writing books and giving talks. Just knowing more and accomplishing more in some narrow area of activity.

    Knowing as much as they do, they stick their heads above water, and get dissatisfied. They see waste; or stupidity; or something that could be better or be done better. They set out to do it, from the basis of being the best at the status quo. They know how things are done. They start by wondering how things could be done.

    • Pattern: Ignore the Big Picture, Focus on the Little Picture

    Most people who get known as experts spend most of their energy sharing their wisdom and broadening their knowledge. They don't innovate.

    The successful innovators can be remarkably clueless about the "big picture." Not their problem. They are absorbed with the day-to-day, with what confronts them in the here-and-now. They tend to be do-ers who can think, not thinkers who pretend they could do if they really wanted to.

    Often, the problems that inspire innovators are "trivial," from the big-picture point of view. It is just those problems that inspire practical, real-life innovation. Here's a description of "little picture" innovation, and here's an example of "little data" innovation.

    • Pattern: Innovate as Little as Possible

    Innovators like to innovate. They think of themselves as creative people. They love to solve knotty problems. This is the main problem of many creative people who fail to innovate with success. They can't stop!

    People who innovate successfully innovate something that matters. Then they stop innovating, and do what they need to do to make their innovation work in the real world. They reduce their risk. They stick with proven things. Because they want their innovation to work!

    • Pattern: Solve Real Problems of Real People

    Everyone knows that medical records have to go digital. They've known it for a long time. There were and are loads of experts and industry committees piously pontificating about the best way to do it.

    Then a programmer — yes, a real, live software engineer — went into the records room of a medical practice and learned how to do the job from the people who were already doing it. He did the work, not just for a couple hours, but for days on end. Long enough to see all the issues. Long enough to get bored, get annoyed, get ideas and get motivated to automate stuff.

    He didn't make a plan. He didn't create a strategy. He didn't run some ideas past some people. He wrote some code. Code that would make his life in the filing room better. He tried it out. He wrote more code. The people who really worked there asked if they could use it when he wasn't there — because it would make their jobs easier. What a concept! The code became Athena Health's highly successful clinical records management product — a rare example of innovation taking place inside an already-successful company.

    • Pattern: Apply Step Theory

    Successful innovators don't tend to have carefully-thought-out strategic plans. They don't lay careful foundations. They don't create detailed plans that account for a wide range of contingencies. They know that if they don't get through today, there will be no tomorrow. They know that there may end up being 1,000 steps in their journey, but they also know that if they fail on step 1, they have failed. So step 1 is the ONLY step that matters.

    This is "step theory." For more details and examples, see my book.

    • Pattern: Ignore Fashion, Except for Scale-up Marketing

    It's rare that people who jump on one of the fancy new bandwagons accomplish much of real value. In fact, most of the fancy new bandwagons are little but fancy new names for things that have been around, while others are fads that will fade out. Big Data? Old news. Machine learning? Been around. Blockchain? Great for Bitcoin, not much else.

    Nonetheless, to the extent that the fashionable thing happens to be applicable to a narrow, real-world problem and smart, go-deep people focus on real problems and solve them with urgency, innovation can result. Then, as the innovation starts to get traction, it makes perfect sense to embrace the fashion. Why not? If that's what it takes to get people to pay attention to you, you do it.

    Conclusion

    Here are a few examples of real-life innovation that I'm associated with. Here is a whole book of innovation stories, taken from real life and personal experience. I hope that these patterns of successful innovation will be further explored and help inspire future innovators.

  • Regulations that Enable Innovation

    Regulations that enable innovation? How can that be?? Don't regulations inhibit or even prevent innovation?

    Yes they do. Wouldn't it be nice if there were a way to write regulations that enabled innovation? Well, there is! It's actually easier to write regulations that enable innovation than the usual kind. There are fewer of them. They're easier to understand, and easier to keep up to date. They're more effective at regulating what you could reasonably want to regulate, while keeping the door open for inventive people to find better ways to get things done that still conform to the regulations.

    So why isn't this the standard way of writing regulations? Inertia. Lack of understanding. Fear. Bureaucratic intransigence. The usual reasons.

    Regulations that Enable Innovation

    Practically all regulations tell you, in detail that tends toward the excruciating, How you're supposed to do the regulated thing. The more detail, the less innovation.

    By sharp contrast, regulations that enable innovation tell you What you're supposed to do or avoid doing. The less said about how to reach the goal, the wider the door for innovation.

    Suppose the point of a regulation were to make sure you got to work on time. Typical how-type regulations would tell you exactly when to leave your apartment and exactly what streets and avenues to walk until you got to the office. They would allow for red lights. The regulations would have to change to allow for construction and other changes. If you deviated from the prescribed route or used a different method of transportation, you'd be in violation.

    What-type regulations for the same thing are simple: dude, get to the office on time! How? You figure it out, it's your problem! But it's also your opportunity for learning and evolution. You could try walking, and try different routes. You could try the bus and subway. Taxi and Uber. Different ones under different circumstances. So long as you got to work on time, you'd meet the regulation!
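    In software terms, a How-type regulation prescribes the procedure step by step, while a What-type regulation is just an outcome check, much like an acceptance test. Here's a toy Python sketch of the commute example (the function name and the 9:00 deadline are made up purely for illustration):

    ```python
    import datetime

    WORK_START = datetime.time(9, 0)  # the What: be at the office by 9:00

    def meets_regulation(arrival):
        # What-type check: only the outcome matters, not the route or transport.
        return arrival <= WORK_START

    # Walk, bus, subway, Uber: the regulation doesn't care.
    print(meets_regulation(datetime.time(8, 45)))  # on time: compliant
    print(meets_regulation(datetime.time(9, 10)))  # late: violation
    ```

    Any innovation in route or transportation passes automatically, so long as the arrival time meets the goal.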

    For more detail on What vs. How, see this.

    If this sounds crazy to you, you should realize that there is a whole, vast area of our legal system that works in just this way: the criminal law. See this for more.

    I wouldn't be advocating for change if how-type regulations worked. They usually don't get the job done. They prevent innovation. Worse, when you satisfy all the regulations, you're under the illusion that things are fine. Except that they're usually not. The ongoing cyber-security disasters we have experienced are prime examples of this.

    Cutting down the number of regulations

    Lots of people complain about regulations. Some people want to reduce their number. For good reason! Have a look at this to see the scale of regulations.

    I hope it's now clear that reducing the number of How-type regulations won't make a big difference. It may even make things worse. It's better to replace a whole pile of How-type regulations with a couple of simple, goal-oriented What-type regulations.

    An example of regulatory innovation prevention

    The rhetoric of regulations and licensing is that they protect us poor, innocent consumers from the awful products and services that would be inflicted on us in their absence. The reality is that they are a massive effort that increases the costs of everyone already providing a product or service, while putting up huge barriers to competition from fast, light-footed innovators who have figured out a better way to do things. Regulation, certification and licensing do almost nothing to protect consumers, but are remarkably effective incumbent protection programs.

    While this dynamic plays out in many industries, nowhere is it more harmful to our health and well-being than in healthcare.

    The FDA is supposed to protect our health. It's even what they say they do:

    FDA promoting health

    One of the many ways they do this is by heavily regulating the software that goes into all medically-related devices.

    The right way, the What-type way of regulating that software, would be like a criminal law:

    Your software has to perform all its intended functions in a timely and effective way, without error. When updates are made, no errors or other problems should be introduced.

    Now that's just a first draft. But I bet the final goal-oriented "regulation" wouldn't be too far from this.

    This simple regulation states what everyone really wants: the software should do what it's supposed to do. Period.

    The FDA does the opposite of simple and effective. It tells you exactly how you're supposed to develop software, and in gruesome detail. Here's the overview of the regulation:

    1a

    The sections are listed on the left. Each explodes into many sub-sections, some of which are further divided. Each one is long, detailed and brooks no variation (or innovation). On the right in the image above, you see just some of the bibliography, the many underlying documents you'd better get and understand if you're going to be in regulatory compliance.

    Here's a diagram that gives an overview of what is required:

    62304 fig 1

    Here are the section headings from the software planning part of the document:

    IEC 62304 requirements

    As this makes clear, you'd better not write a line of software until you've spent boatloads of time and effort in planning — exactly what people do when they build buildings using steel and poured concrete, but exactly the opposite of the iterative approach that is the standard among fast-paced, innovative organizations. I mean little upstarts with a high failure rate, like Google, for example.

    If the FDA were serious about their stated mission, "protecting and promoting your health," they would immediately blow up IEC 62304 and the who-knows-how-many-other mountains of how-type regulations they oh-so-lovingly promulgate and enforce, and replace them with simple goal-type, what-type "regulations." It would unleash a torrent of health-promoting innovation and open the lobbyist-loving incumbents to much-needed competition. To the benefit of nearly everyone, except a bunch of progress-preventing bureaucrats employed both by the government and by their corporate "homies."

    Conclusion

    We need regulations. The last thing any of us wants is for corporations to build crappy equipment that doesn't work or deliver services that deceive or hurt us. There are bad and incompetent people in the world, and without appropriate regulations that are vigorously enforced, we'd be worse off. And in extreme cases, dead when we could be thriving.

    Which is why it is so upsetting that major organizations like the FDA keep waddling along, crowing about what a great job they're doing, when it's just not true.

    I wish it were just the FDA. Most major sectors of society that are supposed to be protected by regulations are instead hobbled by incumbent-protecting, innovation-killing, ineffective how-type regulations.

    The path to regulation that is both effective and enables innovation is clear. Let's do it!!!

  • Innovation: the Barriers

    It's hard to be an innovator. You have to come up with cool new stuff, make it work, and get people to use it. Not easy! Depending on your situation, there can be barriers, active and passive, to being a successful innovator. Lots of people in business and government love to talk about how they're innovative, and how they foster innovation. Hah! In all too many cases, what they actually do is build and sustain barriers so strong and so high that innovation is nearly impossible.

    If you look at my earlier posts on innovation, you may think that I'm a cynic. The reality is that I'm an enthusiastic, life-long believer in innovation. My sarcasm is targeted exclusively at the hollow, creativity-killing rhetoric that too often passes for support for innovation.

    Active barriers to innovation

    What about big companies who innovate? That's mostly rumor and self-promotion, rarely a reality.

    What if you're a small company trying to innovate? The barriers are mostly put up by the large businesses that dominate the field in which you want to innovate.

    Will the big business itself innovate? In spite of all the talk, probably not. It's likely they want to be seen as modern, with-it and innovative. It's highly unlikely that they actually want change. This post goes into some detail about the reality behind giant companies that are supposedly great innovators. Why can't big companies innovate? Who knows, but I think the attitude of the pointy-haired boss is a hint:

    Dilbert

    There is lots of information and a few stories about how to out-fox the giants that want to keep you down in my book on building a growing business from a startup. But it's tough. The big guys hold most of the cards.

    Passive barriers to innovation

    Governments are the main source of "passive" barriers to innovation. The barriers are usually in the form of regulations — regulations that can quickly morph into active barriers once you get caught in the cross hairs of one of these innovation-killing agencies.

    You think those regulations are no big deal? The current code of federal regulations is massive, and getting bigger every day. Here's a quick glance at its size:

    CFR

    Of course, no government agency will ever admit that what they are doing is preventing innovation. They are protecting consumers! Enforcing fairness! Doing good stuff, the people's business! That's what they say. Sometimes it's even true. But in most cases, what they are really doing is protecting existing businesses and professionals from competition. They do this by putting up increasingly burdensome and expensive barriers to new products and services entering the market and competing with the establishment.

    Regulatory barriers to innovation are everywhere, in nearly every industry. Why isn't there a huge outcry? Simple:

    • The companies and people that are on the "inside," benefiting from the barriers, vociferously support "protecting consumers" or whatever the b.s. cover story is.
    • The people who would benefit from the innovation don't see the innovations, because they don't exist yet, and so can't really lobby against the barriers.
    • It's just the way things are. Who has the energy to "fight City Hall," particularly when the innovative benefits don't exist yet because of the barriers?!

    The barriers are everywhere, preventing innovation or hurting convenience and price. The barriers are in old, tangible things like a store being able to sell liquor or a car company being able to sell its cars. More importantly, they're in newer, life-and-health things like nearly every aspect of healthcare.

    Barriers to innovation in healthcare are massive, and getting worse. The barriers aren't called that, of course. The government agencies are protecting our health and privacy! But when you lift the covers, it is easy to see that what is really going on is a rapidly metastasizing federal bureaucracy that prevents life-enhancing products and drugs from being invented, massively increases the cost of the relatively few innovations that squeeze through the gauntlet, and slows them down.

    Conclusion

    We're clearly in the middle of an innovation bubble. Everyone says they want it. Companies and government agencies claim to be fostering and promoting it. I'm someone who has worked in the innovation trenches for decades. I try to innovate myself, and help others to do it. It's not easy. That's why I get so cynical about all these innovation-smothering institutions who are so loudly in favor of innovation. Their words say one thing and their actions say another. All their innovation amounts to is a pile of marketing rhetoric, an attempt to make themselves appear to be modern.

  • The Healthcare Innovation Spectrum: From Washing Hands to AI

    There's a spectrum of ways to innovate in healthcare. On one end is simple stuff, like making sure things are clean and germ-free. On the other end is exotic stuff, like using AI: Artificial Intelligence and Cognitive Computing. Obvious questions: (1) Where is the money going? (2) Where is the value? (3) Is the money going where the value is? Simple answer: the "smart" money is going to exotic gewgaws, ignoring near-term value and patient health.

    Where the Money is going

    The money is clearly going to exotica. Ignoring for the moment the billions IBM and others are pouring into what they call Cognitive Computing, VC's are investing heavily in healthcare-directed AI. See this:

    AI healthcare 1

    We're talking serious money here:

    AI healthcare 2

    While there are loads of conferences, trials, talks and articles talking about the great future here, there is an obvious conclusion to be drawn: while the money is being spent now, the benefits (if any) are in the future.

    That's about all you need to say about it.

    The middle of the spectrum

    While things like AI are clearly at one far end of the spectrum of healthcare innovation, there are intelligent, educated things in the middle of the spectrum. Lots of people are pursuing these innovations with great energy. I've discussed an example of one such approach here.

    The Oak HC/FT portfolio company VillageMD is another clear example of data-driven innovation in healthcare. No new math or fancy computers are required. "Just" educated, dedicated people looking at the data and making required behavioral changes based on those facts. The founder of VillageMD, Clive Fields, just won a major award for his work, using all-organic and natural intelligence — no artificial ingredients! Guess what: it's here and now! The outcomes of real patients are being improved as you read this!

    The basic end of the spectrum

    On the other end of the spectrum from AI, we've got things that shouldn't need "innovation." They should be standard practice. They have huge impact. They are the shocking, scandalous modern equivalent of antiseptic surgery — things that no one seriously disagrees with, but which the important experts and leadership type people somehow can't lower themselves to pay serious attention to. Or when they pay attention, it's with actions that do nothing to solve the problems.

    A good candidate for the poster child of this end of the spectrum is what the CDC calls healthcare-associated infections, HAI's. In other words, getting sick from going to the hospital. Here is the CDC's summary of the situation:

    11 HAI

    I don't know about you, but this makes me sick. 75,000 preventable deaths in a year, preventable using non-exotic methods. No Cognitive Computing required! There are cures, demonstrated at multiple hospitals that have put serious effort into it. This article summarizes the efforts and approaches, ranging from simple changes of cleaning practices to fancy new machines.

    Conclusion

    There's a clear spectrum of innovation in healthcare, ranging from blocking-and-tackling basics at one end, to exotic new things based on various forms of Artificial Intelligence at the other end, with smart, non-exotic, data-driven methods occupying the middle ground. Most of the "smart" money appears to be going to the fancy exotic end, with results sometime in the indefinite future, while the rest of the spectrum trundles along, largely under the radar, delivering results to patients today.

  • Innovation Stories

    I worked at Oak Investment Partners for a long time until retiring from it at the end of 2015. Here is part of my page on the Oak website in 2015:

    2015 12 17 David B. Black - Oak Investment Partners

    During that time, I had the opportunity to dive into hundreds of tech companies over many cycles, and the further opportunity to be an insider at dozens in which we invested. I learned that what most people tell you about how to be a successful entrepreneur often doesn't match up well with the winning companies I saw.

    So what's a person to do?

    One thing you can do is read my book. It won't tell you how to win (that's on you), but it will clearly identify some of the most important success patterns to follow, and some of the popular failure patterns to avoid. It has dozens of examples from real life to illustrate the points.

    Here are some of the companies in the book and the points or patterns they illustrate.

    CRM co., OpenData, Sybase: Do NOT make your execution match your strategy! If you're going to invade a country, don't attack everywhere, pick a beach.

    Captura/Concur: Don't let perfection get in the way of making your product usable.

    Web services company: Pick something that you can finish, well and quickly.

    Smartdrive: When you think you're really focused, try making the focus even narrower.

    Inktomi: Don't move on to the next battle until the current one is totally wrapped up; mostly wrapped up may not be good enough.

    Workflow companies, collections: The customer defines the problem, not you.

    HNC/FICO: Using a platform to attack a narrow but important problem set.

    US Auto Parts: Does the customer have a problem right now?

    G-Market/E-Bay: Cross-border issues are more than language.

    Bank processors e.g. Fiserv: Customers aren't fond of risk.

    Nextpage: Are your benefits tangible?

    Fastclick: Can you deliver results quickly?

    Rebelmouse: Make each step towards a vision be usable.

    Athena Health: Adding a whole new service can be 1+1=3

    Radisphere to Candescent Health: Giving your customers to someone else can be a great idea!

    Company A: Using end-user products in a product/service can save time and money.

    Video Ad Network: "Sell it first, then build and deliver it" seems backwards, but it beats everything else in the right situation.

    Maestro Health: You don't always have to program everything; sometimes having people do some of the work is a big win.

    Evident: Methods that are great in one domain may be failures in a different one.

    The Innovator’s Dilemma book: Listening to your customers can hold you back.

    Smartdrive: Picking the right group of customers to listen to is key.

    TxVia, Feedzai: Building a tool and delivering an application or service with it can be an overwhelming advantage.

    MobiTV: Do you have large customers? The power relationship determines the outcome.

    Huffington Post: Pick a direction, go quickly, stop for nothing.

    Conclusion

    I'm kind of slow. It took me more than ten years to start noticing the patterns I've written about, and another ten years of testing the patterns against the companies my partners looked at and/or invested in. But they've held up. I know I haven't discovered all the relevant patterns or explained the success of every company, but I also know that I haven't read elsewhere the things I wrote in the book, which is why I took the trouble to write it. I hope new generations of innovators will improve their odds of success by following the path of the winners.

     

  • Innovation and Experts

    Lots of people want to promote Innovation these days. Why not get in a top expert to help? Answer: if you want to innovate, ignore the experts! With rare exceptions, "experts" are the enemy of innovation, and supporters of the status quo.

    Experts

    If you're doing something new and want to do it right, it's natural to seek the help of someone who's been there and done that. If you want to do the thing in an innovative way, that's all the more reason to seek expert help; the innovation you need may already be out there, and who's more likely to know it than an expert?

    Turning to experts is what we do. At a basic level, that's why we have schools, degrees and certification programs. A person with an MBA is supposed to be much more of an expert about business than the average Jane or Joe. But an MBA is just an entry-level expert. What many people want is an Expert, or even better, an EXPERT!!

    An expert is someone who knows loads and loads about a certain portion of common knowledge. They can tell you what the common practices in a given area are, what they would characterize as "best practices." There may be some weird, fringe people out there working at you've-never-heard-of-it places who do things differently and make wild claims about what they do. But can you take the risk of going out on a limb and failing, when all the top organizations do X? Of course not.

    Experts are herd dogs. They get everyone to make roughly the same choices that everyone else makes, and go in roughly the same direction.

    Think about the process of selecting an expert. Don't you want someone who is generally acknowledged to be an expert? Who advises major organizations to do what the "leading" other major organizations do?

    Think about being successful as an expert. The vast majority of the potential fees come from major organizations. None of whom want to be told they're doing things all wrong. Most of whom would like validation, and maybe some minor tweaks. That's where the client list and fees come from.

    Experts want to be recognized, hired and paid by rich, mainstream organizations. Organizations want experts to help guide them to not stray too far from the pack.

    In other words, the vast majority of large organizations are like sheep traveling in a herd. If they wander off from the herd, they may get lost or hurt! Experts are like sheep dogs who bark and nip at the sheep who wander off or lag behind.

    If you want to innovate, the last thing you should want is a typical "expert."

    An expert on experts

    To get the real story on experts, let's turn to the person who is, above all others, THE expert on experts. Richard Feynman boils the subject down to terms anyone can understand:

    Science

    An "expert" is someone whose knowledge we are supposed to accept based on the authority of the expert. It's not our place to question it. The whole reason to get an expert is that we assume we can't possibly figure out what to do ourselves!

    A scientist reacts to an expert's assertions by asking things like "Why?" "How do you know that?" "Where are the experiments that prove that what you are saying is true?" Scientists don't take things on authority. Feynman is saying that experts are nothing more than people who say, with deep voices and calm authority, "This is the truth, my child." In any situation in which you are supposed to take things on faith, the natural reaction of the scientist is: you're definitely ignorant, and probably wrong. Why do you need the take-it-on-faith stuff if you can prove it? Science replaces faith in people (i.e., experts) with reliance on facts, proof, numbers and math.

    Experts and flight

    One of the best examples of innovation and the expert effect is the history of manned flight.

    One of the most famous experts of his day was Samuel Pierpont Langley:

    330px-Samuel_Pierpont_Langley

    He built and launched a couple of unmanned planes that flew thousands of feet. He was famous. He got major funding from the government, and everyone expected him to succeed. He was the ultimate expert in aviation.

    There was just one problem. His planes all crashed. Here's one that "flew" right into the Potomac River in 1903:

    330px-Samuel_Pierpont_Langley_-_Potomac_experiment_1903

    Nonetheless, in spite of his complete and utter failure to even come close to controlled manned flight, belief in the expertness of the wonderful expert Langley remained so great that his reputation stayed high, and all sorts of aviation-related things are named in his honor, from medals to airports.

    We all know who actually figured out how to make a flying machine: the Wright Brothers.

    11

    These guys built bicycles in Dayton, Ohio! No fame. No government money. In no way were they experts. But: they were scientists! In the true sense of the word — in Feynman's sense. Here's a bit of what they did:

    22

    In other words, they figured out what the real problems were, did designs, built prototypes, ran tests, and … innovated!!!  Here is one of their flights in 1904:

    1904WrightFlyer

    The rest of the story tells us a huge amount about innovation and experts. Briefly, no one believed them! They went for years trying to get government interest. Years later they were celebrated as heroes, but at the time, even the local government and press ignored them. Finally their accomplishments were accepted in 1909, when they flew up and down the Hudson River for half an hour in front of an estimated one million people, circling the Statue of Liberty.

    No one could believe that these non-expert nobodies could have solved a problem that stumped the nationally recognized, accepted experts.

    Conclusion

    If you want to know what to do, you have two basic paths.

    One is to hire an expert to tell you basically what everyone else is doing. It's a good way to be "safe," and avoid innovation of any kind. But nothing stops you from crowing about how innovative you are, at least compared to the sheep staring at your back legs!

    The other way is to be a scientist and figure out what the real problems are and how to solve them. Then do it. It's what innovators do.

    You pick. I know what my choice is.

     

  • Healthcare Innovation: How to Achieve EMR Interchange

    EMR interchange has been a major goal of the tens of billions of dollars that have been spent to buy and install EMR's. The theory is that making it easy for the next medical provider you see to have access to your complete health record will improve health. It might! But the current methods for achieving integration are not working. Not. Working. It's easy to understand why they will NEVER work, and what can be done to achieve the same result.

    Not to be mysterious about it, here's how: forget EMR interchange. It's not working because it's hard and none of the people who build and control EMR's really want it to work. Instead, enable a new generation of personal EMR's. It's literally hundreds of times easier.

    My EMR vs. Integrated EMR's

    Everything is great if I go to a single integrated hospital system that uses a single EMR. I go from place to place in the hospital complex, and everyone knows who I am, where I've been and what's going on:

    1 EMR_0005

    No problem.

    The problem happens when I go to an office, a clinic and a hospital. They each have EMR's. What all the "experts" think is best, backed by tens of billions of dollars, is for the systems to talk with each other. What I suggest instead is a MyEMR app, which gets the latest information from each EMR and uploads everything to the next place I visit. Here's the choice:

    1 EMR_0004

    They look pretty similar, right? There are three unique lines (data paths) connecting my EMR to each of the places I've visited, and there are three unique lines connecting each of the providers (H-C, H-O and C-O).

    When the numbers grow, they start looking not quite so equivalent. Let's look at six distinct EMR's. With MyEMR, there are just six possible connections:

    1 EMR_0001

    But if the six have to interchange with each other, we're up to 15 possible connections.

    1 EMR_0002

    Hmmm. Not a good trend. What about when the number gets bigger? What if 100 EMR's had to talk with each other? How many unique connections (data paths) would there be then? Here it is:

    1 EMR_0003

    You may say there aren't that many vendors. But getting two different installations of EMR software from the same vendor to talk is still a lot of work! Not to mention the fact that there are many different versions, configurations and customizations of each piece of software. The real number is likely to be much larger!
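    The arithmetic behind these pictures is simple: a personal EMR is a hub, so it needs one data path per system (n), while full pairwise interchange needs a path for every pair of systems, n(n-1)/2. A minimal sketch of the counting (just the arithmetic, not any real EMR software):

    ```python
    def hub_connections(n):
        """Personal EMR (hub-and-spoke): one data path per system."""
        return n

    def mesh_connections(n):
        """Full pairwise interchange: one data path for every pair of systems."""
        return n * (n - 1) // 2

    for n in (3, 6, 100):
        print(f"{n} EMRs: hub = {hub_connections(n)}, mesh = {mesh_connections(n)}")
    # 3 EMRs: hub = 3, mesh = 3
    # 6 EMRs: hub = 6, mesh = 15
    # 100 EMRs: hub = 100, mesh = 4950
    ```

    At three systems the two approaches look the same; at six the mesh already needs 15 paths to the hub's six; at a hundred it's 4,950 versus 100. That quadratic growth is the combinatorial explosion.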

    Conclusion

    Just installing an enterprise EMR tends to be an incredibly expensive, years-long disaster. There's a good reason based on simple arithmetic that many years and tens of billions of dollars have yet to achieve any meaningful amount of interchange between EMR's — there's a combinatorial explosion. The same arithmetic strongly favors the personal EMR approach.

    Incentives also favor the personal EMR as the center point of integration. How eager is one hospital CEO to make it really painless for patients to go to the competitor? Patients, on the other hand, are highly incentivized to want the data in their hands; not only would it save endless hours of filling out paperwork and sitting through yet another history interview with its inevitable misinformation, but it's likely to help their providers avoid errors and keep them healthier. Of course, the vendors and systems have a death-grip on patient data, and really don't want to give it to patients, regardless of what they might say. But at least sending data to personal EMR's is a solvable problem without a combinatorial explosion of work to get it done.

    I want a personal EMR!

  • Healthcare Innovation: EMR’s and Paper

    EMR's are essential. They are going to bring healthcare into the digital age — finally! Healthcare organizations are spending billions of dollars to implement EMR's, and the government is doing the same. They're preparing the ground for the incredible benefits of Big Data and Cognitive Computing!

    There is no doubt that the money is being spent. EMR's are certainly being implemented. Are they working? Eliminating paper? Not so much. One thing they are certainly doing is making doctors spend less time with patients and more time with computer screens.

    I could go wild with statistics, but all this got tangible for me when I accompanied a family member to a surgical procedure with a top-flight provider at a first class facility in Manhattan recently.

    Here is the notebook of papers that accompanied the patient everywhere:

    Notebook

    Some of the papers were computer-generated, but most were not. We spent loads of time fielding questions whose answers had already been entered into various systems — including the provider's! Various papers whose text had nothing to do with medicine had to be signed — papers concerning regulators, administrators and lawyers.

    I heard the dialogue in other booths, with huge amounts of time spent trying to get information out of the memory of patients and onto paper. Here is a nurse doing her job:

    Nurse paper

    I could see that there were also lots of computers all over the place. Not that it mattered.

    It turns out that the medical care was excellent, and the procedure successful. Good news! Would eliminating the paper have made it better? Hard to see. If the medical history had already been available, would it have saved some time? Well, the medical history was all available — the provider had already gotten everything required and entered it into his own system before agreeing to conduct the procedure! So everything done at the hospital was just a bunch of wasted effort anyway, whether it was on paper or on computer! Could the provider's EMR have transferred the information about the patient to the hospital's EMR for this scheduled procedure? Maybe. But it didn't happen, and we know from government statistics that it rarely does.

    Tens of billions of dollars are being spent implementing EMR's so we can experience the wonderful benefits of getting rid of paper. Sounds good, but I suspect that no true science or even engineering has been done here. How do we know things will be better in the gold-plated EMR future? Has anyone done patient outcome studies? How about time utilization studies? Has anyone tried alternatives? After all, EMR's can't possibly be a goal — who cares about EMR's except EMR vendors? EMR's can only be a means to an end; and the only end worth anything is better patient care at lower cost.

    What we know for sure is that we're achieving higher costs by implementing EMR's. We're not eliminating the paper. Too much of the data that ends up in the EMR is crap, and too much is missing or wrong. We're not getting accurate data into a single place. We don't have a clue whether we're making patients healthier as a result; we don't know whether we could make patients healthier by spending the money in a different way. Maybe it's time to apply some fresh thinking here.

    I'm a computer guy. And a facts kind of person. I know that computers and software can make things better for everyone in medicine. I'm NOT saying we should forget this new-fangled computer thing. I'm saying we could get dramatically better results for a fraction of the money we're spending.
