Category: AI, Cognitive Computing

  • The Healthcare Innovation Spectrum: From Washing Hands to AI

    There's a spectrum of ways to innovate in healthcare. On one end is simple stuff, like making sure things are clean and germ-free. On the other end is exotic stuff, like using AI: Artificial Intelligence and Cognitive Computing. Obvious questions: (1) Where is the money going? (2) Where is the value? (3) Is the money going where the value is? Simple answer: the "smart" money is going to exotic gewgaws, ignoring near-term value and patient health.

    Where the money is going

    The money is clearly going to exotica. Ignoring for the moment the billions IBM and others are pouring into what they call Cognitive Computing, VCs are investing heavily in healthcare-directed AI. See this:

    [Chart: VC investment in healthcare-directed AI]

    We're talking serious money here:

    [Chart: healthcare AI investment totals]

    While there are loads of conferences, trials, talks, and articles touting the great future here, there is an obvious conclusion to be drawn: the money is being spent now, while the benefits (if any) lie in the future.

    That's about all you need to say about it.

    The middle of the spectrum

    While things like AI are clearly at one far end of the spectrum of healthcare innovation, there are intelligent, educated approaches in the middle of the spectrum. Lots of people are pursuing these innovations with great energy. I've discussed an example of one such approach here.

    The Oak HC/FT portfolio company VillageMD is another clear example of data-driven innovation in healthcare. No new math or fancy computers are required. "Just" educated, dedicated people looking at the data and making required behavioral changes based on those facts. The founder of VillageMD, Clive Fields, just won a major award for his work, using all-organic and natural intelligence — no artificial ingredients! Guess what: it's here and now! The outcomes of real patients are being improved as you read this!

    The basic end of the spectrum

    On the other end of the spectrum from AI, we've got things that shouldn't need "innovation." They should be standard practice. They have huge impact. They are the shocking, scandalous modern equivalent of antiseptic surgery: things that no one seriously disagrees with, but which the important experts and leadership-type people somehow can't lower themselves to pay serious attention to. Or when they do pay attention, it's with actions that do nothing to solve the problems.

    A good candidate for the poster child of this end of the spectrum is what the CDC calls healthcare-associated infections (HAIs). In other words, getting sick from going to the hospital. Here is the CDC's summary of the situation:

    [CDC infographic: summary of healthcare-associated infections]

    I don't know about you, but this makes me sick: 75,000 deaths in a year, preventable using non-exotic methods. No Cognitive Computing required! There are cures, demonstrated at multiple hospitals that have put serious effort into it. This article summarizes the efforts and approaches, ranging from simple changes to cleaning practices to fancy new machines.

    Conclusion

    There's a clear spectrum of innovation in healthcare, ranging from blocking-and-tackling basics at one end, to exotic new things based on various forms of Artificial Intelligence at the other end, with smart, non-exotic, data-driven methods occupying the middle ground. Most of the "smart" money appears to be going to the fancy exotic end, with results sometime in the indefinite future, while the rest of the spectrum trundles along, largely under the radar, delivering results to patients today.

  • Healthcare Innovation: Can Big Data and Cognitive Computing Deliver It?

    Most people seem to agree that healthcare is ripe for innovation, and badly needs it. Lots of people are talking up two potential sources for that innovation: Big Data and Cognitive Computing.

    I'm strongly in favor of data, the bigger the better. But is the Big Data movement going to make a difference? I'm strongly in favor of cognition, computing, and computing that is smarter rather than dumber. But is the Cognitive Computing movement likely to make a difference? Here's a summary of some thoughts.

    Process automation and continuous improvement

    Here is a description of the core process-automation approach implemented by a company I've invested in, Candescent Health. It describes a process that can and should be applied to all of healthcare.

    The point isn't that there's data and analytics – the point is that there's a closed-loop process of continuous improvement in which actions are based on rules. This framework is required to make anything happen. Without it, you can't put your proposed new clinical action into practice with a double-blind A/B test and see whether the results of your analytics actually deliver benefits in the real world – or even deploy it at all!
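
    To make the idea concrete, here is a minimal sketch of what such a closed loop might look like. Everything in it – the rule names, outcome rates, and promotion threshold – is invented for illustration; it is not Candescent's actual system.

    ```python
    import random

    # Hypothetical closed-loop improvement cycle: measure both arms,
    # and only promote a proposed rule change if the A/B test supports it.
    # All names, rates, and thresholds are invented for illustration.

    def measure_outcome(rule):
        """Stand-in for a real-world measurement of a rule's effect."""
        base = 0.70 if rule == "current_protocol" else 0.74
        return sum(random.random() < base for _ in range(1000)) / 1000

    def ab_test(current, proposed, runs=20):
        """Measure both arms repeatedly; return the mean lift."""
        lifts = [measure_outcome(proposed) - measure_outcome(current)
                 for _ in range(runs)]
        return sum(lifts) / len(lifts)

    # The loop: propose, test, and act on measured evidence,
    # not on analytics output alone.
    lift = ab_test("current_protocol", "proposed_protocol")
    if lift > 0.02:  # promotion threshold (illustrative)
        print(f"Deploy proposed rule: measured lift {lift:.3f}")
    else:
        print(f"Keep current rule: lift {lift:.3f} is below threshold")
    ```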

    How about just making the basics work?

    Here is the story, illustrated by Mt. Sinai hospital, about how everyone focuses on "innovation" and fancy new things, when just having the computer systems run reliably has a huge impact on patients – and unless those systems run, the results of fancy new analytics can't be delivered to benefit patients.

    If the car won't start or run reliably, who cares how good the fancy sound and navigation systems are?

    How about making the computers work?

    I love data and analytics. But doesn’t it make sense to focus on getting the operational computer systems to actually run well before moving on to the fancy stuff?

    Paying top dollar for computers doesn't make them work

    In fact, just about anything you do with healthcare data that is going to be brought to the front line of care requires functioning computer systems to pull off – and the big healthcare systems pay Greenwich, CT prices and get trailer park results.

    Clean data isn't easy to get

    Both data warehousing and the fancy new Big Data movement share the under-appreciated problem of getting good-quality data in analytics-ready form. It sounds simple, but the difficulties make progress a grinding crawl on many efforts. See this, for example.
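
    For a flavor of the grunt work involved, here is a small sketch of typical cleanup, assuming a made-up extract in which the same lab value arrives in mixed units with missing and inconsistently cased patient IDs. All field names and data are invented.

    ```python
    import pandas as pd

    # Hypothetical raw extract: one lab value recorded in mixed units,
    # with a missing ID and inconsistent casing. Data is invented.
    raw = pd.DataFrame({
        "patient_id": ["A001", "a001", "A002", None],
        "glucose":    [5.5, 99.0, 110.0, 6.1],   # mixed mmol/L and mg/dL
        "unit":       ["mmol/L", "mg/dL", "mg/dL", "mmol/L"],
    })

    clean = raw.dropna(subset=["patient_id"]).copy()
    clean["patient_id"] = clean["patient_id"].str.upper()

    # Normalize glucose to mg/dL (1 mmol/L of glucose is about 18 mg/dL).
    is_mmol = clean["unit"] == "mmol/L"
    clean.loc[is_mmol, "glucose"] *= 18.0
    clean["unit"] = "mg/dL"

    print(clean)
    ```

    Multiply this by hundreds of fields and dozens of source systems, and the grinding crawl becomes easy to understand.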

    Big data sets tend to have Big problems

    Massive data sets have built-in problems that make it hard to get actionable results: with thousands of candidate variables, spurious correlations are statistically guaranteed, and findings that look significant often evaporate on replication.
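
    A toy demonstration of the point, using nothing but random numbers: generate enough candidate variables and some of them will "correlate" with any outcome.

    ```python
    import numpy as np

    # Pure noise: a random "outcome" and 2,000 random candidate variables.
    rng = np.random.default_rng(0)
    outcome = rng.normal(size=500)
    candidates = rng.normal(size=(500, 2000))

    corrs = np.array([np.corrcoef(outcome, candidates[:, j])[0, 1]
                      for j in range(candidates.shape[1])])
    strong = int(np.sum(np.abs(corrs) > 0.1))
    print(f"{strong} of 2000 pure-noise variables 'correlate' with the outcome")
    ```

    With 500 samples, a correlation above 0.1 is roughly a p < 0.05 event, so dozens of the 2,000 pure-noise variables clear that bar every time.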

    AI: How about under-promise and over-deliver for a change?

    Skepticism about Cognitive Computing in health care is warranted. There is a rich history of over-promise and under-deliver for AI efforts in general.

    Real-world solutions waiting to be automated

    Meanwhile, there are proven gems languishing in the medical literature, just waiting to be disseminated to the front lines of healthcare via point-of-care computer systems.

    What can make a difference?

    There are lots of practical, tangible ways to make things better, in spite of all the obstacles to change pervading our healthcare system. Here are some examples of people doing the right thing, all of them with investments by Oak HC/FT:

    • Candescent delivers better imaging results with less expense by applying basic continuous-improvement workflow automation.
    • VillageMD delivers better results at lower cost by feeding results and advice back to PCPs.
    • Aspire delivers better results at lower cost for end-of-life care by having one person in charge, managing everything from the patient's point of view.
    • Quartet makes a difference by applying behavioral health care as needed to help treat other conditions.

    These companies embody some common themes:

    • Knock down the silos; take a patient-experience-centric point of view.
    • Apply common sense; it has huge benefits.
    • Focus on delivering results to the front line (the patient); it's hard but necessary.
    • Build a system of continuous learning and delivery; it's a pre-condition to delivering any results of analytics for patient benefit.

    Conclusion

    The big hot topics in healthcare of Big Data and Cognitive Computing are little more than fashion statements. Data, of course, is a good thing; so is having computers do smart things. But without doing some basic blocking-and-tackling and applying some practical common sense, a great deal of time, money and energy will be spent accomplishing nothing.

  • Human-Implemented Cognitive Computing in Healthcare

    I'm pretty skeptical about "cognitive computing." It's hard even for people to "cognitively compute." In fields like medicine, only the brightest, most educated and experienced people are capable of producing useful results. But when they do, the results can be powerful, useful and save lives.

    Cognitive Computing

    I've worked closely with computers since I was in my teens. I've seen their impact on society, and the huge productivity gains when properly applied. But ever since the early days, a subset of the computer industry and the public has insisted on seeing computers as versions of human brains. Periodically, the computer industry gets excited about how the latest computer hardware and software will enable computers to do things that only the smartest and most educated humans can do. When the excitement gets frothy, the movement is called something new, so no one will be "confused," and think it's the same as all the earlier, essentially identical movements that have failed and quietly faded away. The latest such movement is called "cognitive computing."

    Medicine

    Doctors who specialize in a field of study both contribute to advancing the state of the art and stay on top of advances made by others. Action-oriented review articles are particularly valuable — they both summarize advances made by many people, and advance clinical practice by making those advances practical and actionable.

    I'll take an example from emergency medicine, this journal in particular:

    [Image: JEM journal cover]

    One of the articles that appeared in that journal earlier this year was about Horner's Syndrome in children.

    [Image: article title page]

    There is lots of fascinating and useful information in the article. It's just amazing the things that go on in the human body. Knowing about all the things that sometimes go wrong makes it all the more impressive that most bodies work so well most of the time!

    Here's a chart in the article that boils it all down. You've got a child presenting with ptosis. What's going on here? There are a couple of options, and the chart guides you to figuring it out.

    [Flowchart 1: differential diagnosis for a child presenting with ptosis]

    One of the outcomes is Horner's Syndrome. What do you do next? Here's a chart that makes it all clear.

    [Flowchart 2: next steps for Horner's Syndrome]

    This example of human-implemented "cognitive computing" is similar in principle to many valuable intellectual and scientific results. It's the result of years of effort and study by many people handling many cases, and publishing the results. The advance here is boiling it all down and translating it into simple, unambiguous flowcharts that guide you to do the right thing, without leaving out anything important.

    Can and should flowcharts like this be made available to front-line clinicians as they are seeing cases? Yes, of course, if only to enable the clinician to make sure something new hasn't emerged since last time he/she checked. Does it require fancy computing to make this happen? It does not.
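
    To underline how little computing power this takes, here is what such a flowchart amounts to in software: plain conditional logic. The branch criteria below are invented placeholders, not the article's actual clinical algorithm.

    ```python
    # A published flowchart maps directly onto plain conditional logic.
    # The branch criteria below are invented placeholders, not the
    # article's actual clinical algorithm.
    def evaluate_ptosis(findings):
        if findings.get("miosis") and findings.get("anhidrosis"):
            return "Suspect Horner's Syndrome: go to the Horner's workup chart"
        if findings.get("congenital"):
            return "Congenital ptosis pathway"
        return "Other causes: continue the differential per the chart"

    print(evaluate_ptosis({"miosis": True, "anhidrosis": True}))
    ```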

    Conclusion

    Getting simple results like these is amazingly tough work for highly educated and motivated human beings. Meanwhile, computers are not yet capable of tying my shoes. When they are, I will gladly put them in the running for doing something more sophisticated and important, like picking who should have the lead role in the next James Bond movie. "Reading" the medical literature and figuring out how to respond when a child presents with ptosis? The crew that can't even keep a hospital's critical computer systems running is going to one-up humans? Maybe it's something you'll welcome for your child; for mine, I'll pass on the opportunity, thanks very much.

  • Cognitive Computing and Healthcare

    They say that cognitive computing, the term du jour for Artificial Intelligence (AI), is in the process of transforming healthcare. Billions of dollars of investment are behind the effort. Sadly, there are good reasons to believe that little good will come of it.

    Cognitive Computing

    Whatever it is, people are pretty sure it's BIG. Here's what a major investor and the former GM of IBM's Watson unit says about it:

    [Quote graphic: projected size of the cognitive computing market]

    $80 billion! Before long, we'll be talking serious money here!

    Where's this money going? Lots of places. But there's one special target for the money. The same expert tells us:

    [Quote graphic: healthcare as a prime target for cognitive computing]

    Cognitive Computing in Healthcare

    Is Cognitive Computing really happening in healthcare? You betcha. IBM's Watson by itself is making major inroads into healthcare, with terrific-sounding projects at Sloan Kettering, Cleveland Clinic, MD Anderson and others. Good things are coming! For example, C. Martin Harris, MD, chief information officer of Cleveland Clinic, says:

    Cleveland Clinic's collaboration with IBM is exciting because it offers us the opportunity to teach Watson to 'think' in ways that have the potential to make it a powerful tool in medicine.

    And here is how the CEO of IBM explained it in an interview:

    [Quote graphic: IBM's CEO on Watson]

    It's so hot that IBM has created a separate division for Watson, investing more than $1 billion just to get it started, and will have a headquarters group employing more than 2,000 people.

    So What's the Problem?

    Big investments like this should mean that there's a big problem to be solved. What is it? Not enough doctors? The doctors are too expensive, and somehow automating what they do with this mega-expensive effort will help that? The doctors aren't as smart or educated as Watson will (by presumption) be?

    Someone involved should let the rest of us know.

    Meanwhile, count me a skeptic. The reason is simple: there is a decades-long history of researchers and big companies making claims more modest than the ones being made for "cognitive computing," and they've all failed, technically and/or in business terms. In the end, computers do get used for more and more, as we all know from personal experience. That's a trend that will certainly continue. But "cognitive computing," i.e., AI reincarnated and re-named? Uh-uh.

  • Human and Inhuman Analytics

    While people talk about analytics in general, there are really two distinct varieties: human analytics and inhuman analytics. First, there is analytics for and by humans, i.e., numbers, tables and graphs designed by humans for human consumption and consideration. Second, there is algorithmic analytics, originally designed by humans but then set off to make observations and decisions, and perhaps take actions, on its own. I dub this "inhuman analytics," because that's what it is. It is incredibly important to understand the differences between these two things, related in name but little else.

    Human Analytics

    When most people think about analytics, they're usually thinking about things like Data Warehouse (DW), Online Analytic Processing (OLAP), Business Intelligence (BI), and related subjects.

    This is a subject that is broad and deep, with many products and vendors that have evolved over time. But there is a simple unifying theme: these are tools intended to provide information to people, often in the form of graphics, so that those people can understand what's going on and take any action that may be appropriate.

    Oracle, for example, has a wide variety of such tools:

    [Image: Oracle BI tools]

    Microsoft also has a variety of such tools:

    [Image: Microsoft BI tools]

    Note that both companies illustrate their approach using screens and people. That's what this type of analytics is all about.

    There are a wide variety of BI tools from many vendors, in addition to open-source options.
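
    A minimal example of human analytics in this sense – aggregate the data into a table a person can read and act on – might look like the following. The dataset is invented.

    ```python
    import pandas as pd

    # "Human analytics": summarize data into a table (or chart) designed
    # for a person to read, interpret, and act on. The data is invented.
    visits = pd.DataFrame({
        "department": ["ER", "ER", "Radiology", "Radiology", "ER"],
        "wait_min":   [42, 55, 18, 25, 61],
    })

    summary = visits.groupby("department")["wait_min"].agg(["mean", "max"])
    print(summary)  # a human reads this and decides what to do next
    ```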

    Inhuman Analytics

    Inhuman analytics, a term that no one else uses so far as I am aware, is a whole different thing. This is also a subject that is broad and deep and undergoing constant innovation. It includes such diverse subjects as machine learning (ML), advanced statistics, operations research (OR) and related subjects.

    In general, inhuman analytics are far more specialized than human analytics. They are nearly impossible for anyone but a specialist to understand. There is often lots of math involved. They are not primarily about presenting information so that it makes sense to human beings — they are about figuring stuff out that most humans wouldn't be able to figure out at all, or figure it out with a precision that exceeds human capability.

    Because of this, there aren't great pictures to illustrate inhuman analytics. But here's an illustration of the ML process from one company's ML toolkit:

    [Diagram: machine learning process from a vendor's ML toolkit]

    Inhuman analytics are behind a large number of modern innovations, though they rarely get credit for it, since the way they work is essentially like magic to most people. This is a vibrant subject with a rich history. I suspect I will come back to it in some future post.
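
    For contrast, here is a minimal sketch of analytics in the "inhuman" sense: an algorithm learns a decision rule and then applies it to new cases on its own, with no chart and no human in the loop. It uses scikit-learn, with data and thresholds invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # "Inhuman analytics": an algorithm learns a decision rule, then
    # applies it to new cases autonomously. Data/threshold are invented.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))                  # e.g., vitals/lab features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth

    model = LogisticRegression().fit(X, y)

    new_case = rng.normal(size=(1, 3))
    risk = model.predict_proba(new_case)[0, 1]
    if risk > 0.8:   # the system, not a person, takes the action
        print(f"Auto-flag for review (risk={risk:.2f})")
    else:
        print(f"No action taken (risk={risk:.2f})")
    ```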

    Conclusion

    Human analytics has many uses and is a good thing. The visual tools it emphasizes enable knowledgeable and motivated people to explore and understand a data set, and to track it over time. Sometimes you can even discover new things, particularly in the early stages of understanding and optimization.

    However, inhuman analytics are the serious, heavy-duty tools for deriving value from data. They can and regularly do figure things out and solve problems that are beyond human capability, even with the aid of human analytics.

    Human analytics has its place. But it's no substitute for inhuman analytics for serious value creation.

     
