Category: Regulations

  • Summary: Regulations

    This is a summary with links to my posts on regulations.

    Regulations are supposed to make things better. Most of the time regulations make things worse by preventing innovation, increasing costs and failing to achieve the goals for which they were created. The negative impact of regulations, whether government or corporate, is greatly magnified in software and technology.

    The reason why our ever-growing number of regulations fail to protect us is simple. In the vast majority of cases, they spell out, often in great detail, how to accomplish the goal, instead of plainly and simply defining the goal and leaving it up to the person or company regulated to figure out how to get it done.

    https://blackliszt.com/2011/12/regulations-goals-or-directions.html

    Regulations that define a goal enable innovators to find new ways to accomplish it, whether the goal is a better medical device, stronger computer security, or anything else.

    Another way to think about effective regulations is the criminal law, which regulates behavior. There aren’t ever-growing mountains of regulations telling you how to avoid murdering someone, just a simple statement that murder is something you must not do.

    https://blackliszt.com/2012/03/lets-criminalize-our-regulations.html

    Regulations are front-and-center in the bureaucracy-driven battle to prevent innovation. Regulations are ever-growing mountains of words written by lawyers and bureaucrats. The current federal regulations contain more than 100 times as many words as the collected works of Shakespeare.

    https://blackliszt.com/2016/09/innovation-the-barriers.html

    Regulations and standards can be good; without standard steering wheels and brake pedals, no one would be able to drive a rental car. Software is different. The misguided effort to impose standards and regulations on software development has played a key role in the nonstop cybersecurity disasters and software failures that most organizations try to minimize and ignore.

    https://blackliszt.com/2021/11/the-destructive-impact-of-software-standards-and-regulations.html

    Medical device regulations increase costs and prevent innovation. The FDA device regulations provide an excellent example, declaring in massive detail exactly how to achieve quality, in classic how-type fashion. The what-type (criminal-law version) would simply declare that the device must perform its declared function, accurately and well.

    https://blackliszt.com/2016/12/regulations-that-enable-innovation.html

    A similar story plays out in the field of medical imaging. The essential devices could be vastly improved if the regulators got out of the way.

    https://blackliszt.com/2023/01/how-to-reduce-the-cost-of-medical-imaging-and-pacs.html

    The bureaucrats who write regulations for software are ignorant of software. It’s literally invisible to them! Their understanding, such as it is, tends to be based on false metaphors and is wildly inappropriate. They end up requiring expensive, obsolete methods for building software that no sensible company would use.

    The current mountains of regulations should be replaced by something like “We don’t care how you build your software, but it’s your responsibility to assure that the software performs its stated job without fail. If the software has errors that cause medical harm, you are responsible for the damage it causes, and you may be barred from supplying software to the medical market in the future.”

    https://blackliszt.com/2020/05/heres-how-the-fda-can-reduce-medical-device-costs-while-improving-healthcare.html

    Regulation is also about important things like making trains safe so that, even when something goes wrong, you don’t have a crash that kills people. That’s one area the government has been all over. It devised a system called PTC (positive train control) to prevent crashes. Implementing it cost tens of billions of dollars and took many years. Years after it was mandated, there was a crash that killed people and injured hundreds.

    https://blackliszt.com/2015/05/an-app-to-prevent-train-crashes-like-amtrak-philadelphia.html

    The crash in Philadelphia wasn't a one-off. The problem is that PTC is built on computing technology that belongs in a museum, not supposedly protecting our lives.

    https://blackliszt.com/2016/10/scandal-hoboken-train-crash.html

    The technology exists to enable a more effective, inexpensive system to be built using modern technology. But of course the regulators ignore it.

    Whenever the government wants to step in to improve a company’s software, beware. When has such a move ever had a positive impact on anything, much less the stated goal of the regulation?

    https://blackliszt.com/2015/04/the-government-wants-to-help-ubers-software-quality.html

    There has been a recurring furor about the unfairness of the internet. According to the critics, regulators at the FCC should step in and impose “net neutrality” to make service fairer for everyone. In fact, what few problems there are have been caused by regulation. And where regulation has been imposed elsewhere, government-mandated severe censorship has quickly followed.

    https://blackliszt.com/2014/11/net-neutrality-it-aint-broke-dont-fix-it.html

    It’s not just the government. Big bureaucracies to “control and improve” software emerge in giant companies of all kinds. Even in a software company, the in-house regulators can impose insanity. Here’s a case involving Microsoft and digital goods.

    https://blackliszt.com/2014/05/giant-software-bureaucracies.html

    Regulations have an outsized role in causing the ongoing disaster of computer security. See these posts for examples and explanations:

    https://blackliszt.com/2017/05/security-regulations-vs-security.html

    https://blackliszt.com/2014/05/bureaucracy-regulation-and-computer-security.html

    https://blackliszt.com/2013/07/cyber-security-standards-are-ineffective-against-insiders-like-edward-snowden.html

    For more about security and how regulations make things worse, here is the summary post.

    https://blackliszt.com/2023/05/summary-the-ongoing-problem-of-computer-security.html

  • How to reduce the cost of medical imaging and PACS

    Medical imaging devices like MRIs, CT and X-ray machines are extremely valuable. They're also extremely expensive. So expensive, in fact, that health insurance companies typically require a pre-authorization for an MRI scan to make sure that it's "medically justified." The market is currently estimated at about $40 billion a year, with more spent on proprietary PACS (Picture Archiving and Communication Systems) for storing and managing the images.

    The medical imaging market is highly regulated, with the design and construction of the devices subject to detailed requirements for how the hardware and software should be designed and built. The result of the regulations is that a small number of large companies control the market, effectively preventing innovation and new companies from entering the market.

    There is a proven path towards opening the market to innovation and dramatic cost reductions, while improving quality. We should break the iron grip of monopolistic companies and harmful government control to enable a medical imaging revolution.

    The Software industry case

    Something similar happened in the software industry. IBM mainframe computers and software once owned the world. Everyone bought hardware from IBM, and was then effectively required to buy IBM software and applications. They worked, but were incredibly expensive. A government antitrust suit broke some of IBM's monopolistic power, and new minicomputers changed the game. Then came computers built on commodity microprocessors, and low cost, high quality, high performance and intense competition ruled the roost in the computer industry. Separate companies built each part of the new world; each competed to be the best.

    The crowning touch was that for important parts of the software such as operating systems, open source software emerged and became the norm. Even IBM acknowledged this by porting the Linux open source operating system to its IBM mainframe computers.

    What should happen in medical imaging

    Medical imaging machines are like specialized mainframe computers. In addition to the physical hardware that does the scanning, there are processors with operating systems and application software. Software controls each step of the scanning process, collects the data, stores it and displays it. Today, every bit of that computer hardware and software is built by the hardware supplier. Just like it was for IBM mainframes before the anti-trust suit.

    The big difference is that no government agency exercised control over the details of how the IBM software was designed and built. Sadly, ignorant bureaucrats at the FDA exercise total control over this process, as I detail here. They require the use of methods that are so old and bad that even giant corporations have long since moved on from them for their unregulated software.

    The argument is that this is about your health. Do you want imaging devices that don't work or give bad results? The FDA performs the essential function of guaranteeing quality and safety, they say.

    What they actually do is the equivalent of demanding that only hand saws be used for turning trees into lumber and refusing to allow nails or hammers to be used in house construction. Of course it can be done. But people using modern tools get far better results faster at lower cost. There is a simple way the FDA can assure quality, by shifting from lengthy HOW style regulations to simple WHAT style regulations as I explain here.

    The Result

    The result of this change will probably resemble what happened to IBM once their monopoly power was broken. IBM continues to this day to manufacture the successors of mainframe computers, now called the Z series. They support both their own operating system and a leading open source one. Applications that run on their systems are available from a wide variety of companies.

    Similarly, major vendors such as GE and Siemens will continue to do what they do, but all of the hardware and software will be open to competition by both new and existing vendors, and possibly also by open source efforts. It's likely that Linux would be ported.

    Image storage systems for medical imaging continue to cost many billions of dollars a year. They don't do much more than what you could do with Dropbox or AWS S3 storage, for example. Each patient would have a cloud folder that would hold all their records and images. The system would store each new file in the cloud, which would securely hold it with full multi-site protection and backup. Sharing can be accomplished simply by creating and sending a link, something that can be done with a few lines of code or manually in seconds. The huge problem of medical imaging records storage and sharing that I demonstrated here would go away! Yes, you'd put some UI on top of the cloud storage to make it super-easy and not dependent on any one cloud storage vendor.
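    To show just how few lines of code link-based sharing takes, here is a minimal sketch in Python of a time-limited, HMAC-signed link, the same basic mechanism behind "presigned" links in services like S3 or Dropbox. The domain, paths and signing key are hypothetical, purely for illustration:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-signing-key"  # hypothetical; kept private on the server


def make_share_link(patient_path: str, ttl_seconds: int = 3600) -> str:
    """Create a time-limited, tamper-evident link to a stored image."""
    expires = int(time.time()) + ttl_seconds
    msg = f"{patient_path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "sig": sig})
    return f"https://imaging.example.org/{patient_path}?{query}"


def verify_share_link(patient_path: str, expires: int, sig: str) -> bool:
    """Server-side check: the signature matches and the link hasn't expired."""
    msg = f"{patient_path}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < expires
```

    Anyone holding the link can fetch the file until it expires; changing the path or the expiry invalidates the signature. Real cloud providers wrap exactly this kind of scheme in a one-call API.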

    Conclusion

    The essential and growing world of medical imaging and its supporting systems forms an indispensable part of modern medicine. It's long past time for it to catch up with the transformation the computer industry went through four decades ago, dispense with harmful regulation and allow healthy competition to flourish. We would all benefit from the resulting increase in availability and dramatically lower costs. And yes, from better quality.

     

  • The Destructive Impact of Software Standards and Regulations

    In many fields of life, standards and regulations are a good thing. Standards are why you can get into a new car and be comfortable driving it; without standard steering wheels and brake pedals, no one would be able to drive a rental car. Software is different. The misguided effort to impose standards and regulations on software development has played a key role in the nonstop cybersecurity disasters and software failures that most organizations try to minimize and ignore.

    Do software standards and regulations literally cause software and cybersecurity disasters? Yes, in much the same way as looking at your cell phone while driving causes auto accidents. Everyone agrees that distracted driving is bad, but somehow distracted programming is ignored. It’s a classic case of ignorance and good intentions that have horrible unintended consequences. Seeing the bad consequences, the community of standards writers believes the problem is that their standards aren’t deep, broad and detailed enough — let’s distract the programmers even more!

    Software and Bicycles

    Suppose that writing a software program were like riding a bicycle, and that finishing writing the program was like reaching your destination on the bicycle.

    Even if you’re not a bike rider, you’ve likely seen lots of bikes being ridden. You’ve probably seen kids learning on training wheels.

    Kids eventually learn to balance and learn how to handle curves, hills and the rest. Then there are serious riders whose bodies are tuned to the task. They ride with focus and concentration, assuring that they handle every detail of the road they’re on to maximum advantage.

    Because the people who create standards and regulations for software are appallingly ignorant of what it takes to create code with maximum speed and quality, they impose all sorts of strange requirements on programmers. It’s as though the bicycles were invisible to everyone but the programmers, while the leaders have it on the best authority that the bicycles they demand their programmers use are up to the most modern standards.

    They create increasingly elaborate processes and methods that are supposed to assure that bicycles reach their destinations quickly and safely, but in fact assure the opposite. The programmers are required to juggle a myriad of meetings, planning discussions, reviews and other activities while still making great progress.

    The oh-so-careful regulators go to great lengths to assure that at each stage of the bicycle’s journey the rider does his riding flawlessly and without so much as a swerve from the prescribed path. When developers are forced to follow regulations and standards, they don’t just pedal, quickly and smoothly, to the finish line. They constantly stop and engage in myriad non-programming activities. No sooner do they start to pick up speed after being allowed back on the bike than ring! ring! It’s time for the security review meeting!

    Riding a bicycle competitively is not the most intellectual of activities. Neither is being a batter in baseball. But both require exquisitely deep focus and concentration. The batter’s head can’t be swimming with advice about what to do when the pitch is a sinker; the batter has to be present in the moment and respond to the pitch in real time with the evolving information he gets as the pitch approaches the plate.

    In the same way and arguably even more so, the programmer has to be immersed in the invisible evolving “world” of the software around him, seeing what should be added or changed, and where in that world. The total immersion in that world isn’t something that can be flicked on with a switch — though it can be brought crashing down in an instant by an interruption. In the same way, a biker doesn’t get to maximum speed and focus in seconds. It takes time to sink into the flow.

    Meanwhile, all the managers judge programmers primarily by how well they juggle, blissfully unaware and seemingly uncaring that a unicycle is better for stopping, starting and going in circles than it is for making forward progress.

    Conclusion

    This is the ABP (Anything But Programming) factor in software — make sure the programmers spend their time and energy on things other than driving the program to its working, useful, high-quality goal. The managers feel great about themselves. They are following the latest standards, complying with all the regulations, and assuring that the programmers under their charge are doing exactly and only the right thing — doing it right the first time. When such managers get together with their peers, they exchange notes about which aspects of the thousands and thousands of pages of standards they have forced their programmers to distract themselves with recently. Because that’s what modern software managers do.

  • Here’s How the FDA Can Reduce Medical Device Costs While Improving Healthcare

    The FDA wants to keep us safe. They want the drugs we take to be what they’re supposed to be, and they want the medical equipment used on us to be safe and without fault or error. We all want that!

    However, the way they choose to achieve the goal for the software that is an essential part of most medical devices is deeply flawed, and leads to huge expense with only a small number of companies willing and able to follow the FDA’s regulatory regimen for software. The net result is medical equipment and software (which is increasingly a key component of medical equipment – think MRI machines) that is wildly expensive and uses seriously outdated technology.

    There is a simple, easily understandable reason for this horrible state of affairs, which the grandees of the FDA refuse to acknowledge or even understand. The root of the problem is that they don’t understand software. Which doesn’t stop any of them from being certain they can regulate it. Because of this inexcusable ignorance, they take the regulatory approach developed over many years for drugs and manufacturing and apply it with only cosmetic changes to software. Safety is safety, they probably say to themselves. We’ve proven our approach for drugs; why start from scratch for software?

    The mistake made by the FDA, along with nearly all the hard-charging graduates of MBA programs, law schools and bureaucrats everywhere is in thinking that the process of manufacturing is like the process of building software, except not as visible or physical.

    In manufacturing, you have a factory with raw material arriving, being processed in a series of steps, with quality checks along the way, and emerging as finished goods at the end. The important thing is to check the quality of the raw materials as they enter and the results of each step of processing to make sure it’s up to snuff. At the end all you need is a cursory quality check, because so long as everything is done right along the way, the result is probably good.

    Similarly, they think, in software you have a set of requirements, with lines of code and software components being produced along the way, with careful unit testing being performed at each stage, and more tests as the components are woven together. The end product is subject to more testing, but the important testing has already been done. The idea is that quality is designed in.

    This method for building software is exactly what, in gruesome detail, the FDA requires. It’s spelled out in highly detailed regulations. Sounds good, right? Why would anyone want crappy software, particularly when it comes to our health?

    The trouble is that this whole way of thinking is based on a blatantly false analogy.

    What they think is that the manufacturing process of converting raw materials to finished goods is just like the process of creating lines of code and combining them into a finished software product. People even talk about “software factories,” and how important it is to churn out quality code, on time and on budget. Still sounds good, right?

    Here’s the problem: a factory that produces finished goods, whether they’re drugs or cars, is making copies of a design that’s already been created and tested. In the drug development process a drug is created and validated through testing. All that’s done in the drug factory is assure that the copies that are made of the already-designed-and-proven drug are exactly and only what the drug creators intended.

    Designing and building a new piece of software is like the drug development process, not the drug manufacturing process. The software is created for the very first time, with changes made along the way. The software equivalent of a drug factory is trivial: it’s taking a piece of executable software and making a copy of it. There is a universal software “factory” that works on all software: the copy utility. It’s what happens when you go to the Apple or Google software store and download the software you want. The download makes a bit-for-bit-100%-accurate copy of the original software for your use. That is software “manufacturing!” There’s even a universal quality check – a checksum is always incorporated in the original prior to copying that the receiver can check. The checksum tells you whether the copy is perfect, just like all the drug manufacturing quality checks do, only with software, it’s easy. Yes, because software is different. Here is more about software factories.
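    The "universal factory" and its universal quality check can be made concrete in a few lines. This is an illustrative sketch in Python using SHA-256 as the checksum; real distribution channels use various digest and signing schemes, and the "binary contents" here are a stand-in:

```python
import hashlib


def checksum(data: bytes) -> str:
    """The universal quality check: a digest of the software artifact."""
    return hashlib.sha256(data).hexdigest()


# The "universal software factory" is nothing more than copying bytes.
original = b"...binary contents of the released software..."  # stand-in payload
published_digest = checksum(original)  # shipped alongside the download

downloaded_copy = bytes(original)  # what the app store hands you: a copy

# The copy is perfect exactly when the digests match -- no inspection
# of the manufacturing "process" required.
assert checksum(downloaded_copy) == published_digest
```

    That assertion is the entire quality-control regime for software "manufacturing": either the copy is bit-for-bit identical to the original, or the digests differ and the copy is rejected.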

    What the FDA regulations do is specify and control in gruesome, expensive and time-wasting detail the process of building the very first, original copy of the software – like creating the drug in the first place. This is a complete and total waste. The methods and processes of building software are constantly evolving, with the most innovative companies, the ones that actually create new software, at the forefront. These companies have small, focused teams who crank out great software, and do it quickly. They use what can be called “wartime” methods of building software.

    The FDA should scrap its mountain of software regulations and replace them with a simple set of regulations that achieve the same goal, more effectively. I describe this in detail here. The new regulations amount to something like “We don’t care how you build your software, but it’s your responsibility to assure that the software performs its stated job each and every time, without fail. If the software has errors that cause medical harm, you are responsible for the damage it causes, and you may be barred from supplying software to the medical market in the future.”

    This of course, shifts the entire burden onto the software creators – as it should. Inspections are no longer required. Employment at the FDA should go down, but of course, since it’s the government, it probably won’t.

    Changing the medical software regulations in this way will unleash a wave of innovative, low-cost medical software. It has been as though runners were required to carry 100-pound backpacks and walk on stilts; as soon as they can dump the pack and put on real running shoes, just watch them set records! They will race each other to see who can cross the finish line in style the fastest, with no stumbling along the way.

  • Security Regulations vs. Security

    Maintaining the privacy of personal data is important in many industries. This aspect of computer security has received lots of attention from regulatory agencies, who have issued massive bodies of regulations that must be followed in order to achieve security. They've done it in fintech (for banks) and in healthcare, for example with HIPAA. But there's a little problem: if you follow all the security regulations with absolute perfection, you can still be hacked. In many cases, following the regulations makes you less secure! Here are details.

    Are you going to be secure, or are you going to adhere to the security regulations? It's a choice no one should have to make, but it's exactly the choice forced on us by supposedly well-intentioned government agencies and industry groups.

    The Simple Solution

    There's actually a simple solution to this problem, though it's likely that a well-known really hot place run by things with horns and tails will freeze over before it will be accepted. Why the resistance? It's simple! It's inexpensive! It's marvelously effective! It enables innovation! And most of the people currently involved in the regulatory nightmare would be out of jobs. Sound like a good reason to find fault?

    I've described this amazing solution here. It's pretty simple. The regulations declare that consumers' personal information shall not be disclosed to any entity without their explicit approval, whether on purpose or as a result of error or negligence. Make the penalties severe and personal. Exactly how this is accomplished is up to you. The result would be a torrent of fast, effective security measures. Ones that work!

    Why should such a radical approach be tried? Well, among other reasons, the current approach to mandating security just isn't working. Period. A reminder of just how bad it was a couple of years ago (and it's not getting better): even the Wall Street Journal was hacked.

    What to do while the hot place remains hot

    Ok, that's a nice fantasy, but what do you do now? I'm going to be inspected for regulatory compliance, and I've got to pass! Here are the basics of a sensible approach to pass audits and achieve actual security at the same time.

    Information and systems security is incredibly important. No one wants systems to be down for any reason or information to fall into unauthorized hands.

    Creating systems that can evolve quickly, scale and survive systems failures, while maintaining good performance and near-perfect up time, is really hard, but is core to business success.

    • Small organizations have trouble maintaining speed, flexibility and quality as they grow.
    • Large organizations rarely are fast and flexible.

    Achieving “basic” security (things like firewalls and access control) is easy and normal. Basic measures protect against most threats.

    When you go beyond basic security, there are measures that organizations can take that are sensible, proportional to realistic threats, and supportive of the business. The measures are in the spirit of fast, flexible and high quality systems development that lead to business success.

    As organizations grow, there are pressures on them to “grow up.”

    • In development, organizations adopt industry-standard development processes, and see costs explode, time-lines stretch out and quality plummet. The “solutions” usually make the problem worse.
    • In security, organizations call in the experts, get audited, and change lots of things in order to comply with all the lawyer-written regulations. The net result is normally an additional big tax on development (making it even slower and costlier), with dramatic reductions in the actual security of systems and information.

    The painful fact is that complying with security regulations is not highly correlated with actually being secure, whether it’s keeping patient records confidential in healthcare or financial information secure in banking and commerce.

    • Smart organizations can recognize this and have two efforts: one to maintain actual security, and another to achieve compliance that is “good enough” to pass any audits that may be required.
    • Large organizations are more likely to have industry consultants and security specialists who see their jobs as being expert in the regulations and complying with them. This can create the illusion of security without achieving it, while placing an ever-growing burden on the business.

    There are many reasons why security regulations are ineffective at achieving their goal. They include:

    • Bad guys are always inventing new ways to be bad, and the regulations tend to lag far behind them.
    • The regulations tend to be voluminous, detailed specifications for how to achieve security rather than plain statements of what to achieve, which would leave room for innovation and automation.
    • You can have highly automated, more effective security measures than specified by the regulations and still fail to be in compliance.
    • Achieving compliance tends to be so hard and costly that there is usually little appetite for supporting actual security.
    • Meeting the regulations is often so burdensome that compliance in practice tends to be tardy and/or incomplete, further worsening the effectiveness of the regulatory approach.
    • Many regulations are written assuming (demanding!) the obsolete, document-heavy waterfall style of software, making compliance while running fast, modern iterative development nearly impossible.

    Truly bad things happen when you have actual security breaches, not failures of compliance. Therefore:

    • Top priority should be achieving actual security, because failing to do so can seriously harm if not kill the business.
    • Second priority should be running the business effectively and efficiently.
    • Then should come achieving enough regulatory compliance to stay out of the news and out of serious trouble. There are ways to accomplish this.

    Basics of Effective Security

    The most important aspect of security is establishing a culture of security throughout the organization. Security is not principally a technical issue—it is a cultural one. It doesn’t matter if you use 128-bit encryption on all of your “data at rest” if your implementation associate puts a million patient records in a Dropbox or a customer service representative emails those records to someone pretending to be an employer. High "walls" don't protect against the bad insider.

    As a company with a larger profile, you do need to check the boxes—for example in healthcare, having an assigned HIPAA security officer, doing an annual HIPAA risk assessment, going through an SSAE-16 audit, etc. But you should fill that role with someone who truly thinks about it on a risk-appropriate basis. This is no different from the idea that the ideal Director of QA doesn’t fundamentally think of themselves as the police; the best QA organizations are the small, highly automated ones that are tightly integrated with the development team and equally talented.

    The ideal Security Officer is someone with the intellectual flexibility and horsepower to understand and mitigate the real security issues in the organization while also being able to speak the language of the auditors. Those people do exist (although they are rare)—but they think less about how they are responsible for “policing” the organization (which quickly leads to multiple layers of dysfunction, cost, and distrust) and more about how they can work as part of the organization to mitigate issues.

    [Thanks to Ed Park for this formulation!]

    Security is tough, particularly when the regulations are burdensome and ineffective. The approach and realizations described here are the only ways I've found to be secure while minimizing the regulatory tax.

  • Regulations that Enable Innovation

    Regulations that enable innovation? How can that be?? Don't regulations inhibit or even prevent innovation?

    Yes they do. Wouldn't it be nice if there were a way to write regulations that enabled innovation? Well, there is a way to do it! It's actually easier to write regulations that enable innovation than the usual way. There are fewer of them. They're easier to understand, and easier to keep up to date. They're more effective at regulating what you could reasonably want to regulate, while at the same time keeping the door open for inventive people to find better ways to get things done, while still conforming to the regulations.

    So why isn't this the standard way of writing regulations? Inertia. Lack of understanding. Fear. Bureaucratic intransigence. The usual reasons.

    Regulations that Enable Innovation

    Practically all regulations tell you, in varying levels of detail (tending toward the excruciating), How you're supposed to do the regulated thing. The more detail, the less innovation.

    By sharp contrast, regulations that enable innovation tell you What you're supposed to do or avoid doing. The less said about how to reach the goal, the wider the door for innovation.

    Suppose the point of a regulation were to make sure you got to work on time. Typical how-type regulations would tell you exactly when to leave your apartment and exactly which streets and avenues to walk until you got to the office. They would allow for red lights. The regulations would have to change to allow for construction and other changes. If you deviated from the prescribed route or used a different mode of transportation, you'd be in violation.

    What-type regulations for the same thing are simple: dude, get to the office on time! How? You figure it out, it's your problem! But it's also your opportunity for learning and evolution. You could try walking, and try different routes. You could try the bus and subway. Taxi and Uber. Different ones under different circumstances. So long as you got to work on time, you'd meet the regulation!
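    The distinction can be made concrete in code. Here's a minimal sketch (the names and times are invented for illustration) of a what-type rule: it checks only the outcome, leaving the route entirely up to you.

```python
from datetime import time

WORK_START = time(9, 0)  # the "what": be at the office by 9:00

def complies(arrival: time) -> bool:
    """A what-type rule checks the outcome, not the method used to achieve it."""
    return arrival <= WORK_START

# Any route or mode of transport is fine, so long as the outcome holds.
assert complies(time(8, 45))      # new shortcut, arrived early: compliant
assert not complies(time(9, 10))  # followed the prescribed route, hit traffic: violation
```

    A how-type version would instead enumerate departure times, streets, and transport modes, and every change in the city would require a change in the rule.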

    For more detail on What vs. How, see this.

    If this sounds crazy to you, you should realize that there is a whole, vast area of our legal system that works in just this way: the criminal law. See this for more.

    I wouldn't be advocating for change if how-type regulations worked. They usually don't get the job done. They prevent innovation. Worse, when you satisfy all the regulations, you're under the illusion that things are fine. Except that they're usually not. The ongoing cyber-security disasters we have experienced are prime examples of this.

    Cutting down the number of regulations

    Lots of people complain about regulations. Some people want to reduce their number. For good reason! Have a look at this to see the scale of regulations.

    I hope it's now clear that reducing the number of How-type regulations won't make a big difference. It may even make things worse. It's better to replace a whole pile of How-type regulations with a couple of simple, goal-oriented What-type regulations.

    An example of regulatory innovation prevention

    The rhetoric of regulations and licensing is that they protect us poor, innocent consumers from the awful products and services that would be inflicted on us in their absence. The reality is that they are a massive effort that increases the costs of everyone already providing a product or service, while putting up huge barriers to competition from fast, light-footed innovators who have figured out a better way to do things. Regulation, certification and licensing do almost nothing to protect consumers, but are remarkably effective incumbent protection programs.

    While this dynamic plays out in many industries, nowhere is it more harmful to our health and well-being than in healthcare.

    The FDA is supposed to protect our health. It's even what they say they do:

    [Image: the FDA's stated mission of protecting and promoting health]

    One of the many ways they do this is by heavily regulating the software that goes into all medically-related devices.

    The right way, the What-type way of regulating that software, would be like a criminal law:

    Your software has to perform all its intended functions in a timely and effective way, without error. When updates are made, no errors or other problems should be introduced.

    Now that's just a first draft. But I bet the final goal-oriented "regulation" wouldn't be too far from this.

    This simple regulation states what everyone really wants: the software should do what it's supposed to do. Period.
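    To make the contrast concrete, a goal-oriented rule for device software reads like an acceptance test on outcomes rather than an audit of process. The sketch below is hypothetical: the function and numbers are invented, not drawn from any real device or FDA document.

```python
def intended_dose_ml(weight_kg: float, mg_per_kg: float, concentration_mg_per_ml: float) -> float:
    """Hypothetical infusion-pump function: volume to deliver for a weight-based dose."""
    return (weight_kg * mg_per_kg) / concentration_mg_per_ml

# The what-type check: does the software perform its intended function correctly?
# Nothing here cares how the code was planned, documented, or developed.
assert abs(intended_dose_ml(70, 0.5, 10) - 3.5) < 1e-9  # 35 mg at 10 mg/ml -> 3.5 ml
```

    A how-type regime would instead audit the planning documents, traceability matrices, and development process behind this function, without ever directly demanding that the dose come out right.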

    The FDA does the opposite of simple and effective. It tells you exactly how you're supposed to develop software, and in gruesome detail. Here's the overview of the regulation:

    [Image: overview of the regulation's sections and bibliography]

    The sections are listed on the left. Each explodes into many sub-sections, some of which are further divided. Each one is long, detailed and brooks no variation (or innovation). On the right in the image above, you see just some of the bibliography, the many underlying documents you'd better get and understand if you're going to be in regulatory compliance.

    Here's a diagram that gives an overview of what is required:

    [Image: IEC 62304 Figure 1, overview of the required software processes]

    Here are the section headings from the software planning part of the document:

    [Image: section headings from the IEC 62304 software planning requirements]

    As this makes clear, you'd better not write a line of software until you've spent boatloads of time and effort in planning — exactly what people do when they build buildings using steel and poured concrete, but exactly the opposite of the iterative approach that is the standard among fast-paced, innovative organizations. I mean little upstarts with a high failure rate, like Google, for example.

    If the FDA were serious about their stated mission, "protecting and promoting your health," they would immediately blow up IEC 62304 and the who-knows-how-many-other mountains of how-type regulations they oh-so-lovingly promulgate and enforce, and replace them with simple goal-type, what-type "regulations." It would unleash a torrent of health-promoting innovation and open the lobbyist-loving incumbents to much-needed competition. To the benefit of nearly everyone, except a bunch of progress-preventing bureaucrats employed both by the government and by their corporate "homies."

    Conclusion

    We need regulations. The last thing any of us wants is for corporations to build crappy equipment that doesn't work or deliver services that deceive or hurt us. There are bad and incompetent people in the world, and without appropriate regulations that are vigorously enforced, we'd be worse off. And in extreme cases, dead when we could be thriving.

    Which is why it is so upsetting that major organizations like the FDA keep waddling along, crowing about what a great job they're doing, when it's just not true.

    I wish it were just the FDA. Most major sectors of society that are supposed to be protected by regulations are instead hobbled by incumbent-protecting, innovation-killing, ineffective how-type regulations.

    The path to regulation that is both effective and enables innovation is clear. Let's do it!!!

  • Innovation: the Barriers

    It's hard to be an innovator. You have to come up with cool new stuff, make it work, and get people to use it. Not easy! Depending on your situation, there can be barriers, active and passive, to being a successful innovator. Lots of people in business and government love to talk about how they're innovative, and how they foster innovation. Hah! In all too many cases, what they actually do is build and sustain barriers so strong and so high that innovation is nearly impossible.

    If you look at my earlier posts on innovation, you may think that I'm a cynic. The reality is that I'm an enthusiastic, life-long believer in innovation. My sarcasm is targeted exclusively at the hollow, creativity-killing rhetoric that too often passes for support for innovation.

    Active barriers to innovation

    What about big companies who innovate? That's mostly rumor and self-promotion, rarely a reality.

    What if you're a small company trying to innovate? The barriers are mostly put up by the large businesses that dominate the field in which you want to innovate.

    Will the big business itself innovate? In spite of all the talk, probably not. It's likely they want to be seen as modern, with-it, and innovative. It's highly unlikely that they actually want change. This post goes into some detail about the reality behind giant companies that supposedly are great innovators. Why can't big companies innovate? Who knows, but I think the attitude of the pointy-haired boss is a hint:

    [Image: Dilbert cartoon]

    There is lots of information and a few stories about how to out-fox the giants that want to keep you down in my book on building a growing business from a startup. But it's tough. The big guys hold most of the cards.

    Passive barriers to innovation

    Governments are the main source of "passive" barriers to innovation. The barriers are usually in the form of regulations — regulations that can quickly morph into active barriers once you get caught in the cross hairs of one of these innovation-killing agencies.

    You think those regulations are no big deal? The current code of federal regulations is massive, and getting bigger every day. Here's a quick glance at its size:

    [Image: the size of the Code of Federal Regulations]

    Of course, no government agency will ever admit that what they are doing is preventing innovation. They are protecting consumers! Enforcing fairness! Doing good stuff, the people's business! That's what they say. Sometimes it's even true. But in most cases, what they are really doing is protecting existing businesses and professionals from competition. They do this by putting increasingly burdensome and expensive barriers in the way of new products and services entering the market and competing with the establishment.

    Regulatory barriers to innovation are everywhere, in nearly every industry. Why isn't there a huge outcry? Simple:

    • The companies and people that are on the "inside," benefiting from the barriers, vociferously support "protecting consumers" or whatever the b.s. cover story is.
    • The people who would benefit from the innovation don't see the innovations, because they don't exist yet, and so can't really lobby against the barriers.
    • It's just the way things are. Who has the energy to "fight City Hall," particularly when the innovative benefits don't exist yet because of the barriers?!

    The barriers are everywhere, preventing innovation or degrading convenience and raising prices. The barriers are in old, tangible things like a store being able to sell liquor or a car company being able to sell its cars. More importantly, they're in newer, life-and-death things like nearly every aspect of healthcare.

    Barriers to innovation in healthcare are massive, and getting worse. The barriers aren't called that, of course. The government agencies are protecting our health and privacy! But when you lift the covers, it is easy to see that what is really going on is a rapidly metastasizing federal bureaucracy that prevents life-enhancing products and drugs from being invented, massively increases the cost of the relatively few innovations that squeeze through the gauntlet, and slows them down.

    Conclusion

    We're clearly in the middle of an innovation bubble. Everyone says they want it. Companies and government agencies claim to be fostering and promoting it. I'm someone who has worked in the innovation trenches for decades. I try to innovate myself, and help others to do it. It's not easy. That's why I get so cynical about all these innovation-smothering institutions who are so loudly in favor of innovation. Their words say one thing and their actions say another. All their innovation amounts to is a pile of marketing rhetoric, an attempt to make themselves appear to be modern.

  • An App to Prevent Train Crashes like Amtrak Philadelphia

    Innocent people taking a train are dead. Many are injured. The government had an answer in 2008: spend billions of dollars and wait for years. There's a better answer: build a smartphone app, with some cloud software, a couple of sensors and cameras, and an engine-cab remote-control harness. It would be faster, cheaper and more effective than the existing partly implemented "solution," and lives would be saved.

    The Crash

    Here's the story of the crash in a nutshell:

    Eight people were killed, and 43 were still hospitalized days later.

    Reactions to the Crash

    The basic reaction has been typical all-politics-all-the-time. Here's the Reuters story:

    [Image: the Reuters story]

    Later in the same story, you learn that the engineer was driving at more than twice the speed limit for that part of the track, and that the accident would not have happened except for his error. But that's a detail, I guess.

    Technology Could Have Prevented the Crash!

    Then it turns out, we know how to prevent things like this! But according to the experts, it just hadn't been installed.

    [Image: news excerpt describing positive train control]

    This PTC ("positive train control") sounds like wonderful stuff. It turns out it's been around for a while. Everyone seems to agree that it would go a long way toward preventing crashes like the Philadelphia one. So what's gone wrong?

    Government-Mandated Positive Train Control

    Here's a good summary of the issues and problems of the wondrous PTC solution, which was mandated by Congress in 2008. It was declared by Congress that it must be completed by the end of 2015. It won't be. And the cost? The GAO estimated somewhere between $6.7 billion and $22.5 billion.

    A brand-new system dreamed up by government bureaucrats in a short period of time — of course it takes billions of dollars and many years to implement! Of course it's a completely custom system, relying on railroad-only technology that will be generations behind the general computer industry before it's even deployed! Of course everyone assumes you can spec out a never-built-before system and get it right the first time!

    This is amateur-hour technology, and it is killing those of us unfortunate enough to be in the wrong place at the wrong time. This is a near-perfect example of bureaucratic "innovation." It is an example of the "what not how" problem of regulation: regulations should be simple declarations of goals (don't murder people) instead of gruesomely detailed directions for how to avoid murdering people. The bureaucratic approach mandated by Congress has already resulted in incredible expense and multiple avoidable deaths, just as its similar approach to computer security has resulted in some of the worst security breaches in history.

    The Modern Approach

    There is a better way. It leverages modern computing, devices, networks and software. "Experts" will pooh-pooh the approach, saying that anyone who proposes it doesn't understand the harsh and peculiar railroad environment. That's what experts always say in situations like this, standing on their little technology island, protecting their "expertise" and their jobs, until modern, high-volume technology gets the job done. Then, without further comment, they retire.

    I won't lay out the whole approach in this post; this blog has lots of the core ideas, and so do lots of modern computing technology people.

    Just as mapping software on a phone can track your location and speed when you're in a car, it can do it when you're on a train. Why shouldn't lots of people have this app? Why not publish the complete map of all the train tracks? Most of it already seems to be available to consumer mapping programs; they just need to be tweaked to allow travel on rails instead of on roads. The usual objections all have ready answers:

    • Yes, there are areas where track maintenance is taking place where trains shouldn't go — just like with roads! Mapping software already exists to avoid such routes; just use it.
    • Yes, there are switches — so add them to the maps, and make whatever controls them upload their state to the cloud.
    • Yes, there are other trains to be avoided — so have all the apps upload their positions to the cloud, and give a view of where the other trains are.
    • Yes, there are things you should pay attention to when you're not looking at the app — navigation apps already handle this with audible alerts or by talking to you.
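    The heart of such an app is a check the phone can already do today: compare GPS-derived speed against the published limit for the current stretch of track. A minimal sketch, with invented segment names and limits:

```python
# Hypothetical track map: segment -> speed limit in mph. A real app would load
# this from published track data; these names and numbers are illustrative.
TRACK_LIMITS_MPH = {"frankford_junction_curve": 50, "mainline_straight": 110}

def overspeed_alert(segment: str, gps_speed_mph: float, margin_mph: float = 5.0):
    """Return an audible-alert message if the train exceeds the segment limit."""
    limit = TRACK_LIMITS_MPH[segment]
    if gps_speed_mph > limit + margin_mph:
        return f"SLOW DOWN: {gps_speed_mph:.0f} mph in a {limit} mph zone"
    return None

assert overspeed_alert("frankford_junction_curve", 106) is not None  # overspeed on a curve
assert overspeed_alert("mainline_straight", 100) is None             # within limits
```

    Everything else described above — switch states, other trains' positions, audible alerts — is the same pattern: upload state to the cloud, compare against the map, and warn.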

    These simple steps, which could be built iteratively and deployed in weekly cycles, would go a long way to solving the problem. There remains the problem of overriding the train controls in case something terrible happens — but if all the conductors have the app and they have access to the engine car, many of the potential bad things could be avoided. The potentially tricky issue of automated speed control could then be addressed — but after all, airplanes are largely run by auto-pilot, why shouldn't trains? If auto-pilot works for vehicles that go hundreds of miles per hour, miles in the air with no tracks, surely it can't be too hard to make a version for relatively slow vehicles without steering controls, whose only variable is speed!

    While the government is mandating and regulating, billions of dollars are being wasted building systems that will be obsolete before they're installed, and meanwhile people are being killed and injured. There is a better, faster, cheaper way. Its cost to build is likely to be much less than the cost of simply maintaining PTC. So let's do it!


  • The Government wants to Help Uber’s Software Quality

    It's reported that New York City's Taxi and Limousine Commission (TLC) wants to pre-approve new software releases by ride companies like Lyft and Uber. Since the TLC is well-known to be heavily staffed with software experts, what can be bad about this idea? Other than just about everything, that is?

    The proposal

    Here's what they're saying:

    [Image: the TLC proposal]

    Uber and Lyft have to buy smartphones and give them to the TLC because the Commission runs such a tight budget that there's no way it could afford the required thousands of dollars. Oh, wait … the planned 2015 revenue of the TLC is projected to be $545.6 million, with expenses of $61,045,000. That leaves just $480 million or so, which is undoubtedly already committed to something or other, which is probably terribly important.

    Let's assume it happens. How is it going to work? Uber gives a release to the TLC, which takes exactly how long to test it how rigorously by what means? By the time it gets around to organizing to test one release, another will have arrived. So the pressure will immediately come to have fewer, larger releases. Then will come the time when the TLC approves a release and there's a bug. There will be commissions, reviews, and a big operation will be set up to implement industry best-practices, government-style. Things will get even slower and longer, and government tentacles will start weaving their way into Uber's software development organization. In the end, New York will end up getting a small number of releases, way after the rest of the world has them, buggier than everyone else, and the costs will be passed on to the drivers and riders.

    Why?

    [Image: the TLC's stated rationale]

    Right. Sure.

    The Reality

    Governments can't build software that works in any reasonable time. See this.

    No matter how hard they try, software testing in the lab just doesn't work. See this.

    They will press to have fewer releases, when more frequent releases are the key to good software quality. See this.

    Finally, most important of all, we don't need to be protected, thank you very much. If it doesn't work, people will stop using it, and the company will either fix its problems or go out of business. That's the way the greatest wealth-creating and poverty-eliminating system ever invented works.

  • Net Neutrality: It Ain’t Broke, Don’t Fix it

    There is lots of talk about "net neutrality" now, after years of passionate advocacy by partisans. I have a simple response to the issue, driven by my simple-minded engineer's mentality. There's no problem here, so don't you dare try to "fix" it!

    Net Neutrality

    The way "net neutrality" is normally described, it's shocking that it's not already the rule of the land. Opposing net neutrality is described as being like a racist, something which is obviously unacceptable in a civilized society. (Just to be clear: discriminating on the basis of race, sex or any other human variation is totally unacceptable to me.) It amounts to evil internet service providers slowing down or discarding network packets from sources of which they don't approve, and speeding up access to approved sources. This could be done for commercial gain, to push some brand of politics, or any number of nefarious motives.

    The argument in favor of net neutrality is normally made in terms of simple fairness: preventing giant ISP's from preventing or impeding access to internet resources customers want. The feared consequences will range from high prices and/or poor service for companies whose services threaten the ISP's such as Netflix, to barring consumers from accessing politically or commercially threatening web sites. Anyone who opposes this view of enforcing simple fairness is accused of being paid off by corporate interests or morally corrupt. Or simply stupid, for not understanding how the internet works.

    I claim that I am none of the above: not bought off, not morally corrupt, not stupid, and furthermore relatively knowledgeable of internet internals.

    It would take a long paper or short book to lay out all the facts and arguments. I don't have the time or the patience. But here are some headlines.

    "Net Neutrality" is all about Innovation-Killing Regulation

    Net Neutrality may be a moral crusade about fairness and equality for many of those who promote it, but the proposed solution is that the same inept crew that raises costs, protects the powerful and stifles innovation in so much of our lives will now be able to wield their magic-killing wands on the internet. It's not about "fairness" — it's about control by a bunch of ignorant, remote bureaucrats.

    Here's a good summary, see the article for more:

    The Internet boomed precisely because it wasn’t regulated. In 1999 the FCC published a paper titled “The FCC and the Unregulation of the Internet.” The study contrasted the dramatic growth of the open Internet with that of the sluggish industries subject to Title II’s more than 1,000 regulations. Sen. Ted Cruz got it right last week when he tweeted that Title II would be ObamaCare for the Internet.

    Amazing as it seems, under these regulations federal bureaucrats in the 1970s decided whether AT&T could move beyond standard black telephones to offer Princess phones in pink, blue and white. A Title II Internet would give regulators similar authority to approve, prioritize and set “just and reasonable” prices for broadband, the lifeblood of the Internet.

    These guys don't know how to build technology. They are incapable of keeping it secure. Their regulations are certain to be obsolete before they're written, and counter-productive.

    You're Afraid Greedy ISP's Might Limit Internet Access?

    Really? Well, just wait until the government gets involved. Once a bunch of bureaucrats operating essentially in secret gets going, it's hard to stop them.

    It's well-known that South Korea has the world's fastest internet connections. But the internet there is anything but free and open. Government-driven censorship is severe. Here are some of the basics:

    Internet censorship in South Korea has been categorized as "pervasive" in the conflict/security area, and also present in the social area. Categories of censorship include "subversive communication", "materials harmful to minors", and "pornography and nudity". Internet censorship has been expressed by the shutting down of anti-conscription and gay and lesbian websites, the arrest of activists from North Korea-sympathetic parties, and the deletion of blog posts by writers who criticize the South Korean president. Censors particularly target anonymous forums; South Koreans who publish content on the Internet are required by law to verify their identity with their citizen identity number. The most common form of censorship at present involves ordering internet service providers to block the IP address of disfavored websites. A government agency announced the planning of new systems of pre-censorship of controversial material in the future.

    ISP problems are Caused by Regulation. The Cure is More Regulation??

    The ISP's, like Comcast, Cablevision, Time Warner, Verizon and the rest, provide the "last mile" of access to the internet. They're the guys who bill you for use. All the rest of the internet just magically happens, supported by a variety of means, mostly advertising.

    The last mile is where the problem is. These guys are mostly descendants of the phone and cable companies. They exist and operate at the pleasure of various federal, state and local regulators. Just like the power companies, they have centers from which their wires weave out to sub-stations, down major streets, branching to local streets and eventually to houses and buildings. More agencies than you can shake a stick at stand in their way at every step, demanding this and that. In exchange, they get a monopoly or close to one.

    Are these nimble, creative, innovative guys? Duhhhh. How can they be? They go to all the trouble to put wiring in, and they try to keep it in service as long as they can, milking every advantage out of it they can. Given all that, I'm surprised things work as well as they do.

    Bottom line: the ISP's are already regulated. That's their problem. Let's not make it worse by adding in federal regulation and spreading it to more of the system. Since when has federal regulation made technology better?

    There are Fast Lanes and Slow Lanes on the Internet. And the Problem Is???

    Advocates of net neutrality are big on talking about how grubby issues of crass money will cause unfavored sites and consumers to be relegated to the slow lanes of the internet, while all the fat cats will cruise on the fast lanes.

    Exactly how is this different from, like, everything else in life?

    There is nothing like "NY Yankees neutrality," for example. Here's the price and the view from the expensive seats:

    [Image: the price and view from the first row at Yankee Stadium]

    And here's the price and the view from the bleachers:

    [Image: the price and view from the bleachers at Yankee Stadium]

    How unfair! How unequal! Someone should do something about Yankees neutrality!

    By comparison, all the "seats" on the internet offered by ISP's are just fabulous. Access rates are thousands of times faster than in the past, and at good prices. You can get even faster speeds if you're willing to pay — and that's OK.

    The Greatest Current Threat to the Internet is Apps and Mobile

    "Net Neutrality" is mostly a "what-if" threat, based on the minimal things ISP's have done, and the horrible things someone imagines they could do. Apps, driven by mobile, are a huge, here-and-now threat, growing by the day. As users shift their attention to mobile, they are shifting away from the open, highly competitive web to the walled gardens of the mobile world, which is exactly what monopolistic giants like Apple, Google and Facebook want.

    Here's a good summary, see the article for much more:

    It isn’t that today’s kings of the app world want to quash innovation, per se. It is that in the transition to a world in which services are delivered through apps, rather than the Web, we are graduating to a system that makes innovation, serendipity and experimentation that much harder for those who build things that rely on the Internet. And today, that is pretty much everyone.

    The Internet is Wildly Complex and Rapidly Evolving

    People who complain about net neutrality typically have no idea how the internet works and how it's evolved over time. There's a lot going on; it's not just a set of pipes that get bigger and faster over time.

    This is the part where it's tough for me to limit what I say. While there are people who have spent more time inside the internet and its predecessors than I have, I was involved early, in the ARPANET in 1970 and 1971, when it had fewer than ten nodes, and periodically since then to the present. Here's the ARPANET in 1977:

    [Image: ARPANET logical map, March 1977]

    A good chunk of the fun stuff, both the power and the problems of the internet, comes from the fact that the "internet" is a network of networks, an "inter-network" that connects many networks together, sort of like the interstate highway system connects the states, though much less uniformly than that. Here's an early version of the network of networks:

    [Image: the internet circa 1985]

    If this were the interstate highway system, some things to note would be:

    • ISP's control the local roads and entrance ramps to the big roads.
    • There are different ways to drive cross-country.
    • If you care a lot about drive time, you get to know the best routes.
    • If speed is really important to you, you take the toll roads to avoid the choke points. This is the origin of Internap, for example; its big early customer was Amazon.
    • If you've got lots of stuff to deliver to many customers in many cities, you pre-deliver it to warehouses near the customers, so that when they order, delivery is fast. We call it a CDN, content delivery network.
    • Special sub-networks are constantly being developed to solve problems, and the people who use them pay for their use. Business as usual.
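    The CDN idea in the list above can be sketched in a few lines: serve each customer from the "warehouse" (edge location) closest to them, rather than from the origin. The names and coordinates below are illustrative, not real infrastructure:

```python
# Hypothetical edge locations: name -> (latitude, longitude).
EDGES = {"nyc": (40.7, -74.0), "chicago": (41.9, -87.6), "seattle": (47.6, -122.3)}

def nearest_edge(user_lat: float, user_lon: float) -> str:
    """Pick the edge with the smallest squared coordinate distance (crude, but
    enough to show why pre-positioned content makes delivery fast)."""
    return min(EDGES, key=lambda e: (EDGES[e][0] - user_lat) ** 2 + (EDGES[e][1] - user_lon) ** 2)

assert nearest_edge(40.8, -73.9) == "nyc"      # a Manhattan user is served from NYC
assert nearest_edge(47.0, -120.0) == "seattle" # a Washington user is served from Seattle
```

    Real CDNs use DNS, anycast routing, and measured network latency rather than raw geography, but the business-as-usual point stands: paying for a better path is routine on the internet.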

    The fact that the internet is an evolving web of variously connected networks is key to its vitality and astounding growth. Let's stand back and enjoy its continued unimpeded, unregulated growth.

    Worried about Comcast and Netflix? You Shouldn't Be

    Net Neutrality advocates like to create fear with all the things big scary ISP's could do that would be just awful — therefore we have to regulate them before they do those things, as in the Philip K. Dick story "The Minority Report." They also love to recount the charges Netflix has made against Comcast as evidence of actual wrong-doing. In other words, they like to take the side of the monopolist of content (Netflix) against the local monopoly of access (Comcast). Once you dig all the way to the bottom, you realize that Netflix wanted to dump content onto Comcast's network amounting to more than a quarter of its total traffic and demand that Comcast deliver it with uninterrupted regularity — for free, leaving Netflix to keep all the money it charged its customers. In the end, they cut a deal similar to typical CDN deals (see above).

    When you buy HBO, you expect that the cable company and HBO somehow split what you give them — why should you care what they work out? But then when you buy Netflix, net neutrality advocates demand that the cable company deliver it for free. Only on Planet Stupid is this anything like "fair."

    Summary

    A cardinal rule in engineering is "if it ain't broke, don't fix it." Enthusiastic young engineers break this rule all the time. Hard experience usually educates them. Applying that rule to the internet, we get: the internet is a big collection of moving parts and blobs, constantly evolving. It works remarkably well. Parts of it are crappier and slower than they could be, anywhere from 2X to well over 1,000X. Most people who operate various parts of the internet have no reason to care about the ultimate consumer experience and act accordingly. The slowest and crappiest parts of the internet stay in use way past their natural expiration dates, but eventually die off. The biggest entities and/or the most regulated and/or the most monopolistic tend to be the slowest and crappiest of all. They try to implement and/or enforce practices and technologies from many years ago, and do so poorly, at great expense to themselves and everyone involved. Sometimes they act in a nakedly self-interested or "principled" way and make things even worse. But all in all, the consumer experience on the internet has improved with remarkable speed and few glitches compared to almost anything else, and way better than if it had been regulated. So let's leave it alone, and worry about the true threats.

  • Bureaucracy, Regulation and Computer Security

    There always seems to be a bureaucracy ready to tell you how to keep your computer systems secure; or, worse, to tell you what you must do to be in compliance with the regulations promulgated by the bureaucracy. "It's for your own good," they say.

    If you are forced to comply with some regulation or other, you'd better comply. But you're a fool if you confuse compliance with keeping the assets of your business actually, you know, secure.

    Bureaucrats can't keep simple physical things secure

    Computers are complicated. Construction sites? Not so much. Fences, cameras, sensors, guards and an alert, well-managed staff should do the trick. But when bureaucrats are in charge? Forget it.

David Velazquez was in charge of security at the World Trade Center construction site. Mr. Velazquez is a Columbia University graduate and had a 31-year career at the FBI, ending as head of the Newark field office. You might think well of the FBI, I don't know, but what I do know is that it's a giant government bureaucracy, and Mr. Velazquez appears to have applied the lessons he learned there in his new job.

    Here is one of the crack guards "on duty" at the work site:

    Sleeping guard
     

    That may explain why a group of guys was able to get to the top and jump off, recording video all the way down:

      Base jumper

    Then a kid slipped through a fence and made it all the way to the roof, unnoticed by the sleeping guards:

    Security kid

    The biggest, baddest bureaucrats of all can't keep their own computers secure

    Alright, maybe the FBI are amateurs. Let's go to the best of the best, the scariest cybersecurity experts of all, the NSA.

    NSA

    These guys are in charge of keeping us secure from the worst of the worst. A cover story in Wired Magazine told us all about it.

    Wired cover

    Loads of people using piles and piles of super-secret cyber magic are on the case:

    Wired story 1

    If anyone can achieve cyber-security, surely these guys are it:

    Wired story 3

    But we all know how that turned out. It just took one moderately clever person with bad intentions and all the vaunted cyber-wonderfulness was for naught. Among Mr. Snowden's myriad revelations was the previously secret "black budget" of the US intelligence bureaucracies, an astounding $52 billion. Do you think if they doubled the budget they could have done a better job? Hmmmm.

    Bureaucrats and Security

    Why should you listen to someone who can't do it themselves? If you want to stop smoking, do you eagerly take the advice of someone who smokes? If you want to get rich, do you take advice from poor people? Bureaucrats are sure they're right — because they have no competition, and there's no one who has the power to tell them otherwise.

    Why this matters

    The laughable ineffectiveness of bureaucratic security in general, and cybersecurity in particular, can matter a great deal to you. Here's why:

    • If you do what the bureaucrats tell you to do, you'll spend a lot of money.
    • Following the regulations makes everything slower and less efficient. You'll hurt your business.
    • If you get conned into thinking that following the regulations means that you're secure, you're in big trouble. You will be more vulnerable to a business-damaging breach than ever before.

    What you should do is simple: establish effective and efficient security by the best means available, which will typically be unrelated to what the authorities solemnly declare. Then, do as much regulation-following as you need to do, whether it's PCI or any of the rest of the alphabet soup, to avoid punishment.

    Is this cynical? Of course! But it's also real life.

  • Giant Software Company Bureaucracies

    It is the nature of giant bureaucracies to coerce and control the populations they "serve." Giant bureaucracies also tend to resist change, protect themselves at all cost, operate with laughable inefficiency, and become increasingly disconnected from their supposed mission. This is true whether the bureaucracy is a government agency (illustrated on a small, local scale by the wonderful movie Still Mine)

    Still mine

    or a software company. When the bureaucracies are giant software companies, the coercion is often masked in a sickly-sweet cover story about trying to help you, or ensuring that things happen with high quality, which just rubs it in.

    I recently ran into an example of this with Microsoft. I was trying to play WMA (Windows Media Audio) files that I had created for my own use from CDs I had purchased. In other words, I was trying to do something I should have been able to do.

    Why CDs? I had bought them a long time ago; why should I purchase them again digitally when it's legal to create a personal digital copy? Why WMA? At the time, it was technically slightly better than the MP3 encoding easily available to me.

    The Random House example (apologies to Random House)

    Imagine I had bought a paper book years ago. Now I'm trying to open it to re-read a section. When I try to open it, it won't open! The book is stuck, and there's a knock on my apartment door. A loud voice comes from outside: "Open up! Open up! This is Random House!" OMG! What's this about? I can't open my old book, and suddenly some publisher is pounding at my door??

    I go to the door, open it, and there are a couple of scary-looking guys. They say, "We understand you're trying to open a Random House book. Before you open it, we need to verify that you have the right to do so."

    I say, "What do you mean? IT'S MY BOOK! I BOUGHT IT! I'VE OWNED IT FOR YEARS! WHAT RIGHT DO YOU HAVE TO POUND ON MY DOOR AND QUESTION ME?"

    They reply, "We're Random House. We're the publishers. You may think you own this book, but we're the publishers. How do we know you own the book legally? We've got to make sure you have the proper rights for this book. Until we receive that assurance, you will not be able to open the book you claim to own."

    "OK," I say guardedly. "What do I have to do to convince you I own the book I own?"

    "It's simple. Just replace all your phones and your phone service with Random House's. Then our book will be able to call our office and make sure you have the rights you say you have."

    "I've heard about the Random House telephone service. It's really crappy. It's full of static. That's why fewer people use it every month, even though it's free. Even worse, crooks have figured out how to use it to see when I'm not home, so they can break in and steal my stuff. If you insanely want to somehow have the book you published be able to 'phone home,' why not just use the phones I've already got, which work great?"

    "They're not Random House phones. We can't guarantee their quality or appropriateness. Random House books only work with Random House phones. You can say what you want — but we say that we put our name on it and we stand behind them — and they're the only phones we'll use."

    I get the message. I kick myself for being so deluded that I thought buying a book from Random House was a good idea. There's no way I'm trading my secure phones for ones that practically fly a flag to alert all the criminals in the area when the house is vulnerable. I hand the book that I bought and paid for, but which I cannot use, to the agents from Random House, and dis-invite them from my house.

    Microsoft and WMA

    This is what Microsoft did, acting just like the imagined Random House of my example.

    I tried to play my WMA file. It wouldn't play. Instead, just like the agents from Random House pounding on my door, I got this:

    Microsoft fail

    Note the copyright, literally ten years ago! Tens of thousands of supposedly super-bright programmers, and they can't manage to keep things up to date?

    They "don't support" my web browser, which (on this machine) is Firefox. They insist on using IE, which is of course their own browser, and whose usage has plummeted from over two-thirds in 2009 to about the same as Firefox last year.

    Usage share of web browsers (Source: StatCounter)

    Why do I care? First of all, they shouldn't care. It's outrageous that they do. Second, here's one reason among many why I care:

    IE vulnerability

    I might as well fly a flag from my house saying "hey, all crooks in the area, c'mon over, the pickin's are good!" And this isn't the first time — IE is famous for being about the most inept, dangerous-to-use browser in existence. Imagine, a free product with a plummeting market share!

    Conclusion

    This experience didn't teach me anything I didn't already know. Microsoft isn't unique. It's like every other giant, bumbling bureaucracy: it's an elephant, we're mice, and you'd better look smart and be careful or you'll get crushed. But somehow, when your nose gets rubbed in it, and they effectively steal something from you out of your own house (computer), and there's nothing you can do about it, I get aggravated in spite of myself.

     

  • Edward Snowden, Daniel Ellsberg: Ineffective Security, then and now

    In 1971, the New York Times started publishing excerpts from the closely guarded, highly top secret Pentagon Papers. It was an explosive public exposure of long-held secrets about the Vietnam War, and was a huge controversy. In 2013, the Guardian started publishing excerpts of closely guarded, highly top secret NSA operations. It was an explosive public exposure of the top secret operations of the most well-funded, computer-savvy security organization in the US. There is every reason to believe that security breaches will continue to happen, because the "experts" in charge of security just don't know how to get it done. They didn't know how 42 years ago, they don't know now, and they show no signs of even being interested in learning how to provide effective security.

    The RAND Corporation

    The RAND Corporation was one of the original top-secret research institutes. It was started after World War II to provide a place for top brains to figure things out that would help the military. In contrast to most places with top secret information at the time, the atmosphere inside RAND was purposefully academic and collegial. There were often open seminars and presentations anyone could attend, so that cross-disciplinary fertilization could take place. You had to have a very high level of background checking and security clearance to be admitted — but once you were in, you could go anywhere and talk with anyone, since everyone knew that if you were there, you had the appropriate clearances.

    People at RAND did truly pioneering work in econometrics, operations research, game theory and computing.

    The secrets at RAND needed to be faultlessly secure. While it looked like an ordinary office building close to the beach in Santa Monica, in fact it was a heavily fortified and guarded fortress, with armed guards at every entry point.

    Daniel Ellsberg

    Daniel-ellsberg-resized
    The story of Daniel Ellsberg and the Pentagon Papers is well known. Mr. Ellsberg was a RAND employee, with degrees in economics from Harvard and a stint in the Marine Corps. He was involved in secret studies concerning the Vietnam war in the 1960's, and had access to what became known as the Pentagon Papers while at RAND around 1969. He made copies of literally thousands of pages at RAND … and walked out the door with them. Fortress RAND and all the armed guards kept the "normal" bad guys at bay — while letting the former Marine with a PhD, dressed in a coat and tie and carrying a briefcase, walk calmly out with what they were supposed to be protecting.

    David Black

    1971 09 Harvard student ID card
    I was a scruffy-looking Harvard undergrad in 1970, and had gotten a summer job at RAND to work on the early ARPAnet, the predecessor of today's internet. Before starting work, I had to undergo a thorough security clearance; agents actually visited many of my friends and asked probing questions. By the time I started work in July 1970, I had my SECRET clearance and was pending for TOP SECRET. I had a great time solving pioneering problems with the computers. RAND had an early IBM 360, and it was the first non-DEC machine to be connected to the ARPAnet, so we had to overcome a host of very basic issues, like resolving the conflicting coding schemes (EBCDIC vs ASCII), byte lengths (8 bit vs. 6 bit) and word lengths (32 bit vs. 36 bit), in addition to everything else.
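    The character-set mismatch alone is easy to appreciate: the same letter has entirely different byte values in EBCDIC and ASCII, so every message crossing the IBM/DEC boundary needed translation. A minimal sketch of that translation, using Python's built-in codecs (code page 037 is one common EBCDIC variant; the exact code page used at RAND is an assumption here):

    ```python
    # Sketch of the EBCDIC/ASCII translation an IBM 360 needed in order to
    # talk to ASCII hosts. "cp037" is one common EBCDIC code page; which
    # variant RAND actually used is an assumption for illustration.

    def ascii_to_ebcdic(text: str) -> bytes:
        """Encode ASCII text into EBCDIC bytes (code page 037)."""
        return text.encode("cp037")

    def ebcdic_to_ascii(data: bytes) -> str:
        """Decode EBCDIC bytes back into an ASCII string."""
        return data.decode("cp037")

    msg = "HELLO ARPANET"
    wire = ascii_to_ebcdic(msg)
    assert wire != msg.encode("ascii")   # the byte values really do differ
    assert ebcdic_to_ascii(wire) == msg  # but the round trip is lossless
    ```

    Every gateway between the two worlds had to perform a step like this on every character, which hints at why "very basic issues" consumed so much of the early networking effort.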

    I was also amazed at everything else you could learn at RAND. While protests raged on the streets, inside the protected walls of RAND you could find out what was really going on in Vietnam and Cambodia, from people who had just returned from those places.

    In retrospect, I realize that I got a personal demonstration of how to conduct ineffective security that summer at RAND. The protestors had no chance of breaking into RAND and stealing its secrets. In fact, none did. The guards waved through most of the employees coming through the employee entrance. Except for the one who looked too much like the "hippies" outside. I got stopped and triple-checked every time. On the way out, all the clean-cut, well-dressed, brief-case-carrying employees like Daniel Ellsberg were similarly waved through — no danger there! But that tall, gangly, scruffy Harvard kid? Better stop him and search him thoroughly. He's just the kind of person who would steal our secrets. While they were doing everything but strip-searching me, Ellsberg was shopping the 7,000 pages of secrets he had already brazenly walked out with, under the friendly eyes of the clueless guards.

    The NSA leak of 2013

    The NSA is more of a fortress than RAND ever was. No way anyone could break in and come out alive. Cyber attack? Unlikely, for the same reason. A clean-cut employee-equivalent? Same story as RAND. Once on the inside, have fun! Do what you want, take what you want — we're too busy guarding against those scary outsiders to bother with you — you've got a clearance, you're OK! Except, like Ellsberg, Snowden was not OK.

    Ineffective then, Ineffective now

    I've previously discussed the standard methods for securing important things like bank and medical records. These methods have two fatal flaws.

    First, they take a fortress approach to security. They assume the attacks will come from outside the "walls" by outsiders. They ignore insider attacks, which are the most damaging ones by far.

    Second, they take a procedural, legalistic approach to security, assuming that if enough lawyers write enough regulations and procedures, and enough enforcement takes place through audits and certifications, the problem will be solved. They assume that complex, step-by-step procedures spelling out how to implement security are intrinsically better than simple definitions for what must be secured, with penalties for failures. The trouble is, no one executes the procedures perfectly, the procedures themselves are flawed, and the bad guys are always figuring out new ways to be bad.

    Either of these flaws is sufficient to explain our never-ending security crises, and our ever-spiralling costs for trying to be secure. Together, bad results are guaranteed.

    Summary

    Our security systems are straight from the time of castles and knights: we imagine that the threat is from the scary guys in armor charging around on big horses "out there." Then, with the wrong threat in mind, we … get the lawyers on the case! We bury ourselves in policies, procedures, regulations, certifications and audits, all of which take time and money, and most of which is completely useless. Then the bad guy cleans up his act enough to get hired, ransacks the place, flees laughing all the way … and we're shocked?? The only shocking thing is that, 42 years after the Pentagon Papers, we're piling even more time and money into ramparts and moats, when the main threat has always been the traitor inside the walls.

     

  • Cyber Security Standards are Ineffective against Insiders like Edward Snowden

    The case of Edward Snowden, the fellow who ran off with a big pile of secrets from the super-secret NSA, illustrates a problem with the mainstream approach to computer security: it's expensive, it's burdensome, and it just doesn't work! Strengthening existing standard security measures, which is what usually happens after embarrassing episodes like this, will just make things worse.

    Securing what should be secure

    Other people can argue about what various agencies should or should not be doing and whether they should be secret. Putting all that aside, there are lots of things most of us want to be kept secret, for example our health and financial records, and for sure we want to prevent unauthorized use of that information. How hard is this to accomplish?

    Apparently it's pretty hard. There are huge security compromises that take place all too often, and smaller ones with great frequency. Security breaches resemble car crash deaths: there are so many of them (tens of thousands a year in the US!), that only the most gruesome of them make the news. If an agency with a secret budget probably in the billions, whose whole mission is about secrecy, can't stop an amateur like Edward Snowden, how is it that anything stays secret?

    Approaches to Security

    The vast majority of our thinking about security threats makes a couple crucial assumptions.

    Our thinking assumes that the threat comes from an outsider, and that the outsider attacks from the outside. The outsider (we think) probes to find a weakness in our defenses, and when he finds one, smashes in and grabs what he wants.

    Regardless of the source of the threat, we assume that we can establish a procedure that will thwart any breach of security. We assume that if we are rigorous in our requirements for process, documentation, testing and much else, we can eliminate security threats.

    As the NSA case demonstrates, these assumptions are false. Regardless of your feelings about whether Snowden is a hero or a traitor, he clearly demonstrates the fact that our current approach to security is a waste of time.

    Insiders are the real threat

    The first assumption is the "bad guys out there" assumption. Huge amounts of money are spent on "intrusion detection," firewalls, and endless things that amount to building a castle wall that is high and thick so that our secrets can be protected.

    Here's what happens. The marauding knights come sauntering along and see those high walls. Naturally they check it out. They're impressed by everything about your wonderful castle: the moat, the guards, the mean-looking guys on the ramparts, the whole bit. So if you were a sensible bad guy, what would you do?

    You'd go to the nearest town, trade in your bad-guy clothes for a respectable suit or workman's clothes, or whatever the castle is looking to hire. Then you'd walk up to the employee entrance and apply for a job! Once you were inside, you'd keep your nose clean and figure out the lay of the land. Once you had it scoped, one day you'd leave at the end of your shift a much richer person than you were before, so rich that, well, you didn't bother to report to work at the castle any more.

    I was first educated about this by Paul Proctor, who gave me a copy of his 2001 book, The Practical Intrusion Detection Handbook. Most of the book is about what people want to buy, which is based on the "bad guys are out there" theory. But he has a whole chapter on "host-based intrusion detection," in which he spells out the methods and importance of detecting and thwarting bad guys who have managed to get a job working for you. This is what everyone should be doing, and all these years later, we're not!
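    Host-based detection of the kind Proctor describes comes down to watching what authenticated insiders actually do, and flagging behavior far outside their own baseline. A toy sketch of the idea (the event format and the 10x threshold are invented for illustration):

    ```python
    from collections import Counter

    # Toy host-based anomaly check: flag users whose file-access volume far
    # exceeds their own historical baseline. The event format and the 10x
    # threshold are invented here purely for illustration.

    def flag_insiders(baseline: dict, today: list, factor: int = 10) -> list:
        """Return users whose access count today exceeds factor * their baseline."""
        counts = Counter(user for user, _file in today)
        return sorted(
            user for user, n in counts.items()
            if n > factor * baseline.get(user, 1)
        )

    baseline = {"alice": 20, "bob": 15}   # typical daily file accesses per user
    today = [("alice", f"doc{i}") for i in range(25)] + \
            [("bob", f"secret{i}") for i in range(400)]  # bob is hoarding files
    print(flag_insiders(baseline, today))  # prints ['bob']
    ```

    Real host-based systems watch far richer signals (file types, times of day, privilege use), but the principle is the same: the trusted employee walking out with 7,000 pages looks anomalous only if you're measuring what insiders do.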

    Tell me what to do, not how to do it!

    The second assumption is that we can define step-by-step procedures that will prevent security breaches. Hah! Not true! The vast majority of our security procedures have been written by people who are lawyers; if they're not, they're sure acting like they are!

    What we should do is tell you what to accomplish in simple terms, like "Don't murder anyone. No matter how mad or drunk you are, just don't do it. If you do, we'll execute you or put you in jail for a long time. So there." That's all you need, when you're telling someone what to accomplish.

    The equivalent for HIPAA would be something like: "Don't give anyone's health records to anyone except that person or their designated representative, like a parent if they're a kid."

    The equivalent for NSA would be: "Hey, everything we're doing here is real important stuff regarding national security, like what our name says. So don't let anyone who doesn't also work for NSA have it. Period. Ever. Otherwise, you're a traitor, and we'll nail you."
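    Notice how small a goal-stated rule is. The HIPAA example above fits in a few lines of code, which is exactly the point: the "what" stays tiny even when the "how" varies endlessly. A sketch (the record fields and names here are hypothetical, not from any actual regulation):

    ```python
    # The goal-based health-records rule above, expressed as a single access
    # check. Field names (patient_id, representatives) are hypothetical
    # illustrations, not part of any real standard.

    def may_release(record: dict, requester: str) -> bool:
        """Release a health record only to the patient or a designated representative."""
        return (
            requester == record["patient_id"]
            or requester in record.get("representatives", ())
        )

    record = {"patient_id": "pat-1", "representatives": {"parent-1"}}
    assert may_release(record, "pat-1")        # the patient: allowed
    assert may_release(record, "parent-1")     # designated representative: allowed
    assert not may_release(record, "stranger") # anyone else: denied
    ```

    How an organization enforces that check is left to it, which is where the room for innovation comes from.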

    Instead, what companies and agencies are required to do is conform to an ever-growing collection of detailed methods for supposedly getting secure. Except you spend so much time conforming to the regulations that some guy walks out the door with all your secrets!

    Here's the bad news: Snowden wasn't an exception; he's simply a particularly famous instance of what happens routinely in security-regulated organizations.

    Conclusion

    Edward Snowden is the tip of a security-breach iceberg. Credit cards are being stolen in spite of onerous security regulations. Health records are being compromised, in spite of increasingly onerous regulations. Our approach to security is flawed, fundamentally and by assumption. It's like we're in the water and we're trying to swim by blowing on the water. It's not working, and the solution is not to try blowing even harder. The solution is to take an aggressive, non-regulatory approach to the most likely perpetrators, insiders.

     

  • Let’s Criminalize our Regulations

    Our regulations are a problem, mostly not because of what they're trying to do, but because they tell us how to do things instead of telling us what to accomplish (or avoid). A better model to follow would be criminal law, which clearly spells out what to avoid doing. By "criminalizing" our regulations, everyone (except the regulators) would win: the regulations would be relatively easy to write and understand, and wouldn't need updating very often; they would be short; by concentrating on "what" instead of "how," the regulations would create a climate enabling innovation, instead of today's innovation-crushing impact.

    Here's an example:

    New Jersey Statutes – Title 2C The New Jersey Code of Criminal Justice – 2C:11-2 Criminal homicide

    a. A person is guilty of criminal homicide if he purposely, knowingly, recklessly or, under the circumstances set forth in section 2C:11-5, causes the death of another human being.

    b. Criminal homicide is murder, manslaughter or death by auto.

    L.1978, c. 95, s. 2C:11-2, eff. Sept. 1, 1979. Amended by L.1979, c. 178, s. 20, eff. Sept. 1, 1979.

    Here is the reference.

    The definition of murder leaves little to question. It doesn't need to be updated very often. The statute does not tell you how to avoid murdering someone — it just tells you what murdering is, and leaves it to you to avoid doing it.

    Think about it: with murder defined in this way, we don't need loads of regulators or regulations that somehow always seem to let the truly guilty walk away. Of course, we do need a criminal justice system to track murderers down, catch them and put them on trial.

    Short and sweet! … uhhh, well, anyway it's short for sure.

     

  • Regulations: Goals or Directions?

    The sheer bulk of our regulations is exploding. By any reasonable measure, our regulations are obese; our super-size body of regulations costs more to create, feed and implement — and it isn't getting its intended job done!

    When confronted with the huge bulk of our regulations, some people say we need more regulations, while others claim we need fewer. This is the wrong debate.

    The real problem is the kind of regulations we have — our regulations spell out how we're supposed to do things, when they should be telling us what we need to accomplish. 

    What is the goal of regulations?

    In most cases, regulations are created to assure things that you, I and most sensible people want.

    • When corporate officials cook the books or otherwise hide what's really going on, we want them to stop and to be held responsible — like, go to jail! That's SarBox.
    • We want hospitals and doctors to be careful with our medical records, and not pass them out to anyone who asks for them. That's HIPAA.
    • We want financial institutions to be careful with our records, and make sure our private account and transaction information are kept safe. That's PCI.
    • We want our money to be safe when invested with investment people and the stock market; we don't want people stealing or pulling strings to make themselves richer and us poorer. That's the SEC.

    These regulations and many more are sensible. I want them and you probably do too. I want the corporate bad guys to go to jail. I want the medical people to keep my records confidential. I want the banks to keep my finances to themselves. I want my banks to be sound and the financial reports generated by public corporations to not be phony.

    How are all those Regulations working out?

    Are the regulations and regulators doing their job? How about Bernie Madoff? All sorts of bad things helped create the financial melt-down we're suffering from — how many of the top execs got in any kind of trouble over it, not to mention went to jail? There are massive losses of credit card data, some of which make the news and most of which don't, causing endless trouble to consumers with identity theft. Who's held accountable?

    The one certain thing about regulations is that we pay the price for following them. What is optional is that they do their job. In fact, if you think about it, if your financial records aren't always stolen, it's not because the regulations are effective — it's because most people are honest, and don't want to steal your financial records!

    We're getting more and more regulations, and we're taking more time, trouble and money to create and follow them. But the resulting regulations are doing a poor job of stopping the bad things we want them to stop.

    Means vs. Ends

    The reason why our ever-growing number of regulations fail to protect us is simple. In the vast majority of cases, they spell out, often in great detail, how to accomplish the goal, instead of plainly and simply defining the goal. The regulators insist on giving us what amounts to detailed, turn-by-turn directions for driving from Lincoln Center in Manhattan to Ridgewood, NJ instead of simply stating that we should drive safely to Ridgewood, NJ, and leave the exact route to us.

    I usually like to drive up West End Ave to 72nd Street, turn left, and go up the West Side Highway to the George Washington Bridge, and so on. I wouldn't be surprised if that's the route the regulators would insist that I take.

    However, sometimes when there's traffic, I continue north on West End up to 96th St and get on the West Side Highway there. That's not how the regulators or the GPS would tell you to go, but it turns out to be the smart route to take in certain traffic conditions. Given that no one has regulated (at least yet) my route to Ridgewood, I am free to take the route I think best and adapt to changing conditions. I can learn and innovate, so long as I reach the goal safely. Makes sense. But that's not how regulations work!!

    If my trip were regulated the way HIPAA, PCI and other things are, I would be slapped with a fine or worse for "violating the regulations" by failing to take the 72nd St entrance to the West Side Highway, because the regulations give me exact, turn-by-turn directions I have to follow for reaching the goal, rather than simply letting me drive to the goal in the most sensible way.

    Are Too Many Regulations a Problem?

    No! But it's a BIG problem that we are buried under mountains of micro-managing, often-obsolete, turn-by-turn-directions-type regulations, which are by their very nature bulky. Which wouldn't be so bad if they actually got the job done.

    If we had goal-oriented regulations, they would be concise, clear, and not need revision very often. They would be easy to understand and leave room for people to learn, adapt and be creative while still achieving the intended goal of the regulation.

    We Need Regulations 

    We need government to establish and enforce a common set of rules that make our lives better. In that sense, we need regulations. We benefit from having them and we benefit from having them be enforced. But not if they're costly, out-of-date, stifle innovation and on top of it all don't work!

    If regulations were written to define the goals they are intended to accomplish, they would be inexpensive, always relevant, enable innovation and have a much better chance of being effective. We should all want regulations that tell us what to do, and leave it to us to determine how to do it. Don't tell us how — tell us what.
