• Systemic Issues Behind the Cyber-Security Disasters at OPM, Citi, Anthem, etc.

    Our personal data is stored in the computers at large corporations and government organizations. We now have abundant proof that these large organizations are incapable of protecting our data. This is not a string of bad luck that will soon pass. These large organizations never had good security — they just weren't being attacked. Unfortunately, the security flaws are a direct outcome of the dysfunctional technical and management practices that lead to large-organization IT failures across the spectrum.

    Recent Security Disasters

    The security disaster at the government Office of Personnel Management (OPM) has been in the news recently. Here is a summary, and here is a timeline. OPM knew all about security, and tried its darndest to be secure, spending over $4.5 billion on systems to prevent breaches, including a recent $218 million upgrade to the security system known as Einstein. All for naught.

    In the private sector, there was the breach at Anthem, preceded by a string of security disasters at major banks and retailers involving tens of millions of consumer records.

    The Response to the Attacks

    We're seeing the usual responses to the problems.

    First and foremost, try to avoid letting anyone know there's a problem.

    Second, try to draw attention to all the attacks that were thwarted. The OPM is actually bragging about all the attacks it defends against! That's like a bank that has just been totally cleaned out bragging about how many robbery attempts were thwarted.

    Finally, talk about how much you care, offer completely counter-productive services to consumers, and spend even more money on the stuff that didn't, doesn't and won't work. Ignore the fact that the incentives are all wrong, that in fact no one cares.

    No one is losing their job. No significant changes are being made. No one is running around like their hair's on fire. Ho-hum, it's business as usual.

    Systemic Issues are behind the Disasters

    Security in large organizations is broken. But that's just a side effect of the fact that IT in large organizations is broken. Not in detail — in principle. When the foundation of a building is made out of jello instead of concrete, you don't fix it by adding more jello, trying a new flavor of jello, or getting everyone to walk slowly and carefully. You replace it with reinforced concrete — pronto! When the foundations are the wrong kind of stuff, making new foundations out of jello will never help. Even if it's jello that costs billions of dollars.

    The Systemic Issues

    This is a subject that is long and deep. All the problems come down to two simple core thoughts: (1) computers are just like all the other things to which management techniques are applied, so standard-issue "good management" will solve any problems; and (2) computer security is just like all the other computer issues, and can be managed using the same standard techniques.

    Wrong and wrong.

    Computers and software in general are radically different than anything else we encounter in our normal lives, and evolve more quickly by orders of magnitude than anything else in human experience. Managing a software building project as though it were a home building project leads to results that are, at best, 10X worse than optimal methods, and at worst, complete disaster.

    Computer security in particular is not just another issue to be managed using standard techniques, which in any case yield horrible results. In computer security, we're dealing with smart and motivated attackers who are at war with us, and naturally use the latest "weapons" in a rapidly evolving arsenal. While our attackers are at war with us, we plod along at a peace-time pace, scheduling security issues like just the other items in prioritized lists. When the armed gang breaks through the back door of the warehouse, we eventually discover the break-in and schedule a response for sometime in the next couple of months. By the time we've installed new alarms, the gangs are already on their third generation of tools for defeating them.

    Computers are different than the other things we manage

    Computers evolve at a pace that is completely unprecedented in human experience.

    Most of what managers do to manage computers is modeled on what they do for everything else, and makes things worse.

    Computers are incredibly complex! But somehow, we imagine that people with no actual experience with computers can manage them, when we would never let someone who never saw a baseball game manage a team, or someone who never wrote an article manage writers.

    The vendors of hardware, software and services have evolved to provide incredibly expensive, ineffective products and services that are packaged to make top managers feel great.

    Computer security requires war-time actions, not peace-time ones

    Translating from physical security, managers insist that security is about walls, guards and kevlar vests. The bad guys are out there, our job is to keep them out. Wrong. The vast majority of security breaches result from either conscious or unknowing cooperation of insiders. Including OPM.

    The bad guys are at war with us. By the time we've figured out that we've been robbed, the bad guys are long gone. By the time we're just wrapping up the requirements documents for our response, the bad guys have cleaned us out again.

    Once we finally deploy our best defense, the art of war has advanced and our defenses are useless, just like the Maginot Line in World War II.

    Conclusion

    We all know that the definition of insanity is repeating the same actions and expecting different results. In that sense, the approach that large organizations, private and public, take to computer security is insane. All the people in charge propose is doing what they've always done, only somehow harder and better. The alternative approach, while radically different from the current one, is simple, clear and actionable. The people in charge actively resist it today. They've got to embrace it if there is to be any chance at all of improvement in cyber-security.

  • Fundamental Innovations in Software

    As a result of the decades I have spent working on, in and around computers, I have learned many things from other people, from books and talks, from studying the results of other peoples' work, from trying to accomplish many things in various ways myself, and from following the course of many projects and products over time. During this period of time, the computer industry has changed dramatically in many ways, and not much in some ways.

    Most of the knowledge and insight I have gained from this effort over time match well with those of the industry as a whole. However, there are major subject areas that, I have observed, don't get much attention or that need major innovation. Here are some of those areas.

    Quality

    A good deal of attention has been paid to the quality that results from software development efforts. Products have been built to automate various aspects of the quality process, and there are techniques frequently incorporated into the software development process intended to assure good quality results.

    However, it is clear that there is a tremendous opportunity to enhance the quality process. There are conceptual and technical advances that can be applied to the quality process that greatly improve the results of software development and reduce the time and effort to attain those results. While it is likely that there are situations to which the optimal techniques do not apply in part or in full, it appears that they are applicable to most software development projects.

    Optimal results

    There are a few areas in computers where people focus on measures of goodness and generally agree on what those measures are, for example, total cost of ownership. But the concept of the best possible result in theory, comparable to Shannon’s result in communications, is rarely applied in computing. Yet, there are a number of areas where it is applicable and useful.

    Similarly, in computer hardware, people frequently reach a consensus concerning the “best” way to implement a certain feature, whereas in software development tools and processes, the thoughts about the optimal way of doing things evolve slowly, but rarely reach resolution. Moving beyond advocacy and thinking about what is truly optimal and how to attain it is very fruitful.

    History

    Software development is a field that pays remarkably little attention to history; everything is now and the next new thing. But in fact, a study of history in this field is very rewarding, because just as in real history, you find that some things truly change, some of them extremely slowly and a few rapidly, and that other things go through recurring cycles. Knowledge of this history is interesting in and of itself, just like “real” history, and it also enables you to predict the future within reasonable limits by extrapolating the patterns.

    Application and systems software

    If you look at every line of code that is executed in order to run a program, the lines fall into various categories, including systems software, standard libraries and applications. The “line” between these has been moving “up” very slowly over the last few decades. This glacial trend has impacts on operating systems, databases, application development tools, and related subjects. Understanding and exploiting this trend is a target-rich environment for innovation.

    Abstraction levels

    When we notice things repeating in computing, we build a level of abstraction to encapsulate the repetition and then work at the level of abstraction. Each abstraction is something that has to be built, adopted and learned, and because of these obstacles (and in spite of the benefits), abstractions propagate slowly. Some are so hard for most people, like those involving real math, that they can only be used by hiding the complications from practically everyone. Exploiting abstractions can lead to huge advances.

    Closed loop systems

    The concept of running an automation system “open loop” vs. “closed loop” is widely understood. But I find that few computer systems are run closed loop. Even though this is not exactly a novel concept, most people who work on building or operating the systems seem to be unfamiliar with it.
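    To make the distinction concrete, here is a minimal closed-loop sketch in Python. The server-pool scenario, the numbers and the gain are all invented for illustration; the point is only the shape of the loop: measure the output, compare it to the target, feed the error back into the next decision.

    ```python
    # Closed loop: measure the output (latency), compare it to the target,
    # and feed the error back into the next decision. An open-loop system
    # would pick a server count once and never look at the result.

    def closed_loop_step(target_ms, measured_ms, servers, gain=0.1):
        """Return an adjusted server count based on the measured error."""
        error = measured_ms - target_ms      # positive means too slow
        adjustment = gain * error            # simple proportional control
        return max(1, round(servers + adjustment))

    # One pass through the loop: latency is 180 ms against a 100 ms target,
    # so the controller grows the pool from 10 servers to 18.
    new_count = closed_loop_step(target_ms=100, measured_ms=180, servers=10)
    ```

    Run repeatedly against live measurements, that is the whole idea of closed-loop operation; most systems never take the "measure" step at all.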

    Workflow systems

    The concept of workflow has been around for many years, and many systems have been built that embody the concepts. However, most people that I encounter seem not to understand the abstraction, and no good tools have appeared to ease the path to implementation.
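    As a sketch of what the abstraction looks like when made explicit, here is a toy workflow engine in Python. The states and actions are hypothetical; the point is that the allowed transitions are data, not logic scattered through the code.

    ```python
    # A toy workflow: each item moves between named states via named
    # actions, and the table of allowed transitions is explicit data.
    TRANSITIONS = {
        ("draft", "submit"): "review",
        ("review", "approve"): "published",
        ("review", "reject"): "draft",
    }

    def advance(state, action):
        """Apply an action to a workflow item, enforcing the allowed paths."""
        try:
            return TRANSITIONS[(state, action)]
        except KeyError:
            raise ValueError(f"{action!r} is not allowed in state {state!r}")
    ```

    Submitting a draft yields "review"; trying to approve a draft raises an error instead of silently corrupting the flow.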

    Application Building Methods

    The biggest, fattest target of all is the project management style of organizing and managing software projects. I have written a long book dissecting the theory and practice of applying otherwise reasonable project management techniques to software, and another one outlining the alternative approach. The larger the organization, the more likely software is going to be built the bad way. When software is built quickly and well, it is most often built in smaller organizations that are working under some kind of severe constraint. And of course, there are people here and there who simply have figured out better ways of building software, and just do it.

    Understanding the People

    Athletes are special people — they're people like everyone else, but the outstanding ones are different in important ways. To encourage them to do their best, you have to understand those differences and act in appropriate ways. Same thing for software. HR people, general managers and everyone else apply the same template to software people that they apply to everyone else. They emphasize the commonality and ignore the differences. This is why, in general, management of software people is inexcusably bad. I've written a book about this.

    Summary

    Each of the subjects mentioned here could be a book; some of them are the basis of whole innovative companies. They're not just theoretical. Exploiting some of these subject areas can lead to rapid tactical execution benefits in organizations that build or use software.

  • How to Feel Better about Software while making it worse

    Everyone knows it's hard to build software. Even projects that are judged "successful" are often fraught with problems. The odd thing is that many of the steps people take to reduce the risk and increase the odds of success actually make things worse!

    Trying to reduce the risk of software projects

    At some level, everyone knows that software projects are risky and often fail. They really want to avoid failure, but the second one guy starts babbling about "object-oriented frameworks" and another guy rattles on about "Agile and a great SCRUM master," normal people get even more worried. "How can I avoid being road kill" is the fear causing the roiling of the intestines. So they insist on things that make them feel safe, all of which (perversely) are most likely to increase the time, cost and risk of failure!

    These safe-feeling but risk-increasing items include (but are not limited to):

    • Outsourcing the project using normal procurement channels and methods
    • Selecting a large vendor to do the work
    • Requiring lots of certifications among the organizations and people doing the work
    • Selecting independent auditing, testing and other functions to assure the work is done well
    • Interviewing the people in charge of the work, and accepting only those who make you feel comfortable

    Each one of these merits an essay explaining why such common-sense steps make things worse. Empirically, they do. The spate of failures among the Obamacare implementations is a recent poster child, since those implementations involved most of the above "safety-increasing" elements.

    Outsourcing

    Outsourcing is a favorite. Huge organizations outsource all the time, even their whole IT function. But there is no evidence that the organizations that take on the outsourced work do any better than the flailing organizations that hand it to them. There is exactly one guarantee: having the work done under a different roof means that you are largely free of the responsibility, and largely insulated from the stress of seeing the sausage factory in action.

    Large Vendor

    Choosing a large vendor is a tried and true way to make the buyer feel better and safer. You wouldn't buy a car built by someone you'd never heard of, would you? Of course not! So sensible people insist on dealing only with large, well-established vendors. Unfortunately for those sensible people, the things that work in most of our lives cause failure in software. Too bad!

    Certifications

    You wouldn't go to a restaurant that had failed a health inspection, would you? Or go to a doctor who had lost his license? Of course not. So a good way to feel safe is to find out what certifications are floating around the software industry and make sure your vendor has lots of them. Nice idea. Makes sense in most fields. But not in software. In software, you can be pretty sure that the more certifications they have, the worse they are at building software.

    Independent checking

    How do you know if they're really doing the work they say they're doing? We get our books audited by an outside firm, so doesn't it make sense to have the software audited by an outside firm of experts? Makes common sense. However, this is yet another example of how common sense makes things worse in software.

    Personal interviewing

    When all else fails, use your in-depth knowledge and experience with people to do your selecting. The trouble with this nice idea is that the person you're dealing with deals with yokels like you all day long, and you're not nearly as good as you think you are. Worse, the person you're interviewing either personally does the work (unlikely), in which case you have no clue at all, or they're just a sales person (most likely), in which case you're seriously outgunned. Forget it.

    Conclusion

    If software were easy, everyone would learn how to do it as kids, and be able to pick it up again after years of not having done it. We all know how to make risky decisions and processes less risky. The trouble is that most of those methods, which work pretty well in most of our lives, come up short in the wacky world of software, frequently making things worse.

  • Don’t Know Much about History

    Software people generally know very little about software history, and that's OK with them. It's too bad. There's a lot to learn from software history. It can help you now!

    Wonderful World

    In 1960, Sam Cooke released a single called "Wonderful World."


    Here are some of the lyrics:

    [image: lyrics to “Wonderful World”]

    I sure hope you can win that girl or boy you're after in spite of all that not-knowing!

    The Wonderful World of History

    Politicians study history in general and the last election in particular. Fiction writers frequently read fiction, current and historic. Generals study old battles for their lessons; even today at West Point, they read about the Civil War. Learning physics is like going through the history of physics, from Galileo and Newton through Planck and Einstein to the present. Even the terms used in physics remind you of its history: hertz, joules and Brownian motion. Math is the same way. Whatever you're learning was first established at some point in history, and remains as valid and applicable to the present as when first discovered.

    Software, by contrast, is almost completely a-historical. Not only are most people involved uninterested in what happened ten years ago, even the last project is unworthy of consideration – it’s “history.”

    History isn't just for historians

    How did we learn about biological evolution? By observing species and trying to figure out their history. How did we learn about genes and DNA? By trying to figure out the mechanisms that make organisms work through time. Geology? Gee, I wonder how those mountains got there? And what happened so that I'm finding fossils of creatures that lived in the ocean up there?

    A good deal of science is historical in nature. We try to construct theories that explain how things got to be the way they are; and then we run tests or make lots of observations.

    Software History is for the Birds

    Or so it appears, from the way that the vast majority of software people act. We're about to embark on a new project. How did similar projects work out in the past? What are we doing differently? The uniform response to questions like these? Crickets.

    One thing I've realized is that our determined effort to ignore history in software is a completely understandable defense mechanism. Suppose you're starting an hours-long road trip. At the end is near-certain disaster. Would you like to know that at the beginning of the trip, so that every second is miserable, building to a crescendo of terror? Or would you rather blissfully cruise along, and then be blind-sided at the end, leading to a mercifully quick death? Apparently, pretty much everyone agrees that blissful ignorance is the way to go.

    A Wonderful World

    Here's what I think would be a wonderful world:

    1. They both love each other, AND
    2. They know lots of software history together, leading not only to A's in school, but great jobs and successful projects.
  • Computer Troubles at the Hospital and at the Symphony

    We go to the symphony to hear great music. We go to the hospital when we’re injured or sick, and hope that the caregivers will heal us. When you’re sick, the only thing that matters is getting healthy. When you’re healthy, you have a huge array of activities to choose from, one of which might be going to hear great music.

    Both orchestras and hospitals use computers to do their jobs. In both cases, computers play an important supporting role, while people deliver the actual services customers/patients want.

    One of the great hospitals, Mount Sinai, and one of the great symphony orchestras, the New York Philharmonic, provide clear illustrations of how differently medical and cultural institutions think about the computers they use.

    Computer Trouble at the Symphony

    There was a computer outage at the New York Philharmonic. Along with many other subscribers and supporters, I received an e-mail on May 7th telling me about the problem.

    The Philharmonic is clearly embarrassed by the situation, and went out of its way to make sure their customers know about it, what the status is, and what they're doing about it. By sending this e-mail, they announced the problem to many people who would otherwise have had no idea the computers were down. But to their credit, the Philharmonic's priority was being open about the situation so that any inconvenience was minimized.

    Computer Trouble at the Hospital

    There was a computer outage at Mount Sinai hospital last fall. I personally experienced the problem and wrote about it here. In striking contrast to the Philharmonic, no public word was or has been issued about the situation, so far as I can tell – even though I’m a patient, and even though Mount Sinai is much more crucial to my health than the Philharmonic.

    Mount Sinai may be embarrassed. I have no way of knowing; they’re keeping a pretty tight lid on the situation. In fact, as far as I can tell, the medical profession combines suppressing all information about system outages with considering the whole subject to be a joke.

    Why do I think they think it's a joke?

    There is a list of the top 100 hospital CIOs, with a little blurb about each one. Among the 100 mini-bios I can find only one reference to whether their computer systems are working or not. First of all, keeping the computers running is beneath mention in 99 of the 100 cases. Here's what they say in the one case out of 100:

    [screenshot: the one CIO mini-bio that mentions an outage]

    He "caused" a network-wide crash — but that's OK, he "played a role" in "recovering it" (sic) too, ha-ha-ha.

    Conclusion

    There’s an attitude problem and an issue of priorities among the people who run hospitals. Comparing them to their counterparts in the world of symphony orchestras illustrates the problem vividly. The people in charge should make sure that their computers are actually up, running and available, above all else. They should track their performance. They should be open and transparent about it. They shouldn’t suppress information. Above all else, they should get it done! Sadly they’re not getting it done, in spite of their monstrous salaries and budgets, and that’s not likely to change any time soon.

  • Healthcare IT Dysfunction: the Secret Computer Outage at Mount Sinai Hospital

    When the computers go down in a hospital, patient lives are put at risk. Medical records aren't accessible, care orders can't be entered or received, and the staff runs around trying to make things work as best they can, in spite of the unavailability of the hospital's mission-critical system.

    Could anything be worse?

    Yes.

    The outages aren't tracked. They are hidden — literally kept secret. After all, reputations are at stake here! If it ever got out that people whose salaries run into the hundreds of thousands of dollars a year for running an operation that spends hundreds of millions of dollars a year can't even keep the computers running, who knows what might happen?

    The IT Horror Show at Mount Sinai Hospital

    I’ve already told the story of one of my personal experiences with horrible hospital software. Here’s another.

    When I arrived at the cancer treatment center at Mount Sinai in New York last fall, I immediately noticed that things were different than they had been on my prior visits. Patients were anxious, and staff were madly rushing about. Here's the waiting area on a calmer day:

    [photo: the treatment center waiting area]

    The problem was immediately evident when I checked in: the screen was blank, and everything was being done on paper. This was Wednesday, and the computers had been down since early Monday. Some departments were back up, but since some important ones were still down, lots of things were still being done with phone calls and handwritten notes. Among other comments, I heard “This isn’t the first time this has happened.”

    This multi-day outage didn’t take place in Podunk. It was at a premier medical center. Is it better at Mount Sinai than other places? Worse? I have no way of knowing.

    This was outrageous. The health and life of patients, the hospital’s primary mission, was compromised, to put it mildly. Everyone was anxious and upset, but no one was shocked. Was anyone fired? Did the CIO lose his job? The CIO deserved to be frog-marched to the nearest exit, along with anyone else involved. But last I heard, the news of the outage was suppressed, as usual, and the CIO and his whole crew continue to be richly employed.

    It appears to be a question of priorities. Hospitals and their CIO's issue press releases when they install a new version of the ridiculously expensive enterprise software they use, and move up another rung on the ladder of how heavily dependent your hospital is on its EMR (electronic medical record). Being more dependent on computers is considered to be a good thing in this industry! But simple things like tracking the up-time of the system? Apparently that's beneath the attention of the top people, even though outages are evidently important enough to train everyone to hide them.

    Computer Availability

    The more dependent you are on computers, the more important it is that they actually work! Yet the top people in many computer-using organizations are cavalier about system up-time. This isn't just something that happens in healthcare, as I've pointed out. The two most important things about any computer system are that it works and that its performance is reasonable. This is true times a large number for a system that is mission-critical for an organization devoted to curing sick people.

    Conclusion

    Heads should have rolled after the outage that I personally experienced and can personally testify actually happened at Mount Sinai Hospital in New York City. Not only didn't they roll, they continue to crow about how wonderful they and their system are, while making sure to suppress all news and information about their IT malfeasance. To put it mildly: not acceptable.

  • An App to Prevent Train Crashes like Amtrak Philadelphia

    Innocent people taking a train are dead. Many are injured. The government had an answer in 2008: spend billions of dollars and wait for years. There's a better answer: build a smartphone app, with some cloud software, a couple of sensors and cameras, and an engine-cab remote-control harness. It would be faster, cheaper and more effective than the existing partly implemented "solution," and lives would be saved.

    The Crash

    Here's the story of the crash in a nutshell:

    Eight people were killed, and 43 were still hospitalized days later.

    Reactions to the Crash

    The basic reaction has been typical all-politics-all-the-time. Here's the Reuters story:

    [screenshot: Reuters story on the crash]

    Later in the same story, you learn that the engineer was driving at more than twice the speed limit for that part of the track, and that the accident would not have happened except for his error. But that's a detail, I guess.

    Technology Could Have Prevented the Crash!

    Then it turns out, we know how to prevent things like this! But according to the experts, it just hadn't been installed.


    This PTC ("positive train control") sounds like wonderful stuff. It turns out it's been around for awhile. Everyone seems to agree that it would go a long way to solving the problem of crashes like the Philadelphia one. So what's gone wrong?

    Government-Mandated Positive Train Control

    Here's a good summary of the issues and problems of the wondrous PTC solution, which Congress mandated in 2008 and declared must be completed by the end of 2015. It won't be. And the cost? The GAO estimated somewhere between $6.7 billion and $22.5 billion.

    A brand-new system dreamed up by government bureaucrats in a short period of time — of course it takes billions of dollars and many years to implement! Of course it's a completely custom system, relying on railroad-only technology that will be generations behind the general computer industry before it's even deployed! Of course everyone assumes you can spec out a never-built-before system and get it right the first time!

    This is amateur-hour technology, and it is killing those of us unfortunate enough to be in the wrong place at the wrong time. This is a near-perfect example of bureaucratic "innovation." It is an example of the "what not how" problem of regulation: regulation should consist of simple declarations of goals (don't kill people) instead of gruesomely detailed directions for how to avoid killing people. The bureaucratic approach mandated by Congress has already resulted in incredible expense and multiple avoidable deaths, just as its similar approach to computer security has resulted in some of the worst security breaches in history.

    The Modern Approach

    There is a better way. It leverages modern computing, devices, networks and software. "Experts" will pooh-pooh the approach, saying that anyone who proposes it doesn't understand the harsh and peculiar railroad environment. That's what experts always say in situations like this, standing on their little technology island, protecting their "expertise" and their jobs, until modern, high-volume technology gets the job done. Then, without further comment, they retire.

    I won't lay out the whole approach in this post; this blog has lots of the core ideas, and so do lots of modern computing technology people.

    Just as mapping software on a phone can track your location and speed when you're in a car, it can do it when you're on a train. Why shouldn't lots of people have this app? Why not publish the complete map of all the train tracks? Most of it already seems to be available to consumer mapping programs — they just need to be tweaked to allow travel on rails instead of on roads. Yes, there are areas where track maintenance is taking place where trains shouldn't go — just like with roads! Mapping software already exists to avoid such routes — just use it! Yes, there are switches — how about adding them to the maps, and making whatever controls them upload their state to the cloud? Yes, there are other trains to be avoided — how about the apps all upload their positions to the cloud, and give a view to where other trains are? Yes, there are things you should pay attention to when you're not looking at the app — navigation apps already handle this through audible alerts or talking to you.
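    The heart of the speed-related piece can be sketched in a few lines of Python. The track map, segment names and limits below are made up for illustration; a real app would pull them from the cloud-hosted track map described above.

    ```python
    # Look up the speed limit for the current track segment in a
    # (hypothetical) track map, compare it to the GPS-reported speed,
    # and produce an alert when the train is over the limit.
    TRACK_SPEED_LIMITS_MPH = {
        "curve-segment": 50,
        "straight-segment": 110,
    }

    def check_speed(segment_id, speed_mph, margin=1.05):
        """Return an alert message if the train exceeds the posted limit."""
        limit = TRACK_SPEED_LIMITS_MPH[segment_id]
        if speed_mph > limit * margin:
            return f"SLOW DOWN: {speed_mph:.0f} mph in a {limit} mph segment"
        return None
    ```

    Fed with a phone's GPS readings a few times a second, even this trivial check would flag a train doing twice the posted limit long before the curve.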

    These simple steps, which could be built iteratively and deployed in weekly cycles, would go a long way to solving the problem. There remains the problem of overriding the train controls in case something terrible happens — but if all the conductors have the app and they have access to the engine car, many of the potential bad things could be avoided. The potentially tricky issue of automated speed control could then be addressed — but after all, airplanes are largely run by auto-pilot, why shouldn't trains? If auto-pilot works for vehicles that go hundreds of miles per hour, miles in the air with no tracks, surely it can't be too hard to make a version for relatively slow vehicles without steering controls, whose only variable is speed!

    While the government is mandating and regulating, billions of dollars are being wasted building systems that will be obsolete before they're installed, and meanwhile people are being killed and injured. There is a better, faster, cheaper way. Its cost to build is likely to be much less than the cost to simply maintain the PTS. So let's do it!

     

  • How much is a computer science degree worth?

    The median annual wage of a college grad with a computer, math or statistics degree is over $70,000. This is better than the vast majority of college majors, and compares really well with the median annual wage of high school grads, which is under $40,000. The conclusions are clear:

    • Go to college
    • Major in computers, math, statistics, architecture or engineering
    • Otherwise, you’re screwed.
    • Well, all right, majoring in education or psychology leads to crappy salaries, but at least it’s better than being just a high school grad.

    Here is the data: Wages of college grads

    This is a test!

    Trigger Warning! From here to the end of this post could trigger feelings of inadequacy among certain people. Others could feel anger towards the author, causing potentially dangerous heightening of the pulse rate. Others could feel that the author is hopelessly arrogant or elitist, resulting in generally uncomfortable feelings. So read on at your own risk.

    This post is a test of whether you’re qualified to be a top computer programmer, or an outstanding achiever in any technical/quantitative field. The thoughts in this post up to this point summarize what the article accompanying the chart intends you to conclude, and what most people will think on looking at the chart.

    The author of the article clearly failed the test.

    Did you?

    Understanding the data

    If you haven’t already, look at the chart again. Note the big, fat explanation at the top. The endpoints of the lines represent 25th and 75th percentiles. The 75th percentile for high school grads is about $50,000. This means that a quarter of high school grads have salaries above that. The 25th percentile for computer etc. grads is roughly $50,000, perhaps a little more. Which means that a quarter of the computer etc. grads make less than $50,000. In summary: a quarter of high school grads have salaries that are greater than a quarter of college grads with degrees in computers, math or statistics. Read that sentence again. Get it? Did you figure it out before reading this?
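    A few lines of Python make the overlap concrete. The salary numbers below are made up to mimic the chart's shape, not taken from the actual data:

    ```python
    # Hypothetical salaries illustrating overlapping distributions; the
    # figures are invented, not the chart's underlying data.
    import statistics

    hs_salaries = [28_000, 33_000, 38_000, 44_000, 52_000, 61_000, 75_000]
    cs_salaries = [45_000, 52_000, 63_000, 71_000, 82_000, 95_000, 120_000]

    def quartiles(data):
        # statistics.quantiles with n=4 returns the 25th/50th/75th percentiles
        q = statistics.quantiles(sorted(data), n=4)
        return q[0], q[2]

    hs_q25, hs_q75 = quartiles(hs_salaries)
    cs_q25, cs_q75 = quartiles(cs_salaries)

    # The top quarter of high school grads out-earns the bottom quarter
    # of the computer/math/stats grads:
    print(hs_q75 > cs_q25)  # True
    ```

    The medians can differ by $30,000 and the tails still overlap; that overlap is exactly what most readers of the chart miss.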

    Implications for Hiring Computer Programmers

    I hope you’ve just seen why, when I’ve hired people, I really haven’t given a %^* about their education or their degree – in fact, the higher the education and the fancier the degree, the more concerned I am to weed out the folks with bad attitudes, the ones who have been granted the knowledge and the certification to prove it, and want to spend their lives resting on and/or milking their degrees. Some of the best programmers I’ve met in decades of programming did not have college degrees. Most of the ones who are less than excellent and/or have “risen” in management are experts at glancing at things and reaching the wrong conclusions. Like most people do when looking at the salary chart above. FWIW, here are some good examples of drop-outs who did pretty well. Including the Wright Brothers — after all, how hard can inventing the airplane be?

    The people who are best in computing combine big-picture, visual/conceptual abilities with an utterly uncompromising attention to detail. Computer programs shouldn’t have even a single byte wrong, and the bytes should be selected and arranged according to a deep conceptual understanding of the problem at hand. Amateurs and pretenders don’t do well at either of these jobs, much less in combination.

    Conclusion

    If you care about attracting, selecting and retaining the very best software people, you would be well advised to alter your hiring practices as required to select the people who … get ready for it … can actually do the work! Really well! Having degrees or whatever is not nearly as correlated to that outcome as you might think.

  • Excellence in Government IT

    Consider the sets "Excellence" and "Government IT." There is a great deal of evidence that these are non-overlapping sets. Put another way, the phrase "excellence in government IT" is an oxymoron. Of course, there are people who think otherwise. Mostly, these are government workers and their enablers.

    Digital Government Awards

    It appears there are organizations promoting and celebrating "digital government." Who knew?

    Part of what these guys do is hold awards ceremonies honoring the best, the brightest and the most accomplished. There was an awards ceremony for New York in 2014.

    Awards

    30 people were individually honored for Outstanding IT Service and Support. In addition, 10 awards were given in various categories. One of the categories is related to one of my favorite subjects. The award, "Demonstrated Excellence in Project Management," is a double killer: excellence in project management, which you mostly demonstrate by chucking it over the side of the boat, and excellence in government IT, which is pretty much the null set. So "government project management?" If there ever was a candidate for something emptier than the null set, that's got to be near the head of the line.

    One naturally wonders what magic project won this coveted award. This project was so good that the leader was also awarded the Best of New York Leadership Award. Here are the highlights: [screenshot of the award highlights]
    This is a bit hard to figure out. Mostly, it appears, he spent money and outsourced work. He put a little data center into a big central one, and by the way bought a bunch of new equipment (that's what "modernizing WCB's infrastructure" means), and he dumped thousands of cases on an outsourcer ("third-party administrator" sounds more official, doesn't it?), I guess because those poor government workers were just overworked.

    But I was unsatisfied. I really wanted to know how he got the top award for project management. So I clicked to find out, and I was rewarded with this page, from the organization that leads, promotes and awards excellence in digital government:

    [screenshot of the page]

    I was truly impressed. I always wondered how all those government agencies, some of which are bound to have bright people who truly want to serve the public, managed to deliver such uniformly expensive, inefficient, labor-intensive systems that often don't work. Now we have the answer: they have an organization that leads them and shows them how it's done!

    By giving awards, they in effect define excellence down. Think about this guy singled out for the leadership award: he bought a bunch of equipment (for less? more? who knows?), moved to another data center and outsourced some work. That's the best of the best! Think about what everyone else accomplished during the year!

  • Meals at Downton Abbey and IT in Healthcare

    It’s inconceivable that a meal wouldn't be served at mealtime at Downton Abbey. If the food were bad for so much as a single meal, those responsible would be seeking other work. Computer services at a hospital? All too often they fail to be served – the users complain and race around making do. And are those services bad? Regularly. If Carson were the butler at a hospital, the IT staff would all be fired the first time a meal of data wasn't served when and how it was supposed to be.

    Meals at Downton Abbey

    The kitchen staff works hard at Downton Abbey. They hold themselves to a high standard. The food is high quality, and it’s delivered on time. Every time.
    It is literally inconceivable that the guests would be assembled, ready for their food, and none appears.

    It’s not just the lordly Lord Grantham and the stern Mr. Carson who expect and get these results.
    The kitchen staff, from Mrs. Patmore on down, shows intense pride in and ownership of their work.
    Mrs. Patmore isn’t cowed into producing excellent results, on time every time. Mrs. Patmore accepts nothing less from herself.

    Meals in Real Life

    It isn’t just fancy television series for which this is the case. We expect meals in real life, too. We may give ourselves a little slack when it’s just ourselves, but when there are guests, for example at Thanksgiving? The kitchen and its staff may not be like Downton’s,
    but it works and produces results. The results are appreciated by everyone at the dining table.

    IT Services in Hospitals

    In hospitals, it appears that system availability and up-time is like he-who-must-not-be-named in the Harry Potter books. It is simply not discussed among civilized people. The greater your status, the more demeaning it appears to be to have the subject even raised.

    Partly because of the refusal to discuss this subject, there’s no good way of knowing how bad the problem really is. But lots of people, particularly those who work in hospitals, know the story – and they know that outages, slow-downs and crappy software are business-as-usual.

    The Mount Sinai Hospital IT Horror Show

    I’ve already told the story of the general horrors of the Mount Sinai computer system. I've also told the story of my personal encounter with the multi-day computer outage at Mount Sinai in New York. I have since made a diligent search for any public information about the outage I experienced, and computer outages at hospitals in general. Nada.

    Lots of people in IT appear to think that cooking and serving the data, high quality and on time, is not their problem. That’s like Mrs. Patmore or Daisy shrugging their shoulders as the second day of meals not served comes and goes, flatly declaring something like “we’re doing our best, struggling with inadequate kitchen systems and suppliers who have failed us.” If that’s unthinkable for serving food to healthy people, why is it acceptable for delivering medical services to the sick and injured?

    Conclusion

    Where are the adults? Where is the outrage? Why don’t people do their jobs, and why does no one get fired when they don’t? I know Downton Abbey is just a TV show, but why is it completely unimaginable that Mrs. Patmore and her crew would fail to serve a single meal, while even with a budget of over $240 million, the CIO and his crew at Mount Sinai (and I suspect at other hospitals) fail to serve meal after meal of data and still have their jobs?

  • Project Management: the Zombies have Won

    If you're a professional software project manager, I have a suggestion. Why don't you become a consultant with Mary Kay or Avon so you can do something more worthwhile with your life?

    Oh, boy, that was mean. But if you can stand it, read on.

    Project Management in General, and in Software

    Project management is a well-developed body of theory and practice. In most fields to which it is applied, it is the only responsible way to run things. Period.

    So you'd think it would be a winner in software, which badly needs something to make it manageable. It's really hard to believe that normal project management techniques and practices wouldn't apply to software development pretty much the same way they apply to other things. But they don't.

    We now have literally decades of experience showing that project management, when applied to software, simply and categorically does not work. I've covered this subject quite a bit on this blog, and devoted a whole book to exactly how and why it does not work.

    It is one of the many sad results of the software industry's mad refusal to pay attention to history that this fact is not among the first things taught in school.

    Project Management in Software

    As it is, project management for software is a skill you can acquire. There are piles of books. There are certifications. Many of the people who go into the field are nice, well-meaning people. I like most of the ones I've met. One guy I know even teaches courses in it; from his description, it sounds like his course would be great!

    But there's a problem. Not all programmers admire or even respect project managers. There are good reasons for not wanting your project to be infected with the disease of project management. But most programmers aren't particularly intellectual about it. They just want to be left alone! Some of them feel strongly about it. So I would advise project managers to watch their step.

    [Dilbert comic on project management]

    And if you are going to get into project management and make a success out of it, do try to take a course like the one my friend teaches, not one like Dogbert's:

    [Dilbert comic: Dogbert's project management course]

    If you avoid the Dogbert course, your life expectancy will be considerably longer.

     

  • Math and Computer Science vs. Software Development

    In a prior post, I demonstrated the close relationship between math and computer science in academia. Many posts in this blog have delved into the pervasive problems of software development. I suggest that there is a fundamental conflict between the perspectives of math and computer science on the one hand, and the needs of effective, high quality software development on the other. The more computer science you have, the worse your software is; the more you concentrate on building great software, the more distant you grow from computer science.

    If this is true, it explains a great deal of what we observe in reality. And if true, it defines and/or confirms some clear paths of action in developing software.

    A Math book helped me understand this

    I've always loved math, though math (at least at the higher levels) hasn't always loved me. So I keep poking at it. Recently, I've been going through a truly enjoyable book on math by Alex Bellos.

    [cover of the Alex Bellos book]

    It's well worth reading for many reasons. But this is the passage that shed light on something I've been struggling with literally for decades.

    [the passage from the Bellos book]

    When we learn to count, we're learning math that's been around for thousands of years. It's the same stuff! Likewise when we learn to add and subtract. And multiply. When we get into geometry, which for most people is in high school, we're catching up to the Greeks of two thousand years ago.

    As Alex says, "Math is the history of math." Kids who are still studying math at age 18 have gotten all the way to the 1700's!

    These are not new facts for me. But somehow when he put together the fact that "math does not age" with the observation that in applied science "theories are undergoing continual refinement," it finally clicked for me.

    Computers Evolve faster than anything has ever evolved

    Computers evolve at a rate unlike anything else in human experience, a fact that I've harped on. I keep going back to it because we keep applying methods developed for things that evolve at normal rates (i.e., practically everything else) to software, and are surprised when things don't turn out well. The software methods that highly skilled software engineers use are frequently shockingly out of date, and the methods used for management (like project management) are simply inapplicable. Given this, it's surprising, and a tribute to human persistence and hard work, that software ever works.

    This is what I knew. It's clear, and seems inarguable to me. Even though I'm fully aware that the vast majority of computer professionals simply ignore the observation, it's still inarguable. The old "how fast do you have to run to avoid being eaten by the lion" joke applies to the situation. In the case of software development, all the developers just stroll blithely along, knowing that the lions are going to eat a fair number of them (i.e., their projects are going to fail), and so they concentrate on distracting management from reality, which usually isn't hard.

    What is now clear to me is the role played by math, computer science and the academic establishment in creating and sustaining this awful state of affairs, in which outright failure and crap software is accepted as the way things are. It's not a conspiracy — no one intends to bring about this result, so far as I know. It's just the inevitable consequence of having wrong concepts.

    Computer Science and Software Development

    There are some aspects of software development which are reasonably studied using methods that are math-like. The great Donald Knuth made a career out of this; it's valuable work, and I admire it. Not only do I support the approach when applicable, I take it myself in some cases, for example with Occamality.

    But in general, most of software development is NOT eternal. You do NOT spend your time learning things that were first developed in the 1950's, and then if you're good get all the way up to the 1970's, leaving more advanced software development from the 1980's and on to the really smart people with advanced degrees. It's not like that!

    Yes, there are things that were done in the 1950's that are still done, in principle. We still mostly use "von Neumann architecture" machines. We write code in a language and the machine executes it. There is input and output. No question. It's the stuff "above" that that evolves in order to keep up with the opportunities afforded by Moore's Law, the incredible increase of speed and power.

    In math, the old stuff remains relevant and true. You march through history in your quest to get near the present in math, to work on the unsolved problems and explore unexplored worlds.

    In software development, you get trapped by paradigms and systems that were invented to solve a problem that long since ceased being a problem. You think in terms and with concepts that are obsolete. In order to bring order to the chaos, you import methods that are proven in a variety of other disciplines, but which wreak havoc in software development.

    People from a computer science background tend to have this disease even worse than the average software developer. Their math-computer-science background taught them the "eternal truth" way of thinking about computers, rather than the "forget the past, what is the best thing to do NOW" way of thinking about computers. Guess which group focuses most on getting results? Guess which group would rather do things the "right" way than deliver high quality software quickly, whatever it takes?

    Computer Science vs. Software Development

    The math view of history, which is completely valid and appropriate for math, is that you're always building on the past, standing on the shoulders of giants.

    The software development view of history is that while some general things don't change (pay attention to detail, write clean code, there is code and data, inputs and outputs), many important things do change, and the best results are obtained by figuring out optimal approaches (code, technique, methods) for the current situation.

    When math-CS people pay attention to software, they naturally tend to focus on things that are independent of the details of particular computers. The Turing machine is a great example. It's an abstraction that has helped us understand whether something is "computable." Computability is something that is independent (as it should be) of any one computer. It doesn't change as computers get faster and less expensive. Like the math people, the most prestigious CS people like to "prove" things. Again, Donald Knuth is the poster child. His multi-volume work solidly falls in this tradition, and exemplifies the best that CS brings to software development.

    The CS mind wants to prove stuff, wants to find things that are deeply and eternally true and teach others to apply them.

    The Software Development mind wants to leverage the CS stuff when it can help, but mostly concentrates on the techniques and methods that have been made possible by recent advances in computer capabilities. By concentrating on the newly-possible approaches, the leading-edge software person can beat everyone else using older tools and methods, delivering better software more quickly at lower cost.

    The CS mind tends to ignore ephemeral details like the cost of memory and how much is easily available, because things like that undergo constant change. If you do something that depends on rapidly shifting ground like that, it will soon be irrelevant. True!

    In contrast, the Software Development mind jumps on the new stuff, caring only that it is becoming widespread, and tries to be among the first to leverage the newly-available power.

    The CS mind sits in an ivory tower among like-minded people like math folks, sometimes reading reports from the frontiers, mostly discarding the information as not changing the fundamentals. The vast majority of Software Development people live in the comfortable cities surrounding the ivory towers doing things pretty much the way they always have ("proven techniques!"). Meanwhile, the advanced Software Development people are out there discovering new continents, gold and silver, and bringing back amazing things that are highly valued at home, though not always at first, and often at odds with establishment practices.

    Qualifications

    Yes, I'm exaggerating the contrast between CS and Software Development. Sometimes developers are crappy because they are clueless about simple concepts taught in CS intro classes. Sometimes great CS people are also great developers, and sometimes CS approaches are hugely helpful in understanding development. I'm guilty of this myself! For example, I think the fact that computers evolve with unprecedented speed is itself an "eternal" (at least for now) fact that needs to be understood and applied. I argue strongly that this fact, when applied, changes the way to optimally build software. In fact, that's the argument I'm making now!

    Nonetheless, the contrast between CS-mind and Development-mind exists. I see it in the tendency to stick to practices that are widely used, accepted practices, but are no longer optimal, given the advances in computers. I see it in the background of developers' preferences, attitudes and general approaches.

    Conclusion

    The problem in essence is simple:

    Math people learn the history of math, get to the present, and stand on the shoulders of giants to advance it.

    Good software developers master the tools they've been given, but ignore and discard the detritus of the past, and invent software that exploits today's computer capabilities to solve today's problems.

    Most software developers plod ahead, trying to apply their obsolete tools and methods to problems that are new to them, ignoring the new capabilities that are available to them, all the while convinced that they're being good computer science and math wonks, standing on the shoulders of giants like you're supposed to do.

    The truly outstanding people may take computer science and math courses, but when they get into software development, figure out that a whole new approach is needed. They come to the new approach, and find that it works, it's fun, and they can just blow past everyone else using it. Naturally, these folks don't join big software bureaucracies and do what everyone else does. They somehow find like-minded people and kick butt. They take from computer science in the narrow areas (typically algorithms) where it's useful, but then take an approach that is totally different for the majority of their work.

  • The Government wants to Help Uber’s Software Quality

    It's reported that New York City's Taxi and Limousine Commission (TLC) wants to pre-approve new software releases by ride companies like Lyft and Uber. Since the TLC is well-known to be heavily staffed with software experts, what can be bad about this idea? Other than just about everything, that is?

    The proposal

    Here's what they're saying:

    [excerpt from the reported proposal]

    Uber and Lyft have to buy smartphones and give them to the TLC because the Commission runs such a tight budget that there's no way it could afford the required thousands of dollars. Oh, wait … the TLC's 2015 revenue is projected to be $545.6 million, with expenses of $61,045,000. That leaves roughly $480 million, which is undoubtedly already committed to something or other, which is probably terribly important.

    Let's assume it happens. How is it going to work? Uber gives a release to the TLC, which takes exactly how long to test it how rigorously by what means? By the time it gets around to organizing to test one release, another will have arrived. So the pressure will immediately come to have fewer, larger releases. Then will come the time when the TLC approves a release and there's a bug. There will be commissions, reviews, and a big operation will be set up to implement industry best-practices, government-style. Things will get even slower and longer, and government tentacles will start weaving their way into Uber's software development organization. In the end, New York will end up getting a small number of releases, way after the rest of the world has them, buggier than everyone else, and the costs will be passed on to the drivers and riders.

    Why?

    [the TLC's stated rationale]

    Right. Sure.

    The Reality

    Governments can't build software that works in any reasonable time. See this.

    No matter how hard they try, software testing in the lab just doesn't work. See this.

    They will press to have fewer releases, when more frequent releases are the key to good software quality. See this.

    Finally, most important of all, we don't need to be protected, thank you very much. If it doesn't work, people will stop using it, and the company will either fix its problems or go out of business. That's the way the greatest wealth-creating and poverty-eliminating system ever invented works.

  • The Distributed Computing Zombie Bubble

    Distributed computing is a trend whose time has come … and gone. Well, not completely. If my computers have to ask your computers a question, that's best done using something like "distributed computing." But to be used by a single software group to serve their organization's needs? Fuhgeddabouddit.

    The early days of distributed computing

    In earlier days, there were lots of computing problems that were too large to be solved in a reasonable period of time on a single computer. If it was important to cut the time to finish the job, you had to use more than one computer, sometimes lots of them. This was frequently the case during the first internet bubble period, for example, when the concept of “distributed computing” really got traction. The idea was simple: in order to serve lots and lots of people with your application, a single computer couldn’t possibly get the job done without making everyone wait too long. So you wrote your application so that it could use lots of computers to serve your users; you wrote a “distributed” application.

    It’s always been harder to write distributed applications than non-distributed ones, and of course there’s lots of overhead in moving data from one computer to another. But if you can’t serve your users with a single-computer application, you bite the bullet and go distributed.

    Distributed computing today

    The most common form of distributed computing lives on today, more often called "multi-tiered architecture." In a simple three-tier architecture, for example, web-server computers front-end application-server computers, which in turn front-end the computers that run the database. The idea is that, except for the database tier, it's easy to add computers to handle more users, and by doing much of the computing somewhere other than the database server, you let the database handle a higher load than it otherwise could.
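    The tiering just described can be caricatured as three functions, one per tier. The names and the toy "database" are invented for illustration; in a real deployment each function would be a pool of machines with a network hop between them:

    ```python
    # Toy sketch of a three-tier architecture as plain function calls.
    # All names and data here are hypothetical illustrations.

    def database_tier(query):
        # the single tier that is hard to scale out
        fake_table = {"user:42": {"name": "Pat"}}
        return fake_table.get(query)

    def application_tier(user_id):
        # business logic runs here so the database does less work
        row = database_tier(f"user:{user_id}")
        return {"greeting": f"Hello, {row['name']}!"} if row else None

    def web_tier(request_path):
        # parses the request and renders the response
        user_id = request_path.rsplit("/", 1)[-1]
        result = application_tier(user_id)
        return ("200 OK", result) if result else ("404 Not Found", None)

    print(web_tier("/users/42"))  # ('200 OK', {'greeting': 'Hello, Pat!'})
    ```

    The point of the split is that the web and application tiers can be replicated freely, while the database tier remains the hard-to-scale bottleneck.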

    There's a more elaborate form of distributed computing that also has a strong fan base, sometimes centered around a service bus. Other people call it SOA (a service-oriented architecture). These are slightly different flavors of distributed computing, often found together in the same application.

    Like most ways of thinking about software, the people who love distributed computing learned to love it and think it's right. Period. Just plain better, more advanced, more scalable, more all good things than the stuff done by those amateurs who run around being amateurish.

    The impact of computer speed evolution

    As I've mentioned a few times, computers evolve more quickly than anything else in human experience. Do you think today's computers can handle more load than the computers of the era when distributed computing took its present form? Is it just possible that, for most applications, a simpler approach than distributed computing in any of its forms would get the job done?

    Multi-core processors

    We all know about Moore's Law, I hope. But people don't think so much about the impact of multi-core processors. Simply speaking, "cores" put more than one computer on the chip. Physically, you still have a single chip. But inside the chip there are really multiple computers, one per core, each running completely independently of the others. And the way the cores are built, you actually get two threads per core — each thread can be thought of as an independent execution of a program. So, in a sense, you've got "distributed computing" inside the chip!

    Let's take a quick look at one of those chips. Here's one of the latest from Intel.

    [Intel 15-core processor]
    This is one awesome chip! It's got

    • 15 cores, supporting
    • 30 threads, and can support
    • 1.5TB of RAM
    • 85GB/s of memory bandwidth, plus
    • over 32MB of on-chip cache

    This is incredible. In the past, you might have 3 computers on each of 3 tiers, each with a robust 16GB of RAM (who would ever need more??), for a total of 9 computers with about 150GB of RAM. Connected by dirt-slow (by comparison) ethernet. Here, you've got 2-4 times the number of threads, 10X the total RAM, all in a single chip, no bopping around in the ethernet slow lanes required. Who needs distributed computing when you've got one of these babies?!
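    To make the "distributed computing inside the chip" point concrete, here is a minimal sketch using Python's standard library to spread work across the cores of one machine. The workload is a stand-in; the pattern is what matters:

    ```python
    # Fan work out across the cores of one machine using the stdlib.
    # handle_request is a placeholder for real per-request work.
    from concurrent.futures import ProcessPoolExecutor

    def handle_request(n):
        return sum(i * i for i in range(n))

    def serve_batch(requests):
        # one worker process per core by default: "distributed
        # computing" without ever leaving the chip
        with ProcessPoolExecutor() as pool:
            return list(pool.map(handle_request, requests))

    if __name__ == "__main__":
        print(serve_batch([10, 100, 1000]))  # [285, 328350, 332833500]
    ```

    When the work is I/O-bound rather than compute-bound, swapping in ThreadPoolExecutor exploits the threads the chip advertises instead of full processes.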

    Conclusion

    Clearly, all the folks who regularly attend services at the Church of Distributed Computing didn't get the memo. This is not new news — except to the SOA and enterprise bus enthusiasts! There's no way mere facts are going to cause them to stray from their life-enhancing faith!

    But for the rest of us, it's clear. Use those cores. Use those threads. Make sure there's lots of RAM. And enjoy the numerous, multi-dimensional benefits of the simpler life.

  • Math and Computer Science in Academia

    Math and music are incredibly inter-related, as has been understood at least since Pythagoras. But they are never studied in a single academic department. Math and music are arguably more intimately bound than math and computer science. But math and music are never in the same department, while math and computer science frequently are. Hmmm….

    Math and Computer Science are joined at the hip in Academia

    Math and Computer Science are so intimately related in academia that they are frequently part of the same department. This is true at elite institutions like Cal Tech.

    Math and Computer Science are in the same department at private liberal arts schools, too, like Wesleyan.

    They're a single department at major state universities, like Rutgers.

    Same thing at lesser state schools. Here's how it goes at Cal State East Bay.

    I make no argument that this is universal. Don't need to. If you search like I did, you'll find that putting math and computer science in a single department is a common practice.

    Why are Math and Computer Science so Academically Intimate?

    Most people seem to think that math and computer science are pretty much the same thing. Consider this:

    • Most "normal" people who try either of them don't get very far.
    • The people who are way into either of them are really nerdy.
    • If you're good at one of them, there's a good chance you'll do well at the other.
    • They are incredibly detail-oriented. They're full of symbols and strange languages.
    • What you do doesn't seem to be physical at all. What are you doing while programming or doing math? Mostly staring into space or scribbling strange symbols, it seems.
    • You can write programs that do math, and math applies broadly to computing.

    Meanwhile, there are other remarkably similar things that don't end up in the same department. Consider the "life sciences." They all have loads of things in common. Everything they study starts life, develops, lives for a while, maybe has offspring, and dies. DNA is intimately involved. Oxygen and carbon dioxide play crucial roles. But when have you ever seen a department of botany and zoology? Like never, right? In the humanities it's just as extreme. Ever hear of a department of French and German? Academics already fight enough among themselves without that…

    Academics clearly think that math and computer science aren't merely similar or highly related; if they were, they'd be treated the way languages or life sciences are. A broad spectrum of academics think they're so interwoven that there are compelling reasons for studying them together. Thus a single department that has them both.

    Math and Computer Science, a Marriage made in ????

    It's a common practice for math and computer science to be studied together. Obviously, most people have no trouble with the concept. Of all the things to question or worry about in the world, this seems pretty low on the list.

    I would like to change this. I'd like to cause trouble where there is none today — or rather, I'd like to EXPOSE the deep-seated, far-reaching, trouble-causing consequences of the fact that everyone thinks it's quite all right for math and computer science to be treated as two sides of the same coin. In fact, I will argue that the math-computer-science marriage is just fine for math — but the root cause of a remarkable variety of intractable problems that plague software development.

    Note that I did a quick shift there. I have no problem with math and computer science being together. They kinda belong together. My problem is that everyone thinks that you study computer science in school so that you're qualified to do software development after graduating. And that software development shops require CS degrees, and pay more for advanced degrees in CS, on the theory that if some is good, more must be better.

    I will flesh this out and explain why it's the case in future posts. But I thought throwing down the gauntlet was worth doing. Or at least fun!

  • High-IQ Programmers: the Problems

    You've got really smart programmers. Problem solved, right? NOT! They can have issues, too. Just different ones.

    Smart Programmers

    Everyone knows that programming is hard. Everyone knows that really smart programmers can be many times more productive than average programmers. Everyone knows that your project's chances of success go way up if you have smart programmers working on it. But not everyone knows that there is a collection of flaws to which really smart programmers are particularly susceptible.

    The reason is pretty simple. Programmers are people, and people have problems! But different kinds of people often have different kinds of problems, and truly exceptional people may have problems many of us are not familiar with. A person who's really tall has problems most of us don't have, like bonking his head on doorways. A person who's really famous has problems most of us don't have, like being unable to eat a quiet meal in a restaurant. And there can be a dark side to a really smart programmer's most admirable qualities.

    The "Problem" of being good at solving really hard problems

    My new book on Software People (here's a description, and here's the book on Amazon) has a whole section on the problems endemic to high-IQ programmers. Here is an excerpt:

    Are good at solving hard problems. The ability to solve hard problems distinguishes them from other people. They ignore simple problems. They disdain working on them. When a simple problem can’t be avoided, they go to great lengths to turn it into a hard problem. People who are good at solving hard problems like hard problems, and can find them in places where other people see no problems at all. Sometimes this is a good thing, like when you encounter a genuinely hard problem that can’t be avoided. Smart people get bored easily. Smooth, straight roads are boring. Some smart people will actively change directions and seek out a problem they suspect will be hard because it is hard. Often, smart people seeking hard problems overlook sophisticatedly simple solutions because they are simple, or spend loads of time solving a really hard problem that actually didn’t need to be solved.

    If you've got a hard problem, you darned well better have people who are good at solving problems that are hard. Otherwise, you're screwed. But it happens often enough that programmers who are good at solving hard problems are really proud of that fact (why shouldn't they be?). Their self-identity is tied up in that ability. 

    What you really want is a programmer who is capable of solving really hard problems, but feels no need to demonstrate that ability unless it's really needed. I've definitely met people like this, but boy are they rare! You're talking about a super-nerd who is amazingly humble.

    Climbing a Mountain

    Suppose there's a mountain your team has to ascend. There's only one good mountain-climber in your group, and he's an amazing one — famous for his ability to tackle near-impossible climbs. You and your team are standing at the foot of a mountain. Naturally, you turn to your expert.

    Your expert, being an expert, scopes out the mountain. He sees lots of things that the normal people in your group miss. He spots a hard-to-see, tricky path that avoids the tough parts and makes the ascent a piece of cake. He also sees a route that starts out looking smooth, but has a pulse-pounding section that no one could make without his expert knowledge, experience and guidance. And then there are the other routes.

    Depending on the route he chooses, the expert sets up one of two reactions at the end of the climb:

    1. Boy, what an easy climb! That mountain wasn't so tough after all!
    2. We got to the top, but we almost died on the way. If it hadn't been for X's amazing skills, we would be calling for helicopters to remove the injured and the dead at this point. Thanks, X!

    Choice number 1: X's amazing skills are nearly invisible, because it "wasn't so hard after all" — but only because X uniquely saw the hard-to-see route that avoided the difficulties.

    Choice number 2: X's amazing skills are on full display, demonstrated in vivid 3-D to the team members, as he accomplishes something no normal mortal could pull off.

    Hmmmm. Choice 1: make a hard thing simple, which only I could do, but in the end, everyone is left with the impression of how simple and easy it was. Choice 2: take a tough-but-possible route, in which my amazing powers are on full display. The smart person may not even be aware of how his guts and ego are pulling him to Choice #2. It's just human nature.

    Conclusion

    There's lots more in the Software People book, where this came from. On the one hand, outstanding software people are people. On the other hand, they have issues that are unique to their smartness. You want smart people on your team. Definitely. But you also want to help your smart people be even better than they already are by confronting and overcoming their unique problems.

  • Internet Driver’s Licenses Needed for Users

    We give kids sex education. We give them driver education, and require a driver test and license before driving. But we let any fool onto the internet to wreak whatever havoc they can on themselves and others without a second thought. It's time for a change!

    Education for Meaningful Use

    Education on the basics of how the internet and associated technologies work, and on how to control, respond to and interpret what you see, is totally neglected. There are no significant efforts that I know of to make people educated consumers of this important, ubiquitous service. But there is a more important issue…

    Education for Safety

    By far the most important subject for internet education is safety. Maintaining internet safety has some similarities to general safety, but is different in important ways.

    Internet "driving" safety

    The most important aspects of safety while driving are avoiding driving while impaired in any way, and paying sharp attention to the road and other vehicles at all times. Driving while impaired by drugs or alcohol, or while texting or talking on the phone, are recognized risk factors.

    So imagine how hazardous internet driving must be when people don't even know how to read the road signs (the URLs), and can't tell that they've wandered onto a road constructed by criminals specifically to steal your car, drive it to your bank and take out a big withdrawal! But that's exactly what happens! Here's an example of a more brazen attack (image from a good guy, Yoo Security), demanding that you send the money yourself: [screenshot: fake ICE ransom demand]
    Unfortunately, there are criminals out there who have grown far beyond simple smash-and-grab operations. These sophisticated criminals with a long-term view trick you to "drive" onto their criminally-constructed "road" for the sole purpose of making your car an instrument for stealing from other people or organizations. They can make your computer into a zombie to participate in botnets. It can serve that purpose for minutes or years without your awareness. Is the problem big? You betcha. There are more computers that have been hi-jacked into botnets (maybe yours!) than most people are aware of:

    [chart: the scale of computers hi-jacked into botnets]
    Sometimes, of course, the criminals are stupid, greedy or malicious — I guess those are the drop-outs from the "criminals should be good citizens" certification program. So your hi-jacked device could slow to a crawl, do weird things, look over your shoulder as you type until they get the information needed to drain your bank account or max out your credit card, or even (just because it's fun!) wipe out your machine while leaving some cute "It was me! Have a nice life!" message on your screen.
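    Reading the "road signs" can even be partially automated. Here's a minimal sketch (my own illustration, not a complete phishing detector; the allow-list of domains is hypothetical) showing how checking a link's real hostname defeats the classic trick of hiding a trusted name inside a criminal one:

    ```python
    # Sketch: read the "road sign" (the URL) before trusting a link.
    from urllib.parse import urlparse

    # Hypothetical allow-list: domains the user actually banks with.
    TRUSTED_DOMAINS = {"chase.com", "bankofamerica.com"}

    def looks_trustworthy(url):
        """True only if the link's real hostname belongs to a trusted domain."""
        host = (urlparse(url).hostname or "").lower()
        # "chase.com.evil-site.ru" is neither "chase.com" nor "*.chase.com",
        # so a deceptive prefix does not fool the check.
        return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

    print(looks_trustworthy("https://www.chase.com/login"))        # True
    print(looks_trustworthy("http://chase.com.secure-verify.ru"))  # False
    ```

    Nothing fancy: the entire lesson is that the hostname is read right-to-left, which is exactly what untrained "drivers" don't know.
    
    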

    Internet E-mail fraud

    How often do you get a paper letter purporting to be from your bank, asking you to mail back your account number just so they can verify that everything's OK? Never, right? That's because almost no one would respond as requested, and criminals are the supreme capitalists — they abandon unprofitable efforts before long.

    But how about letters on the internet, i.e., e-mail? Along with everyone I know, I get an amazing number of criminal solicitations every day, ranging from the laughable (at least to me) to the amazingly credible. Data-driven capitalists that they are, the only explanation for the persistence of these efforts is that enough of them work to cover the costs and trouble of running the schemes, and certainly to beat getting a legal job. I've seen fewer solicitations from Nigeria lately, but the slack has been taken up by Libya.

    Here's one of the new breed from Libya:

    [screenshot: scam e-mail invoking Libya]

    Here is a somewhat more plausible one from a place that really could be your bank:

    [screenshot: phishing e-mail purporting to be from Chase]

    Conclusion

    Uneducated internet users cause billions of dollars of harm to themselves and others every year. You'd think this would produce an outcry for education from those users and the people who know them. You might think it would merit a bit of attention from the institutions that so assiduously and expensively educate, authorize, license and otherwise keep us on the straight and narrow. When I'm in Central Park in New York, there are rangers watching my every move; they set me straight when I ride my bike where I'm not supposed to, or walk in one of the ever-changing restricted areas. The conclusion is apparently obvious: every move I make in the Park is more worthy of watchful restriction by people in uniforms than the millions of actions on the internet that seem, at least to me, far more destructive. I must be missing something.

  • Software Problems: the Role of Incentives

    When lots of human beings work at something for a long time, they tend to figure out how to do it. Building software appears to be a huge exception to that rule. With decades of experience under our belt, why is it that we still can't build good software?

    One of the reasons software projects so often fail and improved methods aren't used appears to be that the people involved have perverse incentives.

    Incentives

    Everyone knows about incentives. They work. Even when we know someone is using incentives to get us to do something, we're more likely to do the thing with incentives than without them.

    Perverse Incentives

    Whether an incentive is perverse or not is in the eye of the beholder. From the incented person's point of view, an incentive is an incentive, and as we know, incentives work. But we normally call incentives "perverse" when they incent people to do something that most other people would agree is a bad thing.

    Perverse Incentives: Mortgages

    The housing boom leading up to the financial crash of 2007 was clearly driven by perverse incentives on multiple fronts. Borrowers were tempted to take what seemed to be easy money. Mortgage companies could make piles of money in fees by packaging up risky mortgages and passing them on. Rating agencies could collect loads of fees by not looking too closely. And the bankers at the top of the food chain made themselves lots of money by creating and selling fancy instruments that ignored the underlying realities and the ultimate consequences of their actions. Then it all came crashing down. Many were hurt, the big guys who made the most money least of all.

    Perverse Incentives:The VA System

    It has recently come out that more than 120,000 veterans are experiencing long waits for care at VA hospitals, even while official reports showed minimal wait times, enabling managers to collect incentive payments. If there ever was a case of perverse incentives leading to bad behavior, this is it.

    [chart: VA wait times vs. officially reported wait times]

    Perverse Incentives in Software

    Software is so rational, so organized, the people involved are so smart and well-educated — surely perverse incentives aren't driving behavior in software, are they?

    Sorry, sweetie, perverse incentives are a human issue. Humans respond to incentives, perverse or otherwise. And as it turns out, there is a rogue's gallery of perverse incentives operating in software — I will only scratch the surface here!

    Estimates

    Estimates are perverse all by themselves.

    They are also a GIANT BILLBOARD incenting EVERYONE involved in the process to make every estimate as long as they can possibly get away with. Since very few people (often including the programmer involved!) have any idea how long something *should* take, the estimates are typically accepted as is; and then managers often double them before passing them on. Why is this perverse?

    The organization probably would like to get something done in the shortest reasonable time. But the programmers and project people are measured on whether they beat or miss the estimate. The longer the estimate, the better the chances of avoiding failure. It's that simple. It just makes it all the more maddening that, even with inflated estimates, things still go wrong!

    Requirements

    The whole modern software development process starts from requirements, so gamesmanship around requirements is front and center. Estimates are based on requirements, and therefore controlling and fixing the requirements is central to the effort of manufacturing "success." The system may fail, the users may hate it, but if it meets the "requirements," the people running the project get to declare "success." What you'd like is for the project to succeed when the needs of the business are met. The perverse incentive is for the people delivering the system to define "meeting the requirements," and then control the requirements to assure they're met, regardless of what disasters befall the business.

    False reporting

    Just like at the VA, project managers are highly incented to avoid reporting problems — typically using big fancy reports that are chock full of meaningful-seeming stuff but are in fact just garbage. Just like in the mortgage-driven financial crisis, everyone involved is incented to declare success, take their rewards, and kick the can down the road for the next guy. Eventually, with shocking speed, it all comes crashing down, just like the financial system, and just like the mere 4 days between the laudatory article about how great Cover Oregon was going to be and the admission of total failure.

    False Assessments

    Here's where the rubber meets the road. Who is incented to blow the whistle on a failing software project? How, when and by whom is a software project judged to have failed? Most importantly: what are the consequences of having failed?

    We all know the answer. Who has even heard of a software engineer who was fired for failure to deliver? And the people in charge? Never. It wasn't their fault! And the project didn't fail anyway! The requirements changed every month, the target kept moving, and blah, blah, blah.

    Conclusion

    Your kid comes up to you and asks, "Can I play my video game now?" You briefly reflect that when you were that age the question was "Can I go out and play now?" But the kid isn't interested in your reverie, and is bouncing around waiting for your "sure." Being the aspiring adult you are, you act responsibly and ask, "Have you done your homework?" There's a brief pause. The kid is doing a quick risk-reward calculation. If he says "yes," he probably gets to do what he wants. But you might ask to check. Hmmm.

    This is the breeding ground of perverse incentives. We all learn to balance honesty, openness and getting what we want. Some of us go for honesty and openness, deciding that anything else just isn't worth it. But loads of people make an informed judgment on a case-by-case basis, much like the kid and his homework.

    Whatever the morality of the case, the facts are clear: software projects fail left and right, and perverse incentives are a significant factor in making them fail. Without changing the incentives, we're unlikely to abandon the Bad Old Way of building software and achieve success.

  • Software People: Book just Published

    I've just published my book on Software People — an insider's look at what programmers are like. It's got the same tacky cover design as the three books already publicly available:

    [image: Software People cover, matching the Build Better Software Better series]

    I attempt to cover material in the book that I haven't seen elsewhere. Here are some of the topics:

    • A description for outsiders of all the stuff you've got to know in order to be a programmer — learning a language is just a tiny bit of it!
    • A statement of the programmer's dilemma — how all-consuming mastering even a slice of software usually is, and the difficult trade-offs you're then faced with involving the other skills you need to succeed in an organization and in life.
    • A discussion of how there are levels and levels of software skill — it isn't like learning to drive a car. Similarly with productivity.
    • An extensive discussion of the cultural divisions and wars that blaze through the software community, with mutually incompatible "religions" living in separate colonies, looking with disdain and pity at those who follow false gods.
    • How people who are excellent at software, far from being honored, are often diminished and marginalized.
    • Lots of material about hiring. Who decides, on what basis, common mistakes.
    • A discussion of the deep-seated cynicism that infects a large number of programmers.
    • Technology organizations, managers and decision making.
    • Typical patterns I've seen in software people.
    • An extensive discussion, with examples, of the flaws that are characteristic of high-IQ programmers.
    • Finally, a discussion of the role of the CEO in a company where software plays a key role.

    I've been at work for a long time on my series of books on how to Build Better Software Better. The books in the series have circulated in draft form, and each has undergone multiple revisions over a period of years. I've already released my basic books on Software QA, Software Project Management and Wartime Software. The one on People underwent 9 major revisions. Software People is less technical and more readable by civilians than the others.

    I have a few more that are no longer undergoing revisions and are about ready for general circulation. They are:

    • Software Business Strategy. There are some things that are unique to running a software business that apparently are not taught in business schools, and are common errors in the software businesses I see. I spell out the problems and solutions in this book.
    • Software Product Design. You'd think we'd have it down by this point. But I see software product design going wrong all the time, with the same mistakes made over and over. In this book I describe the best methods for creating successful software products and avoiding the common mistakes.
    • Software Evolution. When you see software built over decades and decades, patterns emerge — and it's far from just onwards and upwards! These software patterns are strong and they repeat, like the well-known Innovator's Dilemma, only much more software-specific. They have amazing predictive power.

    I will publish the rest of the books as time permits. Meanwhile, I'm pleased that I've finally released the Software People book for Kindle, more than 12 years after I circulated version 1.

  • How to Achieve Cybersecurity: Motivation

    The problem is big. It's getting bigger. Here's one summary of what's been happening:

    [chart: summary of recent hack attacks]

    What's the problem here? Is it really so hard to achieve cybersecurity?

    I suggest that the issue is clear and simple: the people in charge of keeping your information safe are not motivated to keep it safe. The consequences to them personally of failing to keep it safe are minimal, and so they simply don't take the trouble to do it.

    Motivation and consequences

    Whether we like it or not, people are motivated on the positive side by rewards, and on the negative side by punishments. If you see people acting in a certain way, you ask, what is the incentive that is encouraging that behavior? The incentive could be positive (you get something good) or negative (something bad that used to happen when you did that thing no longer happens). A great deal of human behavior can be explained by personal incentives: rewards and punishments.

    Incentives in Cybersecurity

    So what happens to people in the companies when one of these big data thefts happen? Are the front-line drudges punished but the executives given a free pass? Do the people where the buck supposedly stops lose their jobs but the worker bees who were just executing according to a bad plan let off lightly? Answer: there's some bad publicity, but no one loses their job, no one's pay is docked, nothing!

    If no one at the companies even went through the motions of trying to keep your data secure, the publicity might be bad. But that's what regulations are for — CYA. The company claims it was following all the regulations that are supposed to keep data secure. So how is it their fault if, in spite of all their excellent, by-the-book efforts, the data walked out the door anyway? Case closed. The company and all its employees, from top to bottom, are off the hook!

    Incentives and Motivations

    When a company loses money and market share, the CEO is likely to lose his job. When a person in accounting delivers bad data, they're likely to lose their job. When a department does really well, the people in charge are frequently given bonuses or promotions. They get better jobs and make more money. In most industries, sales people are incentivized by commissions — if they sell more, they make more money. It's everywhere. To encourage good behavior, reward it. To discourage bad behavior, punish it.

    Everyone says they're concerned about protecting your data. They use as evidence the fact that they conform to all relevant regulations and spend lots of money on security. So if, in spite of all this, the data is lost, it can't possibly be their fault!

    Does that mean the regulations themselves are bad or ineffective? No one is claiming that (except for me and a few other voices in the wilderness), but think about this: when has any regulator lost anything because they were doing a bad job at regulating? The very notion boggles the mind!

    Bottom line: they have no incentive to protect your data! We know this because, when people are properly motivated to get a job done, they somehow find a way to get it done. The fact that they are unmotivated and have bad theories practically guarantees failure.

    Conclusion

    Lack of motivation.

    No incentives.

    Ineffective regulations.

    Therefore, cyberthefts will continue unabated until this changes. Q.E.D.
