Author: David B. Black

  • Methods for Effective CyberSecurity

    The methods for achieving effective cybersecurity for a large class of applications are simple and obvious, but almost never implemented. If the methods were implemented, they would prevent the kind of massive, high-profile data loss that has increasingly been in the news. The methods are common sense to most normal people – but as we all know, computer “experts” are anything but normal. The industry needs to get it together, stop spending massive amounts of money on futile efforts to secure consumer data, and start implementing common-sense measures that work!

    The current approaches to CyberSecurity are fundamentally flawed

    That’s why they don’t work! It’s like playing pool, missing a lot of your shots, and spending lots of effort gesturing, jumping and grunting as each shot fails to achieve its objective – do you think your problem is not jumping vigorously enough or grunting loudly enough? That’s what most enterprise responses to cyber-insecurity amount to. Increasing the money spent on things that don’t work won’t suddenly make them start working.

    The basics

    No matter what methods we use, if we continue to deploy large numbers of security guards who are nearing retirement against small, smart, fast-moving ninja bad guys, we’ll lose. If we continue fighting the last war, we’ll lose. If we continue to think that this game is all about how high and thick the walls of the castle are, we’ll lose.

    New approaches, new methods

    They’re not really new – like most good ideas, they’ve been thoroughly proven in other domains. We know they work. It’s a matter of adapting them so they apply to our computer systems.

    A lot of smart computer people have worked on the security problem for a long time. The issue isn’t something abstruse like better encryption algorithms. It’s simple!

    First, realize that anybody who walks in the door could be a bad guy.

    Second, monitor and track the valuable stuff that you don’t want walking out the door.

    Both of which, believe it or not, we fail to do today inside computer systems!

    How retailers do it

    Retailers with lots of low-value goods like grocery stores have store monitors and checkout areas. Anyone could be a thief, so people are assigned to monitor actions accordingly. Some goods may be valuable and easy to hide, like razor blades. Those are often displayed, but require a store employee with a key to let you get them.

    Clothing stores frequently have security tags on every single item. The tags are removed using a special tool during the check-out process. If you try to walk out of the store with an item that is still tagged, alarms ring and security people grab you.

    Stores with very high value goods like jewelry stores have locked cases, and a heavily human approach to security. Basically, at least one person watches each customer (and sales person!) with jewels at all times. They are disciplined about carefully managing the number of items outside a locked case at any moment. While the guards watch the customers (i.e., the potential thieves), what they really do is watch the jewelry. They track each item until it’s been bought or safely returned to its case.

    The retail approach to securing valuable items is clear: using whatever combination of automated and human means that make sense, track every valuable item, and assure that when the item goes out the door, it has been cleared to go out with the person it’s going out with.

    Applying Cybersecurity methods to retail

    What would retail look like if we used the kind of methods used by computer experts?

    First, every store would be surrounded by thick, high walls. No display windows! There would be strictly controlled ways of getting in – think TSA security at an airport. Further imagine that the world was awash with fake and stolen IDs, so that while getting into the store legitimately is odious, for a skilled bad guy it’s not too hard.

    Now imagine that once you’re in, there is no one watching the goods, there are no security tags on the clothes, no security cameras and no guards. You can grab a string of shopping carts, pile them high with goods, and wind slowly through the aisles. At check-out – well there is no check-out! You’ve been thoroughly vetted on the way in, after all, so you must be OK. When you’re done “shopping,” you can just leave! With your mountains of goods!

    Of course, most visitors to this imaginary store are legitimate. They put up with the horrible entrance gauntlet because all stores have something like it. They get what they need and somehow arrange with the store to pay for it. There’s nothing to stop thousands of bad-guy visitors from walking out with thousands or millions of items each, or millions of visitors from walking out with normal-sized shopping carts. Whatever works.

    You might think I’m exaggerating. I wish I were.

    Applying Retail methods to Cybersecurity

    It’s harder to visualize how retail methods can be applied to computer systems, but the basic concepts are clear. While current cybersecurity focuses on perimeter defense (like TSA security for stores), the retail approach would be a bit looser. After all, if the bad guys get in but can’t get away with anything valuable, they haven’t accomplished much, have they? How proud is a bank robber who’s broken into the safe but can’t leave with the dough? How fruitful is his career of crime if, every time he passes the demand note to the teller, she just smiles and says “next customer, please?”

    Applying the retail method to computers requires a completely new approach to tracking what visitors do when they’re inside the computer. While tracking their actions is important, what really needs to be done is to track the “goods,” the valuable data items. The retail approach would differ according to the value of the items. If they’re like clothing, each item would be checked on the way out to make sure it’s authorized to leave. If they’re like jewels (for example, personal information), each item is watched like a hawk from the moment it’s “picked up” by a “customer” (program). Does the customer have a couple of jewels? That could be OK, but we’re more alert. Does the customer have ten or more? Quietly circle the customer, watch the doors, and make sure there’s no escape.
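
    The jewelry-store idea can be sketched in a few lines of code. This is purely my own illustration – the class, names and thresholds are all hypothetical, not any real product's API – but it shows the shape of the method: count the sensitive records each session "picks up," and escalate as the count grows.

```python
# Hypothetical sketch: track the "jewels" (sensitive records) each
# session holds, and escalate the way a jewelry-store guard would.

SUSPICIOUS = 3   # a couple of jewels in hand: watch more closely
BLOCK = 10       # ten or more: circle the customer, watch the doors

class SensitiveDataMonitor:
    def __init__(self):
        # session id -> set of record ids that session has "picked up"
        self.items_held = {}

    def record_access(self, session, record_id):
        held = self.items_held.setdefault(session, set())
        held.add(record_id)
        if len(held) >= BLOCK:
            return "block"   # deny egress and alert security
        if len(held) >= SUSPICIOUS:
            return "watch"   # heightened monitoring, no alarm yet
        return "ok"

# A "customer" quietly picking up record after record:
monitor = SensitiveDataMonitor()
statuses = [monitor.record_access("session-1", f"record-{i}") for i in range(12)]
print(statuses[0], statuses[4], statuses[11])  # ok watch block
```

    The point of the design, as in the store, is that the alarm keys off the goods rather than the visitor's credentials: even an intruder who sailed through the perimeter checks still trips the threshold.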

    The method needs to be extended to apply to the unique circumstances of the computer. Computer bad guys can easily assemble thousands of confederates to do their bidding. The bad guys can dress and act however the boss wants them to. However, they are unlikely to act just like normal shoppers. But I don’t want to take this too far in a blog post – we’re coming up to the edge of methods I’d rather not disclose.

    Conclusion

    Computer systems, corporate and government, will continue to be breached at an alarming rate, which is of course much higher than is publicly disclosed. More money will be spent and people hired. More standards will be set, regulations promulgated and enforced. As should be obvious by now, most of the money will be wasted, most of the people will accomplish nothing, and the regulations will increase costs while making things worse. Unless something changes.

    The problem of cybersecurity can be solved. But it can only be solved if: we acknowledge we’re at war and act accordingly; we apply within the guts of our systems common-sense methods whose principles are clear, obvious and proven in other domains; and we start acting as though we actually want to solve the problem, as opposed to the current strategy of denial, cover-up and blame-shifting.

  • My Anthem Account was Hacked

    I get my health insurance through Anthem. Corporate Anthem was hacked, and the company has made a mess of their customer relations after the hacking, as I've described after receiving their "help." I now see evidence that my personal information was accessed, and Anthem has never told me.

    Anthem and HIPAA

    Anthem is really committed to HIPAA. Here's how they explain it on their website.

    Anthem hipaa

    It's clear from this that Anthem is very committed to privacy and security. Both! Here's some of what they say about privacy.

    Anthem privacy

    And here's some of what they say about security.

    Anthem security

    Anthem clearly had all the bases covered. Except they didn't. What's mind-blowing to me is that, in spite of all the security-privacy-lah-de-dah, someone walked off with the personal information of tens of millions of customers — and no alarm even went off! The breach was actually discovered by an alert grunt in the trenches.

    Anthem sys admin

    Hacking David Black

    Anthem has communicated to its members that they would let them know when they discovered whether any particular member was among those who had been hacked. I haven't heard a thing from them. But I now know that it's likely that my information was stolen.

    I went into the standard Anthem consumer portal a little while ago.

    Anthem header

    I poked around a little, and discovered this little bombshell:

    Anthem last visit

    In other words, "I" had logged in at quarter after one in the morning on Saturday, Jan 31, 2015. However, I personally wasn't logged into Anthem at that time. I was asleep.

    The Good News

    There's good news here! I already knew that Anthem either didn't know whether I'd been hacked or had decided to not tell me, so no change there. My opinion of Anthem was already subzero, so it didn't get noticeably lower. Furthermore, in spite of all this, Anthem executive management will continue to rake in millions, and they're pretty sure that profits won't be harmed:

    Anthem won't hurt earn

    What a relief!

    Conclusion

    Nothing new here. Big corporations comply with all the burdensome regulations, and tens of millions of private records somehow get stolen. The result: lots of face-saving talk that does no one any good, and increased competition-stifling regulation that does nothing to solve the problem. Nothing to see here, people … move along…


  • The Anthem of Cyber-Insecurity

    I'm hoping that people will start writing songs about cyber-insecurity, and that a good one will emerge that will be acclaimed as the "Anthem of Cyber-Insecurity." It will be sung quietly by groups of computer users who hold hands as they hear the details of yet another massive computer breach. While singing, some of the much-abused users will be silently praying that their "protectors" get bombed by Facebook friend requests by identity-thieved replicas of themselves, while others will pray for the end of "help" that isn't.

    The Anthem Attack

    I'm one of those praying users, because I'm a member of Anthem, the company that "lost" the personal information of "tens of millions" of its members sometime in 2014; they're not sure how many, whose records were "lost," or when it happened. Here's a personalized communication I received from Anthem:

    Anthem When

    Anthem has made a priority of communicating with its customers about the attack. When you're in the glare of publicity like this, I'm sure great care has gone into each statement on the case. That's probably why I have received more than one missive with the same date that spins things in different ways. For example, the Feb 13 note above refers simply to "cyberattackers" who "tried to get" private information, raising the possibility that their efforts were foiled by the valiant workers at Anthem.

    Check out the identically-dated but substantially different Feb 13 note below.

    Anthem 1

    In this second attempt, Anthem tells us about "cyber attackers" (now two words instead of one) who executed a "sophisticated attack," and "obtained personal information" "relating to" their customers. I guess it was successful? But maybe not, because the behavior of these guys isn't a felony, it's merely "suspicious activity" that "may have occurred." Furthermore, they carefully state that the personal information wasn't the customers' actual personal information, but merely "related to" said personal information. Hmmm….

    What "May Have Been" Lost

    So what information may have been lost during this incident that may have occurred at some unknown time? A fair amount.

    Anthem 2

    Again, what's clear is that Anthem isn't clear. The information "accessed" (wasn't it stolen?) "may have included names, …" But maybe not, we are led to believe. If the information that may have been accessed may have included my Social Security number, why isn't it possible that all sorts of other information was also accessed? We are supposed to be reassured that "there is no evidence at this time" that this actually took place — a nearly ideal way of phrasing something that is supposed to sound like reassurance, but provides full CYA.

    Anthem Provides Protection

    Anthem has a whole website set up to let its members know what's going on, and to let customers know how they can get protection against the possible unauthorized access of their personal information.

    Anthem header

    Here's what Anthem will do: they'll pay a third party to help you out.

    Anthem protections

    If you get in trouble, you can call the service, and they'll help you out. Meanwhile, your personal information may be in the hands of people who were never authorized to access it. If they are the kind of people who will do "unauthorized" things, who knows what perfidy they'll stoop to?

    Anthem's Additional Protection

    The basic service you get isn't protection at all, as they make clear. Nonetheless, "For additional protection…" — on top of the non-protection they already provide — you can sign up for more. What exactly is this more? Quite a bit! Here's some of it:

    Allclear features

    Wow, and all for free! Let's sign up!

    So you enter your e-mail, get a code, go to the website, enter the code, and finally get to register for protection.

    What happens next? Here's the page:

    Allclear register

    Wow, this is amazing!

    I have a chance to enter into a website a good fraction of the private, personal information entrusted to a giant insurance company which, while under their stewardship, "may have been accessed" by "unauthorized" entities.

    The security geniuses who kept my information secure want me to give it again to a company that they endorse as being wonderful security experts. Anthem was just terrific at keeping my information secure — it goes without saying that their endorsement of the security of this partner they've just picked is rock-solid.

    These guys are bureaucrats. Read this about bureaucratic security cred. And for more, this.

    Summary

    Anthem's revenues are greater than $60 Billion. They can afford to keep customer data secure.

    Anthem's executives are paid enough to do their jobs well. Last year, the CEO made over $16 million and the CFO over $7 million.

    And yet…

    It took a guy at the bottom rung of the ladder to pay attention and notice something was wrong; had he not cared, the outflow of personal data would still be going on, as it had been for an indeterminate amount of time before the alert employee's observation.

    No system or procedure established by the rich, giant entity had anything to do with noticing the breach, much less preventing it.

    Everything about what they've done since exhibits the same lack of attention to detail and I-don't-care attitude that made the breach possible. What they mostly seem to want is to dash off letters riddled with errors and assurances, focused above all on their public image.

    Their offer of "protection" is a cruel joke, exposing the gullible who accept the offer to further dissemination of their private information.

    Conclusion

    I'm waiting for that anthem as I sit, holding hands in a circle with my fellow users, thinking dark thoughts. And I'm as likely to enter my personal data into the Anthem authorized "protection" service as I am to publish it on this blog.

  • Facebook’s Software Quality: the Implications

    I have pointed out Facebook's lack of desire or ability (who cares which?) to deliver software that actually works. I've pointed out that they're hardly alone in this respect. It's important to accept this observation as true, so that you can change behaviors that may have been unconsciously predicated on the supposition that Facebook delivers great software, effectively and efficiently. They don't. So don't hire their people and expect great things to happen, and don't mindlessly emulate their methods or use their tools!

    The Unspoken Assumption

    Facebook is a wildly successful company, worth over $200 billion. I'd like my company to be worth even 1% of Facebook. So I better find out what Facebook did, and learn from it. Facebook is a software company, so their engineers must be smart and effective. I better get some of them in so they can teach us the "Facebook way." And their tools — wow. If Facebook uses something, what an endorsement that is. My guys had better have a real good reason to use something else; I look at what FB's worth and what we're worth — don't we want to be like them? If a tool or method is good enough for FB, it should be plenty good enough for us.

    The role played by software in FB's success

    Here's the logic:

    FB is wildly successful.

    FB is built on software.

    Therefore, FB software must be wildly excellent.

    We already know by examining the quality of FB software that it's crappy. So we have reason to suspect that the virtues of FB software may NOT be a driver of FB's success. Consider this thought: What if FB is wildly successful IN SPITE OF its crappy software? If that's true, the LAST thing you'd want to do would be to infect your reasonably healthy engineers with disease vectors from FB.

    Explaining FB's Success

    There are lots of reasons software companies can become very successful other than having great software. In fact, by the time a company gets large, bureaucracy and mediocrity normally take over, and any great qualities in the software are normally eliminated. The most common reason a software company gets and stays successful is the network effect, the self-validating notion that "everyone" is using the software, therefore I should too.

    The network effect becomes even more powerful when there's a marketplace. eBay is a great example. If you're a seller, you want to sell in the place that has the most buyers. If you're a buyer, you want the greatest choice of things to buy. Similarly, if FB is where all your friends are, you'd better sign up — which makes the network effect even stronger.
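
    The power of the network effect is often approximated by Metcalfe's law: with n users there are n(n-1)/2 possible connections, so a network's potential value grows quadratically while its user count grows only linearly. A quick, purely illustrative sketch:

```python
# Metcalfe's law: potential connections among n users = n*(n-1)/2.
# Ten times the users means roughly a hundred times the connections,
# which is why "where all your friends are" beats "better software."

def potential_connections(n):
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    print(users, potential_connections(users))
# 10 45
# 100 4950
# 1000 499500
```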

    FB, by chance or plan, leveraged the network effect for growth brilliantly. Harvard already had a physical book with everyone's pictures in it, called the Facebook by students. The basic education and promotion problem was solved out of the gate: Harvard students knew what a "facebook" was; they all had a physical one, and used it, if only because their own information was there. For example, here's me in the 1968 edition:

    FB 1968_0002

    However strait-laced those Harvard freshmen looked, a fair number of them were hackers and troublemakers. Here's the very last page of the 1968 FB. Look at the last guy listed.

    FB 1968

    There's a similar entry, with a different photo, at the start of the book.

    Zuckerberg was solidly in the long-standing Harvard hacker tradition. He had already illicitly grabbed student photos for a prior application, which both got him in trouble and made him famous on campus. So when he launched "thefacebook," of course all the Harvard students would check it out. He did this in January. It was used by about half of all Harvard undergrads within a month.

    His next smart move was to open it just to students at a couple more elite schools, and then Ivy League schools. Once established there, he expanded. He did NOT open the doors and let anyone join — he moved from one natural community to the next, letting the network effect do its magic before moving on. Finally, alumni were allowed to join, but only if they had a .edu address proving their affiliation. That's when I joined. Only after a whole generation of students had made it the standard did FB allow their parents to join.

    The quality of the software had nothing to do with this. If people had to pay for it, FB would have flopped. Feature after feature came pouring out of the self-declared brilliant minds of the top people at FB, many of them flops, mixed in with scary experiments with privacy. But it was "good enough" most of the time, it's free, it's where your friends are, what can you do?

    The conclusion is clear: FB grew to be a huge success IN SPITE OF having rotten software quality and development methods that are just horrible.

    The FB environment and yours

    Facebook software development methods and tools are NOT something a small, fast-moving, high-quality software shop should want to emulate. Their quality methods in particular are not only trashed by their users, but also by a fair number of ex-employees. The same thing goes for the computing and server environment.

    If you find a talented ex-FB-er, by all means hire him or her — but only after verifying that they're sick of how things are done at FB and want to work at a high-quality place.

    Above all, don't emulate the actions of FB's leadership. It's the network-effect flywheel that continues to bring eyeballs to their applications, NOT their great software.

    And think about this: if they're so brilliant and such great developers, why have they done about 50 acquisitions in their short life, a couple of which are important to their growth?

  • Facebook’s Software Quality: the Facts

    Facebook is an incredibly successful company, one of the most valuable on the planet. It is natural to assume that a main reason for this is that they've got a boatload of great programmers who produce code that users love. This assumption is wrong. In fact, the widespread adoption of Facebook masks deep, long-term quality issues that are not getting better.

    Facebook Success

    Facebook recently passed $200 billion in market value. Amazing! It has billions of users world-wide and has no serious competition. No one can question FB's success in user count and market capitalization.

    FB 200B

    Facebook Mobile App

    Mobile device use is going through the roof. We are in the middle of a massive, rapid migration from workstations and laptops to tablets and smart phones. This trend impacts FB just like everyone else. At the recent Money2020 conference, a top FB executive laid out the numbers, which are stunning; in short, FB mobile use nearly equals normal web use. If anything is important at FB, it's got to be getting the mobile app right.

    FB mobile

    Facebook Mobile App Quality

    So how is FB doing, this premier, ultra-successful company with no lack of resources to do an excellent job? They've got to be doing way better than the rest of the industry, right?

    Let's start by looking at user reviews:

    FB 3

    Not too bad, 4 stars out of 5, right? But out of more than 22 million reviews, more than a quarter gave 1, 2 or 3 stars, more than 6,000,000 reviewers! Let's look at a few of those reviews. (I didn't scan for exceptionally bad reviews; I just picked off ones that were near the top of the Play store.)

    Here are a couple reviews. Cindy gave 1 star because the app doesn't work at all, and Johnny gave 2 because he suddenly can't avoid being buried in notifications.

    FB 1

    Here are a couple more reviews. The third reviewer gave 3 stars even though the app is basically dysfunctional.

    FB 2

    These are educational:

    FB 4

    The 3 on the left describe things that worked on a prior release that no longer work, which is the cardinal sin of quality testing. Look at Bratty's review awarding 4 stars, even though he/she can't use the app at all. Makes you wonder if anything but 5 stars is good for FB. Jeremy's review sums it up: "you're still not listening to your users." If only 5 stars represents satisfied users, the ratings mean that about half of FB app users have a serious bone to pick. Which is quite a statement.

    FB App Quality in Context

    Compare the performance of the FB app to the performance of your car. Getting a new release of the app is similar to getting your car back from the repair shop, only with little trouble on your part and no expense. Most cars run pretty well — they start in the morning, run through the day, and rarely break down. When you get your car back from the repair shop, it's even better, even less likely to break down.

    Not true for FB. Even though it's "in the repair shop" pretty frequently, the FB "mechanics" all too often find a way to break things that used to work, and fail to fix things that didn't work when it went into the "shop." FB programmers and managers think they're way smarter than auto mechanics, but if the car people performed even a little bit like the FB crew, they'd be out of business. The reality is that, with all their oh-so-highly-educated-and-smart mountains of cool (mostly) dudes, the FB crowd can't come close to delivering the quality that nearly every corner-garage mechanic delivers every day.

    FB quality stinks, and it stinks for their fastest-growing, flagship product. In saying so, I'm simply summarizing the expressed experiences of literally millions of their users. There are ways to achieve high quality software. FB does not lack the resources. The fact that they don't deliver quality and aren't even embarrassed about it tells us that they just don't care.

  • Net Neutrality: It Ain’t Broke, Don’t Fix it

    There is lots of talk about "net neutrality" now, after years of passionate advocacy by partisans. I have a simple response to the issue, driven by my simple-minded engineer's mentality. There's no problem here, so don't you dare try to "fix" it!

    Net Neutrality

    The way "net neutrality" is normally described, it's shocking that it's not already the rule of the land. Opposing net neutrality is portrayed as being like racism, something which is obviously unacceptable in a civilized society. (Just to be clear: discriminating on the basis of race, sex or any other human variation is totally unacceptable to me.) It amounts to evil internet service providers slowing down or discarding network packets from sources of which they don't approve, and speeding up access to approved sources. This could be done for commercial gain, to push some brand of politics, or any number of nefarious motives.

    The argument in favor of net neutrality is normally made in terms of simple fairness: stopping giant ISP's from blocking or impeding access to internet resources customers want. The feared consequences range from high prices and/or poor service for companies whose services threaten the ISP's, such as Netflix, to barring consumers from accessing politically or commercially threatening web sites. Anyone who opposes this view of enforcing simple fairness is accused of being paid off by corporate interests or morally corrupt. Or simply stupid, for not understanding how the internet works.

    I claim that I am none of the above: not bought off, not morally corrupt, not stupid, and furthermore relatively knowledgeable of internet internals.

    It would take a long paper or short book to lay out all the facts and arguments. I don't have the time or the patience. But here are some headlines.

    "Net Neutrality" is all about Innovation-Killing Regulation

    Net Neutrality may be a moral crusade about fairness and equality for many of those who promote it, but the proposed solution is that the same inept crew that raises costs, protects the powerful and stifles innovation in so much of our lives will now be able to wield their magic-killing wands on the internet. It's not about "fairness" — it's about control by a bunch of ignorant, remote bureaucrats.

    Here's a good summary, see the article for more:

    The Internet boomed precisely because it wasn’t regulated. In 1999 the FCC published a paper titled “The FCC and the Unregulation of the Internet.” The study contrasted the dramatic growth of the open Internet with that of the sluggish industries subject to Title II’s more than 1,000 regulations. Sen. Ted Cruz got it right last week when he tweeted that Title II would be ObamaCare for the Internet.

    Amazing as it seems, under these regulations federal bureaucrats in the 1970s decided whether AT&T could move beyond standard black telephones to offer Princess phones in pink, blue and white. A Title II Internet would give regulators similar authority to approve, prioritize and set “just and reasonable” prices for broadband, the lifeblood of the Internet.

    These guys don't know how to build technology. They are incapable of keeping it secure. Their regulations are certain to be obsolete before they're written, and counter-productive.

    You're Afraid Greedy ISP's Might Limit Internet Access?

    Really? Well, just wait until the government gets involved. Once a bunch of bureaucrats operating essentially in secret gets going, it's hard to stop them.

    It's well-known that South Korea has the world's fastest internet connections. But the internet there is anything but free and open. Government-driven censorship is severe. Here are some of the basics:

    Internet censorship in South Korea has been categorized as "pervasive" in the conflict/security area, and also present in the social area. Categories of censorship include "subversive communication", "materials harmful to minors", and "pornography and nudity". Internet censorship has been expressed by the shutting down of anti-conscription and gay and lesbian websites, the arrest of activists from North Korea-sympathetic parties, and the deletion of blog posts by writers who criticize the South Korean president. Censors particularly target anonymous forums; South Koreans who publish content on the Internet are required by law to verify their identity with their citizen identity number. The most common form of censorship at present involves ordering internet service providers to block the IP address of disfavored websites. A government agency announced the planning of new systems of pre-censorship of controversial material in the future.

    ISP problems are Caused by Regulation. The Cure is More Regulation??

    The ISP's, like Comcast, Cablevision, Time Warner, Verizon and the rest, provide the "last mile" of access to the internet. They're the guys who bill you for use. All the rest of the internet just magically happens, supported by a variety of means, mostly advertising.

    The last mile is where the problem is. These guys are mostly descendants of the phone and cable companies. They exist and operate at the pleasure of various federal, state and local regulators. Just like the power companies, they have centers from which their wires weave out to sub-stations, down major streets, branching to local streets and eventually to houses and buildings. More agencies than you can shake a stick at stand in their way at every step, demanding this and that. In exchange, they get a monopoly or close to one.

    Are these nimble, creative, innovative guys? Duhhhh. How can they be? They go to all the trouble to put wiring in, and they try to keep it in service as long as they can, milking every advantage out of it they can. Given all that, I'm surprised things work as well as they do.

    Bottom line: the ISP's are already regulated. That's their problem. Let's not make it worse by adding in federal regulation and spreading it to more of the system. Since when has federal regulation made technology better?

    There are Fast Lanes and Slow Lanes on the Internet. And the Problem Is???

    Advocates of net neutrality are big on talking about how grubby issues of crass money will cause unfavored sites and consumers to be relegated to the slow lanes of the internet, while all the fat cats will cruise on the fast lanes.

    Exactly how is this different from, like, everything else in life?

    There is nothing like "NY Yankees neutrality," for example. Here's the price and the view from the expensive seats:

    [Image: first-row seats at Yankee Stadium]

    And here's the price and the view from the bleachers:

    [Image: grandstand seats at Yankee Stadium]

    How unfair! How unequal! Someone should do something about Yankees neutrality!

    By comparison, all the "seats" on the internet offered by ISP's are just fabulous. Access rates are thousands of times faster than in the past, and at good prices. You can get even faster speeds if you're willing to pay — and that's OK.

    The Greatest Current Threat to the Internet is Apps and Mobile

    "Net Neutrality" is mostly a "what-if" threat, based on the minimal things ISP's have done, and the horrible things someone imagines they could do. Apps, driven by mobile, are a huge, here-and-now threat, growing by the day. As users shift their attention to mobile, they are shifting away from the open, highly competitive web to the walled gardens of the mobile world, which is exactly what monopolistic giants like Apple, Google and Facebook want.

    Here's a good summary, see the article for much more:

    It isn’t that today’s kings of the app world want to quash innovation, per se. It is that in the transition to a world in which services are delivered through apps, rather than the Web, we are graduating to a system that makes innovation, serendipity and experimentation that much harder for those who build things that rely on the Internet. And today, that is pretty much everyone.

    The Internet is Wildly Complex and Rapidly Evolving

    People who complain about net neutrality typically have no idea how the internet works and how it's evolved over time. There's a lot going on; it's not just a set of pipes that get bigger and faster over time.

    This is the part where it's tough for me to limit what I say. While there are people who have spent more time inside the internet and its predecessors than I have, I was involved early, with the ARPANET in 1970 and 1971 when it had fewer than ten nodes, and periodically since then to the present. Here's the ARPANET in 1977:

    [Image: ARPANET logical map, March 1977]

    A good chunk of the fun stuff, both the power and the problems of the internet, comes from the fact that the "internet" is a network of networks, an "inter-network" that connects many networks together, sort of like the interstate highway system connects the states, though much less uniformly than that. Here's an early version of the network of networks:

    [Image: map of the inter-network circa 1985]

    If this were the interstate highway system, some things to note would be:

    • ISP's control the local roads and entrance ramps to the big roads.
    • There are different ways to drive cross-country.
    • If you care a lot about drive time, you get to know the best routes.
    • If speed is really important to you, you take the toll roads to avoid the choke points. This is the origin of Internap, for example; its big early customer was Amazon.
    • If you've got lots of stuff to deliver to many customers in many cities, you pre-deliver it to warehouses near the customers, so that when they order, delivery is fast. We call it a CDN, content delivery network.
    • Special sub-networks are constantly being developed to solve problems, and the people who use them pay for their use. Business as usual.
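
    The CDN idea in those last bullets is easy to sketch. Here's a toy model in Python: content is pre-placed at a handful of edge locations, and each request is served from the nearest one. The city names and grid coordinates are invented for illustration; real CDNs pick edges using network measurements and peering arrangements, not straight-line distance.

```python
# Toy model of a CDN ("content delivery network"): copies of the
# content sit at edge locations near customers, and each request
# is served from the closest one. Names and coordinates are
# invented for illustration.
import math

# Hypothetical edge "warehouses": name -> (x, y) map position
EDGES = {
    "newark":  (0, 0),
    "chicago": (8, 1),
    "dallas":  (10, -6),
    "seattle": (20, 4),
}

def nearest_edge(user_pos):
    """Pick the edge location closest to the user."""
    return min(EDGES, key=lambda name: math.dist(EDGES[name], user_pos))

# A user near Chicago is served from the Chicago edge, not a
# distant origin server.
print(nearest_edge((7, 2)))   # -> chicago
print(nearest_edge((18, 3)))  # -> seattle
```

    That's the whole trick: pay to put copies close to the customers, and the "drive time" for their requests collapses.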

    The fact that the internet is an evolving web of variously connected networks is key to its vitality and astounding growth. Let's stand back and enjoy its continued unimpeded, unregulated growth.

    Worried about Comcast and Netflix? You Shouldn't Be

    Net Neutrality advocates like to create fear with all the things big scary ISP's could do that would be just awful — therefore we have to regulate them before they do those things, as in the Philip K. Dick story "The Minority Report." They also love to recount the charges Netflix has made against Comcast as evidence of actual wrongdoing. In other words, they like to take the side of the monopolist of content (Netflix) against the local monopoly of access (Comcast). Once you dig all the way to the bottom, you realize that Netflix wanted to be able to dump content onto Comcast's network amounting to more than a quarter of its total traffic and demand that Comcast deliver it with uninterrupted regularity — for free, leaving Netflix to keep all the money it charged its customers. In the end, they cut a deal similar to typical CDN deals (see above).

    When you buy HBO, you expect that the cable company and HBO somehow split what you give them — why should you care what they work out? But then when you buy Netflix, net neutrality advocates demand that the cable company deliver it for free. Only on Planet Stupid is this anything like "fair."

    Summary

    A cardinal rule in engineering is "if it ain't broke, don't fix it." Enthusiastic young engineers break this rule all the time. Hard experience usually educates them. Applying that rule to the internet, we get: the internet is a big collection of moving parts and blobs, constantly evolving. It works remarkably well. Parts of it are crappier and slower than they could be, anywhere from 2X to well over 1,000X. Most people who operate various parts of the internet have no reason to care about the ultimate consumer experience and act accordingly. The slowest and crappiest parts of the internet stay in use way past their natural expiration dates, but eventually die off. The biggest entities and/or the most regulated and/or the most monopolistic tend to be the slowest and crappiest of all. They try to implement and/or enforce practices and technologies from many years ago, and do so poorly, at great expense to themselves and everyone involved. Sometimes they act in a nakedly self-interested or "principled" way and make things even worse. But all in all, the consumer experience on the internet has improved with remarkable speed and few glitches compared to almost anything else, and way better than if it had been regulated. So let's leave it alone, and worry about the true threats.

  • Hiring a CTO: the Impossible Dream

    I've been a CTO several times. I've worked with many CTO's. Last time I checked, every one of them was a natural-born human being. No cyborgs, no alien intelligences controlling a homo sapiens body. So why is it when most organizations try to hire a CTO, their requirements clearly demand capabilities and accomplishments that no human being could ever achieve?

    The Impossible Dream

    I've seen a whole pile of CTO job specifications over the years. They are remarkably similar. If I were in the executive recruiting business, I'd be embarrassed to deliver my "custom-built" CTO job requirements document, knowing that all I've really done is some light editing from previous efforts. The word "plagiarism" somehow comes to mind.

    But that's chicken feed compared to the blue whale in the room — this is a way bigger issue than a mere elephant in a room. It's the fact that no actual, living human being born of human parents (to rule out the alien connection) could ever possibly satisfy the requirements! The "requirements" aren't requirements — they're an impossible dream.

    The Impossible Dreamer

    Here are the first couple of verses from "The Impossible Dream," the central song in the 1965 musical Man of La Mancha:

    [Image: lyrics of "The Impossible Dream"]

    The song and the musical are about Don Quixote, hero of one of the greatest and most influential novels of all time. Don Quixote and his faithful companion Sancho Panza go off on a quest to revive chivalry and do good works.

    [Image: Don Quixote]

    One of the most famous episodes is Don Quixote going after the windmill, which he imagines to be a giant. Things don't go well.

    [Image: Don Quixote and the windmill]

    This is the origin of our phrase "tilting at windmills," in which you are taking on an impossible task.

    [Image: tilting at windmills]

    Impossible CTO Requirements

    Part of the trick in writing these requirements is to make it not seem to be an impossible task. I admire the clever writing needed to slide the impossibilities past the average reader, who, frankly, pays little attention to the wording.

    The requirements amount to something like the following list:

    • The successful candidate will be over 7 feet tall and be able to successfully pass under a limbo pole exactly 2 feet high without touching the ground.
    • The successful candidate should be able to crank out bug-free code at the rate of hundreds of lines per hour, while being an empathetic, nurturing leader of technical talent.
    • The successful candidate should have a proven track record of hiring technical talent that dresses in board-of-directors-friendly "business casual," works during normal working hours at the designated location, and is consistently cheerful and friendly.
    • The successful candidate will have demonstrated the ability to be a self-starter, while at the same time executing management directives enthusiastically and without deviation or failure.

    Have I exaggerated? No! I've only slightly edited actual requirements that I've seen in order to protect the guilty.

    Conclusion

    Trying to hire an in-your-dreams-only CTO is business-as-usual in my experience. I try not to get upset about it. And in-your-dreams is not even the biggest problem in CTO hiring; even worse is writing requirements that actively filter out the most promising candidates! This madness is yet another side effect of people without a technical bone in their bodies thinking they can manage technology, because after all, technology is just another thing to manage. This absurdity is so widely accepted in management circles that even solid examples of how ridiculous it would be to act this way in other fields barely whittle away at the deep-set conviction.

    Postscript

    I know, I know, I'm being mean. The recruiters are, after all, doing the best they can with what they have to work with. They won't get very far with their customers (the companies doing the hiring) if they say "that's a stupid requirement, I won't put it in." And in the end, they're measured on the success of the person who eventually gets the job, which is way beyond a document — which people don't take terribly seriously anyway.

    But there's a serious point here anyway: the document could be meaningful and helpful if it specified something that could be satisfied by something less than a celestial being, and if people could get clear about avoiding "there are at least 15 top priority items here — yes, they're all equally important!"

     

  • Joe Torre and Software Development

    Joe Torre had an outstanding run as manager of the NY Yankees baseball team. While managing baseball seems pretty distant from managing software development, there are nonetheless a couple of important lessons to be learned. Put simply, baseball has it right and software has it wrong: if we chose software managers using the common-sense methods that are widely accepted in baseball, our software development track record would emerge from its current long, dismal, always-agonizing depression.

    Joe Torre

    Maybe not everyone knows who Joe Torre is. Now retired, he was a baseball player and manager.

    [Image: Joe Torre as a player, 1982]
    Joe Torre had an excellent career as a player, from 1960 to 1977. He was an all-star 9 times, was NL MVP once, and was the NL batting champion once. Unusually for a baseball player, he had extended playing time at multiple positions: catcher, first base and third base.

    [Image: Joe Torre as Yankees manager, 2005]
    He went on to have a stellar career as a manager, from 1977 to 2010. His Yankees won the World Series 4 times. He was AL manager of the year twice. His NY Yankees #6 was retired.

    Players and Managers

    Are most managers former players? Is Joe Torre the exception? Loads of baseball fans imagine they can do a better job than their home team manager. The owners have their own opinions on the subject. How hard can it be?

    I looked into this question. There is a list of every baseball team manager from the start of the game. The list gives lots of information, including the manager’s history as a player (or not).

    Here are the facts: as of today, there have been 686 managers of major league baseball teams, starting in 1871. Of those, 566 were former players, while 120 were never players. So the numbers show that the vast majority of managers have been former players. Just 17% of managers since 1871 were never players.
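
    Those percentages are easy to verify. Here's the arithmetic as a quick Python sanity check, using only the numbers cited above:

```python
# Sanity check of the manager numbers: 686 total managers since
# 1871, 566 former players, 120 never players.
total, former_players, never_players = 686, 566, 120

assert former_players + never_players == total
print(round(100 * never_players / total))   # -> 17 (% never players)
print(round(100 * former_players / total))  # -> 83 (% former players)
```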

    Is it just Sports?

    My go-to example in music is, of course, Franz Liszt, who excelled as a performer, composer and conductor.

    [Image: Franz Liszt]
    But he was hardly alone. The NY Times says

    Times have not really changed. In Bach's day composers played their music at keyboards and conducted the instrumentalists about them. Beethoven conducted. So did Berlioz, Mendelssohn, Wagner, Mahler and Strauss. In our day composers are still conducting…

    Here’s Gerard Baker, managing editor of the Wall Street Journal.

    [Image: Gerard Baker]

    Mr. Baker is an accomplished writer, and an excellent reader and judge of other people’s writing.

    A pattern seems to be emerging here

    Yes, it’s a pattern. Can you imagine a CFO who can’t add? A managing editor who not only can’t write, but can’t even read? How about a museum director who not only isn’t an artist, but can’t see?

    Let’s apply the pattern to software!

    Oh! Ummmm…

    "Well, software is just another thing that can be managed by good management techniques!"

    "I don’t need to know the details – I manage for results!"

    Can we talk about something else now please?

    Conclusion

    The best qualifications for managing software in general and programmers in particular have never been a hot topic. In spite of all the evidence of massive failure, I doubt it will become a hot topic any time soon. But it should be! Just think about the basics here: however peculiar you may think writers are, do you really think editors don’t need to be able to read and write themselves? You may think of accountants as people with thick glasses hunched over desks with green-shaded lights, but do you really think the CFO doesn’t need to be able to add? Programmers may be weird, but doesn’t similar thinking apply?

    Postscript

    While 83% of baseball managers were players, 17% were not, among them some excellent managers. I'm not saying that only former programmers can manage programming efforts, and I know a couple truly excellent non-programmer managers. But in each case, they do interesting special things that are not widely understood that enable them to achieve excellent results.

  • Cyber-Insecurity and the Maginot Line

    The French built the famous Maginot Line after WW I as the perfect defense against another German attack. We all know how that worked out; it became the textbook example of “fighting the last war.” With computers, the speed of evolution is literally hundreds of times faster than with armaments. That’s partly why in cyber warfare, the vast majority of money and effort is spent fighting the last war, which partly explains why we are so cyber-insecure and why it’s so important to get way smarter about cybersecurity than we are.

    The Maginot Line

    According to the history books, the French (among others) “won” World War I. The French certainly thought so. The French generals definitely thought so.

    The French decided that they wanted to “learn the lessons” of the war, and apply them to preparing for the next war with the Germans.

    They knew that the technology of war evolves. They were well aware that, once they recovered from their post-war deprivations, the Germans would continue to advance the weapons of war. They were confident that heavily armored vehicles (tanks) would evolve from their nascent status during the “Great War.” To make a long story short, after considerable deliberation, they designed and built the Maginot Line as the ultimate defense against German attack.

    The name Maginot “Line” implies that the Maginot whatever was line-like in nature. The reality is richer and more interesting. As this diagram indicates, it was a rich complex of systems, stretching more than 10 miles from the border posts to the back.

    [Image: diagram of the Maginot Line defenses]

    Here, for example, is an element in the Maginot Line.

    [Image: the Hochwald fortification]

    Things like this would contain machine guns and/or anti-tank guns.

    It was built over about 10 years, from 1930 to 1940, and was extolled as a “work of genius” by military experts.

    The Maginot Line at War

    The Germans attacked on May 10, 1940. By May 21, the Germans had the Allied armies trapped by the sea on the northern coast of France. German forces arrived at an undefended Paris on June 14, and forced the French into an armistice on June 22. France, victors in World War I and creators of that work of genius, the Maginot Line, fell in about six weeks.

    How did it happen? In retrospect, it’s pretty simple: the Germans read the French script for how the war was to be played, and refused to play the part written for them. Their tanks simply bypassed the invincible Line, and the French planes were inferior in design and number to the German planes.

    [Image: German Heinkel He 111 bomber]

    Even though the English fed them details of German operations obtained by breaking the Enigma code, French inferiority was so great that they still lost!

    And how could the French possibly have won when the Germans had generals who looked like this?

    [Image: Luftwaffe General Hugo Sperrle]

    Looking back on the Maginot Line

    It’s hard to find a better example of “fighting the last war” than the Maginot Line. But surely everyone learned the lessons of how bad it is to fight the last war, right? Nope. That’s one of the reasons why the Maginot Line serves so well as a metaphor, going well beyond its role in history. It serves as an oft-ignored beacon for what you should not do.

    The Maginot Line and Cyber Insecurity

    We can make ourselves feel comfortable by calling it cyber-security, but the reality is that anyone involved with computers is somehow involved in cyber-warfare, whether as a civilian (most people, the “users”) or as a professional. Most computer professionals like to think they have civilian jobs in the computer industry, but the fact is, they’re involved in cyber-warfare no less than the people who transport military supplies to the soldiers are involved in warfare. Everything they do makes a contribution to either winning the war or losing it.

    How’s the cyber-warfare going? How do most wars go when the leaders refuse to acknowledge they’re at war? Yup, that well. We act in every way like we're at peace, and insist on peacetime software development methods, while on the other side, hosts of bad guys fully acknowledge they're at war, and it's a war they intend to win.

    The leaders of our computer systems insist that they’re doing everything they can to maintain cyber-security. Their words are often backed by money. It’s not unusual for 10% of a company’s IT budget to be spent on cyber-security. Unfortunately, the vast majority of the money and the efforts go to building the computer version of Maginot Lines, systems that the people in charge are convinced are brilliant, but which are in fact generations behind the bad guys who are constantly attacking them.

    There is a natural tendency to fight the last war, no matter what you’re doing or where you work. Many people are aware of this tendency and try to avoid it, just as the people who built the Maginot Line tried to avoid it. They genuinely tried their best to take into account the advances that would take place, and plan for that future state. But the Germans were more advanced than the French planned for, and more clever.

    So what do you think would take place in a field where the rate of advance of the technology is greater than in any other domain of human experience? If it’s hard for people in domains in which patterns and practices advance slowly, how hard is it in a domain which advances hundreds of times more quickly than anything in history?

    That, in a nutshell, is why the vast majority of the billions of dollars spent on cyber-security has the net effect of wasting money and making us cyber-insecure.

     

  • Cyber Security and Cyber Insecurity

    People talk about “cyber security” as though it’s something we have; they say we’d better be careful (i.e., spend more money), because awful things might happen if we become cyber insecure.

    Sorry, but that train has left the station. Our computers and networks, government, corporate and personal, are already unbelievably overrun by bad guys of all sorts. Not just attacked – overrun; the bad guys are already on the inside, doing stuff that would horrify most people if they could see it or understand it. There are millions of mostly-electronic, mostly-invisible (to most people) instances of thefts and vandalism every year.  And it’s getting worse.

    How Bad is our Cyber Security?

    It’s really bad. While hard to estimate accurately, there is good evidence to suggest that over a quarter of all the traffic on the web is generated by bad guys. Think about it – it’s as though every street you walked or drove had terrorists or obvious gang members or other truly frightening people driving or walking along – not just people you thought looked scary, but people who were genuinely bad, and were out to do damage for their own benefit or just for “fun!”

    A lot of this seems to be low-level crime that many people don’t notice, like bad bots.

    [Image: "Grandma" cartoon about bad bots]
    But even in that case, real money is involved!

    Not only is every internet “street” crowded with smart thugs, they are far more effective than the famous robbers of the past. Willie Sutton was a famous bank robber. Over his 40-year career, he got away with an estimated $2 million. And he spent decades in prison.

    [Image: Willie Sutton]

    According to a 2009 study by Lexis-Nexis, consumers lost over $4 billion that year; banks lost $11 billion, and retailers lost an astounding $190 billion – all just to one source of fraud, credit cards! Before long, we’ll be talking serious money here…

    Slick Willie Sutton is probably rolling in his grave, seething with jealousy.

    There are also really scary things like cybercrime directed at things that can make big explosions and kill loads of people. It’s on its way. I’m not going to talk about it any more here, but suffice it to say that I’m not feeling great about it.

    If it’s that bad, why isn’t it front-page news?

    What kind of visual are you going to have on the evening news for another cyber-theft? What is the chance for the news babe to stick a microphone in some grieving person’s face and ask “how did it feel when [the bad guy] [did that awful thing] to [you or your close relative or your neighbor]?”

    By contrast, even a single awful thing happening to one person can make a great news story, with visuals and perhaps an interview with a person in distress. No cybercrime (so far) has generated visuals anywhere near as compelling as a single carjacking in Chicago, for example.

    [Image: carjacking in progress]
    In addition, the juicy targets for cybercrime are big organizations. When these organizations are hit, they tend to go to great lengths to cover it up. Most of the big hits (and they’re getting bigger and bigger) don’t make the news – partly because they don’t make “great news” (see above), and partly because the big organizations that get hit keep it real quiet – after all, there are no plumes of smoke, explosions or bleeding people to draw attention to the disaster.

    Conclusion

    Cybersecurity is way down on the priority lists of most people, for a variety of reasons, among them that it’s mostly invisible and hard to understand. This in spite of the fact that the fruits of cyber-crime from credit cards alone are thousands of times greater than physical robbery, with a fraction of the conviction rate. Cyber-crime is safe and profitable for those proficient in it! Cyber-crime, along with all other aspects of cyber-insecurity, is already at unprecedented levels and is getting worse, while most of us, including those in charge who should know better, are strolling along and whistling, as though everything were just fine. It’s NOT!

  • Data Center Managers Spend too much on Equipment

    There are lots of complicated IT decisions to make. Buying hardware should be one of the easy ones. Most data center managers do make it easy — for themselves. But way too expensive for their organizations.

    Piles of money are spent on data center equipment

    According to a recent Gartner report, more than $140 billion will be spent on data center equipment this year. That sounds like a big number. It is a big number. But then when you read that IT spends more than twice that amount on enterprise software, and then three times that on IT services (nearly a trillion dollars), maybe it doesn't sound so big.

    Getting back to reality, most of the companies I usually deal with don't spend billions, hundreds of millions or even tens of millions a year on equipment. But it's still a lot to them!

    The Huge Spending is rarely examined

    The smaller companies I'm closest to spend remarkably little time seriously questioning the huge (to them) amount of money they spend on hardware every year. Cutting the number by a significant factor can make a huge difference to them.

    I see this curious lack of interest from the other side as well. Some of our companies have great equipment to sell that enables their customers to get more for less. While they love to go into details about how wonderful their stuff is and how hard it was to make it wonderful, the bottom line is simple: it delivers more and costs less. You would think this simple message would be easy to deliver, and quickly result in lines out the door to buy stuff. Not so! Getting more for less turns out to be pretty low on the priority list for most data center managers.

    Bergdorf and Target

    The fact is, most data center managers have no idea what they're buying, and their managers know even less. No one knows how much various things "should" cost, and it changes all the time anyway. If you claim you saved your organization money, it's hard for anyone to evaluate the truth of the claim, so you don't get much credit for it. Whereas if something goes wrong with stuff you bought, it's clear where the finger of blame will be pointing.

    The situation is amazingly different from normal life. Most data center managers, in effect, shop at Bergdorf Goodman.

    [Image: Bergdorf Goodman]

    The Bergdorf name and high levels of service make them feel like they'll come out looking sharp. What if they could support that new application (the new school year) at Target?

    [Image: Target]

    What if they could get lots of everyday things there, perfectly adequate and for less?

    [Image: shopping at Target]

    The trouble is that we all know about clothes. We have lots of personal experience wearing them and seeing them on others. We know what they mean and have an idea of what they cost. But when it comes to data center equipment, even most of the professionals are clueless!

    The result is that they're petrified of making a mistake and taking the blame, so the default position is "buy more of what I already have." And above all, take comfort in big brands. This explains why the vast majority of data center purchases go to the computer equivalent of high-end, services-rich Bergdorf, while places like Target and Walmart remain tiny little places by comparison. The computer equivalent of Ikea, which requires assembly at home? Practically non-existent.

    Computer people are supposed to be so smart!

    Yes, the reputation is that computer people are smart, sophisticated folks. They deal with a deep, complex, rapidly-changing set of products and services. Their skills are increasingly at the heart of most organizations, both private and government. It certainly takes a great deal of skill to avoid embarrassment with all the jargon, not to mention the realities of buying the optimal equipment at an optimal price.

    There is a solution to the complexity. The buyers understand that very few people are in a position to judge how well they're doing at their jobs. So long as things don't break too often, they can spin how well they're doing. They understand that no one has a sense of what's expensive and what's cheap. The vendors get this too. They shower their clients with service and support. They're not like Target, they're like Bergdorf. They help you pick the right things, the things that will look great in your data center. If they cost five times more than you could get at one of those crappy, wander-the-aisles-you're-on-your-own stores, who cares? You come out lookin' good! And that's what matters.

    It turns out the computer people are smart — at advancing their personal careers. They've figured it out and are executing well on it, and who's to say otherwise?

    That's the status quo among data center managers.

    Along comes the Cloud

    There are storm clouds on the horizon. It's called, oddly enough, the "cloud," which is just a modern term for an outsourced data center that's easier to use than the ones usually built by data center managers. It works. It's flexible. It's cheap! Most of the people who build successful clouds make decisions that are closer to Target than Bergdorf when buying hardware. And, guess what, it works just fine.

    Smart data center managers typically "embrace" the cloud, which in reality is along the lines of "keep your friends close and your enemies closer." But that's a long story for another time.

    Data Center Spending

    Data center managers have jobs that are as complex and challenging as they get. It's hard to learn the basics of everything they have to know, much less keep up with all the changes. Most of the ones who keep their jobs have evolved simple, effective methods for buying equipment, in collaboration (collusion?) with the leading vendors. The methods guarantee that more will be spent on data center equipment by whole-number factors, but the stuff they buy mostly works, and they get to have great careers. While the future is looking a bit "cloudy," I suspect that these resourceful people will work something out so that their own futures remain sunny.

  • Innovation Made Simple

    There is lots of noise about “innovation” and its importance. Not only are there books, articles and conferences, large organizations increasingly employ Chief Innovation Officers to make sure innovation really does take place – otherwise, it might not, and what a horror that would be!

    Innovation seems to be a big, important, mysterious thing that isn’t one bit obvious. Lots of people have to get together to figure out this grand new thing. Here’s a typical example:

    [Image: typical innovation-initiative announcement]

    I must be missing something. I agree that making things better is real important, and I’m happy to call that “innovation.” But it appears to me that, in most cases, the innovation that most of the people served by an organization would value most highly is simple and obvious.

    For example, in football, people focus on all sorts of fancy things. But what wins most games most of the time? Getting real good at blocking and tackling.

    In most non-sports organizations, doing the equivalent of blocking and tackling makes things better. Since most organizations use computers a fair amount, the process is simple:

    1. People should do their jobs. Completely. Correctly. On time.
    2. Computers should help people do their jobs, and monitor whether they’ve done them correctly.
    3. Computers should do things that people used to do.

    No magic, no mystery, no focus groups required. It’s simple: Do it right! Then computer-enhance it! Finally, automate the human element! If this bothers you in any way, ask yourself whether you’d prefer to wait until the bank is open, walk into it, wait in line for a teller and get your money – or whether you’d just as soon walk or drive up to an ATM any time you please and get your money from a machine. Hmmmm.

    Big Fat Personal Example of the need for simple Innovation

    I had an appointment to see a doctor at one of the top hospitals in the US: Mount Sinai in NYC. Lest you think this was a no-big-deal appointment, let me just say I’m taking a drug that can have really bad (but hidden) side effects, and this was to check on how I’m doing. I wasn’t feeling casual about it.

    I had written confirmation of the appointment, an on-line reminder of the appointment, and a robo-call reminder of the appointment the day before. Efficient! So I took the couple hours required to get to the hospital in plenty of time. The place where I usually sign in had my appointment in their system, but told me to go to another desk. They also had my appointment, but told me that unfortunately, my doctor was on vacation. They were polite, but the doctor wasn’t there, so the appointment wasn’t going to happen.

    Assuring the innocent person giving me the bad news that I wasn’t mad, I asked what he would recommend as the best thing I could do to rattle someone’s cage about this unfortunate event. He got a supervisor to come out. The supervisor apologized and explained that a lady who’s out today was supposed to call me, but obviously didn’t. She’s sorry. Can she pay for my parking or something? Since I know Mount Sinai uses Epic, I asked whether she could get an alert put in to catch cases like this. She acted like she thought it was a good idea. But given how IT works at places like this, I’m not holding my breath. And there were actually two problems: the robo-call should have been cancelled, and a call to re-schedule me should have taken place. Not to mention e-mails, etc.
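Since I brought up the Epic alert: purely as a sketch (I know nothing of Epic's internals, and every name and record below is invented for illustration), the check I asked for amounts to a few lines of logic:

```python
from datetime import date

# Hypothetical data; a real system would pull this from the scheduling database.
appointments = [
    {"patient": "D. Black", "doctor": "dr_a", "date": date(2014, 7, 10)},
]
vacations = {"dr_a": [(date(2014, 7, 7), date(2014, 7, 14))]}

def conflicts(appointments, vacations):
    """Flag appointments that fall inside the doctor's time-off window."""
    out = []
    for appt in appointments:
        for start, end in vacations.get(appt["doctor"], []):
            if start <= appt["date"] <= end:
                out.append(appt)  # should trigger a reschedule call and cancel the robo-call
    return out

for appt in conflicts(appointments, vacations):
    print(f"ALERT: reschedule {appt['patient']}, {appt['doctor']} is away")
```

The hard part at a hospital isn't the logic; it's wiring an alert like this into the robo-call and rescheduling workflows so that it actually fires.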

Mount Sinai’s medicine and doctors are among the best anywhere. But the hospital’s blocking and tackling is abysmal. The day before I was scheduled for an MRI they called to say my appointment was cancelled because they had no pre-authorization. Personal appeals to hold the appointment, followed by frantic phone calls, uncovered that Mt. Sinai has a whole department that does pre-auth’s. My doctor had placed the order correctly, but the pre-auth didn’t happen. My doctor’s assistant said it happens all the time, and she is tired of catching the blame for it. It took a couple hours of phone time to get me the go-ahead.

    Mt. Sinai thinks all sorts of things are important and worth spending money on. They have big TV’s blaring away in waiting rooms. They have iPad’s available to patients to amuse themselves while waiting. They have signs announcing how great they are marching down many NYC streets. They have classes on meditation and all sorts of activities directed at Arabic-reading people of the Islamic persuasion, judging from displays in the waiting rooms. All of these things are apparently more worthy of attention than blocking-and-tackling for boring, trivial things like appointments and pre-authorizations. And, sadly, I have lots more similar examples.

    Mount Sinai hospital and Innovation

Mount Sinai has made its position on innovation clear: they’re for it. They have a whole department in charge of it. They have hosted at least one conference on innovation featuring all sorts of important people. They tout their innovative computer technology, including Epic. I neither dispute nor disparage any of this. But it’s kind of like a surgeon who does genuinely wonderful surgery, but disdains to wash his hands or double-check whether he’s operating on the right thing. They have indeed purchased and installed one of the most advanced, complex EMR systems – but they fail to get it to do the most basic things. And my personal experience is the tip of an iceberg. The waste and inefficiency I have observed within the hospital, resulting from failure to pay attention to simple things like scheduling, is simply monstrous.

I can’t resist giving just one juicy example. Where I check in, there is a whole line of check-in people who have to enter lots of stuff into the computer while you sit there. I noticed a little speaker on the wall that would sometimes make discreet little sounds. There was no one waiting behind me, so I asked the operator about it. It turns out that the speaker was installed some time ago, and everyone like her was trained to listen, because it tells you when it’s safe to hit submit on a new entry. The computer is so overloaded most of the time that unless you wait for the “it’s OK” audible signal, all your work will be thrown away and you have to start over.

    As a life-long computer geek, my jaw didn’t just hit the floor; it blasted through it and was finally halted in its downward descent when it hit the bedrock under the island of Manhattan. I think I’m still working on putting it back. I’m so blown away I have no words – even sarcasm, my go-to mode, escapes me. Enough said.

    Innovation made simple

    I like cool new stuff. There should be more of it. It should even work.

    But if you’re willing to pay attention to what actually matters, even though it may be pedestrian and boring, you can make a huge impact at nearly any organization without the benefit of a single conference, book or hi-falutin consultant. You can “innovate” by doing the equivalent of blocking and tackling, i.e., taking care of basics. In other words: Make sure every job is done, and everyone does his job. Then assist and enhance them with computers. Then, to the extent you can, replace human labor with full automation, including calling for human attention only when it’s needed. These simple steps are frequently and painfully not done; if they were done, surprising amounts of money and time would be freed for doing the complicated “cool” stuff that most people call “innovation.”

  • Building Software: the Bad Old Way and the Good New Way

    Software is hard to build. There are lots of failures. When the stakes are high, all parties concerned are “highly qualified” and failure isn’t an option – there are still lots of failures.

    What is it about software? If cars failed anywhere close to the rate at which software fails, everyone would be afraid to ride in them. If houses failed at the rate of software, we’d see an explosion in people living in tents. Like it or not, building software is different from building almost anything else.

    The reason why there are so many software failures is simple: most people think that building software is pretty much the same as building anything else, and they apply the same methods and criteria to it. Even collections of supposedly smart people who start out designing processes specific to software end up throwing in the towel and admitting that their processes are applicable to nearly anything – which is a good way of distracting people from the fact that they don’t work for software!

    Knowledge of the bad old way to build software is widespread. It’s pretty simple, when you come right down to it. It’s not much different than, say, building a house. You start with requirements (how many bedrooms do you need, etc.) and budget. You work with an architect and get a set of plans (the detailed design). You put it out to bid, and select a contractor based on price, time and quality. The contractor gets permits, builds the house, you make progress payments along the way, there are inspections, a final payment and finally you move in.

    When you try to apply this process to software, things fall apart quickly. I won’t go through the awful details, but imagine you weren’t allowed anywhere close to the job site until move-in day, and that most of the work is done by kids who are learning as they go, led by managers who have never used the tools or materials the kids are using. The Dirty Secret of Peacetime Development is gruesome.

    The Good New Way to build software is just as simple, but rarely practiced. Here it is. Have the developers talk with the users and figure out what’s needed. Not for weeks – for a couple of hours. Then they should build something, taking hours or a couple of days at most. The “something” may not do much, but it should kinda work. Then they should show it to the users and have another discussion about what should be added, what should be changed. The programmers get back to work. No control freaks allowed! “That’s not what I meant” and “I forgot to add that” are just fine. It’s best if everyone agrees that progress is more important than perfection. When the software gets to elementary school age, it should be sent to school – how “real” users react to your precious child will add value and give perspective. If things are going well, the kid can skip grades. Repeating grades is OK too. Gradually you let the kid out on play dates and even summer camp. There’s never a date when the kid is “done;” there’s just increasing independence, and fewer visits back home.

    One of the biggest advantages of this method is the elimination of the classic “big surprise,” that heart-pounding moment of the big demo a few days prior to launch. In an article published September 24, 2013, officials of the Oregon Health Exchange stated:

[Image: news clipping, Oregon officials optimistic]

    The big surprise happened just 4 days later, on September 28, 2013:

[Image: news clipping, Oregon shut it down]

    There you have it. The nearly-ubiquitous Bad Old Way of building software, as illustrated at great trouble and expense by Cover Oregon (among many others).

The Bad Old Way serves as a heart stress test and as reputation roulette for practically everyone involved. On the other hand you have the Good New Way, the apparently chaotic but amazingly effective way of building software, as exemplified by start-ups, people under pressure and, generally speaking, programmers who don’t have the time, money or perhaps patience to build things the bad way, and who end up adopting Wartime Software methods. And simply whupping the competition.


  • Bureaucracy, Regulation and Computer Security

    There always seems to be a bureaucracy ready to tell you how to keep your computer systems secure; or, worse, to tell you what you must do to be in compliance with the regulations promulgated by the bureaucracy. "It's for your own good," they say.

    If you are forced to comply with some regulation or other, you'd better comply. But you're a fool if you confuse compliance with keeping the assets of your business actually, you know, secure.

    Bureaucrats can't keep simple physical things secure

    Computers are complicated. Construction sites? Not so much. Fences, cameras, sensors, guards and an alert, well-managed staff should do the trick. But when bureaucrats are in charge? Forget it.

David Velazquez was in charge of security at the World Trade Center construction site. Mr. Velazquez is a Columbia University graduate and had a 31-year career at the FBI, ending as head of the Newark field office. You might think well of the FBI, I don't know, but what I do know is that it's a giant government bureaucracy, and Mr. Velazquez appears to have applied the lessons he learned there on his new job.

    Here is one of the crack guards "on duty" at the work site:

[Image: a guard asleep at his post]

    That may explain why a group of guys was able to get to the top and jump off, recording video all the way down:

[Image: base jumper filming his descent]

Then a kid slipped through a fence and made it all the way to the roof, unnoticed by sleeping guards:

[Image: the kid who reached the roof]

    The biggest, baddest bureaucrats of all can't keep their own computers secure

    Alright, maybe the FBI are amateurs. Let's go to the best of the best, the scariest cybersecurity experts of all, the NSA.

[Image: NSA]

    These guys are in charge of keeping us secure from the worst of the worst. A cover story in Wired Magazine told us all about it.

[Image: Wired magazine cover]

    Loads of people using piles and piles of super-secret cyber magic are on the case:

[Image: excerpt from the Wired story]

    If anyone can achieve cyber-security, surely these guys are it:

[Image: excerpt from the Wired story]

    But we all know how that turned out. It just took one moderately clever person with bad intentions and all the vaunted cyber-wonderfulness was for naught. Among Mr. Snowden's myriad revelations was the previously secret budget of the cyber-bureaucrats of the NSA, an astounding $52 billion. Do you think if they doubled the budget they could have done a better job? Hmmmm.

    Bureaucrats and Security

    Why should you listen to someone who can't do it themselves? If you want to stop smoking, do you eagerly take the advice of someone who smokes? If you want to get rich, do you take advice from poor people? Bureaucrats are sure they're right — because they have no competition, and there's no one who has the power to tell them otherwise.

    Why this matters

    The laughable ineffectiveness of bureaucratic security in general, and cybersecurity in particular, can matter a great deal to you. Here's why:

    • If you do what the bureaucrats tell you to do, you'll spend a lot of money.
    • Following the regulations makes everything slower and less efficient. You'll hurt your business.
    • If you get conned into thinking that following the regulations means that you're secure, you're in big trouble. You will be more vulnerable than ever to a business-damaging breach.

    What you should do is simple: establish effective and efficient security by the best means available, which will typically be unrelated to what the authorities solemnly declare. Then, do as much regulation-following as you need to do, whether it's PCI or any of the rest of the alphabet soup, to avoid punishment.

    Is this cynical? Of course! But it's also real life.

  • Giant Software Company Bureaucracies

    It is the nature of giant bureaucracies to coerce and control the populations they "serve." Giant bureaucracies also tend to resist change, protect themselves at all cost, operate with laughable inefficiency, and become increasingly disconnected from their supposed mission. This is true whether the bureaucracy is a government agency (illustrated on a small, local scale by the wonderful movie Still Mine)

[Image: poster for Still Mine]

    or a software company. When the bureaucracies are giant software companies, the coercion is often masked in a sickly-sweet cover story about trying to help you, or assuring that things happen with high quality, which just rubs it in.

    I recently ran into an example of this with Microsoft. I was trying to play WMA (Windows Media Audio) files that I had created for my own use from CD's I had purchased. In other words, I was trying to do something I should have been able to do.

Why CD's? I had bought them a long time ago; why should I purchase them again digitally when it's legal to create a personal digital copy? Why WMA? At the time, it was technically slightly better than the MP3 easily available to me.

    The Random House example (apologies to Random House)

Imagine I had bought a paper book years ago. Now I try to open it to re-read a section, and it won't open! The book is stuck, and there's a knock on my apartment door. There's a loud voice coming from outside: "Open up! Open up! This is Random House!" OMG! What's this about? I can't open my old book, and suddenly some publisher is pounding at my door??

    I go to the door, open it, and there's a couple scary-looking guys. They say, "We understand you're trying to open a Random House book. Before you open it, we need to verify that you have the right to do so."

    I say, "What do you mean? IT'S MY BOOK! I BOUGHT IT! I'VE OWNED IT FOR YEARS! WHAT RIGHT DO YOU HAVE TO POUND ON MY DOOR AND QUESTION ME?"

    They reply, "We're Random House. We're the publishers. You may think you own this book, but we're the publishers. How do we know you own the book legally? We've got to make sure you have the proper rights for this book. Until we receive that assurance, you will not be able to open the book you claim to own."

    "OK," I say guardedly. "What do I have to do to convince you I own the book I own?"

    "It's simple. Just replace all your phones and your phone service with Random House's. Then our book will be able to call our office and make sure you have the rights you say you have."

    "I've heard about the Random House telephone service. It's really crappy. It's full of static. That's why fewer people use it every month, even though it's free. Even worse, crooks have figured out how to use it to see when I'm not home, so they can break in and steal my stuff. If you insanely want to somehow have the book you published be able to 'phone home,' why not just use the phones I've already got, which work great?"

    "They're not Random House phones. We can't guarantee their quality or appropriateness. Random House books only work with Random House phones. You can say what you want — but we say that we put our name on it and we stand behind them — and they're the only phones we'll use."

    I get the message. I kick myself for being so deluded that I thought buying a book from Random House was a good idea. There's no way I'm trading my secure phones for ones that practically fly a flag to alert all the criminals in the area when the house is vulnerable. I hand the book that I bought and paid for, but which I cannot use, to the agents from Random House, and dis-invite them from my house.

    Microsoft and WMA

    This is what Microsoft did, acting just like the imagined Random House of my example.

I tried to play my WMA file. It wouldn't play. Instead, just like the agents from Random House pounding on my door, I got this:

[Image: Microsoft's error page]

    Note the copyright, literally ten years ago! Tens of thousands of supposedly super-bright programmers, and they can't manage to keep things up to date?

They "don't support" my web browser, which (on this machine) is Firefox. They insist on using IE, which is of course their own browser, and whose usage has plummeted from over two-thirds in 2009 to about the same as Firefox last year.

[Chart: usage share of web browsers (source: StatCounter)]

    Why do I care? First of all, they shouldn't care. It's outrageous that they do. Second, here's one reason among many why I care:

[Image: report of an IE vulnerability]

    I might as well fly a flag from my house saying "hey, all crooks in the area, c'mon over, the pickin's are good!" And this isn't the first time — IE is famous for being about the most inept, dangerous-to-use browser in existence. Imagine, a free product with a plummeting market share!

    Conclusion

This experience didn't teach me anything I didn't already know. Microsoft isn't unique. It's like every other giant, bumbling bureaucracy: it's an elephant, we're mice, and you'd better look smart and be careful or you'll get crushed. But somehow, when your nose gets rubbed in it, and they effectively steal something from you from your own house (computer), and there's nothing you can do about it, I get aggravated in spite of myself.


  • Fundamental Concepts of Computing: Software is Data!

The fact that what computers operate on (data) and the instructions for how to operate on it (software) are the same kind of stuff (i.e., data) is obvious, simple, profound, not well-understood, and has huge implications. The relatively small number of programmers who take advantage of the fact that software is data are levels beyond "normal" programmers. The fact that software is data is well qualified to be a fundamental concept of computing, along with counting, closed loop and a few other things.

    Software is Data

All software is data; some data is software.

Everyone who sorta knows what programming is knows this, the same way that everyone knows that air is lighter than water. But who thinks about it? Who cares? What difference does it make?

    First let me spell out how software is data. You've got a bunch of files on your computer. Some of them may be text files or spreadsheets (data). Some of them may be the kind of text that will make sense when opened with a programmer's editor (still data); you edit it and save it back (data all the way). Then you run the compiler. It takes as input the text representation of the program (data) and writes out a new file, which is an "executable" file, still data. Then you "run" the program, i.e., a program loads the executable file data into memory and then, by one means or another, gets the machine's instruction address pointer to point to the first byte of the file in memory. At which point, the program is "executed."

    The file was data when it was on disk, regardless of its format. It was data when it was loaded into memory, not much different than a text file loaded by a text editor. It was data when the processor started executing instructions, and it was data once the program ceased being executed. It was data before, during and after execution. It just happened to be (hopefully) data in the format of sensible instructions the machine knows how to execute. Even if it's crap, the machine will do its best to execute it until it somehow loops or crashes out. Then you get your machine back. The point is: the machine doesn't know the difference!

    To a computer, everything is data. If we set the instruction address pointer to data that happens to be nicely formatted instructions, good things will happen. But it's still just data.
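You can watch this round trip in miniature in a language like Python (a sketch; the source string and function are invented for illustration):

```python
# Source code is just a string -- data.
src = "def double(x):\n    return 2 * x\n"

# The compiler takes data in and hands data back: a code object.
code = compile(src, "<demo>", "exec")

# "Running" it means handing that data over for execution.
namespace = {}
exec(code, namespace)
print(namespace["double"](21))  # 42

# And the function's instructions are literally bytes -- data before,
# during, and after execution.
print(type(namespace["double"].__code__.co_code))  # <class 'bytes'>
```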

    Software wasn't always data!

It's an amazing, unprecedented leap forward to make the control function of a machine out of the same stuff, stored in the same way and processed in the same way, as the stuff the machine works on. No other machine is like this! Yet because of this unique facet of computers, most people simply take it for granted, just as they take for granted the unprecedented speed at which computers evolve.

Any machine you can think of has the control part and the "business" part, where the machine does what it does, according to the control. This is true for a lawn tractor,

[Image: a lawn tractor]

and it's true for a vehicle. It's true for the calculating machines of the late 1940's such as the IBM 402 (here's the plug board where the "program" was entered).

[Image: IBM 402 plug board]

It's true for that famous early computer, the ENIAC; here's the plug board, sadly not replaceable,

[Image: two women operating the ENIAC]

with a couple of the ladies who programmed it. It's even true for the much-lauded Turing machine, whose famous endless tape contains only data, while the control is somewhere else.

It was a genuinely great leap forward, breaking with the strict separation of control and action that exists everywhere in human experience, to use some of a computer's data for control purposes, and the rest for data that isn't software. This is called a von Neumann architecture, or a "stored-program" computer.

    Because software is data, a program can act on itself, since "itself" is data; i.e., a program can modify itself. This characteristic is unique among machines — only stored-program computers can do this. Sound familiar?
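A tiny sketch of what that looks like in practice (Python; the function is invented for illustration):

```python
# A program acting on itself: this function replaces its own definition
# after the first call -- possible only because the "control" (the function)
# is data living in the same namespace as everything else.
def greet():
    global greet
    print("first call: doing expensive setup")
    def fast_greet():
        print("hello (setup already done)")
    greet = fast_greet  # the program just modified itself

greet()  # does the setup, then swaps itself out
greet()  # runs the replacement
```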

    Interpreters and other levels

The first software was the binary data that the machine recognized as instructions. The next step was a more readable, text version of the machine language: assembler language. The key thing with assembler language is that each line of assembler language translates to exactly one line of machine language. Next come compiled languages like FORTRAN and COBOL, which compilers turn into machine language, typically with multiple machine instructions for each line of source. Next come interpreted languages, which are "executed" by an interpreter program; instead of generating instructions, the interpreter just does what's intended by the program right away.

    From this we gather that programs can take as input programs (in some format and language), and either execute them or generate other programs as output, either literally executed machine instructions or some other form of language.
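A sketch of those two paths, using an invented little "language" of polynomial specs: the same spec can be interpreted on the spot, or used to generate a new program that is then compiled and run.

```python
spec = {"name": "poly", "coeffs": [2, 0, 1]}  # represents 2x^2 + 0x + 1

# Path 1: interpret the spec directly, no new program created.
def interpret(spec, x):
    return sum(c * x**i for i, c in enumerate(reversed(spec["coeffs"])))

# Path 2: generate a program (source text) from the spec, then compile and run it.
def generate(spec):
    terms = " + ".join(
        f"{c} * x**{len(spec['coeffs']) - 1 - i}"
        for i, c in enumerate(spec["coeffs"])
    )
    src = f"def {spec['name']}(x):\n    return {terms}\n"
    ns = {}
    exec(compile(src, "<generated>", "exec"), ns)
    return ns[spec["name"]]

poly = generate(spec)
print(interpret(spec, 3), poly(3))  # both compute 2*9 + 0*3 + 1 = 19
```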

    Consequences of software being data

    This is a BIG subject. For starters, though, doesn't it make sense that when a machine is "self-operating," as computers are, and no other machine in our experience is, that effectively utilizing the self-referential power of the machine would lead to interesting things? It certainly has for humans!

Let's take the classic issue of customization. Once a programming environment has been chosen, the tendency is to model your problem in terms of the programming environment, and code away. For example, there's the whole field of object-oriented analysis and design, in which you're supposed to use this style of thinking to put everything in terms of objects; then you proceed to build your classes and away you go. Life is great. But now something has to get changed. This requires that you examine the entire body of code, make the changes, test, migrate data as needed, etc. And again and again.

    Eventually, you might realize that certain classes of changes are often required. If you really get that software is data, you will realize that you could have modeled the entire application in the simplest possible terms and built an interpreter. Changes then fall into one of two categories: the new thing is a variant on the kinds of things the interpreter already does, in which case you just change the model; or the new thing is a new kind of thing, in which case you extend the interpreter and use its new capability in the extended model.
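Here's a minimal sketch of that idea in Python, with an invented two-operation model. A real interpreter would be far richer, but the shape is the same: changes of the first kind touch only the RULES data, and only a genuinely new kind of operation touches the interpreter itself.

```python
# The "model" is plain data -- it is the application, expressed in
# the simplest possible terms.
RULES = [
    {"op": "add",   "field": "subtotal", "amount": 5.0},   # flat fee
    {"op": "scale", "field": "subtotal", "factor": 1.08},  # tax
]

def run(model, record):
    """The interpreter: walks the model and applies each rule to the record."""
    for rule in model:
        if rule["op"] == "add":
            record[rule["field"]] += rule["amount"]
        elif rule["op"] == "scale":
            record[rule["field"]] *= rule["factor"]
        else:
            raise ValueError(f"unknown op: {rule['op']}")
    return record

order = run(RULES, {"subtotal": 100.0})
# (100 + 5) * 1.08 = 113.4
```

Changing the fee or the tax rate is a data edit, not a code change; only a new kind of operation requires extending `run` itself.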

    This is just one example. You can have mixed models, you can generate code, you can mix in classic parameters, etc.

    The point is that, if you are a software-is-data-aware person, you aren't "stuck" in any programming model or environment. Moreover, you are more likely to come up with effective and efficient approaches, which can easily be orders of magnitude better than naive, single-level ones.

    Conclusion

"Software is data" should be one of the most obvious statements you can make in the computer world. It's like going to the New York Yankees and declaring that one of the most important things about baseball is that bats are involved. Everyone would avert their eyes and you would be quietly shown the door. Yet I find that the norm in software groups is to demonstrate no awareness through their actions that software is data. As far as you can tell by their actions, software is a whole separate thing. To them, it's as though software were like every other kind of machine control panel everyone encounters in their normal lives. They don't seem to even consider programming paths that involve software that modifies or interprets software. As a result, they are vulnerable to massive embarrassment and humiliating defeat by groups that take advantage of this fundamental concept of computing.

  • Delivering Software is a Nightmare

    Blackliszt is down!

[Image: Blackliszt is down]

    That led me to reflect on how nice things are when they work, but how many things can go wrong.

    Building and delivering software is nightmare-hard. Given all the difficulties, it sometimes amazes me that anything ever works. This is not news – it’s just the way things are. But I’m going to illustrate it with current events.

    Attacks from the outside

My blogging platform has been under attack for days. Here's what I got when I went there:

[Image: Typepad error page]

    Here's a little bit about what's going on.

[Image: TechCrunch report on the Typepad DDOS]
    Bad guys attacking! For days and days at a time!

    Attacks from the inside

    It's bad enough that there are hordes of tough, effective bad guys roaming around outside the walls looking to cause trouble. But loads of folks inside the castle walls, people who are supposed to be good guys, aren't.

[Image: government cyber security story]

This is a huge problem. It isn't widely acknowledged.

    Brute incompetence in government systems

There’s an on-going, massive failure of government computer systems.

[Image: DOJ statement]

[Image: New York Post clipping, April 22]

10 days ago as of this writing. A whole nationwide part of the Department of Justice has been thrown back to pre-computer days. Details were revealed here.

When you build things the right way, no fault can take a system down. When you build things the wrong way, it might take an hour or two to fix things. If you build a system in an unimaginably stupid way, it might take a day. Like an anti-Manhattan Project, no bureaucracy other than the government could possibly run a system that everyone knows will take a couple of weeks to repair. But that's how it is here!

[Image: New York Post clipping, April 22, continued]

    Amazing brain-dead fails in non-government systems

I got an e-mail today telling me about a wonderful new storage system. They claimed that Gartner had dubbed them a “cool vendor,” which I’d never heard of Gartner doing and which sounded weird.

[Image: the Exablox e-mail]

    So I thought I’d check it out. Here’s the result.

[Image: Exablox database error page]
    This wasn’t a fluke. Their whole website, not just the e-mail landing page, gave the same result. Good work, guys! Good to see the private sector showing everyone how the discipline of profit-making leads to great computer systems results!

    Chaos and obsolescence among the “experts”

    When you walk into any computer systems organization, you have no idea what you’re in for. This is because most computer organizations make the Keystone Kops

[Image: the Keystone Kops]
    look like Eliot Ness and the Untouchables.

    Put aside all the politics and infighting; software “experts” can be continents and decades away from each other in terms of how to get things done. Most software people can’t even follow the arguments, much less decide what the right answer is. It’s amazing these people manage to get out of bed in the morning.

    When it “works” is it vaguely usable?

Suppose the software is not under attack, from the outside or the inside. It’s not broken. The internal comedy of errors has managed to deliver a piece of software that arguably “works.” Wonderful!

    Sadly, this is where reasonable debate on software would start, not end. Because there are endless ways to get things done in software, and endless ways to interact with users. When you have software companies with loads of employees, reputedly smart, with tens of billions of dollars in the bank, and they trumpet their proud new creation — and it’s a barely-usable piece of crap, the debate should be about how crappy-but-not-broken software gets shipped. Instead it’s far worse.

    Conclusion

    This is why smart, motivated start-ups beat rich established companies all the time. You’ve got 100 programmers for every one of mine? Great! The odds are on my side! Of course, the truth is that start-ups mostly fail. Their stuff doesn’t work or nobody cares or they just think they’re great when really they’re just like everyone else.

    Nonetheless, a tough, smart, hard-working team can be literally 100 times more effective than the established players, including the cool modern ones like social media companies. Because building and delivering software that actually works and people want is a shockingly difficult thing to do.

  • Typepad DDOS and Blackliszt

    Blackliszt has been unreachable since last week. Apparently Typepad was hit with a rather determined DDOS attack. Now a bunch of it is back up. Sort of. Blackliszt used to look like this:

    Black bad

    And now it looks like this — when you can get it:

    Black now

    In other words, it looks like crap. Even by my low standards.

    But then, a few minutes ago it looked like this:

    Black no

    Suddenly, merely looking like crap seems pretty good. If it doesn't recover by tomorrow, I'll be changing hosting services. Meanwhile, I have a draft post about the challenges of building and deploying software, with enemies outside and in, and chaos generally having its way.

  • Continents and Islands in the World of Computers

    The vast majority of people appear to think that the world of computers and software is pretty uniform. While everyone recognizes that there are differences between the systems consumers use and the ones businesses run, most people assume that pretty much the same thing is going on inside.

    The reality is that vast cultural and practical differences separate the various clusters of computer and software applications. I look forward to the first anthropological studies that are devoted to this subject, illustrating and spelling out the untravelled oceans that separate the diverse lands of computer and software practice. Meanwhile, there are both obstacles and opportunities that arise from these facts.

    A Diversity of Tongues

    The Bible gives a vivid explanation of how the various languages arose. In the post-flood world, there was said to be a single people with a single tongue. They built a city with a tower that reached to the sky, to make a name for themselves.
    As a single people with a single tongue, "the sky was the limit" for how far they could go. God didn't like this. He reached down and confounded their speech, and scattered them over the face of the earth.

    Whether it's the fault of God or humans, it's well understood that groups of people develop their own languages, which then evolve and splinter. In fact, by studying the relationship of various languages, you gain insight into how humans migrated over the earth. The Indo-European family of languages is an excellent example of this.

    A Diversity of Software Languages

    Early computer programmers in the 1950's clearly saw the advantage of having a single language for software. They tried hard to create a universal software language, FORTRAN being the best-known and most successful such early language. While FORTRAN (short for "formula translator") was great for math people, people working with business records weren't impressed. Thus COBOL ("COmmon Business-Oriented Language") was invented. Things were still pretty simple in the mid-1950's.

    But of course it didn't stop there. People migrated to different "lands," confronted new issues, and created new languages suitable for the new problems. Here's a snapshot of some of the major developments in the mid-1970's.

    1976

    By now, literally thousands of general-purpose languages have been created, not counting "esoteric" languages and thousands more for narrow problem domains.

    Idioms and Layers

    As anyone who has learned a new human language as an adult is well aware, learning the language itself is one thing — but learning all its incidental aspects, particularly the idioms, is a task that never seems to end. This is because there are lots of them: an estimated 25,000 in English, for example.

    There are equivalents of idioms in software languages, in multiple categories. I'll just give a basic example: the run-time library, whose documentation frequently exceeds that of the base language, and without which you can't write practical programs. A more complex example: a "framework," such as the Rails and Sinatra frameworks for the Ruby language.

    Let's say you know English pretty well. Are you qualified to write or even to read and understand a legal brief? Unless you're a lawyer, probably not. Yet there's no denying that the brief is written in English, with a whole pile of idioms and other special things that are not in common use.

    It's the same with computer languages. You may know the language C pretty well, but when you first look at the C code in a driver or an operating system kernel, it's full of strange idioms — "straight" C would stand out as obviously as BBC English would in Brooklyn. The C that's in a compiler is even more rife with idioms and unfamiliar constructs, so much so that a person who is otherwise fluent in C would have trouble figuring out what was going on, just as a normal fluent English-speaker would have trouble following a discussion by two doctors of a difficult medical case.

    Different Continents, Different Cultures

    Everyone knows that human languages are part of overall human culture. Cultures differ at least as much as languages. It's not widely appreciated that this is also true in the world of computers. While of course there are commonalities, you actually think differently in different languages, whether human or software. Lots of incidental things tend to get wrapped up in the cultures as well. To take a simple example: in the US, when you walk into a store, prices tend to be marked on the goods. If you like that thing at that price, you buy it; otherwise you don't. It doesn't work that way in much of South Korea. Prices may not be marked at all, and negotiation is assumed and expected. There is a whole set of cultural norms that has to be understood in order to thrive.

    Differences in software cultures are just as strong as differences in human cultures. For example, one of the major forces in the world of hospital automation is Epic. Epic is written in a language originally called MUMPS. While few write programs from scratch in MUMPS anymore, you have to use the language to customize the Epic system, just as you have to use ABAP to customize the SAP manufacturing system. In either case, learning the peculiar language is the tip of the iceberg — your programs "live in" the Epic system, and so knowing that system inside out is the key to success, far more important than fluency in the language itself. This is perhaps comparable to the importance of knowing all the relevant prior case law in writing a legal brief in support of your position, vs. simply knowing the vocabulary and syntax of English.

    Contact Between Distant Cultures

    We know that in human life, cultures developed in isolation from each other for many thousands of years. Migrating peoples would clash, and the cultures with superior weapons and a stronger warrior culture tended to decimate the others. A culture that develops superior methods of war, usually with distinct technology, tends to expand. This was true, for example, of the Comanche in North America and the Mongols, each of whom developed unique competence and technology with horses, and used it to rapidly expand their sphere of influence.

    This is an area of both similarity and difference with software cultures. Within a company, there are frequently members of different software "tribes," who usually neither understand nor like each other. They are constantly at war to establish primacy, the winning culture grudgingly conceding resource-poor reservations to which the losers are confined.

    It's different with software cultures that are separated by "oceans," usually different problem domains or industries. You might like to think that everyone who does software has all the information available, and therefore is at a similar cultural level, the equivalent of, say, the different countries in Europe. They may speak different languages and like different foods, but they all have cars, telephones, and electrical appliances. In software, this is not the case! In software, there are cultural differences that are the equivalent of cars being widely available in one place, while ox carts are the standard mode of transport in another. It's that extreme.

    What's even more shocking is that the denizens of these culturally isolated software continents are comfortable and secure in what outsiders see as their "backwardness" or "ignorance," and find ways to denigrate and disparage outsiders who dare to suggest there might be a better way of doing things.

    How big are these culturally isolated continents? Just a few distant places, the equivalent of Australia? If you have the opportunity to see a wide swath of software culture, what you find is that the vast majority of software groups have cultures that are dramatically inferior to those of the best places. Moreover, the variance in just how primitive things are is huge. In any given place, it is likely that practices unknown there, but which would dramatically enhance results, are already standard elsewhere. Judging software isn't like auditing the financial books of a company, where you either pass or fail. It's more like figuring out which part of which software continent the place belongs to, how many years or decades it is behind the best known methods, and to what extent its near neighbors are slightly ahead or behind.

    Conclusion

    It's useful to compare human language and culture to software language and culture. Just as with humans, language is an important part of culture, but thriving involves a whole lot more than just grasping the basics of a language. Just like with humans, there are varying levels and there are conflicts. But what's most interesting are the differences, which are the equivalent of humans living in environments that are physically next to each other, but using tools and methods that are hundreds of years different in terms of evolution. This fact has huge implications on many levels.

  • Innovation with Computers and Slow Things

    People have theories about innovation. Increasingly, they think it's important to innovate. Fine. I'm all for it. Given a choice between "innovation" (whatever that is) and the alternative, which I assume is something like "sitting and rotting," I'll take some of the former, thanks very much.

    Whatever people end up saying "innovation" is (which kinda doesn't matter, because before long it will fade away, eclipsed by the next fashionable thing), it's clear to me that there is a huge difference between innovation that is based on using computers (which evolve quickly) and all other kinds of innovation.

    For purposes of this post, I'll define innovation simply: innovation is doing something differently than you did before.

    Physical Innovation

    Physical innovation is hard. It doesn't happen very often. The reason is simple: over time, everyone pretty much figures out the best way to do things, and figuring out something new is hard and rare. A typical example of this is the gradual shift from wrought iron to steel.

    Here is the famous iron pillar at the Qutub Minar in Delhi, as it was when I visited it a few years ago.

    This pillar was created at least 1,000 years ago, and perhaps more than 1,500. Wrought iron was created in many parts of the world, from China to Europe.

    Steel is closely related, but different in important ways. The very earliest steel is about 4,000 years old. A form of steel, Wootz steel, was made in India more than 2,000 years ago. This steel was shipped to the Middle East, where it became the raw material for Damascus steel swords. But none of it was a practical improvement over wrought iron until the introduction of the Bessemer process in the 1850's.
    Then and only then could we have wonderful modern things like steel cables, structural steel for buildings and bridges and many other things.

    Physical innovation, like the replacement of wrought iron by modern steel, is tough and long, punctuated by invention while still requiring endless baby-step innovations.

    Process innovation

    Process innovation is a whole different animal. Process is what the concerned human beings agree it should be, even if a bunch of machines are involved. The only limit is concepts. Opportunities for process innovation are all around us. In all too many cases, it seems more appropriate to call a process innovation something more like "stop doing it the obviously stupid way."

    Here's an example. Not long ago, a delayed flight I was waiting for at JFK airport was finally cancelled at 1am. A whole lot of people went to the terminal entrance and got in the roped-off line for cabs. I waited about 20 minutes to get to the front of the line, and there were loads of people still waiting after me. "It's real late," I thought, "I guess most of the cabbies were sensible and are home sleeping in bed." Nope! There was a looooong line of cabs waiting to pick up the loooooong line of exhausted rejected passengers.
    What was the problem? Process, of course. There was a single person who had to find out where you were going and give you the right piece of official paper before you could get into a cab. And instead of walking up the line of waiting people, the "dispatcher" insisted on performing his duty as you were getting into the cab, which serialized the whole process.
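    The cost of that serialization is easy to model. Here's a back-of-the-envelope sketch in Python — every number in it is invented for illustration, not measured at JFK. If dispatching and boarding must happen one after the other for each passenger, the line drains much more slowly than if the dispatcher walks ahead and the two steps overlap.

    ```python
    # Toy model of the cab line. All numbers are assumptions for illustration.
    DISPATCH = 30      # seconds of dispatcher work per passenger (assumed)
    LOAD = 20          # seconds for the passenger to get into the cab (assumed)
    PASSENGERS = 60

    # Dispatcher works only at the curb: each passenger waits for both steps.
    serialized = PASSENGERS * (DISPATCH + LOAD)

    # Dispatcher walks the line and pre-assigns: the steps overlap, and only
    # the slower step limits throughput (a classic two-stage pipeline).
    pipelined = PASSENGERS * max(DISPATCH, LOAD) + min(DISPATCH, LOAD)

    print(serialized / 60)            # 50.0 minutes to drain the line
    print(round(pipelined / 60, 1))   # 30.3 minutes
    ```

    Same dispatcher, same cabs, same passengers; simply overlapping the two steps cuts the wait by roughly 40%. That's the sense in which process "innovation" is often just stupidity elimination.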

    Process "innovation" is, more often than not, simply "stupidity elimination."

    Conceptual innovation

    Conceptual innovation is a pretty big deal. It is limited only by the powers of the human mind. One of my favorite examples is one I encountered around the time I graduated from high school: George Dantzig's Simplex algorithm for solving a linear programming (as in math programming, not software programming) problem. It's cool; it's been called one of the top ten algorithms of the last century.
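    To make the idea concrete, here is a minimal from-scratch sketch of the tableau form of the Simplex method in Python. It is written for illustration only: it handles just maximization with all-"less-than-or-equal" constraints and nonnegative right-hand sides, and ignores degeneracy and numerical niceties that a real solver must handle.

    ```python
    def simplex(c, A, b):
        """Maximize c.x subject to A x <= b, x >= 0 (all b >= 0).
        Minimal dense-tableau Simplex sketch; returns (optimal value, x)."""
        m, n = len(A), len(c)
        # Tableau: constraint rows with slack columns, then the objective row.
        T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
             for i in range(m)]
        T.append([-float(ci) for ci in c] + [0.0] * m + [0.0])
        basis = list(range(n, n + m))  # slacks start in the basis
        while True:
            # Entering variable: most negative objective coefficient.
            col = min(range(n + m), key=lambda j: T[-1][j])
            if T[-1][col] >= -1e-9:
                break  # no improving direction: optimal
            # Leaving variable: minimum ratio test.
            ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
            if not ratios:
                raise ValueError("problem is unbounded")
            _, row = min(ratios)
            basis[row] = col
            # Pivot: scale the pivot row, eliminate the column elsewhere.
            piv = T[row][col]
            T[row] = [v / piv for v in T[row]]
            for i in range(m + 1):
                if i != row and abs(T[i][col]) > 1e-12:
                    f = T[i][col]
                    T[i] = [u - f * w for u, w in zip(T[i], T[row])]
        x = [0.0] * n
        for i, bi in enumerate(basis):
            if bi < n:
                x[bi] = T[i][-1]
        return T[-1][-1], x

    # Textbook problem: maximize 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18.
    val, x = simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
    print(val, x)  # 36.0 [2.0, 6.0]
    ```

    The whole algorithm fits in a page, yet the conceptual leap — walking from vertex to vertex of the feasible region, improving at each step — is what made industrial-scale optimization practical.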

    Computer Innovation

    I know there's lots of physical innovation involved in creating the unprecedented, awesome speed with which computers evolve. There has been nothing comparable to it in human history. I also know it's accompanied by, and partly enabled by, lots of true conceptual innovation and some process innovation. But let's take all that as a given. What do we have?

    We have a set of tools that can control, automate and communicate faster than anything in history, and that improve at a hard-to-comprehend rate. As soon as we get something working with one generation of the things — BOOOM! — everything concerned has just gotten better by 2X or more in speed, cost and size.

    Steel took over from wrought iron when the process of making it got faster and cheaper, and when the results were superior. It took decades. Well, that happens every year or two with computers — the question is, how are you going to take advantage of the improvement?
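    The compounding is worth spelling out. Assuming, purely for illustration, one doubling every couple of years:

    ```python
    # Rough compounding of "better by 2X every year or two." The doubling
    # cadence here is an assumption for illustration, not a measurement.
    years, years_per_doubling = 10, 2
    doublings = years // years_per_doubling
    improvement = 2 ** doublings
    print(improvement)   # 32: a decade of doublings compounds to 32X
    ```

    Steel's comparable gains took the better part of a century; with computers, a factor of 32 can arrive inside a single decade, which is why plans built on the current state of the art are obsolete on delivery.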

    That's computer innovation. What's possible now that wasn't a year ago? What can I do now to create a product or service that, on next year's devices and networks, will make sense? The people who jump on this and make it happen are the innovators.

    The themes are clear: we move from slow transmission of small amounts of data to big, expensive devices (think teletype) to near-instant transmission of huge amounts of data to small, affordable devices (think smart phones). This happened in small steps. Each step was a massive technology and business disruption. Fortunes were made and lost at each step. Fortunes will continue to be made (and lost) as some people see the possibilities and take advantage of them, while most learn the current state of computing and networking, and — amazingly — act as though it won't change. That's actually what the vast majority of people and companies do!!

    Computer innovation is different than the other kinds — not better, just different. If you understand the rules and act accordingly, you can accomplish amazing things. It almost feels like cheating to call it "innovation," but technically it is, so let's go with it.
