Author: David B. Black

  • Replication: Good Idea! Storage replication? Nah!

    Everyone knows losing your data is a bummer. If you're in charge of your organization's data, you know that losing data is the shortest path to "don't let the door hit you on the way out."

    All the ways to assure your data is still available when you wake up tomorrow share a common theme: "make a copy." This is such a popular theme that it has turned into a theme-and-variations: "make a copy; make another copy; copy the copy; etc."

    This sounds simple, but we all know that in computing, stuff is supposed to be complicated. Sure enough, this simple "just copy it" theme has gotten mired in hotly competing ways to get it done. And of course, there are politics — whose responsibility is it to assure against loss?

    So let me boil it down: there are two basic ways to do the copy:

    1. The guys in charge of the data, the storage guys, should copy the data from the original bunch of storage to a second bunch of storage.
    2. The guys who write the data, the applications or systems guys, should get their applications or systems to talk to each other and write the data twice.

    The only reason this is hard is that politics and history are involved. If you had fresh, educated people starting from scratch, it would be no contest: way number 2 wins, almost every time. It's faster, cheaper and easier than way number 1. But since when can we wave a magic wand and eliminate politics and history? The reality is, storage guys own the data, they want to protect it, and so they (usually) really, really, REALLY want to be in charge.

    Here's why they shouldn't be.

    You've got two sites, number 1 and 2. Each one of them has a database and a bunch of storage. Transactions come into site 1 and get written to storage.

    Here's a simple transaction that might be written to the database.

    It's a SQL statement that says the DBMS should write the transaction into the transaction table. The transaction contains the usual fields, things like the unique ID for the transaction, the account number it's applied to, the amount of the transaction, etc. This is usually a simple string, a line or two long.
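
    To make this concrete, here is a minimal sketch of such an insert, using Python's built-in sqlite3 module; the table and column names are hypothetical, not taken from any particular system.

        import sqlite3

        # Hypothetical transactions table; names and columns are illustrative only.
        db = sqlite3.connect("site1.db")
        db.execute("""
            CREATE TABLE IF NOT EXISTS transactions (
                txn_id         TEXT PRIMARY KEY,   -- unique ID for the transaction
                account_number TEXT NOT NULL,      -- account it's applied to
                amount_cents   INTEGER NOT NULL,   -- amount of the transaction
                posted_at      TEXT NOT NULL       -- when it happened
            )""")

        # The whole request the application hands to the DBMS is one short statement.
        db.execute(
            "INSERT INTO transactions (txn_id, account_number, amount_cents, posted_at) "
            "VALUES (?, ?, ?, ?)",
            ("T-000123", "ACCT-42", 1999, "2014-01-15T10:30:00"))
        db.commit()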

    When the database processes the transaction, it gets complicated, of course.

    When the Insert statement goes to the DBMS, the DBMS has to write the transaction itself, but it also has to write at least a couple of the fields to index tables, kind of like card catalogs in old-style libraries that let you find where things are. Indices typically use well-known structures called b-trees, which may require a couple of writes to create a multi-level index, for the same reason you put related files into sub-folders so you have some chance of finding them later. There will certainly be an index for the transaction ID and one for the account number. Finally, there's a log that enables the DBMS to figure out what it did in case bad things happen.

    All this happens when the Insert transaction comes in. One simple request to the DBMS, many writes and updates to the storage, usually involving reading in big blocks of data, modifying a small part of the block, and writing the whole thing out again.
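
    Here is a rough sketch, in Python with made-up sizes, of that write fan-out: one short Insert statement from the application turns into whole-block read-modify-write cycles against storage. The block size and the list of touched pages are assumptions for illustration, not measurements.

        # Assumed page size and statement size; real numbers vary by DBMS and schema.
        BLOCK_SIZE = 8 * 1024        # a typical DBMS page
        statement_bytes = 200        # the Insert statement is a line or two of text

        # Pages the DBMS plausibly touches for this one Insert (illustrative):
        pages_written = [
            "transaction table page",             # the row itself
            "txn_id index leaf page",             # b-tree leaf
            "txn_id index interior page",         # b-tree level above it
            "account_number index leaf page",
            "account_number index interior page",
            "write-ahead log page",               # so the DBMS can recover
        ]

        bytes_written = len(pages_written) * BLOCK_SIZE   # whole blocks rewritten
        print(f"application sent {statement_bytes} bytes to the DBMS")
        print(f"storage saw {bytes_written} bytes written, "
              f"about {bytes_written // statement_bytes}x more")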

    Now we come to the crux of the matter: how do we get the data over to site 2? Does the DBMS at site 1 talk with his buddy at site 2 to get it done, or are the relevant storage blocks in site 1 copied over to site 2?

    Picture a diagram of the two approaches: the DBMS doing the job, drawn in green, and the storage doing the job, drawn in red.

    You'll notice that the DBMS only has to send a tiny amount of data over to site 2, essentially the insert statement. Once it's there, DBMS #2 updates all the storage, something it's really good at doing.

    To replicate the data once it's been stored (in red), HUGE amounts of data need to be sent over the network to site #2. It's not unusual for the ratio to be hundreds or thousands to one.

    Sending data between sites is a relatively slow and expensive operation. That's why, if you want replication that's fast, reliable and inexpensive, you want the application to do the job, not the storage.

    The storage replication people don't like to talk about the things that go wrong, but of course they do. What happens if some of the blocks make it over but others don't? Or they arrive out of order? Or syncing with the database doesn't happen? Or any number of other bad outcomes occur?

    Other applications

    I'm using a database application to illustrate the principle, but similar dynamics work out with other applications. All major databases can replicate (Oracle, MySQL, SQL Server, MongoDB, etc.), the major file systems can replicate (for example, Microsoft has VSS), and all the hypervisors can replicate.

    The hypervisors are amazing. The first thing the storage guys will come back with is how many different applications you have to fiddle with to protect their data. The answer of substance is that the incremental effort for each application is truly trivial, well under 1%. The quick answer is that hypervisors (VMware, Hyper-V, etc.) are universal, and their replication is superior to storage replication. This is exactly why, as organizations move their data centers to the cloud, they are abandoning expensive, inefficient storage vendor-lock-in features like replication in favor of doing it in the hypervisor.

    Conclusion

    You have to protect and preserve your data. Non-negotiable. The storage guys used to have a monopoly on it. But their high-priced, inefficient copy methods are rapidly giving way to more effective, modern ways that save money and are nearly standard in the SLA-centric world of cloud computing.

     

  • How to Evaluate Programming Languages

    Programmers say language A is "better" than language B. Or, to avoid giving offense, they'll say they "like" language A. Sometimes they get passionate, and get their colleagues to program in their new favorite language.

    This doesn't happen often; inertia usually wins. When a change is made, it's usually passion and energy that win the day. If one person cares a WHOLE LOT, and everyone else is, "whatever," then the new language happens.

    Sometimes a know-nothing manager (sorry, I'm repeating myself) comes along and asks the justification for the change. The leading argument is normally "the new language is better." In response to the obvious "how is it better?" people try to get away with "It's just better!" If the manager hangs tough and demands rationality, the passionate programmer may lose his cool and insist that the new language is "more productive." This of course is dangerous, because the rational manager (or have I just defined the empty set?) should reply, "OK, I'll measure the results and we'll see." Mr. Passion has now gotten his way, but has screwed over everyone. But it usually doesn't matter — who measures "programmer productivity" anyway?

    But seriously, how should we measure degrees of goodness in programming languages? If there's a common set of yardsticks, I haven't encountered them yet.

    The Ability of the Programmer

    First, let's handle the most obvious issue: the skill of the programmer. Edgar Allan Poe had a really primitive writing tool: pen (not even ball-point!) and paper. But he still managed to write circles around millions of would-be writers equipped with the best word processing programs that technology has to offer. I've dealt with this issue in detail before, so let's accept the qualification "assuming the programmer is equally skilled and experienced in all cases."

    Dimensions of Goodness

    Programmers confined to one region of programming (i.e., most programmers) don't often encounter this, but there are multiple dimensions of goodness, and they apply quite differently to different programming demands.

    Suppose you're a big fan of object-orientation. Now it's time to write a device driver. Will you judge the goodness of the driver based on the extent to which it's object-oriented? Only if you're totally blindered and stupid. In a driver you want high performance and effective handling of exception conditions. Period. That is the most important dimension of goodness. Of all the programs that meet that condition, you could then rank them on other dimensions, for example readability of the code.

    Given that, what's the best language for writing the driver? Our fan of object orientation may love the fact that his favorite language can't do pointer arithmetic and is grudgingly willing to admit that, yes, garbage collection does happen, but with today's fast processors, who cares?

    Sorry, dude, you're importing application-centric thinking into the world of systems software. Doesn't work, terrible idea.

    It isn't just drivers. There is a universe of programs for which the main dimensions of goodness are the efficiency of resource utilization. For example, sort algorithms are valued on both performance and space utilization (less is better). There is a whole wonderful universe of work devoted to this subject, and Knuth's The Art of Computer Programming is near the center of that universe.

    Consistent with that thinking, Knuth made up his own assembler language, and wrote programs in it to illustrate his algorithms. Knuth clearly felt that minimum space utilization and maximum performance were the primary dimensions of goodness.

    The Largest Dimension of Goodness

    While there are other important special cases, the dimension of goodness most frequently relevant is hard to state simply, but is simple common sense:

    The easier it is to create and modify programs, quickly and accurately, the more goodness there is.

    • Create. Doesn't happen much, but still important.
    • Modify. The most frequent and important act by far.
    • Quickly. All other things being equal, less effort and fewer steps are better.
    • Accurately. Anything that tempts us to error is bad, anything that helps us to accuracy is good.

    That, I propose, is (in most cases) the most important measure of goodness we should use.

    Theoretical Answer

    Someday I'll get around to publishing my book on Occamality, which asks and answers the question "how good is a computer program?" Until then, here is a super-short summary: among all equally accurate expressions of a given operational requirement, the best one has exactly one place where any given semantic entity is expressed, so that for any given thing you want to change, you need only go to one place to accomplish the change. In other words, the least redundancy. What Shannon's Law is for communications channels, Occamality is for programs. Given the same logic expressed in different languages, the language that enables the least redundancy is the best, the most Occamal.
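
    A toy illustration of the idea, in Python, with a hypothetical sales-tax rule standing in for any semantic entity:

        # Less Occamal: the same semantic entity (the tax rate) appears in three places.
        def invoice_total(subtotal):
            return subtotal * 1.08

        def shipping_total(shipping):
            return shipping * 1.08

        def refund_total(refund):
            return refund * 1.08      # change the rate and you must hunt down every copy

        # More Occamal: the rate is expressed in exactly one place.
        TAX_RATE = 0.08               # hypothetical rate, defined once

        def with_tax(amount):
            return amount * (1 + TAX_RATE)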

    Historical Answer

    By studying a wide variety of programs and programming languages in many fields over many years, it's possible to discern a trend. There is a slow but clear trend towards Occamality, which demonstrates what it is and how it's best expressed. The trend is the result of simple pressures of time and money.

    You write a program. Somebody comes along and wants something different, so you change the program. Other people come along and want changes. You get tired of modifying the program, you see that the changes they want aren't too different, so you create parameters that people can change to their heart's content. The parameters grow and multiply, until you've got loads of them, but at least people feel like they're in control and aren't bugging you for changes. Parameters rule.

    Then someone wants some real action-like thing just their way. You throw up your hands, give them the source code, and tell them to have fun. Maybe they succeed, maybe not. But eventually, they want your enhancements in their special crappy version of your nice program, and it's a pain. It happens again. You get sick of it, analyze the places they wanted the changes, and make official "user exits." Now they can kill themselves customizing, and it all happens outside your code. Phew.

    Things keep evolving, and the use of user exits explodes. Idiots keep writing them so that the whole darn system crashes, or screws up the data. At least with parameters, nothing really awful can happen. The light bulb comes on. What if I could have something like parameters (it's data, it can't crash) that could do anything anyone's wanted to do with a user exit? In other words, what if everything my application could do was expressed in really powerful, declarative parameters? Hmmm. The users would be out of my hair for-like-ever.

    What I've just described is how a new programming "level" emerges historically. This is what led to operating systems, except that applications are one giant user exit. UNIX is chock full of things like this. This history is the history of SQL in a nutshell — a powerful, does-everything system at the level of declarative, user-exit-like parameters!

    The Answer

    In general, the more Occamal the language (and its use), the better it is. More specifically, given a set of languages, the best one has

    • semantics that are close to the problem domain
    • features that let you eliminate redundancy
    • a declarative approach (rather than an imperative one)

    Let's go through each of these.

    Problem Domain Semantics

    A great example of such a language is the UNIX utility AWK. It's a language whose purpose is to parse and process strings. Period. You want an accounting system, don't use AWK. You want to generate cute-looking web pages, don't use AWK. But if you've got a stream of text that needs processing, AWK is your friend.

    From the enterprise space, ABAP is an interesting example. While it's now the prime language for writing SAP applications, ABAP was originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report creation processor." In other words, they saw the endless varieties of reporting and created a language for it; then having seen the vast utility of putting customization power in the hands of users, generalized it.

    Features that let you eliminate redundancy

    This is what subroutines, classes and inheritance are supposed to be all about. And they can help. But more often, creating another program "level" is the most compelling solution, i.e., writing everything that the program might want to do in common, efficient ways, and having the application-specific "language" just select and arrange the base capabilities. This is old news. Most primitively, it's a subroutine library, something that's been around since FORTRAN days. But there's an important, incredibly powerful trick here. In FORTRAN (and in pretty much all classically-organized subroutine libraries), the library sits around passively waiting to be called by the statements in the language. In a domain-specific language, it's the other way round!
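
    A minimal sketch of that inversion, in Python, with a made-up "report language": the application supplies a declarative spec, and the generic engine does the calling, rather than application code calling library routines one by one.

        # Base capabilities, written once, in common, efficient ways.
        BASE_CAPABILITIES = {
            "strip":     str.strip,
            "uppercase": str.upper,
            "last4":     lambda s: s[-4:],
        }

        def run_spec(spec, record):
            """The engine walks the spec and calls the base capabilities."""
            out = {}
            for field, steps in spec.items():
                value = record[field]
                for step in steps:
                    value = BASE_CAPABILITIES[step](value)
                out[field] = value
            return out

        # The application-specific "program" is just data that selects and arranges
        # the base capabilities; the engine, not the application, does the calling.
        report_spec = {"name": ["strip", "uppercase"], "card": ["last4"]}
        print(run_spec(report_spec, {"name": "  ada lovelace ", "card": "4111111111111111"}))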

    A related approach, which has been implemented over and over, is based on noticing that languages are usually designed independent of databases, but frequently used together. The result is monster redundancy! Wouldn't it be a hoot if database access were somehow an integral part of the language! Well, that explains how and why ABAP evolved from a reporting language (necessarily intimate with the database) into "Advanced Business Application Programming." And it explains why Ruby, when combined with the database-leveraging Rails framework, is so popular and highly productive.

    A declarative approach

    In the world of programming, as in life, the world divides into the imperative (commands you, tells you how to do something, gives you directions for getting to point B) and the declarative (tells you what should be done, identifies point B as the goal). In short, "what" is declarative and "how" is imperative. A core reason for the incredible success of SQL is that it is declarative, as Chris Date has described in minute detail. Declarative code also tends to be less redundant and more concise. And it doesn't crash.
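
    A small contrast, sketched in Python with made-up invoice data, of the same question asked both ways:

        invoices = [
            {"id": 1, "paid": False, "days_open": 45},
            {"id": 2, "paid": True,  "days_open": 60},
            {"id": 3, "paid": False, "days_open": 10},
        ]

        # Imperative: spell out *how* to get to point B, step by step.
        overdue = []
        for inv in invoices:
            if not inv["paid"] and inv["days_open"] > 30:
                overdue.append(inv["id"])

        # Declarative-ish: state *what* you want; the machinery works out the how.
        overdue = [inv["id"] for inv in invoices if not inv["paid"] and inv["days_open"] > 30]

        # The SQL version is purely declarative:
        #   SELECT id FROM invoices WHERE paid = 0 AND days_open > 30;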

    Conclusion

    I'm sorry if you're disappointed I didn't name Erlang or whatever your favorite is as the "best programming language," but I insist that it's far more novel and useful to decide on what basis we are to judge "goodness" in programming languages, and in programs. In general and in most domains (with important exceptions, like inside operating systems), non-redundancy is the reigning virtue, and languages that enable it are superior to ones that don't. Non-redundancy is nearly always best achieved with a declarative, problem-domain-centric approach. And it further achieves the common-sense goal of fast, accurate creation and modification of programs.

    Occamality is rarely explicitly valued by programmers, but the trend to it is easy to see. There are widespread examples of building domain-specific languages, meta-data and other aspects of Occamality. Many programmers already act as though Occamality were the primary dimension of goodness — may they increase in numbers and influence!

  • Lessons for Software from the History of Scurvy

    Software is infected by horrible diseases. These awful diseases cause painfully long gestation periods requiring armies of support people, after which deformed, barely-alive products struggle to be useful, live crippled existences, and are finally forgotten. Software that functions reasonably well is surprisingly rare, and even then typically requires extensive support staffs to remain functional.

    Similarly, sailors suffered from the dread disease of scurvy until quite recently in human history. The history of scurvy sheds surprising light on the diseases which plague software. I hope applying the lessons of scurvy will lead to a world of disease-free, healthy software sooner than would otherwise happen.

    Scurvy

    Scurvy is caused by a lack of vitamin C. It's a rotten disease. First you get depressed and weak. Then you pant while walking and your bones hurt. Next your skin goes bad, your gums rot and your teeth fall out. You get fevers and convulsions. And then you die. Yuck.

    The Impact of Scurvy

    Scurvy has been known since the Egyptians and Greeks. Between 1500 and 1800, it's been estimated that it killed 2 million sailors. For example, in 1520, Magellan lost 208 out of a crew of 230, mainly to scurvy. During the Seven Years' War, the Royal Navy reported that it conscripted 184,899 sailors, of whom 133,708 died, mostly due to scurvy. Even though most British sailors were scurvy-free by then, expeditions to the Antarctic in the early 20th century were plagued by scurvy.

    The Long Path to Scurvy Prevention and Cure

    The cure for scurvy was discovered repeatedly. In 1614, a book with a cure was published by the Surgeon General of the East India Company. Another with a cure was published in 1734. Some admirals kept their sailors healthy by providing them daily doses of fresh citrus. In 1747, the Scottish naval surgeon James Lind proved (in the first-ever clinical trial!) that scurvy could be prevented and cured by eating citrus fruit.


    Finally, during the Napoleonic Wars, the British Navy implemented the use of fresh lemons and solved the problem. In 1867, the Scot Lachlan Rose invented a method to preserve lime juice without alcohol, and daily doses of the new product were soon standard for sailors, which is how "limey" became slang for a British sailor.


    Competing Theories and Establishment Resistance

    The effective cures that had been known and used by some people for centuries did not exist in a vacuum. There were competing theories. Proposed cures included urine mouthwashes, sulphuric acid and bloodletting. As recently as 100 years ago, the prevailing theory was that scurvy was caused by "tainted" meat. How could this be?

    We've seen this movie before. Over and over again. I told the story of Lister and the discovery of antiseptic surgery — and the massive resistance to the new method by the leading authorities at the time.

    Software Diseases

    This brings us back to software. However esoteric and difficult it may be, software is a human endeavor: people create, change and use software and the devices it powers. Like any human endeavor, some of what happens is because of the subject matter, but a great deal is due to human nature. People are, after all, people, regardless of what they do. Patients were killed for lack of antiseptic surgery — and the surgical establishment fought it tooth and nail. Millions of sailors were killed by scurvy, when a cure had been known, practiced and proved for centuries. Why would we expect any other reaction to cures for software diseases, when the "only" consequence of the diseases is explosive growth in the time, cost and risk to build and maintain software, which is nonetheless crappy and late?

    Is there a general outcry about this dismal software situation? No! Why would anyone expect there would be? Everyone thinks it's just the way software is, just like they thought scurvy in sailors and deaths after surgery were part of life. Government software screws up (healthcare.gov), software from major corporations (like Hertz) is awful, and software from cool new social media companies is inexcusably bad. Examples of bad software can be listed at endless, boring, tedious, like-forever length.

    Toward Healthy Software Development

    If I had spent my life in the normal way (for a software guy), I wouldn't be on this kick. But I didn't and I am on this most-software-sucks kick. Early on, I had enough exposure to large-group software practices to convince me that I wanted none of it. I'd rather actually get stuff done, thank you very much. Now, looking at many young software ventures over a period of a couple decades, the patterns have emerged clearly.

    I have described the main sources of the problems. I have described the key features of disease-free software development. I have explained the main sources of the resistance to a cure, for example in this post. And I have no illusion that things will change any time soon.

    It will sure be nice when the pockets of healthy software excellence that I see start proliferating more quickly than they are now, and when an anti-establishment consensus consolidates and gains visibility more quickly than it is doing today. In the meantime, there is good news: groups that use healthy, disease-free software methods will have a massive competitive advantage over the rest. It's like ninjas vs. a collection of retired security guards. It's just not fair!

  • Obstacles to Scaling: Centralization

    Want to build a scalable application? Use a scalable architecture. What's a scalable architecture? Simple. A scalable architecture is "shared nothing," an architecture in which nothing is centralized. This seems to be harder to achieve the "deeper" you go into the stack; many software architects still seem to like centralized databases and storage. It's sad: a centralized database and/or centralized storage is the most frequent cause of problems, both technical and financial, in the systems I see.

    Scalability

    Scaling is a simple concept. As your business grows, you should be able to grow your systems to match, with no trouble. Linear scalability is the goal: 11 servers should be able to do 10% more work than 10 servers. Adding a server gives you a whole server's worth of additional capacity. With anything less, you don't have linear scalability.

    This is what we normally enjoy with web servers, due to the joys of web architecture and load balancers.
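
    A toy model of why that works, in Python: the web tier shares nothing, so a balancer can hand any request to any server, and each added server contributes a full server's worth of capacity. The server names and round-robin policy are just illustrative.

        from itertools import cycle

        servers = ["web1", "web2", "web3"]      # add "web4" and you get a full server's more capacity
        next_server = cycle(servers)            # round-robin; no shared state consulted

        def handle(request):
            server = next(next_server)
            return f"{server} handled {request}"

        for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
            print(handle(req))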

    Sadly, this is often not what we normally enjoy with databases, because of mindless clinging to obsolete practices and concepts.

    Databases

    Databases are a wonderful example of a tool that was invented to solve a hard problem and has created a lot of value — but has turned into a self-contained island of specialization that tends to cause more problems than it solves.

    Databases are a classic example of a software layer

    Most people in software seem to think that having layers is a good thing. Software layers are, with few exceptions, a thing that is very, very bad! The existence and necessity of the layer tends to be accepted by everyone. It's so complicated that it requires specialists. The specialists are special because they know all about the layer and what it can do. They compete with other specialists to make it do more and more. Their judgments are rarely questioned. Sadly, they are wrong all too often both on matters of strategy and detailed tactics. All these characteristics of software layers apply to the database.

    Database pathology is a classic result of the speed of computer evolution

    Databases were invented by smart people who had a hard problem to solve. But the fact that they have persisted as a standard part of the programmer's toolkit, essentially unchanged, is a classic side-effect of the fact that computer speed evolves much more quickly than the minds and practices of the programmers who use them. This concept is explained and illustrated here.

    How to fix the problem

    There are a couple of approaches, depending on how radical you are.

    • Fix the scalability problem by moving beyond databases

    If you have the chance, you should do yourself and everyone else a favor and move to the modern age. As I show in detail here, the fierce speed of computer evolution has solved most of the problems that databases were designed to solve. The problem no longer exists! Get over it and move on!

    • Fix the scalability problem by moving to shared nothing

    If you're not willing to risk being burned at the stake for the heresy of claiming that a problem involving a bunch of data can be solved nicely without a database, there are almost always things you can do to fix the typical centralized database pathologies.

    The desire to have all the data in a single central DBMS is strong among database specialists. This desire is what fuels the incredible amount of money that goes to high-end solutions like Oracle RAC. The desire is completely understandable. It's not unlike what happens when a bunch of guys get together: bragging rights go to the one with the coolest car or truck.

    However understandable, this desire is misguided, counter-productive and remarkably ignorant of fundamental DBMS concepts, like the difference between logical and physical embodiments of a schema. There is no question that there needs to be a single, central logical DBMS. But physical? Go back to database school, man! All you need to do is apply a simple concept like sharding, which in some variation is applicable to every commercial schema I've ever seen, and you've gone most of the way to the goal of a shared-nothing architecture, which gives you limitless linear scaling. Game over!
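
    A minimal sketch of the idea in Python: one logical schema, several physical databases, and a hash of the account number deciding which shard owns each row. The shard names and the choice of hash are assumptions for illustration.

        import hashlib

        # One logical schema, four physical homes for the data ("shared nothing").
        SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

        def shard_for(account_number):
            """Every row for a given account lives on exactly one shard."""
            digest = hashlib.md5(account_number.encode()).hexdigest()
            return SHARDS[int(digest, 16) % len(SHARDS)]

        print(shard_for("ACCT-42"))     # always the same shard for this account
        print(shard_for("ACCT-43"))     # other accounts spread across the rest

        # Need more capacity? Add shards and rebalance; no single central box to outgrow.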

    Analysis

    Computers evolve far more quickly than software, which itself evolves far more quickly than the vast majority of programmers. There is nothing in human experience that evolves so quickly. This fact explains a great deal of what goes on in computing.

    I've found that the more layers a given computer technology is "away from" the user, the more slowly it tends to change, i.e., the farther in the past its "best practices" tend to be rooted. In these terms, databases are pretty deeply buried from normal users, metaphorically many archaeological layers below the surface. They are "older" in evolutionary terms than more modern things like browsers. Similarly, storage is buried pretty deep. That's why most of the people who devote their professional careers to them are mired in old concepts. If you think about it, you realize that DBMS and storage thinking strongly resembles thinking about those ancient beasts that used to rule the earth, mainframes!

    Conclusion

    Most software needs to be scalable. "Shared nothing" is the key architectural feature you need to achieve the gold standard of scalability, linear scalability. Shared nothing is common practice among layers of systems that are "close to" users, but relatively rare among the deeper layers, like database and storage. But by dragging the database function to within a decade or so of the present, and by applying concepts that are undisputed in the field, you can achieve linear scalability even for the database function, and usually save a pile of money and trouble to boot!

     

  • Who Makes the Software Decisions?

    When the home team loses game after game, everyone starts wondering who's in charge, and shouldn't changes be made? Well, this is exactly what's happening in software. It's gotten so bad that it's making the front page of the tabloids.

    [Front page of the Post]
    Who's in charge in software? Who makes the decisions? Could it possibly be that the wrong people are empowered to make crucial decisions in software, and that things will only get better if we make a change?

    Software decision makers

    Who makes important decisions in software? The answer is obvious: anyone but programmers (ABP)! Programmers are the ones who do the real work: create and modify source code. They work with various tool sets in one or more programming environments and create the code that leads to the results needed by the business. You might think that programmers, therefore, would be front-and-center in selecting the toolsets and methods to use to get the desired results most effectively. This is rarely the case. Such decisions are typically made by people who are not, and in many cases never were, programmers.

    What the industry thinks

    I receive e-mail solicitations every day for various software products. I'm often invited to attend seminars or webcasts. A typical example arrived recently.

    I'm not drawing attention to it because it's exceptional in any way. The next part of the solicitation is also typical: the list of who should attend.

    Who should attend? ABP, of course! Again, the company that sent the solicitation is not doing anything wrong. In fact, they're being smart. They are inviting the people who make important decisions about programming, i.e., ABP.

    How things work in other fields

    In just about any field you can think of, the more highly specialized and skilled the person doing the work, the more involved that person tends to be in all important decisions about the work. While kids starting out in baseball are told what gloves and bats to use, accomplished players have their own gloves and bats they have selected.

    Even when the front-line people in other fields don't make the ultimate decisions, the important managers who do tend to be former front-line people.

    Software is different

    Things are different in software, of course. In sports, anyone can watch the game. They see the players on the field or court. TV commentators can circle a player on the screen and tell you to watch the thing they did, whether stupid or smart. Whatever it was, it makes sense to the viewer.

    There is no equivalent in software! Software is invisible! (to everyone but programmers…) All those important decision-makers ever see is reports. Not only don't they open the hood and look underneath, they can't even see the car! The decision-makers largely rely on rumors and hearsay, but nonetheless develop strong opinions about how best to win on a playing field they can't see, where a game is played they can't play, following rules about which they are entirely ignorant. Hmmm, how is this going to turn out, I wonder???

    It doesn't have to be this way!

    There are places where business-as-usual in software decision-making is … blatantly violated! Someone who … wrote the code! … is actually in charge of things. Of course, since he's a guy entirely without management training — OMG, he doesn't even have an MBA, that's how bad it is! — the place must be a disaster, right?

    One such place is Athena Health. Athena powers doctors' offices. I first encountered them more than ten years ago, when I had the pleasure of having a phone interview with the guy who wrote their code. I was hearing lots of skepticism, which is why I was having the call. This was still the time when internet bubble thinking ruled technology, and the rumor was that this guy was using "toy" technology and building something that "wasn't scalable." Heh.

    The real problem was that the skill of the guy who wrote the code was … get ready for this … writing good code, and making excellent decisions about writing code along the way! He had no experience in or talent for telling ignorant investors what they expected to hear. Bless him!

    We invested, and the company has done, and continues to do, great. A couple years ago, I was pleased to have Ed Park, who is now EVP and COO of the company, attend my nerdfest, a gathering of top CTO's of companies I'm associated with. Here he is explaining something.

    [Photo: Ed Park at the July 2011 nerdfest]
    Ed wrote Athena's original code. He still knows stuff, and continues to make decisions based on the substance, not just go-through-the-motions process.

    Conclusion

    While lots of spinning goes on to disguise the fact, software projects typically fail, and even ones that "succeed" have crappy software. The Post was right (see above) to feature the question, "who makes the software decisions?" The industry's answer is clearly and unambiguously ABP (anyone but programmers). If you want the software your organization produces to sort of, actually, you know, work, you might want to think about removing the "ABP" restriction from the job requirement for software decision-makers.

  • Fundamental Concepts of Computing: Speed of Evolution

    Nothing we encounter in our daily lives changes or evolves as quickly as computing. All our habits of thinking are geared towards things that evolve slowly, compared to computing. This is a simple concept. It is disputed by no one. But it has implications that are vast and largely undiscussed and unexplored. It clearly deserves to be a fundamental concept of computing, along with a few corollaries.

    Normal evolution speeds

    Our planet evolves. Most of the changes are slow, with peak events mixed in. Life forms evolve slowly, over tens of thousands of years. Human culture evolves more quickly — too fast for some people, and not nearly fast enough for others. Human capabilities also evolve, but for something like speed to double over a period of decades is astounding.

    Take people running as an example. The men's world record for the mile has been reduced by roughly 25% over the last 150 years or so. Impressive for the people involved, and amazingly fast for human change.

    Once you shift to things made by humans, the rate increases, particularly as science and technology have kicked in. We've invented cars and they've gotten much faster since the early ones. Miss Dorothy Levitt, photographed in a 26hp Napier at Brooklands in 1908, was called the "fastest lady on earth" for driving at 97 mph in 1906. In 2013, Danica Patrick won pole position during qualifying rounds at the Daytona Speedway by averaging over 196 mph.

    In this case, speed roughly doubled over about a hundred years.

    Computer evolution speeds

    Computers are different. Moore's Law is widely known: power doubles roughly every 18 months. And then doubles again. And again. And again. Every time someone predicts an end to the doubling, someone else figures out a way to keep it going. This is a fact, and it's no secret. It's behind the fact that my cell phone has vastly more computational power and storage than the room-sized computer I learned on in 1966.

    The numbers should blow anyone's mind, even if you've seen them before. Processor transistor count (which roughly correlates with power) has increased by a factor of 10, then another, then another. In sum, power has increased about one million times over the last forty years.

    It would be mind-blowing enough that we have something in our lives that increases in speed at such an incomprehensible rate. But that's not all! Everything about computers has also gotten less expensive! For example, DRAM prices have gone down over the last twenty years from over $50,000 per GB to around $10. In other words, to about 2% of 1% of the price of twenty years ago.
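
    A back-of-envelope check of both claims, in Python, assuming a doubling period of roughly two years for transistor counts:

        years = 40
        doubling_period = 2                       # assumed; Moore's Law is often quoted at 18-24 months
        growth = 2 ** (years / doubling_period)
        print(f"transistor growth over {years} years: about {growth:,.0f}x")   # ~1,000,000x

        price_then, price_now = 50_000.0, 10.0    # dollars per GB of DRAM, then and now
        print(f"DRAM now costs {price_now / price_then:.2%} of what it did")   # 0.02%, i.e., 2% of 1%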

    That's faster and cheaper in a nutshell.

    So what? Everyone knows that things change quickly with computers, what's the big deal? It's just the way things are!

    Here's what: this simple fact has profound implications.

    Everything about human beings is geared for people and things that evolve at "normal" speed. Our patterns of thought, the things we do, most of our behaviors were developed for "normal-speed" evolvers, i.e., everything but computers (EBC). A surprising number of them break, are wrong or yield crappy results when applied to computers. There is no reason at all to be surprised by this; in fact, anything else would be surprising. What is surprising and interesting is that the implications are rarely discussed.

    Computers evolve more quickly: it matters!

    When you apply patterns of thought and behavior that may be appropriate for EBC to fast-evolving computers, those thoughts and behaviors typically fail miserably. This "impedance mis-match" explains failure patterns that persist for decades. Of course everyone knows that computers are different from other things — just as they know that Indian food is different from Chinese food. What they tend not to know is that computers are different in a different way (because of the speed of evolution) from EBC.

    Here are a couple examples.

    The mainstream "wisdom" for software project management is essentially the same, with minor modifications, as managing anything else, from building a house to a new brand of toothpaste. It's not! That's one of the many reasons why the larger organizations that depend on those techniques fail to build software effectively.

    We treat software programming skills like any other kind of specialized knowledge, like labor law. They're not! Thinking that they are is one of the many reasons why great software people are 10 times better than really good ones, who are themselves 10 times better than average ones.

    The normal ways people go about hiring software engineers is crap. They think that hiring folks who deal with a subject matter that changes so dramatically is the same as hiring anyone else. It's not! They also think that extended, in-depth experience with a particular set of technologies is really valuable. It's not! It's actually a detriment!

    Software engineers tend to learn to program in a certain way, using a given set of tools, techniques and thought patterns. Those tools were designed to solve the set of problems that existed with computers at a certain point in their evolution. But computers evolved from that point! Quickly! The programmers are doing the equivalent of hunting for rabbits with weapons designed for hunting Mastodons, blissfully unaware of what's appropriate for computers the way they are today!

    Software is hard and isn't getting easier

    Software isn't a problem that gets solved — oh, now we know how to do it, finally. It doesn't get solved because the underlying reality (computers) evolves more quickly than anything else. The examples I mentioned are things that "everyone" is sure must apply to software. How could they not? The fact that they yield consistently horrible results seems not to break the widespread faith in the mainstream approach to software. 

    These and many other broadly accepted falsehoods explain why so many things about software are broken and (worse) don't seem to get fixed, decade after decade, when superior methods have been proven in practical application. Why is everyone so resistant to change? Is everyone stupid?

    Aw shucks, I admit it: "everyone is stupid" was my working hypothesis for explaining things like this. But I now have a more satisfying hypothesis: everyone is so used to dealing with "normal speed things," EBC's, that they just can't help themselves from applying to computers the methods and patterns of thought and behaviors that work reasonably well in most of their lives. Since nearly everyone gets the same lousy results, the conclusion everyone draws is that there's something about computers that's just miserable.

    Conclusion

    There is nothing comparable to computers in the rest of our human experience. Nothing evolves at anywhere close to the speed of computers, getting more powerful while getting cheaper at hard-to-comprehend rates. We apply methods and patterns of thought that work well with practically everything, and those methods fail when applied to computers. But they fail for everyone! The conclusion everyone draws from this is that computers are just nasty things, best to stay away from them and avoid blame. It's the wrong conclusion.

    Computers are understandable. The typical failures are completely explained by the mis-matched methods we bring to them, like trying to catch butterflies with a lasso. When people use methods that are adapted to the unprecedented evolutionary speed of computers, things go well.

  • People and Substance in Software Management

    What's the right focus for managing a bunch of software engineers? Should you focus first on assuring that you have a congenial team with good working relationships, or should you focus first on assuring that the substance of what they're creating is the best it can be? This is a question similar to the choice between process and substance that I covered in a previous post, and it's just as important.

    Substance vs. Process and People

    Just as schools of education purport to teach people how to educate without reference to the subject being taught (math, history, etc.), management schools (you know, the kind that give MBAs) purport to teach people how to manage without reference to the kind of work that is being done or the kind of people who are being managed. The idea that "good management" and "management skills" are entirely distinct from the actual work being done pervades our work environment. The more esoteric the work skills, and thus the more difficult it is for a manager to have a clue about what his people are doing, the more this seems to be the case. Software development is right up there with particle physics in its ability to induce glassy eyes among the non-initiated.

    Substance vs. People

    It's really tough if you're a manager and you're trying to manage something when you have no clue what the people doing the work are doing. This is what leads to the strong tendency for anyone who claims to be a good manager to focus on process instead of substance, and to focus on people and human relationships rather than substance. Truth be told, if you don't have a clue about what people are doing or talking about, you don't have a lot of choice. A normal human being observing an exchange between two engineers can easily discern that engineer A is being nasty and disrespectful to engineer B, but how can that normal human being see that engineer B is proposing a completely stupid approach that will take a lot of time and then fail, while engineer A is being quite restrained (relatively speaking) in his reaction to the uneducated drivel drooling out of engineer B's mouth? The manager is likely to react to the only thing he actually sees, i.e., the disrespect being given by engineer A to engineer B, and therefore chastise engineer A, when actually it's engineer B who should be sent back to the farm teams, if not worse.

    I am not claiming that there is a binary choice between people and substance, that if you focus on one you can't pay attention to the other. Of course, both are important. But as I look at software shops, I frequently see managers treating one of these as the primary goal, while figuring that the other will mostly take care of itself if they get the primary one right. The less they know about the substance, the more likely this is to be the case, but even experienced engineers who are trying to make the transition to management get a clear message on this subject from their superiors: good management is independent of substance, and a "good manager" makes sure above all that the human relationships in his group are in good shape, and that if they are, good software will naturally be the result.

    Management focus and company success

    In the best of all possible worlds, a manager would create harmony and excellent relationships among the programmers, while at the same time achieving maximum productivity towards making the company successful. That's a nice fantasy. But that's all it is. While being nasty doesn't imply great programming any more than being misunderstood implies that you're a genius, the fact is that there's a strong correlation between a focus on substance and company success.

    The reason putting major emphasis on substance (i.e., "who's right" is more important than "who's more respectful of other opinions") is a winner is simple. The groups who all agree that writing the best possible software to meet the company's needs is the goal are more likely to … write the best possible software to meet the company's needs! Compare this to groups who all agree that having personal harmony among their members is more important than achieving the best possible software — doesn't it make sense that you'll make all sorts of compromises in software to optimize everyone's feelings? Oooo, better give everyone a medal, we don't want to risk diminishing anyone's self-esteem, do we?

    Organizational Size

    I've noticed that the larger the overall organization, the more that general, content-free management is the norm, and the more likely it is to be imposed on groups that develop software. This is completely understandable from a human psychology point of view. Everyone thinks that the skills and knowledge they have are the really important ones, and the others (which they don't have) pale in importance by comparison. The big bosses are likely to come from sales, finance or consulting and think that having an MBA is a good thing. Why should software be different from, say, running a store, or making deliveries? Or making tubes of toothpaste? If you're a … drum roll, please … manager, then that's what you do: manage. Your skill trumps all the others!

    This is the dominant view of people who run organizations, or are on the management ladder. In order to advance, you have to adopt the organizational management-centric creed. Otherwise, you'll be "stuck" at the bottom of the management ladder, along with all the unwashed, unrewarded hordes who have to sit quietly with their heads down as various management fashions sweep through the grasping folks with sharp elbows who occupy the rungs of management ambition. If you even try to say "software doesn't work that way," all you've done is further disqualify yourself from advancement in rank.

    If you've ever wondered why it is that small, under-funded organizations are nearly always the ones who build ground-breaking software, and that giant, "well-managed" organizations with hundreds or thousands of programmers rarely do, this is your answer. "Good management practices" assure that good software rarely emerges from these large organizations, and that outstanding programmers with superior ideas about how to do things are marginalized and otherwise made unproductive.

    Software and Baseball

    This may sound cynical and harsh. But think about, say, baseball. While it is tempting to think that all those beards had something to do with the Red Sox winning the 2013 World Series, it is more likely that their outstanding pitching, fielding and hitting enabled them to defeat an excellent Cardinals team.

    The 2012 season was awful for the Sox. It was their first losing season since 1997, and their worst season since 1965. How did they respond? Did they update their project management methodology? Did they send in HR specialists to get everyone to be nicer to each other? Did they sideline people based on failing to high-five someone when they struck out with men on base? They did none of these things. Here's what they did:

    • They fired the manager, even though he had a year left on his contract.
    • They got rid of a bunch of players based on … their poor performance!
    • They acquired a bunch of players based on … their superior performance!
    • Finally, they focused their energies on, you know, … winning games! Doing whatever it takes!

    Building good software is surprisingly like building a winning baseball team. Everyone does their job and puts everything they have into delivering great results. If someone isn't performing, too bad — you're not good enough to be a Red Sox! (Oh, just think how badly his feelings must hurt. How cruel!) Even so, sometimes someone is so awesomely good that it seems like he's carrying the team on his shoulders — and with Series MVP David Ortiz, you can make a strong argument that's exactly what he did with the excellent Sox.

    Conclusion

    People should be nice to each other. There should be mutual respect. But software is about results. The difference between average and great can easily be 10X for an individual, and 100X for an organization. You don't get "great" by focusing on process issues or good general management practices. You get "great" by … wait for it … focusing on software! Just like a musician focuses on the music and a painter focuses on the painting, the best software people focus on the software. The substance.

  • When is Software Development “Done?”

    Almost any activity you can think of, from building a road to composing a symphony, gets to a point where it's done. If not, something awful has happened, and you declare failure and move on. Software projects seem to be different, for no obvious reason. Quite frequently, software isn't a throw-it-out failure, but then it's not done either. What's going on here?

    Building a house

    Why can't software be like building a house? My uncle built a house for himself back in the 1950's, starting with the foundation. He did a great deal of the work himself, as much as he could, all the way up to finishing the chimney for the fireplace and furnace. And then it was done! He and his wife could enjoy a nice time with their nephews in front of the fireplace of the completed house.

    Which actually was completed, unlike all those software projects, which drag on and on, refusing to get completed or to die. Perhaps this is why books and movies about zombies have become so popular!?

    If houses were like software…

    If houses were like software, instead of actually being done with them, they'd all be like the house built by Sarah Winchester, who bought an unfinished farm house in 1884 and spent the 38 years from then until her death having it worked on and expanded continuously, all day and all night. It's now known as the Winchester Mystery House.

    Building Software

    Frequently, software projects are just failures. In spite of the traditional massive padding of estimates, things take even longer than projected. After the usual remedies (denial, punishing the innocent, rewarding the guilty, etc.) are exhausted, more money and resources are thrown at the project to "rescue" it. This inevitably has the effect of adding to the mountain of evidence supporting the thesis advanced by Fred Brooks in his classic "The Mythical Man-Month" that adding resources to a late project makes it even later. Finally, the project is declared to be a "success" and promptly put on the shelf, never to be mentioned in polite company again, or, in rare cases, the project is declared a failure so that blame can be put onto the innocent target of some politically powerful person's agenda.

    However, there are exceptions. I see such exceptions constantly in the growing, innovative companies I work with. These companies don't just grow. They learn, experiment, evolve, extend and sometimes take great leaps. As modern companies, they do this in close collaboration with their software, and frequently software is all or a major part of the service they provide.

    Instead of thinking of the software as a house that needs to be designed and built, it's more appropriate to think of these companies as starting out with baby software that needs to keep growing and becoming stronger and more independent, like an infant grows to be a toddler and so on. If you stop developing software in this context, you guarantee the demise of the business. With a static business, it's appropriate to think of "finish or fail" as the relevant choices for software. With an innovative, growing business, it's appropriate to think of "evolve or die" as the relevant choices for software.

    Conclusion

    Everyone wants software to be like everything else: figure out what you want, build it, declare completion or failure, and move on. But when software is the engine that runs your business and you're trying to get on track to be a big success, the rules are different. In that case, the rules for software are: make the most important changes, figure out what is most important next, do it, clean up the software a bit, run some experiments, refine the winning approach, and keep evolving. Work fast, work accurately, be responsive, always learn, and keep learning. That's how you win with software.

     

  • Edward Snowden, Daniel Ellsberg: Ineffective Security, then and now

    In 1971, the New York Times started publishing excerpts from the closely guarded, highly top secret Pentagon Papers. It was an explosive public exposure of long-held secrets about the Vietnam War, and was a huge controversy. In 2013, the Guardian started publishing excerpts of closely guarded, highly top secret NSA operations. It was an explosive public exposure of the top secret operations of the most well-funded, computer-savvy security organization in the US. There is every reason to believe that security breaches will continue to happen, because the "experts" in charge of security just don't know how to get it done. They didn't know how 42 years ago, they don't know now, and they show no signs of even being interested in learning how to provide effective security.

    The RAND Corporation

    The RAND Corporation was one of the original top-secret research institutes. It was started after World War II to provide a place for top brains to figure things out that would help the military. In contrast to most places with top secret information at the time, the atmosphere inside RAND was purposefully academic and collegial. There were often open seminars and presentations anyone could attend, so that cross-disciplinary fertilization could take place. You had to have a very high level of background checking and security clearance to be admitted — but once you were in, you could go anywhere and talk with anyone, since everyone knew that if you were there, you had the appropriate clearances.

    People at RAND did truly pioneering work in econometrics, operations research, game theory and computing.

    The secrets at RAND needed to be faultlessly secure. While it looked like an ordinary office building close to the beach in Santa Monica, in fact it was a heavily fortified and guarded fortress, with armed guards at every entry point.

    Daniel Ellsberg

    The story of Daniel Ellsberg and the Pentagon Papers is well known. Mr. Ellsberg was a RAND employee, with degrees in economics from Harvard and a stint in the Marine Corps. He was involved in secret studies concerning the Vietnam War in the 1960's, and had access to what became known as the Pentagon Papers while at RAND around 1969. He made copies of literally thousands of pages at RAND … and walked out the door with them. Fortress RAND and all the armed guards kept the "normal" bad guys at bay — while letting the former Marine with a PhD, dressed in a coat and tie and carrying a briefcase, walk calmly out with what they were supposed to be protecting.

    David Black

    1971 09 Harvard student ID card
    I was a scruffy-looking Harvard undergrad in 1970, and had gotten a summer job at RAND to work on the early ARPAnet, the predecessor of today's internet. Before starting work, I had to undergo a thorough security clearance investigation; agents actually visited many of my friends and asked probing questions. By the time I started work in July 1970, I had my SECRET clearance and was pending for TOP SECRET. I had a great time solving pioneering problems with the computers. RAND had an early IBM 360, and it was the first non-DEC machine to be connected to the ARPAnet, so we had to overcome a host of very basic issues, like resolving the conflicting coding schemes (EBCDIC vs ASCII), byte lengths (8 bit vs. 6 bit) and word lengths (32 bit vs. 36 bit), in addition to everything else.

    I was also amazed at everything else you could learn at RAND. While protests raged on the streets, inside the protected walls of RAND you could find out what was really going on in Vietnam and Cambodia, from people who had just returned from those places.

    In retrospect, I realize that I got a personal demonstration of how to conduct ineffective security that summer at RAND. The protestors had no chance of breaking into RAND and stealing its secrets. In fact, none did. The guards waved through most of the employees coming through the employee entrance. Except for the one who looked too much like the "hippies" outside. I got stopped and triple-checked every time. On the way out, all the clean-cut, well-dressed, brief-case-carrying employees like Daniel Ellsberg were similarly waved through — no danger there! But that tall, gangly, scruffy Harvard kid? Better stop him and search him thoroughly. He's just the kind of person who would steal our secrets. While they were doing everything but strip-searching me, Ellsberg was shopping the 7,000 pages of secrets he had already brazenly walked out with, under the friendly eyes of the clueless guards.

    The NSA leak of 2013

    The NSA is more of a fortress than RAND ever was. No way anyone could break in and come out alive. Cyber attack? Unlikely, for the same reason. A clean-cut employee-equivalent? Same story as RAND. Once on the inside, have fun! Do what you want, take what you want — we're too busy guarding against those scary outsiders to bother with you — you've got a clearance, you're OK! Except, like Ellsberg, Snowden was not OK.

    Ineffective then, Ineffective now

    I've previously discussed the standard methods for securing important things like bank and medical records. These methods have two fatal flaws.

    First, they take a fortress approach to security. They assume the attacks will come from outside the "walls" by outsiders. They ignore insider attacks, which are the most damaging ones by far.

    Second, they take a procedural, legalistic approach to security, assuming that if enough lawyers write enough regulations and procedures, and enough enforcement takes place through audits and certifications, the problem will be solved. They assume that complex, step-by-step procedures spelling out how to implement security are intrinsically better than simple definitions of what must be secured, with penalties for failures. The trouble is, no one executes the procedures perfectly, the procedures themselves are flawed, and the bad guys are always figuring out new ways to be bad.

    Either of these flaws is sufficient to explain our never-ending security crises and our ever-spiraling costs for trying to be secure. Together, they guarantee bad results.

    Summary

    Our security systems are straight from the time of castles and knights: we imagine that the threat is from the scary guys in armor charging around on big horses "out there." Then, with the wrong threat in mind, we ... get the lawyers on the case! We bury ourselves in policies, procedures, regulations, certifications and audits, all of which take time and money, and most of which are completely useless. Then the bad guy cleans up his act enough to get hired, ransacks the place, flees laughing all the way … and we're shocked?? The only shocking thing is that, 42 years after the Pentagon Papers, we're piling even more time and money into ramparts and moats, when the main threat has always been the traitor inside the walls.

     

  • Cyber Security Standards are Ineffective against Insiders like Edward Snowden

    The case of Edward Snowden, the fellow who ran off with a big pile of secrets from the super-secret NSA, illustrates a problem with the mainstream approach to computer security: it's expensive, it's burdensome, and it just doesn't work! Strengthening existing standard security measures, which is what usually happens after embarrassing episodes like this, will just make things worse.

    Securing what should be secure

    Other people can argue about what various agencies should or should not be doing and whether they should be secret. Putting all that aside, there are lots of things most of us want to be kept secret, for example our health and financial records, and for sure we want to prevent unauthorized use of that information. How hard is this to accomplish?

    Apparently it's pretty hard. There are huge security compromises that take place all too often, and smaller ones with great frequency. Security breaches resemble car crash deaths: there are so many of them (tens of thousands a year in the US!), that only the most gruesome of them make the news. If an agency with a secret budget probably in the billions, whose whole mission is about secrecy, can't stop an amateur like Edward Snowden, how is it that anything stays secret?

    Approaches to Security

    The vast majority of our thinking about security threats makes a couple crucial assumptions.

    Our thinking assumes that the threat comes from an outsider, and that the outsider attacks from the outside. The outsider (we think) probes to find a weakness in our defenses, and when he finds one, smashes in and grabs what he wants.

    Regardless of the source of the threat, we assume that we can establish a procedure that will thwart any breach of security. We assume that if we are rigorous in our requirements for process, documentation, testing and much else, we can eliminate security threats.

    As the NSA case demonstrates, these assumptions are false. Regardless of your feelings about whether Snowden is a hero or a traitor, he clearly demonstrates the fact that our current approach to security is a waste of time.

    Insiders are the real threat

    The first assumption is the "bad guys out there" assumption. Huge amounts of money are spent on "intrusion detection," firewalls, and endless things that amount to building a castle wall that is high and thick so that our secrets can be protected.

    Here's what happens. The marauding knights come sauntering along and see those high walls. Naturally they check it out. They're impressed by everything about your wonderful castle: the moat, the guards, the mean-looking guys on the ramparts, the whole bit. So if you were a sensible bad guy, what would you do?

    You'd go to the nearest town, trade in your bad-guy clothes for a respectable suit or workman's clothes, or whatever the castle is looking to hire. Then you'd walk up to the employee entrance and apply for a job! Once you were inside, you'd keep your nose clean and figure out the lay of the land. Once you had it scoped, one day you'd leave at the end of your shift a much richer person than you were before, so rich that, well, you didn't bother to report to work at the castle any more.

    I was first educated about this by Paul Proctor, who gave me a copy of his 2001 book, The Practical Intrusion Detection Handbook. Most of the book is about what people want to buy, which is based on the "bad guys are out there" theory. But he has a whole chapter on "host-based intrusion detection," in which he spells out the methods and importance of detecting and thwarting bad guys who have managed to get a job working for you. This is what everyone should be doing, and all these years later, we're not!
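
    To make the host-based idea concrete, here's a minimal sketch (my own illustration, not anything from Proctor's book) of the kind of check such a monitor might run: flag any insider whose volume of sensitive-file reads jumps far above their own historical baseline. The record format, the names and the threshold are all made up.

    ```python
    from collections import defaultdict

    # Hypothetical access-log records: (user, sensitive files read that day)
    access_log = [
        ("analyst_a", 40), ("analyst_b", 55), ("analyst_c", 38),
        ("analyst_a", 45), ("analyst_b", 60), ("analyst_c", 7000),  # the spike
    ]

    def flag_insider_spikes(log, multiple=10):
        """Flag any user whose daily reads exceed `multiple` times their own prior average."""
        history = defaultdict(list)
        alerts = []
        for user, count in log:
            if history[user] and count > multiple * (sum(history[user]) / len(history[user])):
                alerts.append((user, count))
            history[user].append(count)
        return alerts

    print(flag_insider_spikes(access_log))  # [('analyst_c', 7000)]
    ```

    A real host-based system would watch many more signals than raw read counts, but the point is the same: the threat model is the trusted person already inside the walls, not the marauder outside them.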

    Tell me what to do, not how to do it!

    The second assumption is that we can define step-by-step procedures that will prevent security breaches. Hah! Not true! The vast majority of our security procedures have been written by people who are lawyers; if they're not, they're sure acting like they are!

    What we should do is tell you what to accomplish in simple terms, like "Don't murder anyone. No matter how mad or drunk you are, just don't do it. If you do, we'll execute you or put you in jail for a long time. So there." That's all you need, when you're telling someone what to accomplish.

    The equivalent for HIPAA would be something like: "Don't give anyone's health records to anyone except that person or their designated representative, like a parent if they're a kid."

    The equivalent for NSA would be: "Hey, everything we're doing here is real important stuff regarding national security, like what our name says. So don't let anyone who doesn't also work for NSA have it. Period. Ever. Otherwise, you're a traitor, and we'll nail you."
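
    The difference between "what" and "how" fits in a few lines. Here's a minimal, purely hypothetical sketch of the HIPAA-style rule above expressed as a single check; every name in it is made up for illustration.

    ```python
    def may_release_health_record(record_owner, requester, designated_reps):
        """The whole rule: only the person, or their designated representative, gets the record."""
        return requester == record_owner or requester in designated_reps.get(record_owner, set())

    # Hypothetical usage: a parent is the designated representative for their kid
    reps = {"patient_kid": {"parent_of_kid"}}
    print(may_release_health_record("patient_kid", "patient_kid", reps))       # True
    print(may_release_health_record("patient_kid", "parent_of_kid", reps))     # True
    print(may_release_health_record("patient_kid", "some_third_party", reps))  # False
    ```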

    Instead, what companies and agencies are required to do is conform to an ever-growing collection of detailed methods for supposedly getting secure. Except you spend so much time conforming to the regulations that some guy walks out the door with all your secrets!

    Here's the bad news: Snowden wasn't an exception; he's simply a particularly famous typical case in security-regulated organizations.

    Conclusion

    Edward Snowden is the tip of a security-breach iceberg. Credit cards are being stolen in spite of onerous security regulations. Health records are being compromised, in spite of increasingly onerous regulations. Our approach to security is flawed, fundamentally and by assumption. It's like we're in the water and we're trying to swim by blowing on the water. It's not working, and the solution is not to try blowing even harder. The solution is to take an aggressive, non-regulatory approach to the most likely perpetrators, insiders.

     

  • Wartime Software Book Available

    I've been threatening to release my book on Wartime Software. It is now available as a Kindle book.

    BBSB cover WTS
    Wartime Software is all about writing software when competition and speed matter. It's about releasing more often. It's about using new methods, as different from the usual ones as building bridges in wartime is from building them in peacetime.

    Here is the introduction, which should give you the idea.

    Most people assume there is one “right” way to build software, and that’s that. While there are various fashion trends that infect software from time to time, none of them are as different as they like to think they are.

    There are some important but little-discussed facts about the mainstream consensus of software development:

    • It is mostly organized to give non-technical people confidence that things are OK, meaning on-time and on-budget. Its highest principle is predictability. Not speed.
    • It mostly doesn’t work. Studies support what everyone in the field knows: most projects fail outright, or have their goals changed to avoid admitting failure.

    So what we have are methods that are slow – and produce crappy results! What happened to slow but sure, or slow but steady? What we’ve got is slow and stupid.

    If everyone you compete against uses the same crappy methods, you’ll be OK. Your projects will be perpetually late and disappointing, but so will everyone else’s, so you’ll be performing “up to standard.”

    But what if you’re not? What if you’re competing against a group that gets way more done in much less time? I’m not talking 10 or 20% here; I’m talking many whole-number factors, like 10, 50 or more. What’s going to happen? It’s simple: you’re going to lose! If that’s OK with you, stop reading right now, close your eyes, and get lost in your muzak. You’ll be happier.

    If your goal is to learn the standard, accepted techniques of software as widely practiced, don't waste your time with this book. But if you're pioneering or really under the gun and need to find a way to program the way software ninjas program, you'll find some useful information in this book.

  • Storage For Big Data

    In Big Data, computers and storage are organized in new ways in order to achieve the scale required. The major storage companies just assert, without justification, that their old products are just fine. They're not.

    Big Data is way bigger than even the biggest single computer can handle. In Hadoop, you solve the problem with an array of servers that can be as big as you like. Hadoop organizes them for linear scaling. While most storage vendors continue to plug their old centralized storage architectures and claim they're good for Big Data, the only solution that's actually scalable is an array of storage nodes, directly connected to the compute/storage nodes. Hadoop organizes the computing to use such an array of compute and storage nodes optimally, and it can grow without limit, for example to thousands of nodes.
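
    The scaling argument is simple arithmetic. Here's a rough sketch with made-up throughput numbers (they are not benchmarks of any product): storage attached directly to each node adds bandwidth with every node, while a shared central controller is a fixed ceiling no matter how many nodes you add.

    ```python
    def aggregate_throughput_mb_s(nodes, per_node_mb_s=100, controller_cap_mb_s=2000,
                                  centralized=False):
        """Illustrative only: direct-attached storage scales with node count,
        while a centralized array tops out at whatever its controller can do."""
        if centralized:
            return min(nodes * per_node_mb_s, controller_cap_mb_s)
        return nodes * per_node_mb_s

    for n in (10, 100, 1000):
        print(n, aggregate_throughput_mb_s(n), aggregate_throughput_mb_s(n, centralized=True))
    # 10   1000    1000
    # 100  10000   2000   <- the shared controller becomes the bottleneck
    # 1000 100000  2000
    ```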

    Hadoop has its own file system and database. The NAS systems pushed by legacy vendors just add expense and slow things down. The old centralized-controller SAN systems are expensive and not scalable. Some vendors claim they're good for Big Data because they use lots of SSD – but that's way too expensive for Big Data. Others promote hybrid systems, but make them affordable by playing tricks like compression, which just add expense and slow things down.

    Exactly one vendor has a storage system that is best for Big Data: X-IO. X-IO has exactly the kind of storage nodes that Hadoop wants. Its independent storage nodes are linearly scalable, without limit. Its software makes spinning disks deliver at least twice the performance of any other system. It can optionally incorporate SSDs for even better performance, without the distracting tricks used by others – you just get better blended performance, without effort. Because of the inherent reliability of the X-IO ISE units, you don't need as many copies of the data.

    If it's Big, if it's Cloud, if it's virtual, X-IO is the place to go for storage.

  • The Bogus Basis of “Trending on Twitter”

    People write and talk about what's "trending on Twitter" as though the trend meant something. It doesn't. It's based on deeply flawed Twitter search software that gives random, widely varying results. I know the weatherman is often wrong, but what if he said it was going to be sunny and in the 70s tomorrow, and as often as not there was a blizzard — would you keep listening? It's the same with Twitter, only worse.

    Trending on Twitter is everywhere

    It's amazing how widespread this useless stuff is. New York Times editors are in on the game.

    Times editors
    It's even now got a prominent place on Wall Street!

    Bloomberg
    You can not only follow what's trending in general, but you can narrow it down to different locations.

    200 locations
    When a Twitter account is hacked, bad things happen.

    Hacked
    And sure enough, the markets react.

    Market plunge
    We seem to care not only about what the Boston bomber says on Twitter:

    Boston
    But we also pay attention to the useless Twitter trends about it:

    Innocent
    We've really got to stop this. It's not as though we've got reliable data here. We just don't. Twitter has been a technical joke for years, and there are no signs of improvement.

    Trending on Twitter is meaningless garbage

    I don't have access to perform a universal test. But I did perform a test, and anyone else can reproduce my results. I did searches for the same term over a couple of weeks and saved the results. Sometimes the results were correct, but most of the time, items that were there before disappeared, only to pop up again on a subsequent search. Sometimes just a couple of things were missing, and sometimes the gap was massive. Here is the evidence.

    Then I took the search that appeared to have the most gaps, and performed the identical search about a week later. As I documented, one search had just 5 items and the other had 32, when they should have been identical. About 85% of the search results had been dropped by Twitter!

    "Trending on Twitter" is based on comparing results of a search performed on one day to the same search performed on other days. If the number of results goes up or down, you've got a trend. Or so you think. But what if the results are really as bad as I have documented? I found that "blackliszt" went up or down by a factor of 6, like 600%! Wow!

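    Here's a minimal sketch of the day-over-day comparison a "trending" score rests on, with made-up counts. It is not Twitter's actual code; it just shows how randomly dropped results manufacture a trend out of nothing.

    ```python
    # Hypothetical daily result counts for the same search term
    true_counts     = [32, 32, 32, 32]   # what actually exists
    reported_counts = [32, 5, 30, 12]    # what a flaky search returns

    def day_over_day_trend(counts):
        """Percent change from each day to the next -- the core of any 'trending' score."""
        return [round(100 * (b - a) / a) for a, b in zip(counts, counts[1:])]

    print(day_over_day_trend(true_counts))      # [0, 0, 0]       -- no trend at all
    print(day_over_day_trend(reported_counts))  # [-84, 500, -60] -- a wild fake "trend"
    ```
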
    Conclusion

    Twitter software has always been bad. Management has learned to disguise the awfulness by suppressing the appearance of the "fail whale," but they clearly haven't actually, you know, made the software better. Anyone who takes its results as actually meaning something is depending on bogus data.

     

  • Twitter Software Quality: An Oxymoron

    Twitter software quality stinks. As I've demonstrated. On revisiting and updating the facts, I've decided that "Twitter Software Quality" should be promoted to the status of oxymoron, joining the august company of terms such as "southern efficiency," "northern hospitality," and "government worker."

    A Brief History of Random Awfulness

    I took samples of searches for "blackliszt" on these dates: Apr 18, 19, 20, 22, 24, 25, May 1, 8. A total of 8 samples.

    All searches were done as "All" to tell Twitter I wanted, you know, all the results, not just the ones Twitter felt like disclosing at the moment.

    I only grabbed the first page from each search. I've shown the results in another post. Of the 8 searches, the one on May 1 is the most extreme. Here's a copy of the May 1 search for "blackliszt:"

    XX
    You can see there are 5 tweets in the list of results, from Apr 11 to Oct 13. I decided to try to find out how many tweets there actually were between Oct 13 2012 and May 1, 2013, the date of the search pictured above.

    I did this research on May 8. At least on May 8, Twitter was willing to admit that there were a total of 32 tweets in the same date range, although one of them (Feb 27) appears twice. Here they are:

    May 8 top
    May 8 top 2
    May 8 top 3
    May 8 top 4
    May 8 top 5
    May 8 top 6
    A Twitter search for "blackliszt" performed on May 1 resulted in a list of 5 tweets going back to Oct 13. The same search for "blackliszt" performed on May 8 (above) resulted in a list of 32 tweets that should have been returned by the May 1 search. Maybe there are more! Given that one is double-counted (Feb 27), who the &*() knows?? What I do know is that on May 1, Twitter decided to discard 27 out of 32 potential results of a search. Roughly 85% of the tweets were gone!
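
    For anyone who wants to redo the arithmetic, here's the same comparison as a few lines of code, with placeholder IDs standing in for the real tweets:

    ```python
    # Placeholders for the 32 tweets the May 8 search returned in the same date range
    may_8_results = {f"tweet_{i}" for i in range(32)}
    # The May 1 search returned only 5 of them
    may_1_results = {f"tweet_{i}" for i in range(5)}

    dropped = may_8_results - may_1_results
    print(len(dropped), "of", len(may_8_results), "results dropped")  # 27 of 32 results dropped
    print(f"{100 * len(dropped) / len(may_8_results):.0f}% missing")  # 84% missing
    ```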

    Summary

    I already knew that Twitter software quality was bad. It turns out that it's worse than I ever imagined. It's "Twitter-quality"-is-an-oxymoron bad.

    You know all those "trending on Twitter" items you're seeing now that seem so modern and cool? They all assume that getting more or fewer results from a search means something. We now know that the results can easily go up by a factor of six, or drop by the same factor, just because of Twitter "quality." It's obvious that "trending on twitter" deserves to be the punchline of a joke, not something that anyone pays attention to.

  • Wartime Software: Optimizing for Speed

    Software Development is a mission-critical issue for increasing numbers of organizations, particularly the growing number of "software-enabled service" organizations. Which makes it all the more surprising that there is a lack of consensus about how best to do it.

    I've written about software development quite a bit on this blog. Now, I'm in the final stages of preparing my small book on Wartime Software Development for publication as an inexpensive Kindle book. This post about bridges in war and peace gives some of the flavor. 

    Wartime Software is all about optimizing the process for speed instead of predictability. Here's a short excerpt from the book about what optimizing for speed really means.

    The usual procedures for producing code are supremely arrogant. They are arrogant because we decide that we can figure out what the customer wants, and the customer should simply wait while we “get it right.” We’re so sure that we know what the customer wants that we build it, and not just any old way, but we build it industrial strength, loaded up with piles of documentation, test plans for every little jot and tittle, so that when we (finally) roll it out, it’s on silver platters and with bands playing, with code ready to stand the test of time…and sadly, all too often, we’re wrong! We’ve misunderstood the customer, built things they don’t want, failed to build things they do want, built some things they need in confusing, incomplete or simply perverse ways. We frequently spend a year solving last year’s problem, and when we deliver our well-intentioned mess next year, the customer and the market have moved on and sometimes our competitors have leapfrogged us. Most software projects resemble your worst nightmare of a pork-barrel politics public works project, like the “bridge to nowhere,” the project in Alaska that was projected to cost nearly $400 million to build a bridge as long as the Golden Gate Bridge and higher than the Brooklyn Bridge to Gravina Island, an island with only 50 residents, no stores, no restaurants and no paved roads. Who cares how well the bridge was designed?

    The design of the bridge (or the software) is not the most important thing – the most important thing is the unmet needs of the people who will use the thing you intend to build. And so the number one priority is to discover what those needs are, from the only authoritative source. And by the way, the customer’s opinions may be more relevant than your opinions, but they are not truly authoritative – only the customer’s actions are authoritative.

    And that means that you have to find a way to write code really quickly, so that you can turn your ideas (that hopefully you’ve mostly stolen from customers or other successful services) into services, modify them quickly based on customer feedback, and either discard them and move on, or evolve them until you’ve improved your service, using the real actions of real customers at every step of the path to make your critical decisions. You have to optimize all your processes for speed in order to pull this off.

    And remember – if you’re not doing things this way, you’re probably building a software “bridge to nowhere.”

  • Twitter Software Quality Stinks

    There are big problems with software quality. The problems range from social apps to corporate systems to academia, including "mission critical" software and everything in between. The social apps in particular seem to have decided the problems are embarrassing. But instead of actually, you know, fixing the problems, they seem to have decided to mask them! Twitter is a great example of this disease.

    Two ways of Responding when you don't know the Answer

    Suppose you're a kid and someone is demanding answers from you. Either you know the answer or you don't. If you know the answer, it's simple:  just give the answer!

    Q: When did Columbus sail the ocean blue?

    A: 1492

    If you don't know the answer, there are two ways to respond: the right way and the wrong way. The right way to respond is simple: Just say you don't know!

    Q: When did Columbus sail the ocean blue?

    A: I don't know.

    The wrong way to respond is a little more complicated. You have to guess at an answer, state it as though you knew the answer, and hope no one cares or that the person asking doesn't know either so you can get away with it.

    Q: When did Columbus sail the ocean blue?

    A: 1542.

    When the question you're asked has several answers, you can be wrong in a different way. For example:

    Q: Name the ships in Columbus' voyage to the New World.

    A: The Nina and the Santa Maria.

    Q: Is that all of them?

    A: Yes.

    Twitter's Response when it doesn't know the answer

    I never thought it would happen, but now I have fond feelings for Twitter's Fail Whale, which I haven't seen recently. You would think that the fail whale not showing up as often would be a good sign. It's not. It's a sign that Twitter has decided that it's better to lie than to admit it doesn't know the answer to the question you're asking. Instead of forthrightly saying "I don't know," Twitter now brazenly gives the wrong answer. Even worse, it gives a different wrong answer from one day to the next!

    Twitter's Bogus Search results

    Here are some screen shots of the results of the identical query, for "blackliszt," over a couple of weeks. I always selected "All results" to remove any excuse that Twitter was selecting the "top" results to help me out.

    Let's go through time. Here's the result from the first day, Apr 18:

    BLApr18

    I tried again the following day, Apr 19, and was quite surprised with the result: the Rebelmouse tweet simply disappeared, pulling an older one into the results!

    BLApr19
    On Apr 20 I added a tweet and did the search again. My new tweet was there, and RebelMouse came back!

    BLApr20
    On Apr 22 I tried yet again and got another brand-new variation: this time Cadencia's tweet disappeared!

    BLApr22

    The results were unchanged on Apr 24 and 25. I gave Twitter a couple days to lose some data, and had my patience rewarded when I searched again on May 1. The first result was Rebelmouse; the most recent posts, my post on ballet, Cadencia and Rob Majteles, were all gone! Here's May 1:

    BLMay01
    Finally, look at this simple list of my tweets taken Apr 23, not a search:

    DBBApr23
    Note that I had tweets on Apr 10 and Mar 25, both of which included "blackliszt," neither of which appeared in any of the search results!!

    Sadly, I can't even claim that the folks at Twitter have it out for me. It's just the way things work there … uhhh, I mean, the way things don't work there…

    Conclusion

    Social Media software quality stinks. It's worth every cent you paid for it. Oh, you didn't pay anything for it, you say? Well, that's my point. When a program like Twitter gives you an interface, lets you do a search, gives you a result that's even worse than my "Nina and Santa Maria" answer, brazenly implies that it's the right answer and everyone just ignores the issue, something is wrong. 

    Q to Twitter exec: Why does your software randomly leave out results from searches? Why should anyone look at "trending tweets" or anything else when the data is randomly bogus?

    A: I've never been asked that question before. The answer is simple: I do it because I can, because I don't care, because no one else seems to and because I'm worth a great deal of money and you're not. Next question please.

    Thanks to MaryAnn Bekkedahl for inspiring me to write this up.

  • What can Software Learn from Ballet?

    Most people think that software and ballet are distant topics, completely unrelated. While you can imagine a program that helps a choreographer keep track of things, what could software possibly learn from ballet? Answer: a great deal. It would take many posts to describe it all.

    First Some Ballet

    The annual Youth America Grand Prix competitions just concluded.


    Yagp0

    These are a big deal, and people in the field rave about it:


    Yagp9

    The movie First Position, about the competition, was recently released, and it is well worth seeing.

    First_Position_2011

    While the competitions are for young people, professionals are involved in everything. During the Gala that ends the season, the program consists of the youthful winners of the competition and some of the best professionals in the world. Here are a couple of pros performing at a recent Gala:

    Yagp1

    And here are a couple of the amazing kids:

    YAGP2

    I attended the 2013 Gala; it was totally amazing. What was most striking was that all the kids were in the audience, cheering and generally having the time of their lives. They knew what they were seeing, and were all at some stage of training to be able to do it. They totally got it, and appreciated every nuance.

    Software

    Watching the whole spectacle made me think about what it would be like to substitute "programming" for "ballet."

    There are strong similarities. Both are hard to do. Not many people can do it well. It takes years of hard work and dedication to get good at it. Huge discipline, focus and concentration are required. Small mistakes can wreck an otherwise perfect performance. While it's primarily an individual discipline, group performances are often required, and are even harder but can lead to even better results. The best performances seem effortless. Beauty and symmetry are important aspects of successful performances. And in both cases, you are orchestrating a flow through time.

    There are of course huge differences between the two. The most important differences have nothing to do with what you wear to work.

    Here is one of the young competitors doing something completely amazing:


    YAGP4

    There are young programmers who can do the programming equivalent of this, but how can they get identified, rewarded and encouraged? What opportunity do they have for watching and learning from people who are way more advanced than they are? Even at a more advanced level, when they take courses, they're sitting in class and being taught mostly irrelevant stuff by academics, most of whom aren't serious programming practitioners, and don't even respect it! They think their papers and conferences are much deeper and more important. Sad. It's always best to be taught by someone you want to emulate, rather than by people who look down their noses at actually, you know, writing code.

    They got all the kids on stage at the end. They did a lot of amazing things a picture can't capture, but here's one anyway, from last year:


    YAGP3
    Who would have thought that the field of ballet would be, in many ways, a role model for the transformation of software training and organization? But now I realize that it is, and I encourage others to pick up this ball and run with it. There is quite a bit we can learn from "artsy" fields, including architecture, music and sculpture. Not to mention steamboats and antiseptic surgery.

  • Software Development: the Relationship between Speed and Release Frequency

    There is a deep, fundamental relationship between the velocity of software development and the frequency of releases. I hope this relationship will be studied in detail and everything about it understood, but the basics are clear: with minor qualifications, the more frequently you release your software, the more rapidly it will advance by every relevant measure. It will advance not only in feature/function, but in quality!

    Mainstream thinking on Releases and Development Speed

    The relationship I propose, "more releases = more features & better quality," is counter to the vast majority of mainstream thinking in software. In fact, in those terms, it's counter-intuitive. Here's why.

    Think about software development in the simplest possible terms. You've got to define it, plan it, do it, check it and release it. Five basic steps, which apply across a wide variety of process methodologies. Each step takes some time, right? After you do the work, you've got to check it and then release it. And you can't just check what you did — you also have to make sure you didn't break anything that used to work, the "keep it right" part of quality, which grows ever larger as your software evolves.

    This "check and release" process is a kind of necessary evil, the way most people think of it, and as quality failures hit you, it tends to get bigger and longer. A clever project manager (an oxymoron if there ever was one, except when intended ironically, as it is here) will naturally think, gee, let's go from 6 releases a year to 4. By cutting the overhead of the two extra releases, we'll be able to buy some development time back.

    Yup, that really is how people think! Fewer releases = more time to do other stuff = we get more done.

    Not!

    A Real-life Example

    A good example of a company that illustrates the proper relationship between release frequency and development speed is RebelMouse. RebelMouse is a next-generation, socially-fueled publishing platform. It can be used to turn boring-appearing blogs like BlackLiszt from this:


    BL snip

    to this, a snapshot from my RebelMouse page:

    RM page

    Increasingly, they are used by big-media places, for example for Glee:


    Glee

    and the recently released real-time publishing curation features were used for The Following to create a social firestorm:


    The Following

    RebelMouse — the Facts

    The CTO/founder of RebelMouse is Paul Berry. Here he is below explaining something to his fellow nerds at the nerdfest I held a while ago.


    2011 07 02 nerdfest first day 008s

    RebelMouse has grown like crazy in its short life. Currently there are about 280,000 websites powered by RebelMouse, and that number is growing over 100% month-to-month. Their sites have over 2 million unique visitors a month.

    Does RebelMouse have just a handful of releases a year? Duhhh. Try over 10 a day. A day! And there are more than 30 developers, who are not all in the same location.

    Digging in

    There is a lot to be said on this subject. For now, I'm just going to keep it to a single simple but important observation.

    The relationship between development speed and frequency of releases does not hold up at a fine-grained level; so, for example, given two organizations, one of which has a release every 10 weeks and the other every 11 weeks, any difference in speed will be random. Similarly, if the two organizations release 5 times a day and 10 times a day, any difference in speed will also be random. But at a coarse-grained level, I observe large differences. HUGE differences.

    Conclusion

    RebelMouse is far from the only example, but they show the relationship between development speed and release frequency very nicely. They move much more quickly than most development organizations of their size — in fact, they manage to push hundreds of releases in the time most organizations would have been able to limp through an "agile" (heh) development cycle or two.

     

  • Storage Vendors in the Cloud

    When computer vendors encounter a major technology disruption, they respond the same way, with fervent claims that their products are really well suited for the new environment, when of course they are not. The response of storage vendors to the new ground rules of the Cloud provides a timely illustration of this near-universal phenomenon.

    Our Product is Definitely in Fashion

    Computers are complicated. Many people have trouble just keeping the buzzwords in mind, much less understanding what, if anything, is behind them — much less actually understanding things. It's particularly tough when a wave of fashion sweeps the industry, as it so often seems to. Then everyone but everyone immediately claims to be at the forefront of whatever that fashion is.

    This was true years ago when the good thing to be in databases was "relational," and suddenly every database vendor revealed that their precious products were, in fact, "relational." At first I laughed. What idiots these marketing people were — why, anyone can tell that C's product wasn't relational when it was built, isn't now, and probably never will be. What a joke!

    It turns out the joke was on me. Whatever the buzz-fashion-word of the moment, industry-standard practice is to claim it. And for most people to accept the claim!

    This is a big deal for the established vendors. There is a lot of money riding on maintaining market share as the new trend takes hold. When "relational" becomes the hot thing, and your marketing people are any good at all, then by golly, our database is relational — because I say it is!

    The Cloud — the Buzz-Fashion-Word of the Moment

    Now the Cloud is hot. Surprise, surprise — everyone's product claims to be "cloud-ready," "Cloud-optimized" or whatever it is they think you want to hear.

    Everyone's product is just great for the Cloud: the major vendors (EMC, NetApp) and everyone else.

    Inside the Marketing Department

    Something like the following dialog probably happens inside each major vendor.

    Bright New Kid: "I'm having real trouble producing that marketing piece about our products for the Cloud. I've read a lot about Cloud, and we just don't fit. I don't know what to do!"

    Seasoned Veteran: "You're making it too hard. We make storage, right? Our storage is great, right? Cloud needs storage, just like everything else, right? So our storage is ideal for the Cloud. That's it!"

    Bright New Kid: "I'm not so sure –"

    Seasoned Veteran: "You're over-thinking it, kid. Our storage is great, so it's great for Cloud. Just get over yourself and write it."

    What's Different about the Cloud?

    There is no cloud industry association to certify the criteria for being cloud-appropriate. This is just as well, because the cloud is just another name for something we already do — run data centers.

    But the reality is that things are different in the cloud.


    The bottom line is simple — it's the bottom line! Literally! Meaning, the cloud is all about making things faster to implement and change; better performing and more responsive; and less expensive. I make no secret of my preference here. But the point and my analysis would be the same even if I had no horse in the race. It's not about feature X or service Y, all of which are irrelevant or migrating up the stack in Cloud applications. It's about the bottom line, not just purchase price, but TCO.

    The vast majority of data centers have been run essentially without competition. The people who pay the bills haven't been able to choose. It's the in-house data center or nothing.

    With the Cloud, suddenly there's competition. Buyers compare on price and quality — and can even switch if the promises prove to be hollow ones! So things are different in the Cloud. The arm-waving is replaced by the simple measures of capacity, performance, energy and space utilization, management costs, and maintenance.

     

  • Storage For the Cloud

    The massive movement to Cloud architectures puts new demands on systems vendors that most of them are unprepared to meet, while at the same time devaluing special features that many vendors used to differentiate their products. Nowhere has this trend been more evident than in storage.

    For years, storage has had its own silo in the data center, SAN and/or NAS, with its own storage managers and administrators. They became dependent on various storage-centric features of the different vendors.


    The Cloud has disrupted this comfortable island of automation.

    The Cloud is all about reliable, low-cost self-service, with tremendous automation and integration. Service, capacity and performance need to be available on-demand, with no human intervention. Everything needs to be able to grow and shrink as application needs change, with a sharp eye to capacity utilization, since it’s easier than ever to switch Cloud vendors when one stumbles or is simply no longer competitive. The same observations are true of “private clouds.”

    Virtualization is a key part of achieving Cloud goals, and virtualization changes the rules of the systems game. Functions that were traditionally part of storage are now performed as an integral part of operating systems and/or virtualization software, to make them more agile.

    Many companies have observed that traditional, controller-centric, feature-rich SAN and NAS solutions are not appropriate for the Cloud environment. They are simply using inexpensive JBODs for storage and depending on massive replication by the file system to provide reliability, typically making a minimum of 3 whole copies of the data, before backups, in order to assure availability. If the alternative is an old-technology NAS or SAN, this is a smart idea, which is why its use is growing so quickly.
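
    The cost of the make-lots-of-copies approach is simple arithmetic. Here's a minimal sketch; the 3 copies match the common file-system default described above, and the dollar figure is purely illustrative.

    ```python
    def cost_per_usable_tb(price_per_raw_tb, copies):
        """With N whole copies of every block, each usable TB consumes N raw TB."""
        return price_per_raw_tb * copies

    # Illustrative only: assume $50 per raw TB of commodity JBOD capacity
    for copies in (3, 2):
        print(copies, "copies ->", cost_per_usable_tb(50, copies), "dollars per usable TB")
    # 3 copies -> 150 dollars per usable TB
    # 2 copies -> 100 dollars per usable TB  (more reliable nodes let you keep fewer copies)
    ```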

    X-IO has a whole different approach to storage. It’s not NAS. It’s not SAN. It’s not cheap JBODs with a make-lots-of-copies file system. It’s an intelligent storage node that not only uses, but enhances, the drives from one of the major OEM suppliers, Seagate. X-IO makes them better by a large margin, and it doesn’t do all the things that are no longer needed in the Cloud environment. X-IO gives you more of what you do need for Cloud, and none of what you don’t need.

    The X-IO approach to storage assumes you’re smart about building your data center. You’ll take a building-block approach, with lots of well-configured servers, network and storage blocks, with a layer of software on top of it all to orchestrate it. You want each building block to be great at what it does – do a lot, cost a little, and play its role in the overall system.

    In the end, storage comes down to a small set of storage components used by everyone. Rather than ignore the details of the drives and wrap them in fancy, useless (in the Cloud) packaging like everyone else, X-IO adds value, real value, to the drives themselves. This value persists as Seagate develops and releases new drives – the 2 to 5X X-IO advantage over every other storage solution will ride the waves of new drives into the future.

    X-IO spent over 10 years in deep development of unique IP (the first 5 as a Seagate division). Over that time it invented and hardened algorithms and code and incorporated the experience of having thousands of units in the field over many years. The results are clear, and differentiate the X-IO storage brick approach from everyone else's. Given a set of drives, X-IO will make them:

    • Perform at least twice as fast, often 3-4X anyone else when near capacity
    • Deliver at least twice the throughput
    • Fail at less than 1% the rate of anyone else
    • Not require replacement during their 5-year warranty
    • Take much less space, often 30-50% less
    • Require much less power, often 50% less
    • Require less cooling

    Finally, X-IO can incorporate SSD drives as required to achieve even better performance, though this is needed much less often than with other vendors.

    In service operations, Cloud is measured on cost and SLAs. X-IO storage is all about cost and SLAs. X-IO is the winning choice of storage for Cloud.
