• Why Software Quality Stinks

    We know our users want high-quality software. We know our software, in general, stinks. What’s our problem?

    We claim to value quality

    [Dilbert comic]
    We say we value quality highly. Who will admit to wanting poor quality?

    But we value almost everything else more

    [Dilbert comic]
    Who would design a lawn mower without a “dead man’s” switch – the kind you have to keep pressing to keep the mower running, so that if anything goes wrong, the mower stops and nothing bad is likely to happen? But when software is involved … watch out!
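    The dead-man’s switch is a default-off design: the machine runs only while a fresh “operator present” signal keeps arriving, and stops the moment the signal goes stale. Here is a minimal sketch of the idea in Python; the class name, timeout value, and method names are all invented for illustration, not taken from any real controller:

    ```python
    import time

    class DeadMansSwitch:
        """Default-off control: the motor may run only while the
        operator's grip signal is fresh (hypothetical sketch)."""

        def __init__(self, timeout_s=0.5):
            self.timeout_s = timeout_s
            self.last_press = None  # no signal seen yet

        def press(self):
            # Called each time the operator squeezes the handle.
            self.last_press = time.monotonic()

        def motor_may_run(self):
            # Any doubt (no signal, or a stale one) means STOP.
            if self.last_press is None:
                return False
            return time.monotonic() - self.last_press < self.timeout_s
    ```

    Note the failure direction: a dropped signal, a crash in the sensing code, or plain silence all resolve to “stop”, never to “keep running.”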

    There was a case this year of a person being crushed by an elevator.

    The elevator, of course, was controlled by software. The software was written to respond to someone pressing a button to go to a floor, but not to make sure all systems were “go” before sending the signal to start moving. So, while a person was entering the elevator, the software started the go-to-a-floor sequence without proper checking, and before anyone could react …

    [News photo: a body is wheeled away]
    The stories at the time blamed the workers for failing to follow procedures. They probably didn’t. But why would software be written to even make this outcome possible?
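    The fix is conceptually tiny: before issuing any motion command, the controller checks every interlock, and refusal is the default. A hypothetical sketch follows; the sensor fields and their names are invented for illustration, not taken from any real elevator controller:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Car:
        door_sensor: str       # "open", "closed", or "locked"
        doorway_clear: bool    # light-curtain / obstruction sensor
        maintenance_mode: bool

    def safe_to_move(car: Car) -> bool:
        """Every interlock must pass; any single failure vetoes motion."""
        return (car.door_sensor == "locked"
                and car.doorway_clear
                and not car.maintenance_mode)

    def go_to_floor(car: Car, floor: int) -> str:
        # The button press is a request, not a command: the car moves
        # only after all systems are verified "go".
        if not safe_to_move(car):
            return "refused: interlock failed"
        return f"moving to floor {floor}"
    ```

    With a check like this, a button press while someone is halfway through the door is simply refused rather than acted on.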

    Why does bad software happen?

    Poor quality doesn’t need to be achieved – poor quality happens all on its own, without our having to do anything to make it happen! That’s why it’s so widespread! It’s just like when you’re writing natural language. Who thinks that the text of Twelfth Night that we read today was Shakespeare’s first draft?

    Beyond the fact that high quality is something that must be achieved, here are the most basic reasons it is so rare:

    • High Quality is unrewarding

    The typical incentive structure for quality practically guarantees we won’t get much of it. High-quality software doesn’t get kudos – it’s expected. You may get slapped if the quality is stupendously bad. When it’s adequate or better? Yawn.

    • Our focus is wrong

    There is a long tradition of attempts to improve quality in software. Think test-driven development. Think about all the people preaching how “quality” is different from (and better than) “mere testing.” Decades of effort have resulted in little to no progress, except for spending more time and money. Chief among the errors in focus is the way we put our effort into assuring the quality of the new stuff we’re building instead of what customers mostly care about, which is whether the old stuff continues to work.

    • Bad quality is a side-effect of poor development methods

    If you are using project-management-style techniques to build software, achieving high quality is nearly impossible. At least if you’re using “grow the baby” techniques, it’s possible to maintain a reasonable level of quality.

    Conclusion

    Most software stinks. We know it. We give lip service to quality, but little more. When we apply standard methods to achieve higher quality, we are rarely rewarded for our efforts. QA is one of the lowest status jobs in software, and most likely to be cut when there is pressure of any kind.

    Given the situation, we get exactly the quality we should expect.

  • Computer History

    In software, history is ignored and its lessons spurned. What little software history we are taught is often simply wrong. Everyone who writes or uses software pays for this, and pays big.

    But we know about history in software — there's Babbage, the ENIAC, etc.

    Yes, we've all heard about various people who are said to have invented modern computing. A shocking amount of what we are taught is WRONG.

    Babbage is a case in point. People just love to go on and on about him. There are problems, though. I'll just mention a couple.

    [Photo: Charles Babbage, 1860]

    One problem is that his machines simply didn't work, even after decades of effort, huge amounts of skilled help, and a great deal of money. He must have known they wouldn't; although he was personally wealthy, it was other people's money he spent on his famous dalliance.

    Another problem is that his best idea wasn't his. The idea of using punched cards

    [Photo: Jacquard loom punched cards]
    to contain the program was invented in France and was a key aspect of the Jacquard Loom — a machine that pre-dated all his work, and a machine that actually worked and was in widespread use.

    The ENIAC is another good example of what appears to be the typical pattern in computing, in which someone invents a good thing and makes it work, and then someone else steals it, takes credit for it, and tries to cover up the theft, often without delivering results as good as the original.

    [Photo: the ENIAC]

    If you only read the standard literature, you would still be convinced that the ENIAC and its inventors were giants of the field. Once you read everything, you discover that reality is more interesting. It turns out that the inventors of the ENIAC were "inspired" by prior inventions, much like Babbage and the Jacquard Loom. In this case, the inspiration was the Atanasoff-Berry Computer.

    [Drawing: the Atanasoff-Berry Computer]
    Here is an excerpt from the ruling in the patent dispute that settled the issue:

    Judge Larson had ruled that John Vincent Atanasoff and Clifford Berry had constructed the first electronic digital computer at Iowa State College in the 1939-1942 period. He had also ruled that John Mauchly and J. Presper Eckert, who had for more than twenty-five years been feted, trumpeted, and honored as the co-inventors of the first electronic digital computer, were not entitled to the patent upon which that honor was based. Furthermore, Judge Larson had ruled that Mauchly had pirated Atanasoff's ideas, and for more than thirty years had palmed those ideas off on the world as the product of his own genius.

    Other fields don't need history — why should software?

    Not true. Other fields are saturated with history.

    Politicians study history in general and the last election in particular. Fiction writers frequently read fiction, current and historic. Generals study old battles for their lessons; even today at West Point, they read about the Civil War. Learning physics is like going through the history of physics, from Galileo and Newton through Planck and Einstein to the present. Even the terms used in physics remind you of its history: hertz, joules and Brownian motion.

    Software, by contrast, is almost completely ahistorical. Not only are most people involved uninterested in what happened ten years ago, even the last project is unworthy of consideration – it’s “history.”

    Consequences of the lack of history

    War colleges study past wars for the highly pragmatic purpose of finding out how they were won or lost. What was it the winner did right? Was it better weapons? Better strategy? Better people? Some combination? And how exactly did the loser manage to lose? Was it a foregone conclusion, or was defeat snatched from the jaws of victory? People who conduct wars are serious about their history — they want to win!!

    In software, no one is interested in history. Everyone thinks they know the "right" way to build software, and thinks that the only possible source of loss is failing to do things the "right" way — the requirements weren't clear; the requirements were changed; I wasn't given enough time to do a proper design; there was no proper unit testing; the lab for testing was insufficiently realistic. The list of complaints and excuses is endless, and their net effect is always the same: crappy software and whining: I need more people, more time and more money. Because studying history is so rare, few are exposed to the software "wars" that are fought and won by teams that didn't follow their rules.

    There is only one conclusion to be drawn: software people would rather lose with lots of excuses than win by doing things the "wrong" way. Ignoring history is a great way to stay in this comfortable cocoon.

    When software history becomes as important a part of computer science education as physics history is of physics, we'll know it's approaching credibility. Until then, everything about computer science, education and practice will continue to be a cruel joke.

  • Bridges and Software in Peace and War

    We build bridges in times of peace. They take a long time to build; they tend to last a long time, but sometimes they crash. We also build bridges in war-time. Built in the face of enemy fire, they go up really quickly, and tend to serve their purpose well.

    What is war-time software? Are there methods that enable us to build software in a fraction of the usual time in highly competitive circumstances, while still serving its purpose well? The answer is yes.

    Peace-time bridge building

    The bridge over the Firth of Forth in Scotland was the world’s first major steel bridge. It took about seven years to build, was completed in 1890, and is in use to this day.

    [Photo: the Forth Bridge]

    As many as 4,000 men worked on the bridge at a time, with 57 losing their lives.

    The Golden Gate Bridge in San Francisco is more recent, having been completed in 1937 after about 4.5 years of work.

    [Photo: the Golden Gate Bridge]

    Peace-time bridge building: the results

    I’ve given just a couple of examples, but they are typical: bridges take years to build in peace-time, and people die while building them. And while we expect them to never crash, in fact they do. It’s not as rare as you may think! Here’s the collapse of a bridge in Canada in 1907 that killed 95 people:

    [Photo: the 1907 bridge collapse]

    The Silver Bridge was built in 1928 over the Ohio River. Here it is when it was completed.

    [Photo: the Silver Bridge, 1928]

    It collapsed in 1967 during heavy rush hour traffic. 46 people were killed.

    [Photo: the collapsed Silver Bridge, Ohio side]

    And here’s a portion of Route 95 in Connecticut that collapsed in 1983:

    [Photo: the collapsed Route 95 bridge]

    There are many more examples. Peace-time bridges take years to build and are expected to work without problems, but in fact they sometimes collapse and kill people.

    War-time bridge building

    Building bridges in war time is a whole different matter. The bridges aren’t allowed to collapse and kill people any more than those built in peace-time, and the loads they’re required to carry can be much greater. Frequently they are built under enemy fire. Yes, they look different and are constructed using different techniques:

    [Photo: a wartime treadway bridge]

    But that’s the whole point. The time constraints are severe: instead of years to build a bridge, it must be done in days.

    Here’s the story of the bridge pictured above:

    It was during this week, in late March of 1945, that the U.S. Third Army under Gen. Patton, began its famous bridging and crossing operations of the Rhine.

    The first unit to cross was the 5th Infantry Division that used assault rafts to cross the raging Rhine … in the early morning hours of March 23. … By 1800 that evening, a class 40 M-2 treadway bridge was taking traffic. The following day, a second 1,280 foot class 24 bridge was completed in the same area. It was later upgraded to a class M-40 bridge. Without the benefit of aerial bombardment or artillery preparation, units landed quickly and established a beachhead that was seven miles wide and six miles deep in less than 24 hours…When daylight came, the Luftwaffe attacked the enclave with 154 aircraft in an attempt to dislodge the foothold on the east bank. Effective anti-aircraft fires brought down 18 of the attacking planes and destroyed 15 more.

    By March 27, five divisions with supporting troops and supplies had crossed the three bridges constructed at Oppenheim. The entire 6th Armored Division crossed in less than 17 hours. During the period of March 24-31, a total of 60,000 vehicles passed over these bridges.

    Peace-time software

    Most of the software built today is built using “peace-time” methods. Those methods are so ubiquitous that they are simply considered to be “the right way to do things.” We document everything. We have a nice, orderly flow from requirements through design, coding, testing and deployment. Whether waterfall or “agile” is used, everyone is given time to do their job, and frequently asked how long it will take. Estimates are critical, and the most important thing is delivering on the expectations you set.

    In this environment, it’s important to make sure your estimates are long enough to account for things you forgot about. Taking a long time to get a job done isn’t a problem; taking longer than you said you would take is the problem.

    War-time software

    So what is war-time software? It’s a looooong subject, and can’t be done right in a short post. But the principles should be obvious from the bridge-building metaphor:

    • Time is the most important thing; if you take a year to do what the other guy gets done in a month, you’ve lost the war.
    • Solving immediate problems is far more important than effort put towards some imagined future.
    • Something is better than nothing.
    • Finding and fixing problems is more important than preventing them.
    • Did I mention that nothing is more important than speed, except possibly avoiding getting killed (usually)?

    Those war-time bridge-building guys made it up as they went along, but they couldn’t have done it without an elaborate tool kit, appropriate supplies and matching skills and procedures. The scene on the river may seem chaotic, but there’s a pattern and lots of coordinated activity, with everyone working towards a common goal: the least they can possibly do that gets things safely over the river. When is peace-time software ever subjected to that kind of parsimonious discipline?

    War-time software development is development that is organized and optimized for speed: getting the least acceptable solution built and deployed in the shortest possible amount of time, and rapidly iterating from there. And then doing it again. Obviously, you spend time gathering and organizing your supplies and improving your technique.

    War-time software is not doing things the usual way while skipping steps, doing things sloppily and writing half-done crap code. That’s doing a bad job with peace-time methods. War-time software is doing things in a war-time way, using war-time techniques.

    Conclusion

    Are you truly operating in peace-time? Is your competition frozen? Do you have no time constraints or money limitations? Then, by all means, continue to use peace-time software methods — take huge amounts of money, incredible amounts of time, document and plan and manage everything with precision, and build your software. Software that will crash when you least expect it.

    If, on the other hand, you are at war, and if you, you know, want to, like, survive — well, you may want to consider building software that actually meets the immediate need.

     

    Find a way to get that data, those screens and workflows over that threshold, soldier. Now! Yes, I know that in your previous life, it would have taken you a week to write a proposal for creating a plan to get it done. These screens, workflows and databases are going to be on the other side of that threshold in under a week, while enemy forces and programmers are doing their best to kill us in the marketplace. Move it!

    War-time software. It’s the way to win.

  • When you call a programmer “arrogant,” are you committing libel?

    The best programmers are often accused of being "arrogant." Are they? When you make the accusation, are you committing libel?

    How to respond when accused of arrogance

    You can just tough it out. Dilbert shows us the way here:

    [Dilbert comic]
    In order to figure out how to respond, maybe we should understand just what arrogance is.

    What is "arrogant," anyway?

    Here's the scoop from the dictionary:

    Definition of ARROGANT

    1: exaggerating or disposed to exaggerate one's own worth or importance often by an overbearing manner <an arrogant official>
    2: showing an offensive attitude of superiority : proceeding from or characterized by arrogance <an arrogant reply>
    The second meaning is clear: you're arrogant if people don't like the way you act or talk; they somehow think that you think you're better than they are.

    The first meaning is more interesting: it links the way you act to the facts of the case. You're arrogant if you act like you're better than you are. Hmmm…

    Does this mean that you're arrogant if and only if you exaggerate how good you are? Sure sounds like it. So your arrogance is real arrogance IF your view of your self-worth is greater than your actual worth. Sounds reasonable, actually. If Eli Manning (the QB in the Super Bowl who is not married to Gisele Bündchen) says "I'm a great quarterback," is he being arrogant? I'd say "no."

    Arrogance is understandable and justified…

    What happens when some aggressive, ignorant fool takes over a meeting, presses his own neanderthal solution and is close to getting it turned into marching orders for the less-aggressive ignorant fools? First of all, I'd say: buddy, you're in the wrong place. Bail out! Mayday! Mayday! Second, I can completely understand getting everyone's attention, perhaps with some edge, and putting out a superior solution.

    …Except when it's not!

    The other problem is that sometimes the nerd is really wrong. He's just blown it. This is easy to understand. Are all nerds Top Nerds? Of course not! So there are lots of nerds who are wrong (or at least sub-optimal) on lots of subjects lots of the time! Yuck!! Even worse, such a nerd is, almost by definition, an "arrogant nerd," even if the nerd is behaving pretty well.

    Arrogance and Libel

    Suppose you call someone a lying tax cheat. In public. Their reputation is under attack, and they respond by suing you for defamation of character, i.e. libel. IF you can prove that the person in fact has lied about important things and has in fact cheated on their taxes, it's case closed: there is no libel, no defamation of character, when all you're doing is speaking the truth.
    Now suppose you call someone an arrogant programmer. In public. Their reputation is under attack, and they respond by saying they're not arrogant, they're just right and you're wrong — get over it! IF you can prove that the person in fact writes bad programs, designs them poorly and that there is in fact a better way of doing things, it's case closed: they are arrogant! There is no libel, no defamation of character, when all you're doing is speaking the truth.

    Conclusion

    There is a great deal to be said about nerds and arrogance. In the end, it's pretty simple. Try to be nice most of the time. When someone's being a fool, be kind. But you still can't let fools determine technical outcomes. Have you missed something? Are you really smarter in this case? If so, get the right outcome. Will you be called "arrogant?" Probably. Let them prove it!

  • Fundamental Concepts of Computing

    There are a small number of truly fundamental concepts in computing. They are not generally taught or talked about, but they underlie most of the smart things you can do in computing.

    The fundamental concepts are like "the fundamentals" in a sport, the very basic things you have to do, like dribbling in basketball or blocking in football. It's where the phrase "blocking and tackling" comes from. Woe to the team that puts all its energy into fancy stuff — it will be beaten by the team that does the fundamentals.

    The fundamentals are generally recognized in sport because there are objective measures of scoring and determining which team won. Eventually, people figure out which activities contribute most to winning. But in computing, it's a sad story.

    Competition would help us understand what are "computing fundamentals"

    If we competed in programming the way we do in sports, teams from different places would take on the same job at the same time. Each would complete the job, roll it into production and run it for a while. For each team and their product, we would collect a variety of information: the size and cost of the team, the elapsed time spent building, the resources required to build and operate, the number of bugs, the level of user satisfaction, and so on.

    Who won? Well, we'd take some combination of the information above.
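    To make “some combination” concrete, here is one hypothetical way to do it: a weighted score over normalized metrics. Every metric name, weight, and team value below is invented for illustration; each metric is scaled to 0..1 and lower means better:

    ```python
    # Hypothetical weights; they would be argued over and would evolve,
    # but any explicit combination beats having no score at all.
    WEIGHTS = {
        "team_cost": 0.25,
        "elapsed_time": 0.25,
        "operating_cost": 0.15,
        "bug_count": 0.20,
        "user_dissatisfaction": 0.15,
    }

    def score(metrics):
        """Weighted sum of normalized metrics; the lower score wins."""
        return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

    # Two made-up teams that built the same system:
    team_a = {"team_cost": 0.8, "elapsed_time": 0.9, "operating_cost": 0.5,
              "bug_count": 0.7, "user_dissatisfaction": 0.4}
    team_b = {"team_cost": 0.4, "elapsed_time": 0.3, "operating_cost": 0.6,
              "bug_count": 0.2, "user_dissatisfaction": 0.3}
    ```

    Whatever the exact formula, the point is that once a score exists, everyone can ask the productive question: what did the winning team do differently?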

    I bet if we did this a lot in programming, the "fundamentals" of programming would gradually become clear to everyone. Everyone would want to know what the winning team did to win, what winning teams had in common, and over time, the programming equivalent of "blocking and tackling" would become obvious.

    But we never compete!

    Oh, you think we do? Like when there are competing products, or when competing companies have similar computer systems?

    Well, sure, at a business level there is competition, but at a programming level? Think about football. How would you feel if the team that won had twice as many players as the other team? What if the winning team got to use a different ball than the other team? What if the winning team was given ten downs per possession, and the other only had three? What if one team always had a goal post that was half as high and twice as wide as the other team? With differences like this, it's obvious that the game is rigged and there's not much to learn from the game. There is as little to learn from examining the programming practices of companies or products that win in business.

    So what are the fundamentals of programming?

    There is no generally accepted answer to this dead simple but incredibly important question! And given the lack of meaningful competition, there is no objective way to prove what they are!

    I've spent way too much time:

    • programming, and
    • trying to get better at it, and
    • wracking my brain to determine exactly what "getting better at programming" means, and
    • trying to identify the key factors that lead to better results.

    So I've got opinions. I've written about some of the fundamental concepts in some of my private papers, and I intend to post about some of them here.

    For this post, it's sufficient to establish the basic concept: in fields that we care about, there are measures of goodness and a not-too-large collection of "fundamentals" that constitute the "blocking and tackling" for the field. And that, sadly, a broad understanding of those things is lacking in the field of computing.

     

  • (Most) Nerds are Introverts

    A book on introverts was just published. Mostly it states things that are obviously true. But since most people don't know these things and they're not part of mainstream cultural thinking, it's worth reading.

    [Book cover: Quiet]

    Jobs and Wozniak

    One review of the book refers to the hoopla about the "wonderful, creative" Steve Jobs. Here's an excerpt:

    If you look at how Mr. Wozniak got the work done — the sheer hard work of creating something from nothing — he did it alone. Late at night, all by himself.

    Intentionally so. In his memoir, Mr. Wozniak offers this guidance to aspiring inventors:

    “Most inventors and engineers I’ve met are like me … they live in their heads. They’re almost like artists. In fact, the very best of them are artists. And artists work best alone …. I’m going to give you some advice that might be hard to take. That advice is: Work alone… Not on a committee. Not on a team.”

    Introverts

    Not every introvert is a nerd (far from it); and not every nerd is an introvert (though most of them are, I think). So this book is worth looking at because of the high overlap. I was particularly struck by the transformation in American society that has elevated the extrovert into the image of what is desirable in a human being. To the point of admissions officers at top universities saying they'd rather have someone who is good at sports and slapping backs than someone who (among other things) doesn't put going with the crowd above all else.

  • Internet Software Quality Horror Shows

    Whether the software is a cool social app, an academic website or a real business, there is a common theme: the software is poorly designed and, even worse, it just breaks. As in falls flat on the floor, waves its arms in surrender, and just gives up. And not just once — it keeps breaking! As I've said before, we really need a revolution in software quality.

    Cool Social Apps

    Hey, social is where it's at — how can billions of Facebook users be wrong? Before long, there will be as many FB users as McDonald's has sold hamburgers (billions and billions)!

    Those guys must be great programmers, huh? I mean, just look at their office:

    [Photo: Facebook office]

    Here's one of them giving a talk at a conference:

    [Photo: a Facebook programmer at a conference]

    See how cool he is? He's just wearing a t-shirt, not even "business casual."

    The other social media are just as cool. Here's a "chill" Twitter office:

    [Photo: Twitter office]

    And Jack Dorsey, the Twitter CEO — quite the opposite of a buttoned-down financial guy, huh?

    [Photo: Jack Dorsey]

    It's perfectly obvious that these guys must write just the coolest, most awesome code ever. There's no way people this cool could make elementary programming mistakes, particularly when their application is so very dead-simple, and hardly ever changes — they could spend practically all their time being cool and polish up some already-faultless code a couple times a day, and still be OK.

    Except this little detail, which I scraped from my own screen, and which I personally have seen countless times:

    [Screenshot: the Twitter fail whale]
    Yes, the famous Twitter fail whale. I think Twitter got tired of all the publicity their "cute" failure message was getting them, so they reverted to something more discreet; here's an example:

    [Screenshot: Twitter over-capacity message]

    FB is just as bad, of course, and they've always tried to minimize the message when they screw up:

    [Screenshot: Facebook error message]
    Apparently, FB is incapable of keeping even the most recent day's worth of updates on-line — you should try going back in history and seeing how far you get. Oh, you thought the stuff you wrote was your data, did you?

    Naturally, it makes sense to consider that you get what you pay for; all these cool social apps are, after all, free. You can hardly complain when something you didn't pay for is flaky — return it and demand a full refund!

    So let's turn to a more promising field. Everybody's supposed to go to college and learn stuff, so…

    Academia

    Let's see if the universities do any better. I was just on a local college's website, and it was even worse than Twitter — Twitter's code knew it was screwing up and put up the fail whale. In this case, any number of links I hit encountered badly broken code:

    [Screenshot: college website error]
    Oh, alright. The colleges are perpetually underfunded, and putting up a website that works isn't a high priority compared to … all the other things they spend money on. I guess.

    Probably a real business does it better, right?

    Profit-making Big Company

    Even more so, an essential public service, like the cable company! Those guys have the money, the funding, the experience and the mandate to do it right. Let's pick the case where their motivation is the highest: collecting money.

    Oops.

    Just a few days ago, I was on my local cable provider's site trying to access my account. Here's what I got:

    [Screenshot: TW error screen]

    Not just once, but repeatedly, for hours!

    But maybe it's just TW that's got problems — surely all the other big companies do things great, with their huge staffs and policies and procedures and all, right?

    Sadly, no. Here's just one personal example from Verizon:

    [Screenshot: Verizon login error]

    Summary

    There's no getting around it. Software is just bad. Everywhere. We can speculate about why this is the case, but let's agree on the facts: it's bad, and not getting better.

  • Interviewing Software People

    The methods in widespread use for interviewing and selecting software engineers are appalling. It is only because they are so bad that ridiculous methods, like those often used at Google in which applicants are given trick puzzles to solve, can seem like an improvement. The sad thing is, asking candidates to solve mental puzzles is better than what's usually done, which is not much at all. Come on, people — we can do better!

    Typical Selection Practices

    Managers need more programmers. They get a job requisition from finance, then go to HR to get candidates. Since HR knows nothing about the substance of the work that needs to get done, what ends up going into the job requirements are a bunch of motherhood-and-apple-pie blah-blah (self-starter, etc.) and a list of keywords of the technologies in which experience is required.

    HR screens candidates based on whatever is in the resumes and may interview candidates, basically to see whether they mouth the expected platitudes when prompted by the HR people. Those who play the game are then passed to the programming department.

    Typical Interview Practices

    The candidate is typically scheduled for a round of interviews with programmers and managers. Since the managers rarely know much and have usually forgotten what little they used to know, they don't ask questions of substance; they basically find out if they like the candidate and if they believe the candidate will "fit in" and follow orders. The fellow programmers who interview remember their own interviews, in which questions of substance were few and far between, so they basically chat up the candidate and decide whether they like them. At the end of this "rigorous" process, if everyone agrees, the candidate is accepted.

    What a joke! When you're hiring a musician, an audition (in which the musician performs) is standard practice. When you're hiring a writer, reading things previously written by the candidate is standard practice. So when you're hiring a writer of software programs, naturally you'd expect that reading programs previously written by the programmer would be standard practice — but it's not!

    Leading Edge Interview Practices

    Instead, the leading edge at places like Google is to hit the candidate with trick questions. For example: "Suppose you were suddenly shrunk to the size of a nickel and found yourself at the bottom of a blender. The blender is going to start in a minute. What would you do?"

    If I were the size of a nickel, most of my neurons would be gone, so I wouldn't be me anymore. But more seriously, how relevant is the kind of skill that questions like this test to writing programs?

    I could make an argument that this kind of thinking is relevant to a kind of programming that is important, but very rarely needed: algorithmic design. No one has ever (to my knowledge) measured it, but I would be surprised if algorithmic programming amounted to as much as 1% of all the code in a typical application. The vast majority of code that's written needs different kinds of skills: visualizing user interactions, understanding data structures and data flows, understanding and effectively using complex subsystems, and many other activities. These activities benefit little from the kind of skills and instincts required for solving trick puzzles.

    Are there people who can do the puzzles and be great "regular" programmers? Of course. The problem is the reverse: there are many people who would be perfectly adequate programmers who are flummoxed and generally disconcerted by questions of this kind. I've been in groups of trick question masters. They're great for finding what's complicated in basically simple things and other arcane but fundamentally counter-productive skills. Other than that — you can have 'em! Take 'em all — please!

    What can be Done?

    There are lots of simple steps that can be taken to improve the outcomes of sourcing and selecting software people. Any step you take is likely to make things better. I hate to do it, but I have to admit that even trick questions are better than the "Hi, how ya doin'" method of interviewing. But surely we can do better.

    Reading code. When hiring writers, we read their past works. Why are we so reluctant to do the same for people who write code? I suspect it's because very few people would actually be able to read the code and make a reasonable judgment of its author — for all too many typically mediocre programmers, that would amount to a tour de force far beyond their limits. That's OK — find out who can read code with meaning and judgment, and they become your main filtering agent. Maybe, just maybe, you'll end up with … people who write better code! What a concept!

    Auditions. What's wrong with an audition? If someone is supposed to know database design really well, show them one of your current ER diagrams and ask for comments. Tell them a change that is proposed and ask how they'd make it. Pose a recent tough problem you had to solve (which you've already solved) and ask them to solve it.

    Detailed archeology. A candidate programmer may not know much about your stuff, but she'd darn well better be an expert on stuff she's coded in the past. Find a subject where your experience overlaps hers, and ask for a detailed rendition of what she did, why and how — and what she learned and would do differently today.

    Subject Matter Testing. Yes, testing. Like an audition, only more objective. If someone really is the expert PHP programmer they claim to be, they'll ace the test. No problem.
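    To illustrate how objective such a test can be, here is one hypothetical test item with an automatic grader. Both the question and the grading cases are invented for this sketch; any real test would draw on the actual subject matter of the job.

    ```python
    # Hypothetical subject-matter test item with objective, automatic grading.
    # The question posed to the candidate: "Write dedupe(items) that removes
    # duplicates while preserving the order of first appearance."

    def dedupe(items):
        # What a correct candidate answer might look like.
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result

    def grade(candidate_fn):
        # Each case is (input, expected output); pass/fail is objective.
        cases = [
            ([], []),
            ([1, 1, 1], [1]),
            (["b", "a", "b", "c", "a"], ["b", "a", "c"]),
        ]
        return all(candidate_fn(list(inp)) == expected for inp, expected in cases)

    print(grade(dedupe))  # a correct answer passes every case: True
    ```

    The point isn't the particular question — it's that the expected answer is checkable, so the interview stops being a matter of vibes.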

    Conclusion

    Software hiring is an embarrassing mess. In a field that is over-the-top exacting with fairly objective pass/fail criteria (the program works or it crashes), the methods we use to ask new people to join us are random, ad hoc, and almost completely unrelated to finding out whether the candidate can actually perform as required. Asking trick questions can actually be an improvement, but that's not saying much. We can and should do better, and even a little better can make a huge improvement in the quality of the people who build our software.

     

  • Regulations: Goals or Directions?

    The sheer bulk of our regulations is exploding. By any reasonable measure, our regulations are obese; our super-size body of regulations costs more to create, feed and implement — and it isn't getting its intended job done!

    When confronted with the huge bulk of our regulations, some people say we need more regulations, while others claim we need fewer. This is the wrong debate.

    The real problem is the kind of regulations we have — our regulations spell out how we're supposed to do things, when they should be telling us what we need to accomplish. 

    What is the goal of regulations?

    In most cases, regulations are created to assure things that you, I and most sensible people want.

    • When corporate officials cook the books or otherwise hide what's really going on, we want them to stop and to be held responsible — like, go to jail! That's SarBox.
    • We want hospitals and doctors to be careful with our medical records, and not pass them out to anyone who asks for them. That's HIPAA.
    • We want financial institutions to be careful with our records, and make sure our private account and transaction information are kept safe. That's PCI.
    • We want our money to be safe when invested with investment people and the stock market; we don't want people stealing or pulling strings to make themselves richer and us poorer. That's the SEC.

    These regulations and many more are sensible. I want them and you probably do too. I want the corporate bad guys to go to jail. I want the medical people to keep my records confidential. I want the banks to keep my finances to themselves. I want my banks to be sound and the financial reports generated by public corporations to not be phony.

    How are all those Regulations working out?

    Are the regulations and regulators doing their job? How about Bernie Madoff? All sorts of bad things helped create the financial melt-down we're suffering from — how many of the top execs got in any kind of trouble over it, not to mention went to jail? There are massive losses of credit card data, some of which make the news and most of which don't, causing endless trouble to consumers with identity theft. Who's held accountable?

    The one certain thing about regulations is that we pay the price for following them. What is optional is that they do their job. In fact, if you think about it, if your financial records aren't always stolen, it's not because the regulations are effective — it's because most people are honest, and don't want to steal your financial records!

    We're getting more and more regulations, and we're taking more time, trouble and money to create and follow them. But the resulting regulations are doing a poor job of stopping the bad things we want them to stop.

    Means vs. Ends

    The reason why our ever-growing number of regulations fail to protect us is simple. In the vast majority of cases, they spell out, often in great detail, how to accomplish the goal, instead of plainly and simply defining the goal. The regulators insist on giving us what amounts to detailed, turn-by-turn directions for driving from Lincoln Center in Manhattan to Ridgewood, NJ instead of simply stating that we should drive safely to Ridgewood, NJ, and leave the exact route to us.

    I usually like to drive up West End Ave to 72nd Street, turn left, and go up the West Side Highway to the George Washington Bridge, and so on. I wouldn't be surprised if that's the route the regulators would insist that I take.

    However, sometimes when there's traffic, I continue north on West End up to 96th St and get on the West Side Highway there. That's not how the regulators or the GPS would tell you to go, but it turns out to be the smart route to take in certain traffic conditions. Given that no one has regulated (at least yet) my route to Ridgewood, I am free to take the route I think best and adapt to changing conditions. I can learn and innovate, so long as I reach the goal safely. Makes sense. But that's not how regulations work!!

    If my trip were regulated the way HIPAA, PCI and other things are, I would be slapped with a fine or worse for "violating the regulations" by failing to take the 72nd St entrance to the West Side Highway. The regulations would give me exact, turn-by-turn directions I have to follow for reaching the goal, rather than simply letting me drive to the goal in the most sensible way.

    Are Too Many Regulations a Problem?

    No! But it's a BIG problem that we are buried under mountains of micro-managing, often-obsolete, turn-by-turn-directions-type regulations, which are by their very nature bulky. Which wouldn't be so bad if they actually got the job done.

    If we had goal-oriented regulations, they would be concise, clear, and not need revision very often. They would be easy to understand and leave room for people to learn, adapt and be creative while still achieving the intended goal of the regulation.

    We Need Regulations 

    We need government to establish and enforce a common set of rules that make our lives better. In that sense, we need regulations. We benefit from having them and we benefit from having them be enforced. But not if they're costly, out-of-date, stifle innovation and on top of it all don't work!

    If regulations were written to define the goals they are intended to accomplish, they would be inexpensive, always relevant, enable innovation and have a much better chance of being effective. We should all want regulations that tell us what to do, and leave it to us to determine how to do it. Don't tell us how — tell us what.

  • The Name Game of “Moving to the Cloud”

    The most recent in a long string of technology fashion trends, "the cloud" is hot. Like its hot technology fashion predecessors, it mostly consists of old ideas with a little spicy sauce on top and fresh packaging. If you mindlessly follow the fashion and just "go to the cloud," you are likely to end up in the same unhappy place where most mindless followers of fashion trends end up. What is "the cloud?" Simple. Clouds are belated enterprise IT implementations of the consumer internet.

    What is the Cloud?
    Is the "cloud" a new development? Well, it is a new name…

    Ancient Clouds
    I first encountered the cloud more than 40 years ago. Before that fateful meeting, my only experience with computers had been up close and personal. You had to get in the room, push buttons, flick switches, feed card decks or punched paper tape, listen to whirring sounds and watch blinking lights. Like with this computer:
    IBM 360 mod 50

    But The Cloud! Ahhh, the Cloud…I remember it vividly.

    Of course, things were a bit different then. We only had "fat clients," as in "takes two guys to lug it" fat. The principle was identical, however: I was in one place, with a "fat client" like this:
    450px-Teletype_with_papertape_punch_and_reader

    …and in another place was a computer like this:
    Dec-pdp-10.men_working_at_pdp10.102630583.lg

    …and everything worked.
    Remember the importance of labelling, however: what we did 40 years ago wasn't "cloud computing," which hadn't been "invented" yet — it was merely "telecommunications."

    Creating the Modern Cloud
    Between then and now, lots of iterations of Moore's Law have come and gone. All the hardware has gotten smaller, cheaper and faster, while the software has gotten larger, more expensive and slower — but, fortunately for all of us, the rate of hardware evolution is greater than the rate of software devolution, giving the impression of net progress.

    Where this leaves us, 40 years later, is faster remote computers talking with lighter remote clients over incredibly faster networks, all at lower cost. Sprinkle with a little extra software and drizzle with some marketing hoo-haw, and — kazaam! — you've got today's hottest technology fashion trend, Cloud Computing.

    Clouds aren't always friendly
    When we think of clouds, we're likely to think of this kind of cloud:
    Cumulus_clouds_in_fair_weather

    friendly, fluffy shapes floating in an otherwise sunny sky. When people think about cloud computing, this is the kind of cloud they seem to have in mind. But as we know, there are other kinds of clouds. There are dark, oppressive clouds that make everyone depressed. And there are really mean clouds, that wreck things horribly, creating the situations for which "disaster recovery plans" are made. Is this just a metaphor? Of course not. But just like financial fraud, the big, juicy examples are usually hushed up in order to protect the guilty.

    OK, so What is "the Cloud"

    There is a little-discussed trend that is deeply embarrassing to IT professionals: there is a wide and growing gap between the use of computing technology in the consumer world and in the corporate, data center world. When the average user of corporate data systems is home, he works in a very advanced computing environment. His local machines and devices are amazingly capable and pretty easy to use. When connected to the internet, he can access a nearly limitless world of cloud computing resources — which are themselves largely run out of data centers that are remote from the people who set up and administer the software in them, and which contain an ever-evolving mix of dedicated and shared resources and services. The consumer internet has been based on a cloud computing model for a long time.

    The corporate world is a whole different thing. The corporate world has been consumed with consolidating its diverse data centers. It is finally beginning to confront the extreme flexibility and ease of use that consumers enjoy every day, and is finding it increasingly difficult to explain why the computing it runs at such high capital and operating cost is so cumbersome, error-prone and inflexible.

    In this context, there is no way that anyone associated with corporate computing is ever going to plainly admit that what they are basically doing is trying to catch up with the consumer internet. So they must be doing something else. Oh, yeah — they're evolving to the latest, smartest trend in corporate computing, adopting the latest technologies and being really leading-edge: they're "moving to the cloud," but of course in a "smart" way, with large doses of "private cloud" technology along the way.

    Summary
    What's a "private cloud?" A corporate data center with a fancier name.

    What's a corporation "moving to the cloud?" A corporate IT group trying to play catch-up with the consumer internet, and desperately trying to make it look like something else.

    What's new about "cloud computing?" Very little; mostly naming and marketing fluff.

    Is anything real happening when a corporation "moves to the cloud?" Sometimes yes! Sometimes, they really are copying a couple proven techniques of the consumer internet, slowly and at great cost and trouble, but nonetheless creeping towards a 21st century computing model.

  • Thanksgiving Meals and Software Turkeys

    Software development is normally conducted the same, common-sense way that Thanksgiving feasts are created. Perhaps this is why software so often resembles a post-Thanksgiving mess.

    Requirements

    The "requirements" (i.e., the menu and number of guests) for Thanksgiving don't differ all that much from one year to the next, and after all, the menu is a variation on a well-practiced theme: a dinner. But you'd still better be exact in defining the menu: "some kind of vegetable," for example, won't do. Furthermore, the requirements are made by experts for highly experienced users, none of whom will need documentation or training to "use" the resulting product.

    The requirements for a typical software project, by contrast, are not made by people who are experienced consumers of the product. They are usually made by people who are experienced at making requirements, which is roughly as effective as having menus designed by people who never eat.

    Design and Construction

    Once the menu has been planned, recipes are selected (not many people wing it and risk cooking a dish that neither they nor anyone else has ever cooked before), the shopping list is made, the shopping done and finally the dishes are cooked. You can see here a picture of some of the advanced cooking techniques used by experts in 1963.

    1963 11 Thanksgiving 21-25s

    Software developers try to follow roughly the same method of finding recipes (designs) and getting ingredients (lines of code). But they're always making dishes neither they nor anyone else has ever made, so the recipes they find need severe adaptation, and basically they have to make it up. The same goes for ingredients: they find people, try to get them to understand the made-up recipes, and then have them create ingredients (lines of code) that work together. They try to convince themselves and anyone who will listen that everything will turn out OK because strict project management techniques are being adhered to. Uh huh.

    The Finished Product

    The finished Thanksgiving meal is often a sight to behold. So are the people assembled to admire and to consume it, who typically have at it with skill, experience and vigor. While some of the younger participants may need talking to (see below: the wise guy with his feet up looks like he's about to get it from the lady on the far right), in the end things work out remarkably well. Everyone knows their job and does it.

    1960 11 Thanksgiving 10-17s

    Not everything goes perfectly when cooking the Thanksgiving meal, but most of it works out really well, and in the end all the requirements of the end users (that they end the meal being happily full) are satisfied.

    In software, once the "meal" is cooked, there is usually an extensive testing, integration and quality process involving labs, staging areas and other things to which the supposed "cooked and ready" meal is subjected for fear that it simply won't be edible. In spite of all these measures, everyone knows disaster is not just possible but likely, and so before being brought to the table, the meal is served to special people who are used to eating half-cooked, never-been-cooked-before dishes. This is called an "alpha" release. It resembles getting some poor fool to eat bites of the meal intended for the king to assure that it wasn't poisoned; the trouble is, in the world of software, it usually has been, if only inadvertently as a result of the usual chaos of building never-been-built-before software. In the world of software, there is usually no equivalent of the dinner-table picture above.

    The Aftermath

    In the world of Thanksgiving dinners, the aftermath is pretty typical. Here are typical remains of the kind of meal cooked by the ladies pictured above:

    1961 11 Thanksgiving 14-22s

    Looking at a mess like this is generally a happy thing, which is why my dad took the picture. You remember how good the meal was and chuckle about what was left.

    In software, however, this picture resembles the meal that was actually served: a ripped-apart, cold, coagulated mess that you may be able to pick at. Hey, maybe we can make a turkey sandwich by bringing in some extra tried-and-true ingredients (bread, mayo)! The sad fact is, by the time most software is developed and delivered, the original cast of characters has given up, moved on or descended into open cynicism. Aided by the fact that the software doesn't work and/or the situation has changed so much that the software is no longer relevant, at least as it is.

    Summary

    Software development techniques, even today, have a remarkable parallel to making a Thanksgiving meal. But Thanksgiving meals have a track record of working out pretty well for all concerned, certainly in the are-you-full-afterwards department. And software development techniques have a track record of not working out so well, except for the turkeys who run the projects, who rarely seem to be fired for the messes they so consistently deliver — after all, we learned a lot from this, and things are going to be different next time! Sure!

    If you like hearing gobble-de-gook babble about project management and late software that doesn't work, by all means continue to model your software development after Thanksgiving. But if you look forward to legitimately associating the concepts of "software" and "grateful" together, without sarcasm, then I suggest you leave the turkeys to Thanksgiving and try something else for software.

  • Developers, Designers and Project Managers at War

    There is a natural conflict between the various groups that create computer products. This graphic captures it pretty well.

    Developers-designers-managers.jpg.scaled1000

    Credit:

    http://alextoul.posterous.com/the-war-between-developers-designers-project

  • Status in Software: Silliness and Stupidity

    In all too many software groups, you get higher status by being more removed from actual customers, their needs and concerns. This is bass-ackwards. It's silly. It's perverse. It is profoundly stupid and counter-productive. If this is how your software group works, change it or leave. Now.

    The Inward Flow: Support

    In most organizations, here is the perverse flow:

    • Customer has problem. Contacts Customer Service.
    • L1 customer service takes the call or e-mail. Eventually. They try to do something, but don't have much knowledge or power. So after wasting some time, it's off to…
    • L2 customer service, which is backed up, busy failing to handle the other things L1 already kicked up to them. After wasting some of their own time, and often some of the customer's as well, it's off to…
    • L3 customer service, which is the place where the really experienced L2 agents are promoted. Life is messy in L3. All the nasty problems end up there, often with the customer already (understandably) mad; but L3 too frequently lacks the skills and resources to even reproduce the problem, much less fix it. After spending some time here, the worst problems of the most upset customers migrate to…
    • Sustaining engineering. This is the death-watch group in engineering. Two types of characters are typically confined here: ignorant entry-level people who hope to move up and out; and experienced engineers who missed the cut for working on the new stuff. If it's an easy bug, they may be able to fix it. Otherwise…
    • …it may actually be necessary to interrupt an exalted person who wrote the code that caused the problem, taking him away from the important business of writing code that has brand-new problems! But this drastic measure is avoided if at all possible.

    There are actually more layers to march through, but the pattern should be pretty clear by now: the "most important" people are protected from the consequences of their past mistakes by layers and layers of carefully arranged bureaucracy designed to deflect and defuse any contact with real customers and the problems those customers may be having. The more you know, the more distant you are kept from having your august presence sullied by the trivial annoyances of mere customers. It doesn't need to be this way.

    The Outward Flow: Development

    When new products are created, it is all too often the case that the higher your status, the more removed you are from contact with the people who will ultimately use the product you create.

    In very large organizations, the remote peak of the status hierarchy is occupied by research groups or labs. These are truly hilarious. Why do they have ultimate status? It is a given that they see no customers, hear no customers and talk with no customers; but even better, they produce nothing tangible at all — unless you count academic papers and research reports. Those people are sure important! Their ground-breaking work will (pick your favorite) "lay the foundation for," "create the basis of," or "make the discovery on which" generations of future products will be built. Sure.

    Smaller organizations would love to have such a group — it's prestigious! — but instead make do with a few exalted individuals who think deep thoughts and create "architectures" that "solve" a wide range of present and future problems.

    High level design people then take over to create a "design" within the "architecture." This is not easy! It's important to fend off the constant pressure to produce something practical that works for today's customers, in favor of doing the design "the right way," i.e., spending lots of time thinking about problems some customers may have in some unspecified future, and "creating a framework" that will supposedly make them easy to solve.

    At this point, software development splits into an alphabet soup of competing creeds, each of them certain of their unique virtue and access to software heaven. There is the much-maligned waterfall, agile, SCRUM, extreme, and on and on. The details of what happens next vary. The status relationships and ultimate outcomes are pretty much the same: the more important you are, the less likely you are to have meaningful contact with customers. This remains true as the software staggers through phases that may variously include integration, testing, staging, documentation and roll-out.

    Finally — finally! — the software is inflicted on the customers for whose benefit it was built. All I can say is that the chorus of complaints, however loud it may be, is rarely loud enough to penetrate the excellent sound insulation of the rooms in which the company's "brain trust" festers.

    Conclusion

    If you want to run a charity organization for egotistical, self-absorbed and self-important programmers (OMG! Did I just use the demeaning term "programmers," implying these people might actually lower themselves to doing actual, like, work!? I meant to use a more elevated term like "intergalactic systems architect" or "chief scientist.") — like I was saying, if it's your goal to provide welfare to high-minded computer scientists, by all means employ a staff of "elite" techies and help them avoid being interrupted by the hoi polloi. Their deep pondering is way too valuable to be sullied in any way by the mundane concerns of the common people. If, on the other hand, you have real work to do and want your best people to lead, then make sure that the closer people are to customers the more status they have. Building a product or service that real people value and want to use requires — gasp — contact and interaction with those same real people.

     

  • Three Most Important Factors in Storage: Performance, Performance and Performance

    We all know about the importance of location in real estate. What's the equivalent in storage? Performance. It's the one thing that you can't fix. When you look at storage, it should be what you look at first, second, and last.

    Location and Performance

    The three most important things in real estate are location, location and location. Real estate agents may talk about the attractive paint job, the great landscaping and the new roof. But if you don't like them, you can fix them. The one thing you can't fix? The house's location. That's why it gets the top three slots.

    What about storage? You'd never know it from storage vendors, but the three most important things in storage are performance, performance and performance. You can fix most everything else with server-based software. You need replication? Your database can do it all by itself. You think thin provisioning is great? It's cheaper and better to get it with a VM. But performance? You want that go-cart to do 100 mph … uphill??? Fuhhgeddahbouddit, buddy. However fast you're going is as fast as you're going to go.

    The Storage Performance Problem

    We know there's a performance problem because of the fundamentals of spinning disks. We know there's a problem because vendors are coming out with expensive solutions that emphasize performance, and companies are going public based on storage performance; oddly enough, in the case of Fusion IO, they don't even deliver real storage, just a board that goes into a server! But people are so desperate for performance, they try it anyway. One company has even come out with an affordable solution that just screams performance.

    The biggest thing that convinces me there's a problem comes from the leader in server consolidation and virtualization, VMware. I went through their best practices in configuring virtual storage, which tells you all you need to know. They have four best practices. All four best practices amount to the same thing: make sure you get enough performance from your storage! Their best practices are explicit: you should buy storage not based on capacity, but on performance.

    Here they are:

    • Configure and size storage resources for optimal I/O performance first, then for storage capacity. –> Don't buy TB, buy iops (i/o's per second).
    • Aggregate application I/O requirements for the environment and size them accordingly. –> When you buy iops, make sure you look at all your applications.
    • Base your storage choices on your I/O workload. –> In case you didn't get it yet, pick storage based on iops!
    • Remember that pooling storage resources increases utilization and simplifies management, but can lead to contention. –> Remember that using a classic SAN can make storage performance worse, so don't be fooled.

    According to VMware, there are four most important factors in storage: performance, performance, performance, and performance!
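    The arithmetic behind "buy IOPS, not TB" is simple enough to sketch. Here is a back-of-the-envelope calculation of the spindle count an aggregate I/O load implies; the workload numbers, the ~150 IOPS per 15K RPM disk, and the 30% headroom are all illustrative assumptions, not figures from VMware.

    ```python
    # Size storage for I/O first, capacity second: aggregate every
    # application's I/O requirement, then derive the hardware it implies.
    import math

    apps = {
        "oltp_database": 4000,  # peak IOPS each application needs (invented)
        "mail_server":   1200,
        "file_shares":    600,
        "vdi_pool":      2500,
    }

    # Aggregate the I/O requirements of the whole environment.
    required_iops = sum(apps.values())

    iops_per_disk = 150   # rough delivered IOPS for one 15K RPM spinning disk
    headroom = 1.3        # ~30% margin for growth, bursts and RAID rebuilds

    # Buy enough spindles for the I/O load; let capacity fall out of that
    # choice rather than drive it.
    disks_needed = math.ceil(required_iops * headroom / iops_per_disk)

    print(f"Aggregate requirement: {required_iops} IOPS")
    print(f"Spindles needed (with headroom): {disks_needed}")
    ```

    Run the numbers this way and you quickly see why capacity-driven purchases disappoint: the terabytes you need often arrive with far fewer spindles than the IOPS you need.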

    Does anything but performance matter?

    Of course it does. Do you want to lose your data when a server fails? You'd better not buy server-based storage. Do you want your performance to drop to a crawl when there's a disk fault? You'd better ask how frequently that happens, how badly RAID re-builds impact performance, and for how long (hours of severely degraded performance taking place weekly is not unusual in a large system). But performance still takes the top 3 slots. It's just like location: if there are two equally-well-located houses, you avoid the shack with the outhouse and buy the comfortable, modern house. With storage, if you have two systems with enough performance to meet your current and future needs, you pick the one that isn't a board stuck in a server, and the one that has enough affordable capacity.

    Conclusion

    Performance is more important, by far, than any of those silly features the SAN vendors love to rattle on about. And in a post-SAN world, performance is front and center. The bigger disks get, the worse performance gets. The more you virtualize and consolidate your servers, the more performance you need. In a word, you need SSD's, because they're fast storage. But they're expensive. So an appropriate blend of SSD's and spinning disks would be great, fast but affordable, if they really were in a seamless pool of storage. That's the Xio Hybrid ISE in a nutshell. In a performance-starved world, it's food for the hungry — food you can actually afford to buy.

  • Storage: The KISS Principle in a Post-SAN World

    The dominant model in storage today is the SAN (Storage Area Network), a.k.a. "storage mainframe." While "SAN" makes you think of a storage version of a LAN (Local Area Network), it is far from it. In fact, SAN's are monolithic, mono-vendor, administratively heavy-weight, burdensome beasts. They are laden with "must-have" features that sound good, but which are mostly crippled versions of functions performed more effectively, at lower cost, by server software.

    Most storage vendors make it clear what they mean by the KISS principle: "Keep it SAN Storage." Why? More revenues, more profits, more high-margin maintenance — in general, more for the vendor and less for the buyer. It's time for buyers to revolt. It's time to enter a post-SAN world of simple effectiveness. It's time for storage to be fast, scalable and affordable. It's time for a return to the original meaning of KISS: "Keep it Simple, Stupid." In other words, it's time for the Xio ISE storage blade.

    The Controller is the Problem

    Storage buyers buy, well, … storage. Duhh. If they didn't need storage, they wouldn't be talking with storage vendors. And it's true that storage vendors deliver storage. But what do they sell? Anything but storage. It sounds strange, but it's not — since every storage vendor sells storage, how can you tell one storage vendor from another? Only by talking about something else. Today it is standard practice for storage vendors to emphasize the importance of features that are somehow related to storage, but aren't actually storage.

    This brings us to the controller. Every traditional storage vendor has a monolithic controller. The controller is an expensive box that sits between the servers and the actual storage. The controller is where all these storage-related features are implemented. The game every storage vendor plays is to make you want what's in the controller, because whatever it is, only that vendor has it. The controller is what makes you buy one vendor's terabytes rather than another vendor's. The controller is where vendor differentiation is. Last but not least — the controller is where vendor profits are.

    What about those "must-have" features in the Controller?

    I would love it if someone de-constructed them all, publicly and effectively. But let me start from a simple observation. In my job, I get to closely follow the technology decisions and deployments of dozens of growing, leading-edge companies, and I get a quick look inside many more. What I find speaks volumes about the status of all those "value-adding" features of storage systems: nobody uses the fancy, "value-adding" features of storage systems — they just use storage! As in plain old storage, like reading and writing.

    The reason all these leading-edge people just use plain-old KISS storage is pretty simple: they focus on building technology that supports their business. The value is delivered by applications that use files and databases. Files and databases need storage. Give them storage and you're done!

    Just this morning I talked with some terrific folks who operate a leading-edge internet advertising service. They already handle monster volumes out of multiple data centers. When orders are placed, the orders need to get out to all the ad servers. A perfect application for that popular feature of SAN controllers, volume mirroring, right? Wrong. There are at least a handful of reasons why this would be a terrible solution. But it doesn't matter, because they get the job done, effectively, quickly and well, with MySQL's replication facility. Their application puts the ad opportunity into the master database, which replicates it to read-only slaves. Problem solved.
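    A master-to-read-only-slaves setup of the kind described above is a small configuration exercise, not a storage-controller feature. Here's a minimal sketch using classic MySQL replication syntax; the host name, account name, and log coordinates are illustrative placeholders, not details of their actual deployment:

    ```sql
    -- On the master (in my.cnf): enable the binary log, give the server an id.
    --   [mysqld]
    --   server-id = 1
    --   log-bin   = mysql-bin

    -- On each read-only slave (in my.cnf): a unique id, and read-only mode.
    --   [mysqld]
    --   server-id = 2
    --   read-only = 1

    -- Then, on the slave, point at the master and start replicating:
    CHANGE MASTER TO
        MASTER_HOST = 'master.example.com',   -- placeholder host
        MASTER_USER = 'repl',                 -- replication account
        MASTER_PASSWORD = '********',
        MASTER_LOG_FILE = 'mysql-bin.000001', -- coordinates from SHOW MASTER STATUS
        MASTER_LOG_POS  = 4;
    START SLAVE;
    ```

    Every write to the master's database — including each new ad opportunity — flows out to the slaves automatically, with no controller anywhere in sight.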

    I see this pattern everywhere: the problems that SAN vendors use to sell their controllers are better solved, more simply and with less expense, with applications and server software.

    Introducing the no-controller SAN: post-SAN Storage

    What does that mean? If you remove the controller in a SAN, what are you left with?

    With the old SAN vendors, you're left with a big, expensive pile of storage you can't use. In the post-SAN world, exemplified by the Xio ISE, you've got what amounts to storage blades. You can direct-connect a storage blade to a server or to a couple of servers, or you can network a set of blades together with a set of servers.

    Each ISE storage blade comes with a full complement of storage capacity and performance. If you need a truly giant pool of storage, you can combine any number of them into a single volume using server-based software. But more likely you'll want to share them among a pool of servers, which can easily be automated using RESTful calls from a UI or script.
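    That kind of automation amounts to a few HTTP calls. Here's a minimal Python sketch of a script that builds such a request; the base URL, endpoint path, and JSON fields are invented for illustration and are not the actual ISE management API:

    ```python
    # Hypothetical sketch: assigning a volume on a storage blade to a server
    # via a RESTful management interface. Endpoint and fields are assumptions.
    import json
    import urllib.request

    BASE = "https://ise-mgmt.example.com/api"  # placeholder management address

    def assign_volume_request(blade_id, server_wwn, size_gb):
        """Build the POST request that would map a new volume on a blade to a server."""
        body = json.dumps({"server": server_wwn, "sizeGB": size_gb}).encode()
        return urllib.request.Request(
            f"{BASE}/blades/{blade_id}/volumes",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    # A deployment script would then just send it:
    # with urllib.request.urlopen(assign_volume_request("blade-01", "21:00:00:24:ff:00:00:01", 500)) as r:
    #     print(r.status)
    ```

    The point isn't the particulars — it's that sharing blades across a server pool is scriptable plumbing, not something you need an expensive controller to do for you.
    
    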

    Then there's the issue of storage performance. Bigger disk capacities mean shrinking performance per gigabyte. That's why Fusion IO and similar companies are so hot. Note that Fusion IO doesn't have controllers or any of the fancy (= useless) features that come with them. People are snapping them up anyway. Maybe that post-SAN storage is worth looking into … you could save a bunch of money, keep things simple and above all deliver the performance your business demands. If you've got the kind of problem Fusion IO says they solve (a storage performance problem), you should do yourself a favor and discover how Hyper-ISE delivers the performance you need at a price you can afford. And Hyper-ISE gives you performance while keeping your data safe and providing full fail-over, unlike Fusion IO, which isn't really "storage" at all — it's just a board in a server, so when anything about that server goes wrong, your data goes wrong with it.

    Conclusion

    Most computer storage today is anything but simple, scalable or affordable. Intelligent storage buyers are increasingly buying the storage that they need and only the storage they need. They are saving money, time and trouble by refusing to buy expensive controllers that are laden with features they already have in the server, features they just don't need. These buyers have effective, simple, scalable storage that costs less, performs better and lasts longer than old-style SANs. Welcome to the post-SAN world, where the sun shines, things are simple and life is so good, it makes you want to KISS someone in a new way.

  • Better Software Lets You Compete Against Giants and Win

    There are lots of personal and emotional characteristics that help winners build terrific new companies. They're important. But when it's a software company, guess what? The software really matters!

    There are a number of elements that contribute to success, including concentrating your forces, targeting poorly defended or new territory, and rapid iterations in response to new knowledge and changing conditions. However, there's nothing like superior weapons to help your cause.

    Your competitors are typically GIANTS compared to you, so giant that they darken the sky…

    Giants 2

    Darkening the sky is bad enough, but what's really scary is when they notice you, pay attention to you, and get in your face…

    Giants 1

    Now you have one choice and one choice only — pull out your weapons. If your weapons (software) are truly, fundamentally superior (and you don't screw up the other things too badly), then you've got a good chance of vanquishing a larger, established, well-equipped foe…

    Giants 3

    Which feels awfully good and definitely deserves raising your arms in victory.

    Conclusion

    If giant competitors cover the sky with dirigibles, you had better be dive-bombing them with powered, fixed-wing airplanes. When invading a place that is well-defended by armies with swords and war horses, you'd better have guns and armor. When the giants in your industry dominate with a kind of software, your software had better have the same advantage that guns have over swords and airplanes have over balloons…and that smart little girls have over giant adults.


  • “Top Nerd:” Nerd Values and Attitudes

    Though the subject of unapologetic humor, nerd values and attitudes are wonderful. Society would be better off with more nerds.

    When other groups have gotten status and prestige, or just respect, in the past, they have had to either (1) exercise aggressive dominance, or (2) do the victim thing. Nerds are getting respect like never before by just coming out, valuing who they are and what they do, making a bit of fun of themselves, and by the simple fact that people who actually know stuff and can get stuff done with passion and complete dedication are really valuable, admirable people who tend to enjoy what they do!

    I found a delightful blog post by Liz Andrade, a self-described nerd, who describes some nerd values that also illustrate why nerds are so valuable.

    Liz_is_nerdy

    Here are a couple excerpts:

    Nerds are Inspiring!

    Part of being a nerd has to do with having some strong opinions on whatever it is you’re nerdy for — be it Star Wars, video games or typography — nerds pride themselves on knowing a lot about what they are into and your opinions on the matter are part of your identity.

    This past year I shopped for glasses … and the experience I had … made something that in the past was nothing more than a necessary task into a remarkable experience! How? The people at these locations were total eye wear nerds!

    This is a sect of nerd I was not even aware existed … They were able to suggest ideas based on my face shape and style, they knew about eye wear designers, frame shapes, materials, vintage styles and their enthusiasm for the subject was infectious!

    When you are passionate about what you do, you inspire the people around you – and who doesn’t want to work with someone inspiring!?

    Nerds are Authentic

    Part of being nerdy is accepting yourself for who you are and what you are into even if it isn't what fits into the status quo or flows into the mainstream. Those who are able to embrace their nerdisms and not be ashamed of them have this obvious badge of honesty.

    Whether it is real or imagined, if someone can be totally open and honest about their Red Dwarf obsession, you feel they are probably transparent about other things in their life, like business practices and ethics.

    Nerds are Memorable

    Nerds usually stand out from the crowd… and being unique makes you easier to remember, as simple as that. It is each of our unique experiences and abilities that make us valuable individuals; blending in has become a liability to any business trying to be remarkable!


    Of course the work of nerdy, remarkable people like Temple Grandin has gotten some attention and helped the cause as well. Temple has had to overcome some obnoxious and inexcusable barriers … to make the world a better place! Not to mention, to get her job done! More of us should be more like Temple….


  • Software Quality: Theory and Reality

    The theory of software quality makes my head hurt; the reality makes me want to cry.

    There is a great deal of material written about software quality. It's a HUGE subject. It's also a diverse subject with lots of experts and lots to study. There is one simple reason for this: Software quality is a horrible %^*%^* mess, and it's not getting better!!!

    Software Quality Theory Makes My Head Hurt

    Just scan through the Wikipedia article on the subject and your head will probably hurt too.

    I particularly like this alert at the top of the Software Quality Factors section:

    This section needs attention from an expert on the subject. See the talk page for details. WikiProject Software or the Software Portal may be able to help recruit an expert. (September 2008)

    Note that they've been seeking this expert for nearly three years!

    Big government agencies have whole organizations devoted to the subject. For example, there's DACS, the Data & Analysis Center for Software, a Department of Defense (DoD) Information Analysis Center (IAC). What does DACS do? Read this (warning: reading this may make your head hurt):

    Designated as the DoD Software Information Clearinghouse, specifically aimed to serve as an authoritative source for state-of-the-art software information providing technical support for the software community, the DACS offers a wide variety of technical services and supports the development, testing, validation, and transitioning of software engineering technology to the defense community, industry, and academia. DACS subject areas encompass the entire software life cycle and include software engineering methods, practices, tools, standards, and acquisition management. Also included are programming environments and language techniques, software failures, test methodologies, software quality metrics and measurements, software reliability, software safety, cost estimation and modeling, standards and guides for software development and maintenance, and software technology for research, development, and training.

    I could go on and on, but my head hurts, so I'll stop.

    Software Quality Reality Makes Me Want to Cry

    With all these impressive-sounding things, books, conferences, experts, criteria, methods and certifications, software quality should be totally nailed, right? To the contrary: something is nailed when … people stop talking about it! Take the disease smallpox, for example. It's nailed! There aren't theories, experts, or much of anything beyond historical references and scare-talk about potential re-emergence.

    This is one of the better summaries of the reality of software quality that I've seen; ironically, it's from a zombie website for obsolete software written for a long-obsolete machine, which is (or was?) run by a couple of people from a little island in the Caribbean.

    Tree


  • “Top Nerd” Activities: Work Hard, Save the Day and Have Fun

    There are nerds. There are super-nerds. And then there are … Top Nerds. What do Top Nerds do? Simple:

    • Top Nerds work hard. Really hard. Why? They like hard work.
    • Top Nerds save the day. One of my Top Nerds buddies is doing it as I write this. No "software project" of any kind, however well-run, could possibly pull off what he's pulling off.
    • Top Nerds have fun. They're exploring, pioneering, accomplishing, learning. This is the best kind of fun!

    I just hosted a gathering of Top Nerds. Having them all together for a long weekend was fun for everyone — who else spends the first more-than-half of the Fourth of July in a conference room going deep into the details of machine learning techniques and applications, solves leading-edge problems in their practical application and discovers new, transformative uses for them, … and has a blast? Well, we sure did!

    2011 07 04 Nerdfest Monday 005s
    Naturally, Top Nerds scramble to overcome obstacles as they arise. One of our number was too busy saving the day for his company to be able to commit a holiday weekend to being in New York City with the rest of us. But he Skyped in (that's him on the screen in the picture), told us what he was doing, how and why, and in the interaction that ensued, everyone ended up ahead of the game, and thoroughly entertained as a side-effect!

    Because that's what Top Nerds do!

  • The new “Top Gun” is “Top Nerd”

    "Top Gun" is so last-century. Now nerds are on top of the heap, and being "Top Nerd" is best of all.

    When the hottest, coolest thing around is fighter planes…

    800px-F-15,_71st_Fighter_Squadron,_in_flight

    … it makes sense that the coolest dude around would be the best fighter pilot. This is what the 1986 movie "Top Gun" was all about.

    Top_gun_maverick_tom_cruise_suited

    Top Gun is macho alpha male behavior to the max. It's competitive guys who look at other capable guys as something in the spectrum of rival to enemy. Everyone else is just stuff to be vanquished in primal combat.

    Top-Gun-movie-03

    Fast-forward to 2011. When I walk around the streets, everyone who isn't about to be retired (and an amazing number who are) is either plugged in, communicating via a portable device, or (increasingly on buses and trains) absorbed in an e-reader. Advertising is rapidly shifting to digital media, and similar digital transformations are taking place in other domains. Are fighter planes at the heart of this world-wide, all-pervading transformation? Hardly. It's computers and the software that makes them do what we want (mostly). And who's the fighter pilot for the computers? It's the people who write the software; in other words, it's nerds!

    This is a really good changing of the guard. With rare exceptions, nerds are much nicer people than "Top Gun" types. Nerds are much more interested in learning and accomplishing things than non-nerds. Cooperation and collaboration are characteristics that are well within nerd-normal behavior. This is illustrated by the fact that when you've got a "Top Gun," you've often got a bunch of bitter, defeated rivals, while "Top Nerd" is normally designated by acclamation by hard-working, admiring fellow nerds.

    In my opinion, this is a good thing. Good for nerds, and good for the world.

