Category: Software Evolution

  • Summary: Computer Software History and Evolution

    This is a summary, with links to my posts, of the history and evolution of computer hardware and software.

    In computers and software, history is ignored and its lessons spurned. What little software history we are taught is often wrong. Everyone who writes or uses software pays for this, and pays big. This is true at the largest scale of computing.

    https://blackliszt.com/2012/03/computer-history.html

    Software history can be spectacularly bad about important things.

    https://blackliszt.com/2019/08/the-comes-before-is-not-causation-fallacy-in-software-evolution.html

    Bad software history is also rife on a smaller scale, for example cryptography.

    https://blackliszt.com/2020/10/elizebeth-smith-friedman-the-cancelled-heroine-of-cryptography.html

    Software history is largely ignored.

    https://blackliszt.com/2015/06/dont-know-much-about-history.html

    When you learn math, you are learning its history. When you get to calculus, you’re essentially up to the 1600’s. You build from the ground up. Not in computing, to its detriment.

    https://blackliszt.com/2015/05/math-and-computer-science-vs-software-development.html

    Studying history would show modern software people who think they’re solving big problems that the problem is decades old and that the box of failed solutions is overflowing.

    https://blackliszt.com/2021/08/the-crisis-in-software-is-over-50-years-old.html

    Software development is a giant mess. Studying its history and the attempts to clean up the mess is an essential part of constructive change.

    https://blackliszt.com/2022/04/how-to-fix-software-development-and-security-a-brief-history.html

    The history of other fields can have lessons that are highly relevant for software.

    https://blackliszt.com/2012/07/what-can-software-learn-from-steamboats-and-antiseptic-surgery.html

    https://blackliszt.com/2014/02/lessons-for-software-from-the-history-of-scurvy.html

    https://blackliszt.com/2019/02/what-software-experts-think-about-blood-letting.html

    Some aspects of computer history are so important and relevant that, when understood and applied, they make dramatic improvements to software development.

    https://blackliszt.com/2013/12/fundamental-concepts-of-computing-speed-of-evolution.html

    Software Evolution

    Software evolves. But it evolves differently than other things studied by science. Virtually no one studies software history in any way, much less the patterns of evolution that become apparent when you study that history.

    https://blackliszt.com/2019/06/the-evolution-of-software.html

    Why should anybody bother to study software evolution? Knowing software evolution can help you predict the future! How? Software does not evolve the way most people think it does.

    https://blackliszt.com/2019/09/understanding-software-evolution-helps-you-predict-the-future-of-software.html

    When studying software evolution it’s important to understand the underlying principles of automation – the things that computers and software are capable of doing. The fundamentals will tell you WHAT will happen, but not WHO or WHEN.

    https://blackliszt.com/2020/01/the-fundamentals-of-computer-automation.html

    The way you learn software evolution is to study both its history in general and how it has been used in different industries and domains.

    https://blackliszt.com/2014/04/continents-and-islands-in-the-world-of-computers.html

    One clear pattern of evolution is that a kind of software will be established in one domain and then slowly, sometimes over decades, appear in other domains, one by one.

    https://blackliszt.com/2019/08/the-slow-spread-of-linear-programming-illustrates-how-in-old-vation-in-software-evolution-works.html

    A similar pattern often occurs with simple kinds of software.

    https://blackliszt.com/2019/09/simple-data-entry-technology-illustrates-how-in-old-vation-in-software-evolution-works.html

    A peculiar aspect of software evolution is software fashions. They appear, grow strongly and often fade away as their awfulness becomes hard to avoid. Then they come back, usually re-named.

    https://blackliszt.com/2019/05/recurring-software-fashion-nightmares.html

    A typical result of a fashion-driven software trend is to increase costs without delivering benefits, which can happen even when it involves an intrinsically good thing like workflow automation.

    https://blackliszt.com/2019/09/laser-disks-and-workflow-illustrate-the-insane-fashion-driven-nature-of-software-evolution.html

    One of the forces of evolution is resistance to automation.

    https://blackliszt.com/2020/01/luddites.html

    Software Programming Language Evolution

    Software people love to talk about progress and innovation. Most of what they claim as progress is little but milling around in a small box.

    https://blackliszt.com/2020/09/software-programming-languages-50-years-of-progress.html

    There is an odd evolution in programming languages, in which the relationship between data definitions and programs cycles from inside to outside the program and back.

    https://blackliszt.com/2015/06/innovations-that-arent-data-definitions-inside-or-outside-the-program.html

    However, there have been a couple major advances in the evolution of programming languages.

    https://blackliszt.com/2020/09/the-giant-advances-in-software-programming-languages.html

    While some truly powerful advances have become standard, others have been thoughtlessly discarded.

    https://blackliszt.com/2021/09/software-programming-language-evolution-structures-blocks-and-macros.html

    Other powerful advances, widely used for a time in the market, ended up being ignored or abandoned by academia and industry in favor of a productivity-killing combination of tools and technologies.

    https://blackliszt.com/2020/11/software-programming-language-evolution-beyond-3gls.html

    The emergence from academia of “structured programming” and the associated effort to find all GOTO statements and burn them at the stake was a particularly shameful instance of the evolution of programming languages.

    https://blackliszt.com/2021/09/software-programming-language-evolution-the-structured-programming-goto-witch-hunt.html

    While loads of people focus on languages, programming tools that make huge contributions to the productivity of programmers have also evolved: libraries and frameworks.

    https://blackliszt.com/2021/10/software-programming-language-evolution-libraries-and-frameworks.html

    Some programming languages that were supposed to be so much better than older ones led to major failures. But failures are expected in the normal world of software development, so the bad new languages kept marching along.

    https://blackliszt.com/2021/01/software-programming-language-evolution-credit-card-software-examples-1.html

    The small world of functional software languages has an interesting history.

    https://blackliszt.com/2021/05/software-programming-language-evolution-functional-languages.html

    Software Applications and Systems Evolution

    One of the recurring evolution patterns is that functionality emerges on a new platform in roughly the same order as it emerged on earlier platforms. The timescale of the emergence may be compressed; the important aspect of the pattern isn’t the timing but the order. Here is an explanation of the general concept and how it worked for operating systems.

    https://blackliszt.com/2020/10/software-evolution-functionality-on-a-new-platform.html

    Here is how the same pattern of functionality on a new platform worked out in security services.

    https://blackliszt.com/2020/11/software-evolution-functionality-on-a-new-platform-security-services.html

    Here’s the pattern of existing functionality on a new platform as seen for remote access software.

    https://blackliszt.com/2020/11/software-evolution-functionality-on-a-new-platform-remote-access.html

    Just because you’re building a version of existing functionality for a new platform doesn’t mean you’ll succeed. You can screw it up in various ways, including being too early – if the market doesn’t think it has a problem, it won’t buy a solution.

    https://blackliszt.com/2020/11/software-evolution-functionality-on-a-new-platform-market-research.html

    Transaction monitors are a classic example of the pattern of functionality emerging on a technology platform, and then emerging in pretty much the same way in the same order on other platforms.

    https://blackliszt.com/2021/02/software-evolution-functionality-on-a-new-platform-transaction-monitors.html

    Once an application appears on a software platform, there is a consistent way the category of applications evolves on that platform. It goes from a custom application to a basic product, then parameters are added and finally a workbench.

    https://blackliszt.com/2020/01/the-progression-of-abstraction-in-software-applications.html

    Here are examples of the progression from prototype through increasing levels of abstraction.

    https://blackliszt.com/2021/03/the-progression-towards-abstraction-on-a-software-platform-examples.html

    Software automates human effort to varying degrees. One dimension of automation is depth, in which software evolves from recording what people do through helping them and eventually to replacing them.

    https://blackliszt.com/2021/04/the-dimension-of-software-automation-depth.html

    There is surprising pain and trouble in going from one stage of automation depth to the next. Unlike the progression to increasing levels of abstraction, customers tend to resist moving to the next stage of automation for various reasons, including the fear of loss of control and power.

    https://blackliszt.com/2021/05/the-dimension-of-software-automation-depth-examples.html

    Here’s how the automation depth pattern plays out in “information access,” the set of facilities that enable people to find and use computer-based information for decision making.

    https://blackliszt.com/2021/12/the-dimension-of-automation-depth-in-information-access.html

    The patterns of automation play out in multiple dimensions. In addition to automation depth, there is automation breadth.

    https://blackliszt.com/2021/11/the-dimension-of-software-automation-breadth.html

    Here are a couple of clear examples of the evolution of software applications to increasing automation breadth.

    https://blackliszt.com/2022/07/the-dimension-of-software-automation-breadth-examples.html

    The evolution of spreadsheets is a good example of how these patterns work out in history, and clearly demonstrates the predictive power they have for anyone who cares to look.

    https://blackliszt.com/2023/09/software-evolution-spreadsheet-example.html

    As software evolves in these ways, it becomes increasingly nuts for a potential customer to build it for themselves. However, there are a surprising number of cases where a smart user can build a private version of commercially available software and win with it.

    https://blackliszt.com/2023/09/software-evolution-build-buy-then-build.html

    You would think that user interfaces would be thoroughly understood. When you look at a variety of UI’s, you see that the underlying principles are clear but usually ignored.

    https://blackliszt.com/2020/12/software-evolution-user-interface.html

    https://blackliszt.com/2021/01/software-evolution-user-interface-concepts-whose-perspective.html

    https://blackliszt.com/2020/01/how-to-design-software-user-interfaces-that-take-less-time.html

    Does Software Always Evolve?

    There is a large body of core software that evolves extremely slowly.

    https://blackliszt.com/2023/07/does-software-evolve-rapidly.html

    Once a kind of software gets built, it tends to live long past the problem for which it was the solution. Data used to be too big for memory, so special software was invented to handle storing it. Now, for most applications, the data will all fit in memory. But the software and the coding practices built around it live on!

    https://blackliszt.com/2010/09/databases-and-applications.html

    Yes, mainframes have been obsoleted by modern hardware – but they are still needed by the software that runs on them.

    https://blackliszt.com/2010/01/paleolithic-mainframes-discovered-alive-in-data-center.html

    While there are things that change in software and management, a surprising number of common things just get new names. The “cloud” is a typical example.

    https://blackliszt.com/2011/12/the-name-game-of-moving-to-the-cloud.html

    Another example, this one from pure software, is SOA being renamed as micro-services.

    https://blackliszt.com/2021/03/micro-services-the-forgotten-history-of-failures.html

    One of the explosive growth areas of the internet boom was the invention and spread of social media. Most of the people who look at it ignore its deep roots in non-digital media.

    https://blackliszt.com/2018/10/social-media-has-a-long-history.html

    If and when serious study of computer software history and evolution finally starts to take place, perhaps Computer Science will start on the path to being, you know, scientific. And normal software development will stop being dominated by fashions.

    https://blackliszt.com/2023/04/summary-computer-science.html

    https://blackliszt.com/2023/07/summary-software-fashions.html

     

  • Software Evolution: Build, Buy then Build

    When a kind of software isn’t available, it makes sense to build it. If it is available to buy, everyone says you should buy it. But in a surprising number of cases, building makes sense even when you can buy software.

    Here's an example of a company that wasn't in the software development business but decided to build what it needed, and grew an amazing business from there.

    There was a little bank in Georgia that somehow got the ambition to process credit cards for their customers when that was a new thing. There was no software available to buy, so they built some primitive software. It worked well enough. Then they improved the software and sold some local banks on processing credit cards for them (not necessarily in that order). One thing led to another, and the business grew into the second-largest credit card processing company in the US, TSYS. Particularly when software categories are new, that’s what people do – they build software.

    As the software becomes available, it starts to make sense to buy it. There are always issues at the beginning (see this about developing functionality on a new platform) about the time and effort to make the software work for you, and whether it has all the features you need. Sooner or later, the surviving vendors reduce the pain of customization and implementation, and develop a rich enough set of functionality so that the needs of nearly everyone in a business segment are met. At some point on that spectrum, it becomes kind of nuts to build it when you could buy it. It might take you years to build what you need, and then you’ll have to support it, and you’ll always be behind your competitors who are using off-the-shelf software, not to mention the risk that the software you try to build doesn’t end up working or really meeting your needs. So the mantra in the corporate offices becomes “buy, buy!” And only if you really can’t buy it should you build it; and even then, you should look real hard for excuses to put it off. If nothing else, there is less risk associated with buying software, and corporate managers nearly always vote for the least-risky of alternatives.

    However, as these products have been getting built, tools and components for building software have also been advancing, and the ability to make your software systems adapt to changing business conditions has become increasingly important as a business advantage. Meanwhile, the major products in any category tend to get bloated, as pressures from various customer groups result in feature after feature being added to the product. In the early days of a product, you are most concerned that the features you need are in the product; as the product ages, the concern shifts to the burden on the software and users from the vast majority of the features in the product that someone may need, but not you. It may be that you only need a tiny fraction of the available features. An example of this that is close to home for many people is the Microsoft Office products – who even knows what most of the menu items in Word mean, much less uses them?

    The case against purchased products gets stronger when you need several of them, and they all have to work together. Then you have the problem of application integration, which is typically compounded by different release and upgrade schedules from the vendors, who, try as they might, don’t seem to be able to release a new version without screwing something up that worked in the older version. If, on top of it all, you have an ambitious and low-cost programming group at your disposal, then you’re a good candidate for the final stage in this evolution, which is, “OK, I bought it and I’m using it, now when do I get to turn it off?” “Programming” shops whose main purpose in life is baby-sitting purchased products can’t even imagine doing something like this, which is probably just as well, because they are highly unlikely to pull it off. But groups who are itching to write code, and do stealth projects to prove what they can do, are another matter.

    So the evolution returns to the starting point, only typically at a much more advanced level, with more advanced tools and a knowledgeable view of the business and the required functionality. “Why pay the vendors their ransom and put up with the crap they dish out, when I can write it, own it, change it any time I need to, and control my own destiny?” If the numbers work out, why not, indeed?

  • Software Evolution Spreadsheet Example

    The story of the evolution of spreadsheets is well-known, though rarely studied. Spreadsheet evolution illustrates the patterns of software evolution. It shows the power of those patterns; if the leaders of Apple, Lotus and Visicalc had known and acted on the predictive power of those patterns, Office would not have been a Microsoft product, but one of theirs.

    As a reminder, here are the basic patterns of software evolution:

    Software is created for a new platform.

    Software grows more capable on the platform with increasing abstraction.

    The automation provided grows deeper.

    The breadth of automation grows wider, doing more things.

    The Spreadsheet Products in Evolutionary Terms

    The first popular spreadsheet program appeared on the then-new Apple computer. It was Visicalc. In terms of the stages of evolution, it was this:

    • Emergence on a new platform: Conceptual Breakthrough
    • Development on a platform: Basic product
    • Automation depth: Recorder
    • Automation breadth: Point product

    If the platform became dominant and remained stable, we would expect competition by increasing the depth of automation on the Apple, increasing the breadth of automation e.g. by creating a collection of related products, or by moving along the same-platform development dimension. Instead, a powerful and popular new platform emerged, the PC. Visicalc themselves should have become the leaders in spreadsheets for the new platform, but as so often happens, the emergence of the new platform is an opportunity for a new, focused company to do it.

    The company that won was Lotus, with their 1-2-3 product.

    • Old platform: Apple I/II
    • Old functionality: Spreadsheet, like Visicalc
    • New platform: IBM PC/DOS and clones
    • New functionality: Spreadsheet, much like Visicalc, along with some hype of other applications
    • Outcome: They won big time

    All the other categories (Basic product, etc.) remained the same.

    Meanwhile, back in the land of Apple, the Mac came out, the first practical GUI on a personal computer platform. You would think this would be the big chance for Apple themselves to jump into the business, or for Lotus to convert their category-killing program. And of course, it was their big chance, but they blew it. Instead, Microsoft, of all unlikely companies, jumped in.

    • Old platform: Apple I/II, PC/DOS
    • Old functionality: Character mode spreadsheet, like Visicalc and Lotus 1-2-3
    • New platform: Apple Mac GUI
    • New functionality: Excel Spreadsheet, but with attractive fonts and graphics
    • Outcome: They won on the Mac

    It wasn’t easy to get a spreadsheet right for GUI, but Microsoft kept plugging away and really took advantage of the graphics and the mouse.

    At the same time, Microsoft was plugging away at their own clone of the Mac operating system for the PC, Windows, which they announced in 1983 and finally came out with a usable release, 3.1, in 1992 (nine loo-oo-oong years later – but that’s another story). Here we have a GUI platform for the PC, Lotus’ home base. The GUI had shown its attractiveness elsewhere. There was a GUI backer who was clearly going to see it through. You would think that the savvy business executive who was in charge of Lotus at the time, surrounded by a heavy-weight group of Cambridge thinkers, would have been all over this. Didn’t happen. While Apple built the GUI but failed to build the spreadsheet for it, and while Lotus built the spreadsheet but failed to convert it to a GUI on their own dominant platform, Microsoft built both the GUI and the spreadsheet.

    • Old platform: Apple Mac, PC/DOS
    • Old functionality: Mac GUI spreadsheet; PC/DOS character spreadsheet
    • New platform: PC Windows
    • New functionality: Same as on Mac
    • Outcome: Huge win/win: PC/Windows helped Excel, Excel helped Windows

    At the time, Lotus had much more revenue than Microsoft, and seemed reluctant to rewrite its industry-leading spreadsheet for Microsoft’s Windows, which was seen as a move that would help the success of that platform, and thus help Lotus’ competitor. But once it was clear that Windows was here to stay, Lotus finally did the re-write, and got 1-2-3 onto Windows. So now 1-2-3 and Excel were competing as point products on the same technology platform.

    But now Lotus had another problem: 1-2-3 was a point product (I know, I know, they said they had three applications, thus the “1-2-3,” but who used the “2-3”?), and by this time, Microsoft had an actual product suite: Microsoft Office, combining spreadsheet with word processing and presentations via OLE and drag-and-drop. On the movement from point product to product collection to product suite, Microsoft was consistently a step ahead, and thus had more market pull than Lotus. While Lotus was plugging the virtues of 1-2-3 vs. Excel, Microsoft was promoting the value of its product collection. Then, while Lotus was promoting the value of its acquired collection of office products, Microsoft had a real product suite. The end result, we know today, is the complete dominance of the Microsoft Office product suite.

    Conclusion

    The evolution of the spreadsheet fits nicely into the general patterns of software evolution I have described. If strategists and company leaders had been aware of these patterns and acted on their proven predictive power, the outcome of the spreadsheet wars would have turned out differently. Leaders today would be well-advised to understand and apply the patterns of software evolution.

  • Does Software Evolve Rapidly?

    Some software is changed over time. But there is a great deal of software that is simply part of the landscape or part of the underpinning of the software we see. This largely ignored category of software is important to understand.

    Keeping up with Software

    I had a nice conversation with the non-technical CEO of one of our software-based companies. He expressed the widely-held view that the process of developing software is undergoing change at a dizzying pace. Just in the last couple of years, new languages and tools have emerged and are being used and talked about. How, he wondered, can anyone possibly keep up?

    That certainly is how things seem to most people, both inside and outside of the computer industry. We had desktop machines, personal computers, 25 years ago, and we have them today; of course, they’re less expensive and have way more power and capacity than they did back then, but they’re still personal computers, we put them on or beside our desks, and use them. Software, on the other hand, undergoes constant change. For programmers, we’ve gone through an incredible changing landscape of programming languages and tools during that time. I’ll just give a short list: assembler, BASIC, Turbo Pascal, Smalltalk, C, Objective-C, 4-GL’s too numerous to list, C++, Java, C#, Perl, JavaScript, PHP, Python, Ruby and on and on. It’s no wonder that it seems like it’s hard to keep up! And of course, in a way it is – it does take work. But most of that language change is little but noise. The claims of advances don’t hold up when examined.

    That’s how it seems. Hardware is hardware. Yes, it gets better and cheaper, but software, wow – there’s always something new!

    The reality is different. True software change, whether in methods or bodies of code, is slower than glacial.

    The Non-Evolution of Software Categories

    It’s hardware that undergoes dramatic change, not software. Think Moore’s Law. Think about mobile phones just ten years ago. Here are details. By contrast, change in software is actually glacial. Yes, the surface weather of software changes rapidly and unpredictably; but the underlying principles and structure of software don’t even move as fast as glaciers – it’s more like tectonic plates.
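    For a rough sense of the hardware side of this contrast, the Moore’s Law cadence of a doubling roughly every two years compounds enormously over a couple of decades. A back-of-the-envelope sketch in Python (my illustration, not a figure from the post):

    ```python
    # Rough Moore's Law arithmetic: capacity doubling about every 2 years.
    years = 25
    doublings = years / 2              # 12.5 doublings over 25 years
    growth = 2 ** doublings            # roughly 5,800x
    print(f"~{growth:,.0f}x growth in {years} years")
    ```

    Nothing in the underlying structure of software moves at anything like that compounding rate.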

    It certainly seems as though things are changing all the time. Software is certainly “soft” compared to hardware. But programmers spend most of their time in their minds, not wandering around touching up the hardware. What do they spend time thinking about? It’s not the hardware – it’s the software environment in which they work, and in which the software they create will live.

    Think about kids in a sand box. The sand is something they can push around, pile up, make holes in, and generally have fun with. How about the box the sand is in? It’s hard. It keeps the sand in one area. No kid thinks about it much – it’s just something that’s there when you’re playing.

    It works the same in software. The “part” of the software you are playing with (a.k.a. “programming”) seems “soft” and easily changeable, like sand. The “part” of the software you’re not playing with seems “hard” to you. It’s just part of the landscape. This effect is strongest when the programmer has no access to the source code of the landscape software. An example is a DBMS or an operating system. These things feel like immutable reality. They are what they are, and you have to deal with them. Hardware speed and storage capacity have grown so much, for example, that DBMS’s are a data storage solution to a problem that no longer exists. But most software design continues to be built on them.

    Just as other pieces of software feel like given realities if you’re a programmer, so does the programming language in which you’re working. The verbs and the syntax are as real as the grains of sand. Just as when you play with sand, you have no sense that each grain is actually a very large aggregation of molecules, so when you use a verb in the programming language, you have no sense that a compiler or interpreter will actually use many machine language instructions to execute each verb.
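    You can see this expansion for yourself. In Python, for example, the standard "dis" module shows the many interpreter instructions hiding behind a single high-level line (a minimal illustration of the point, assuming CPython):

    ```python
    import dis

    # A single high-level line of code...
    def total(prices):
        return sum(p * 1.07 for p in prices)

    # ...expands into many lower-level bytecode instructions,
    # which the interpreter in turn executes as machine code.
    dis.dis(total)
    print(len(list(dis.Bytecode(total))), "instructions for one line")
    ```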

    The “hardness” of software to which you have no source code access (or otherwise no ability to change) is a strong psychological effect.

    The Glacial Evolution of Software Applications

    Programmers have a special view of software. Regular people also use software personally and/or professionally. While society gives us the impression that the software people use changes like crazy, the reality is that people learn new things slowly, and successful software evolves very slowly. The changes we see are largely incidental things that are enabled by the astounding advance of hardware.

    The Bedrock of Production Systems

    Large bodies of working, production code are the bedrock on which fancy new layers may be built, but change very slowly. Here's an example of the impact of that hard-to-change code.

    It isn’t as though people don’t want to change these systems. Often they are desperate to change them! Sometimes managers will put huge investments into changing or replacing them, only to quietly admit failure and stop trying. Until the next bunch of ignorant-of-history managers come along.

    These bedrock systems aren't just systems software like DBMS's. They are also plain old applications. For example, the software that handles the largest number of credit card accounts in the world is written in the prehistoric language COBOL. Several attempts were made to re-write card processing software using fancy new languages. Failed. See also this and this.

    Conclusion

    Hardware changes with amazing speed, giving no signs of slowing down. While there's always something new to talk about in software, it changes much more slowly than hardware, and is remarkably hard to change.

     

  • The Dimension of Software Automation Breadth Examples

    One of the major ways software evolves is by increasing along the dimension of automation breadth. A domain can be dominated by products at a given breadth of automation, and suddenly an existing or new competitor starts winning by increasing its breadth of automation, offering its customers more value for less effort and money. It's a classic move and a good way for new entrants to disrupt a market.

    One of the most frequently given pieces of advice, including by me, is to “focus,” i.e. basically solve fewer problems, try to satisfy a narrower range of customers, etc. While this advice is applicable more often than not, the natural and recurring progression of products through the spectrum of “automation breadth” makes it clear that, sometimes, when the conditions are right, the winning strategy is to be among the first to increase the breadth of automation that you incorporate into your product.

    Example: Athena Health

    A clear example of this is shown by the story of AthenaHealth. At the time the founders started the company, a wide variety of products were already available to run physician offices, from small single-office practices to extended medical groups. These products generally ran on inexpensive machines that the practice would keep in some back room, and would support multiple users via a LAN or terminals. Most of the products were sold by license, so that the office had to pay only a modest price for the license, and then annual maintenance.

    Along comes little Athenahealth, with a better way of doing things. Athena had a cool new practice management system (PMS). Unlike all PMS’s at the time, it was built using internet technologies, so that it could be operated as a service, with people at the office accessing the system using machines with browsers and internet connections. Athena took care of the computers, relieving the medical office of a burden it basically didn’t want.     

    But they ran into a little problem: the people in charge of medical practices are doctors, and doctors really don’t care about PMS’s – they care about patients and medicine. A PMS is a necessary evil, something you should buy for as little as you can get away with and ignore until things get so bad you are forced to buy a new one. Money spent on the PMS is just money out of the doctors’ pockets, as far as they’re concerned. Oh, you have a “better” one, do you, whatever that means? Stop wasting my time.

    The folks at Athena noticed that one little thing the PMS does is produce bills and claims, the purpose of which is to get patients and insurance companies to send them money. No claim, no money. Unfortunately, merely producing the claims rarely proves to be sufficient to get the money flowing. People are required to do special things to the claims, provide additional information, harass the payers, etc. This is so specialized and time-consuming that it either consumes the time of a number of people at the office, or is outsourced to a “billing service.”

    The Athena folks went on to notice additional important things: (1) the chances of getting paid are a direct reflection of the quality and appropriateness of the information on the claim; (2) the PMS and how it is used is the main source of this information; (3) by actually performing the billing service, you can learn how to produce a better PMS that produces better claims, increasing the effectiveness of the billing service while reducing the cost of running it at the same time. Finally, they found out that there is something the doctors who run medical practices care about other than medicine – surprise, surprise, that something is money.

    So Athena introduced an outsourced billing service, but required practices that use it to also use their practice management system – at no additional cost! And they got so good at collecting the money that doctors could essentially get more money and a really cool, state-of-the-art PMS (like they cared…) for free!

    This is a nice story for Athena, but the point of telling it here is that it illustrates the principle of product automation breadth evolution. While products are evolving within a “level” of automation breadth (i.e., how many of an organization’s functions they automate), it is normally a good idea to maintain discipline, avoid distractions, and concentrate on automating that function. But at a certain point in the evolution of each product category, pretty much everyone in a space has automated everything within that function, and everyone is reduced to concentrating on sales strategies and niggling little details. At that point (and PMS’s were at exactly that point when Athena came along), it makes sense to do what you’re normally supposed to avoid: look for another function inside the organization to automate, particularly if there are synergies in implementing the two functions within a single framework, as there certainly were in this case.

    Example: Bank and Retail Credit Cards

    In 1983 a small company called CCS (Credit Card Software, later called Paysys) released a body of COBOL code that would enable a bank to process credit cards. A number of small and regional banks bought copies of the code and ran it successfully. The code was enhanced over the years.

    A major retailer, Michael's Jewelers, approached CCS and asked if they could make a version of the bank software that could handle purchases from their stores, including a variety of payment plans and financing options offered by the store that were not supported by bank card software.

    The company's programmers quickly gave up on the idea of modifying the bank code to handle the problem. Many aspects of bank card processing, such as the difference between issuing and acquiring, were irrelevant to retail. In addition, the many financing options supported by retailers went far beyond anything banks did. So they borrowed from the bank code to the extent that it helped and ended up creating a separate body of software called Vision 21. Once it was available, it proved to be a big success in the market, and was quickly enhanced by customer demand to include all the options desired by retailers. Before long it supported the needs of retailers in countries as diverse as Japan and South Africa.

    Finally, there was a very large processor, Household International, that was running multiple copies of both products, kept separate because they had been customized for a variety of reasons, for example to support methods of credit that were unique to a market (such as “hire-purchase” in South Africa). While CCS, now called Paysys, had failed to create a generic bank/retail product when confronted with an example of the generic problem, unifying multiple bodies of related code into a single, highly parameterized code base proved to be a far more tractable problem, particularly with a single important customer who insisted that these variations were the only ones to worry about.

    The industry quickly rallied to this new product, called Vision PLUS, that could be directed at so many different problems with such relative ease. For example, it enabled retailers to issue co-branded cards that worked like regular bank cards, except when used in the issuer's retail store, when it acted like a classic store card with features like "90 days same as cash" options that bank cards don't support. While “parameterized product” may sound like an abstract concept, it translates directly into business advantage compared to more primitive product types, by enabling the product to be customized, installed, upgraded and maintained with less labor, less time and lower risk of error.

    The company that built Vision PLUS was bought by First Data, a major card processor. The reason is interesting: in spite of having thousands of programmers (Paysys had only dozens), First Data was unable to modify their US-centric code base to handle processing in Japan. At the time of the sale, Vision PLUS was processing about 150 million cards world-wide. The code lives on and currently runs over 600 million cards.

    Conclusion

    These are two examples of companies growing by broadening their focus — in a strategic way, driven by just a couple of representative customers. In neither case did they address a whole new market at the beginning, though that was the eventual goal. They took their existing software along with a cooperative customer and met that customer's needs. Athena started with just one specialty in one state with a single payer. Paysys started with a single existing customer. In each case they broadened their focus a step at a time, making each customer happy as they went. As they grew, word got around in the industry, and they shifted to saying "no" to the vast majority of inquiries in order to maintain the step-wise customer success they were building.

    This is a classic pattern of focus broadening that can bring transformative success to companies when handled well.

     

  • The Dimension of Automation Depth in Information Access

    I have described the concept of automation depth, which goes through natural stages, starting with the computer playing a completely supportive role to the person (the recorder stage) and ending with the robot stage, in which the person plays a secondary role. I have illustrated these stages with a couple of examples that show the surprising pain and trouble of going from one stage to the next.

    Unlike the progression of software applications from custom through parameterized to workbench, customers tend to resist moving to the next stage of automation for various reasons including the fear of loss of control and power.

    Automation depth in Information Access

    Each of the patterns of software evolution I've described is general in nature. I’ve tried to give examples to show how the principles are applied. In this section, I’ll show how the entire pattern played out in “information access,” which is the set of facilities for enabling people to find and use computer-based information for business decision making.

    Built-in Reporting

    “Recorder” is the first stage of the automation depth pattern of software evolution. In the case of information access, early programs were written to record the basic transactions that took place; as part of the recording operation, reports were typically produced, summarizing the operations just performed. For example, all the checks written and deposits made at a bank would be recorded during the day; then, at night, all the daily activity would be posted to the accounts. The posting program would perform all the updates and create reports. The reports would include the changes made, the new status of all the accounts, and whatever else was needed to run the bank.

    At this initial stage, the program that does the recording also does the reporting. Reporting is usually thought to be an integral part of the recording process – you do it, and then report on what you did. Why would you have one program doing things, and a whole separate program figuring out and reporting on what the first program did? It makes no sense.

    What if you need reports for different purposes? You enhance the core program and the associated reports. What if lots of people want the reports? You build (in the early days) or acquire (as the market matured) a report distribution system, to file the reports and provide them to authorized people as required.

    Efficiency was a key consideration. The core transaction processing was “touching” the transactions and the master files; while it was doing this, it could be updating counters and adding to reports as it went along, so that you wouldn’t have to re-process the same data multiple times.
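    To make that single-pass idea concrete, here is a minimal sketch (all names and data structures are invented for illustration): posting updates the master records and accumulates the report counters in the same loop, so the data is touched only once.

```python
# Single-pass posting: update master records and accumulate report
# totals in the same loop, so the transaction data is only
# "touched" once. All names here are illustrative, not from any
# real banking system.

def post_and_report(accounts, transactions):
    report = {"deposits": 0, "checks": 0, "total_in": 0.0, "total_out": 0.0}
    for txn in transactions:
        acct = accounts[txn["account"]]
        if txn["type"] == "deposit":
            acct["balance"] += txn["amount"]
            report["deposits"] += 1
            report["total_in"] += txn["amount"]
        else:  # a check drawn on the account
            acct["balance"] -= txn["amount"]
            report["checks"] += 1
            report["total_out"] += txn["amount"]
    return report

accounts = {"A1": {"balance": 100.0}, "A2": {"balance": 50.0}}
txns = [{"account": "A1", "type": "deposit", "amount": 25.0},
        {"account": "A2", "type": "check", "amount": 10.0}]
report = post_and_report(accounts, txns)
```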

    Report Writers

    The “power tool” stage of automation depth had two major sub-stages. The first of these was the separation of reporting from transaction processing. Information access was now a key goal in itself, and was so important and done so frequently that specialized tools were built to make it easy, which is always the sign that you’re into the “power tool” phase.

    This first generation of power tools consisted of specialized software packages generally called “report writers,” directed at the programmer who had to create the report. Originally, the language that was used for transaction processing was also used for generating the report; the most common such language was COBOL. COBOL was cumbersome for this purpose, so much so that specialized syntax was added to the language to ease the task of writing reports. But various clever people saw that by creating a whole new language and software environment, the process of writing reports could be tremendously simplified. These people began to think in terms of reporting itself, so naturally they broke the problem into its natural pieces: accessing the data you want to report on, processing it (select, sort, sum, etc.), and formatting it for output.
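    As a rough illustration of those three pieces, here is a toy report writer expressed in modern terms (the data and formatting are invented): access the rows, process them (select, sort, sum), and format the output.

```python
# The three natural pieces of a report writer, sketched with
# illustrative data: access, process (select/sort/sum), format.

rows = [
    {"region": "East", "rep": "Ann", "sales": 120},
    {"region": "West", "rep": "Bob", "sales": 80},
    {"region": "East", "rep": "Cal", "sales": 200},
]

# Access + select: only the region we are reporting on.
selected = [r for r in rows if r["region"] == "East"]

# Process: sort and sum.
selected.sort(key=lambda r: r["sales"], reverse=True)
total = sum(r["sales"] for r in selected)

# Format: fixed-width lines, the classic report-writer output.
lines = [f"{r['rep']:<10}{r['sales']:>8}" for r in selected]
lines.append(f"{'TOTAL':<10}{total:>8}")
report = "\n".join(lines)
```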

    The result of this thinking was a whole industry that itself evolved over time, playing out in multiple environments and taking multiple forms. The common denominator was that they were all software tools to enable programmers to produce reports more quickly and effectively than before, and were completely separate from the recorder or transaction processing function.

    At the same time, data storage was evolving. The database management system emerged through several generations. This is not the place for that story, which is tangential to the automation depth of information access. What is relevant is that, as the industry generally recognized that information access had moved to the report writer stage of automation, effort was made to create a clean interface between data and the programs that accessed the data for various purposes.

    Data Warehouse and OLAP

    Report writers were (and are) important power tools – but they’re basically directed at programmers. Yet programmers are not the ultimate audience for most reports; most reports are for people charged with comprehending the business implications of what is on the report and taking appropriate action in response. And the business users proved to be perennially dissatisfied with the reports they were getting. There was too much information (making it hard to find the important things), not enough information, information organized in confusing ways (so that users would need to walk through multiple reports side-by-side), or information presented in boring ways that made it difficult to grasp the significance of what was on the page. And anytime you wanted something different, it was a big magilla – you’d have to get resources authorized, get a programmer assigned, suffer through the work eventually getting done, and by then you’d have twice as many new things that needed doing.

    As a result of these problems, a second wave of power tools emerged, directed at this business user. These eventually were called OLAP tools. The business user (with varying levels of help from those annoying programmers) had his own power tool, giving him direct access to the information. Instead of static reports, you could click on something and find out more about it – right away! But with business users clicking, the underlying data management systems were getting killed, so before long the business users got their own copy of the data, a data warehouse system.

    In a sign of things to come, the business users noticed that sometimes, they were just scanning the reports for items of significance, and that it wasn’t hard to spell out exactly what they cared about. So OLAP tools were enhanced to find and highlight items of special significance, for example sales regions where the latest sales trends were lower than projections by a certain margin. This evolved into a whole system of alerts.
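    A minimal sketch of such an alert rule, with invented data and an assumed 10% margin, shows how little it takes to spell out "what they cared about":

```python
# Sketch of an OLAP-style alert: flag regions where the latest
# sales trail projections by more than a chosen margin. The data
# and the 10% default threshold are illustrative.

def sales_alerts(actuals, projections, margin=0.10):
    alerts = []
    for region, actual in actuals.items():
        projected = projections[region]
        if actual < projected * (1 - margin):
            alerts.append(region)
    return alerts

actuals = {"East": 95, "West": 60, "South": 101}
projections = {"East": 100, "West": 100, "South": 100}
flagged = sales_alerts(actuals, projections)  # only "West" misses by >10%
```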

    Predictive Analytics

    OLAP tools are certainly power tools, but the trouble with power tools is that you need power users – people who know the business, can learn to use a versatile tool like OLAP effectively, and can generate actions from the information that help the business. So information access advanced to the final stage in our general pattern, the “robot” stage, in which human decision making is replaced by an automated system. In information access, that stage is often called “predictive analytics,” which is a kind of math modeling.

    As areas of business management become better understood, it usually turns out that predictive analytics can do a better, quicker job of analyzing the data, finding the patterns, and generating actionable decisions than a person ever could. A good example is home mortgage lending, where the vast majority of decisions today are made using predictive analytics. Many years ago, a person who wanted a home mortgage would make an appointment with a loan officer at a local savings bank and request the loan. The officer would look at the applicant's information and make a human judgment about loan worthiness.

    That “power user” system has long since been supplanted by the “robot” system of predictive analytics, where all the known data about any potential borrower is constantly tracked, and credit decisions about that person are made on the basis of the math whenever needed. No human judgment is involved, and in fact would only make the system worse.

    Predictive analytics uses the same information as the prior stages, but the emphasis on presenting a powerful, flexible user interface that enables a power user to drive his way to information discovery is replaced by math models that are constantly tuned and updated as new information becomes available.

    Sometimes the predictive analytics stage is held back because of a lack of vision or initiative on the part of the relevant industry leaders. However, a pre-condition for this approach really working is the availability of all the relevant data in suitable format. For example, while we tend to focus on the math for the automated mortgage loan processing, the math only works because it has access to a nationwide database containing everyone’s financial transactions over a period of many years. A power user with lots of experience, data and human judgment will beat any form of math with inadequate data; however, good math fueled with a comprehensive, relevant data set will beat the best human any time.
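    To make the contrast with the power-user stage concrete, here is a toy scoring model in the spirit of the robot stage. The weights and cutoff are invented for illustration; a real model is fit to a comprehensive data set, which is exactly the pre-condition described above.

```python
# Toy sketch of the "robot" stage: a scoring model turns applicant
# data into an approve/decline decision with no human in the loop.
# Weights, bias and cutoff are invented; real models are fit to
# years of nationwide transaction data.

import math

WEIGHTS = {"income": 0.00003, "years_on_file": 0.15, "late_payments": -0.9}
BIAS = -1.0
CUTOFF = 0.5

def approve(applicant):
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    probability = 1 / (1 + math.exp(-z))  # logistic score in (0, 1)
    return probability >= CUTOFF

decision = approve({"income": 80000, "years_on_file": 10, "late_payments": 1})
```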

    Conclusion

    All these stages of automation co-exist today. One of the key rules of computing is that old programs rarely die; they just get layered on top of, given new names, and gradually fade into obscurity. There are still posting programs written in assembler language that have built-in reporting. In spite of years of market hype from the OLAP folks, report writing hasn’t gone away; in fact, some older report writers have interesting new interactive capabilities; OLAP and data warehouses are things that some organizations aspire to, while others couldn’t live without them; finally, there are important and growing pockets of business where the decisions are made by predictive analytics, and to produce pretty reports for decision-making purposes (as opposed to bragging about how well the predictive analytics are doing) would be malpractice.

    Even though all these stages of automation co-exist in society as a whole, they rarely co-exist in a functional segment of business. Each stage of automation is much more powerful than the prior stage, and it provides tangible, overwhelming advantages to the groups that use it. Therefore, once a business function has advanced to use a new stage of information access automation, there is a “tipping point,” and it tends to become the new standard for doing things among organizations performing that function.

  • The Dimension of Software Automation Breadth

    Computers and software are all about automation. See this for the general principles of automation. When you dive into the details and look at lots of examples, patterns emerge. The patterns amount to sequences or stages of automation that emerge over time, with remarkable consistency. When you apply your knowledge of the pattern to the software used in an industry at any given time, you can identify where the software is in the sequence. You can confirm the earlier stages in the sequence and predict with accuracy the stage that will follow. This gives you the ability to say what will happen, though not when or by whom.

    The patterns of automation play out in multiple dimensions. I have described what may be called the depth of automation, in which software evolves from recording what people do through helping them and eventually to replacing them.

    In this post I will describe another dimension, which may be thought of as the breadth of automation. The greater the automation breadth the more functions are incorporated into the automation software and the more highly integrated the functions are with each other.

    The Dimension of automation breadth

    Automation breadth has these basic levels, independent of the automation depth of the products involved:

    Component

    Not a stand-alone product, but a component that could be incorporated into many custom applications and/or products

    Point product

    Implements a single function

    Product collection

    A group of point products from a single vendor

    Product suite

    An integrated set of separate products, with meaningful benefits from the integration

    Integrated product

    A single body of source code that performs a variety of related functions that would otherwise require separate products, with meaningful benefits from the unification

    Integrated product with selective outsourcing

    A product that is written and delivered in such a way that the using organization can choose to have the vendor staff a number of functions off-site.

    Components rarely appear first in historical terms, but they are the beginning of this sequence. Typically, functionality that is very difficult to write or whose requirements change rapidly is separated out and delivered in the form of a component. The quality of the component may become so important that it becomes an industry standard.

    Example: The Ocrolus service (disclosure: Oak HC/FT is an investor) is a classic example of a valuable, narrow component. It takes images of documents of nearly any kind, recognizes them, extracts their data and returns the data to the component user. This functionality is so challenging to write, and the consequence of errors so great, that most applications that need the functionality are likely to use the component, which is delivered as a service, rather than write their own.

    In a time of rapidly emerging functionality, point products are usually first, because they can be gotten to market quickly. Buyers typically talk in terms of “best of breed,” but have the problem of negotiating and maintaining relationships with multiple vendors, who frequently have conflicting interests. The buyer also has to take responsibility for integrating the various products he has bought. Buyers reasonably worry about whether their typically small vendor will stay in business and continue to invest in their product.

    Example: Captura (now part of Concur) provided a point product to enable a company to automate the process of entering, approving and paying expense reports.

    Product collections solve some of these problems. There is now a single vendor; the vendor is probably much larger and more likely to stay in business; the products will be maintained and there is no conflict of interest. Once a product collection becomes available in a category, buyers will typically prefer an adequate product from a collection to a superior point product. Product collections can be formed by acquisition.

    Example: CA, Computer Associates, is a typical vendor that acquires products in a category and puts them together into a product collection.

    It is often desirable to share data among the products in a collection. Frequently, they maintain copies of essentially the same information, have essentially the same security roles defined, etc. Users have to go through considerable time and effort to accomplish this on their own, and then again when there is a new release, and desirable integrations are sometimes not even possible. Product suites solve these problems to a large extent, since a single vendor performs the integration and sells the integration as part of the product collection. Once a product suite becomes available in a category, buyers will typically prefer an adequate product suite to a product collection whose products are superior, because the cost of installation and maintenance is lower and the benefits of integration often outweigh the benefits of individual product features. Building a product suite typically requires source-code level coding and functionality design changes, but the code bases of the products can be separate.

    Classic example: Microsoft Office was one of the first product suites for office products. While the benefits of integration are not overwhelming in this functional area, users clearly benefit from having a suite rather than individual products. Once the benefits of a suite became accepted, buyers no longer wanted individual office products.

    Recent example: The market for business formation has long been dominated by the point product Legalzoom. A new, rapidly growing product suite called Zenbusiness (disclosure: Oak HC/FT is an investor) is taking classic advantage of moving to the next stage of automation breadth, by offering small business customers not only business formation services, but also services for websites, banking and accounting.

    Not all functionality areas benefit from having an integrated product, but for those that do, integrated products are better for vendors (reduced costs due to elimination of redundancy) and for buyers (a simpler, more unified product that is easier to learn and operate, and has deep, fully-automated function integration), and typically win in the marketplace.

    Example: Everyone thinks of SAP’s R/3 as being the first client/server ERP product, but its real breakthrough was being the first truly integrated product on which a large enterprise could run its business. Using a single body of code and a common shared DBMS, its many modules each ran different parts of the enterprise’s business. In spite of the high cost of implementation and operation, the advantages of running your business on a single integrated product were overwhelming. Like most projects to build a unified product, this one required deep domain knowledge from the vendor, took a long time to get right, and had painful early installations.

    Once functionality markets reach a level of maturity, the competition becomes intense, and organizations often have to choose areas of distinction, outsourcing the functions they choose not to compete in to a low-cost provider. Products that enable organizations to selectively outsource in this way typically take business from products that must be fully staffed by the buyer.

    Example: FDC (First Data) is the leading processor for credit cards in the US. With a billion cards outstanding, the market is huge and highly competitive. While the technology base of the FDC product is decades old, the functionality it delivers is highly sophisticated. In addition to delivering a completely integrated, multi-department product, FDC offers off-site staffing for the various departmental functions, where the staff sounds on the telephone as though they worked for you.

    Conclusion

    The dimension of automation breadth helps us understand the evolution of software and the progression that naturally takes place in a given market space. The companies that win are most often the ones that dominate a narrow market segment with a particular product and then broaden the range of software they can sell to a given organization. They typically move step by step according to the progression I have described here.

  • Software Programming Language Evolution: Libraries and Frameworks

    We've talked about the major advances in programming languages and the enhancements that brought those languages to a peak of productivity — a peak which has not been improved on since. Nonetheless there has been an ongoing stream of new programming languages invented, each of which is claimed to be "better" — with no discussion of what constitutes "goodness" in a language! This is high on the list of causes of the never-ending chaos and confusion that pervades the world of software. While loads of people focus on language, the most important programming tools, which make HUGE contributions to the productivity of the programmers using them, go largely unremarked, in spite of the fact that they are part of every programmer's day-to-day experience. What are these essential, always-used but largely background things? Libraries and frameworks.

    Libraries

    A library is a collection of subroutines, often grouped by subject area, which perform commonly used functions for programs. Every widely used language has a library that is key to its practical use.

    For example, the C language has libraries that contain hundreds of functions for

    • input and output, including formatting
    • string manipulation and character testing
    • math functions
    • standard functions, dozens of them
    • date and time

    There are libraries of functions for controlling displays and input devices and many other things. After all these years, and despite the "fatal" flaw of not being object-oriented (gasp!!), C is currently the #2 language in popularity. While libraries don't play as important a part in its popularity as they do for certain other languages, they are still important.

    Python became widely used among people doing analytics not particularly because of the virtues of the language itself, but because it grew one of the richest modern libraries of calculation routines available. It ended up with great support for statistics and the things you need to do with large data sets, including array and matrix manipulation. When doing analytics, the library functions do nearly all the heavy lifting. All the program has to do is deploy the rich set of functions against the problem in the right way!
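    Even the standard library shows the pattern: in the sketch below, the statistics module does the heavy lifting and the program just deploys it. (In serious analytics work, third-party libraries like NumPy and pandas play this role for array and matrix manipulation.)

```python
# The library does the work; the program just calls it.
# Sample data is invented for illustration.

import statistics

samples = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mean = statistics.mean(samples)      # arithmetic mean
stdev = statistics.pstdev(samples)   # population standard deviation
median = statistics.median(samples)  # middle value
```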

    A 2021 survey of language popularity confirms Python's standing, and the importance of its libraries in producing it.

    Have you ever heard of the language R? It's an open-source language which became usable in the year 2000. Chances are, if you work in statistics, data analytics, operations research or other areas involving serious data manipulation, you know all about it and probably use it. The language itself has valuable features for math programming that most languages don't have, for things like vectors and matrices. More importantly, it has an amazingly rich collection of the R equivalent of libraries, called packages. R packages contain reusable collections of functions and data definitions that perform valuable calculations. When you're trying to do something you're pretty sure someone else has tackled in some way, the first thing you do is look for an appropriate package. Packages are available for most common data science tasks; they do much of the work and nearly all the "heavy lifting" of any job to which the package applies. R is a prime example of a language with virtues of its own, where the bulk of the value and productivity nonetheless comes from the available packages.

    One of the sad turns taken in programming language evolution was driven by the ascendancy of the modern RDBMS. Ironically, the DBMS emerged at about the same time as do-everything language environments dominated the new minicomputer landscape. With a single environment like MUMPS or PICK you could get a whole job done — user interface, core program, data storage and access, everything! These environments enabled massive productivity gains. See this for more. But the DBMS blasted in, took over, and HAD to be used. So 3-GL's that were less productive than the new environments became even less so by finding cumbersome ways to use DBMS's. The way they did it I explain here. This was a first major step down the degenerate, productivity-killing road of layers. How did 3-GL's pull off the integration? With libraries, of course! For example, Java's JDBC library became a requirement for burdensome, error-prone access to standard databases from within Java, while those who didn't mind the overhead and "impedance mismatch" could use one of the ORM's (Object-Relational Mapping systems) that emerged.
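    The flavor of this library-mediated database access can be sketched with Python's built-in DB-API (sqlite3 here, with an invented table): the language itself knows nothing about the database, so everything goes through library calls, SQL in strings, and manual conversion of rows back into program data, the "impedance mismatch" that ORM's try to paper over.

```python
# The same pattern the text describes for JDBC, sketched with
# Python's stdlib DB-API. The table and data are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER, amount REAL, paid INTEGER)")
conn.executemany("INSERT INTO claims VALUES (?, ?, ?)",
                 [(1, 120.0, 1), (2, 75.5, 0), (3, 300.0, 0)])

# SQL lives in strings; results come back as tuples the program
# must interpret by position.
unpaid = conn.execute(
    "SELECT id, amount FROM claims WHERE paid = 0 ORDER BY amount DESC"
).fetchall()
total_unpaid = sum(amount for _id, amount in unpaid)
```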

    Not all libraries are tightly associated with a language! One such library is redis.io, an open-source project started by an Italian developer to build a powerful in-memory key/value datastore and cache. It has taken off, and now offers many more powerful features, including queuing. While unknown among non-programmers, it is incredibly popular, widely used, and available on the major cloud providers.
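    The core idea can be sketched in a few lines. This toy is merely illustrative; Redis itself is a separate server process with vastly richer features. But it shows the niche: an in-memory key/value store with expiry.

```python
# A toy in-memory key/value store with expiry, illustrating the
# idea behind a cache like Redis. Not a real client or server.

import time

class TinyKV:
    def __init__(self):
        self._data = {}  # key -> (value, expiry-or-None)

    def set(self, key, value, ttl=None):
        expiry = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expiry)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if expiry is not None and time.monotonic() > expiry:
            del self._data[key]  # lazy expiration on read
            return None
        return value

cache = TinyKV()
cache.set("session:42", "alice", ttl=30)
user = cache.get("session:42")
```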

    Other libraries are directed towards a particular aspect of programming. With the rise of the web UI, a succession of libraries has emerged to ease its creation.

    A UI library that has become dominant is React, created and then open-sourced by Facebook. It is specifically for the Javascript language and greatly enhances both the productivity of building web UI's and their resulting performance. The benefits of Javascript or any other language are trivial compared to the productivity gain produced by using React. Unlike most libraries, React comes close to being a framework; it resembles an R package, in that it's a comprehensive solution to the problem of building web UI's. Interestingly, part of the programmer productivity gain is due to the fact that it has a major declarative aspect to its design, like AngularJS (see below).

    Bottom line: while the details of software language features have some impact on productivity, the availability and richness of libraries enhances productivity many times over. It's no contest.

    Frameworks

    Frameworks take the idea of libraries but with an important reversal. Libraries are sets of routines, any of which may be called by a program written in a supported language. The program written in the language is completely in charge. Frameworks, by contrast, provide an environment in which a language can operate. You select exactly one framework to work in. When you play by its rules, you normally enjoy large productivity benefits.

    A library is a rich set of resources covering many issues, like a book library. A framework is a selected set of resources enabling rapid work on a given category of issues, like a kitchen for cooking.
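
    The reversal can be sketched in a few lines of C (a deliberately tiny illustration; the function names are invented for this sketch). With a library, your code owns the flow of control and calls the library's routines; with a framework, you hand your code over and the framework's loop calls you:

```c
/* Library style: the application owns the flow of control
   and calls a library routine whenever it wants one. */
double lib_square(double x) { return x * x; }

/* Framework style: the framework owns the loop and calls
   back into application code supplied as a function pointer. */
typedef double (*step_fn)(double);

double framework_run(step_fn user_step, double seed, int steps) {
    double v = seed;
    for (int i = 0; i < steps; i++)
        v = user_step(v);   /* the framework calls YOUR code */
    return v;
}
```

The same inversion of control is what makes "you select exactly one framework" true: your code has to fit the slots the framework defines.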

    One impressive framework is the RAILS framework for the language Ruby. While the Ruby language had become fairly successful, Ruby on Rails (as it was called) established it as a player because many groups discovered that it was many times faster than other tools at building a web application that has a UI and a database. The reason is simple: normally to build such an application you have one expert creating the UI, probably using Javascript and some library, another building the server-side business logic, and another creating the database — with a great deal of work and trouble relating the database to the server program; see for example JDBC and ORM's above. With RAILS, you define your database, which automatically defines the names you use inside Ruby to access the data. Similar story with the UI.

    The philosophy of RAILS centered on the DRY principle — Don't Repeat Yourself.


    Unlike the endless repetition of the usual layered environment, RAILS took a major step towards the principle of Occamality. Was RAILS original, a real breakthrough? In its object-oriented environment, yes. In programming in general, no. RAILS is a typical example of the appalling lack of knowledge of history in software. Most of the benefits of RAILS were delivered in a more integrated way many years before by the PowerBuilder environment. I discuss the context here.

    What comprehensive frameworks like this REALLY do is attempt to re-create the comprehensive, everything-in-one-place development environments that emerged to enhance programmer productivity beyond what was achievable in 3-GL's, mostly by incorporating data storage and user interface functions, as I explain here. Their emergence and widespread adoption are clear evidence of the power of eliminating the insanity of layers in software.

    There are also frameworks that are much narrower in scope, addressing only part of the programming puzzle. While they leave much of an overall programming job alone, they can give a major boost to the narrow area on which they focus. One that caught on and became widely used, focused exclusively on building web UI's, is AngularJS, whose newer version is simply called Angular. This framework is highly declarative, focused more on describing the elements of the UI than the actions required to implement it.


    This led to HUGE programmer productivity gains. Why would anyone build a web UI from scratch? Though one is a library and the other a framework, the similarities between ReactJS and AngularJS are strong, and both of them powered huge programmer productivity gains that had very little to do with the virtues (or lack thereof) of the associated language, Javascript.

    Languages vs Libraries and Frameworks

     New languages continue to flow out of the creative minds of groups of programmers who obviously don't have enough to do to keep themselves fully occupied. Each new language tends to be lauded, if only by its creators, with extreme claims of virtue on many dimensions. None of these claims are ever subjected to verification and testing, much less of the rigorous kind. In any case, there is simply no contest between the gains delivered by a rich library or framework and a new programming language.

    I'll just give a glaring example. One of the main virtues that object-oriented languages are supposed to have is code reuse. The concept that is bandied about is that a good class system is like a set of Lego blocks, enabling new programs to be easily assembled. Mostly it doesn't happen. For re-use, libraries and frameworks are the gold standard. Think of a normal, old-fashioned book library. The whole reason they exist is that people reuse books! It's the same thing for software libraries — code gets into software libraries because it's re-used often! That's how they got to be called "libraries." Duh.

     

  • Software Programming Language Evolution: the Structured Programming GOTO Witch Hunt

    In prior posts I’ve given an overview of the advances in programming languages, described in detail the major advances and defined just what is meant by “high” in the phrase high-level language. I've described the advances in structuring and conditional branching that brought 3-GL’s to a peak of productivity.

    The structuring and branching caught the attention of academics. Watch out! What happened next was that a theorem was proved, a movement was declared and named, and a certain indispensable part of any programming language, the GO TO statement, was declared to be something only bad programmers used and a thing that should be banned. Here's the story of the nefarious GOTO.

    Structures in Programming Languages

    I've described how structures were part of the first 3-GL's and how they were soon elaborated to more clearly express the intention of programmers, making code even more productive to write. The very first FORTRAN compiler, delivered in 1957, included primitive versions of conditional branching and loops, two of the foundations of programming structure. It was so powerful that the early users figured it decreased the number of statements needed to achieve a result by a factor of more than 10.

    These are the people who actually WRITE PROGRAMS! They wanted to make it easier and jumped on anything that gave a dramatic improvement.

    “Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used cross-platform programming language.”

    Before long, the structuring capabilities of the original IF (conditional branching) and DO (controlled looping) statements were enhanced and augmented to something close to their current form. I describe this here. The result was a peak of programmer productivity that has not been substantially increased since, and has often been degraded.

    The Böhm-Jacopini Theorem

    Completely independent of the amazing advances in languages and programming productivity that were taking place, math-oriented non-programmers were hard at work deciding how software should be written. Here is the story in brief:

    The structured program theorem, also called the Böhm–Jacopini theorem,[1][2] is a result in programming language theory. It states that a class of control-flow graphs (historically called flowcharts in this context) can compute any computable function if it combines subprograms in only three specific ways (control structures). These are

    1. Executing one subprogram, and then another subprogram (sequence)
    2. Executing one of two subprograms according to the value of a boolean expression (selection)
    3. Repeatedly executing a subprogram as long as a boolean expression is true (iteration)

    The structured chart subject to these constraints may however use additional variables in the form of bits (stored in an extra integer variable in the original proof) in order to keep track of information that the original program represents by the program location. The construction was based on Böhm's programming language P′′.

    The theorem forms the basis of structured programming, a programming paradigm which eschews goto commands and exclusively uses subroutines, sequences, selection and iteration.
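
    For concreteness, here is what the three sanctioned constructions look like in a C-style language (a minimal illustration invented for this post, not taken from the original proof):

```c
/* Sum of the decimal digits of n, using only the three
   constructions the theorem permits. */
int digit_sum(int n) {
    if (n < 0)          /* selection: one of two paths, by a boolean */
        n = -n;
    int sum = 0;        /* sequence: one subprogram after another */
    while (n > 0) {     /* iteration: repeat while a boolean is true */
        sum += n % 10;
        n /= 10;
    }
    return sum;
}
```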

    This theorem got all the academic types involved with computers riled up. The key to good software has been discovered! The fact that math theorems are incomprehensible to the vast majority of people, and the fact that perfectly good computer programs can be written by people who aren't math types didn't concern any of these self-anointed geniuses.

    The important thing to note about the theorem is that it was NOT created in order to make programming easier or more productive. It just "proved" that it was "possible" to write a program under the absurd and perverse constraints of the theorem to compute any computable function. Assuming you were willing to use a weird set of bits to store location information in ways that would make any such program unreadable by any normal person. Way to go, guys — let's go back to the days of writing in all-binary machine language!

    The Crisis in Software and its solution

    Not long after this, the academic group of Computer Science “experts” formed. They had a conference. They looked at the state of software and declared it to be abysmal. The whole conference was about the "crisis" in software. See this for details.

    One of the most prominent of those Computer Scientists was Edsger W. Dijkstra. He looked at the powerful constructs for conditional branching, loops and blocks that had been added to 3-GL's and invented the term "structured programming" to describe them. He related those statements to the wonderful but useless math proof about the minimal requirements for programming a solution to any "computable function." The proof "proved" that such programs could be written without the equivalent of a GOTO statement. BTW, I do not dispute this. He wrote the influential "Go To Statement Considered Harmful" open letter in 1968.

    Among the solutions to the software crisis they proclaimed was strict adherence to the dogma of what Dijkstra called “structured programming,” which prominently declared that the GOTO statement had no place in good programming and should be eliminated.

    Does the fact that it is POSSIBLE to program a solution to any computable function without using GOTO mean that you SHOULD write without using GOTO's? When children go to school, it's POSSIBLE for them to crawl the whole way, without using "walking" at all. Everyone accepts that this is possible. When you're on your feet all sorts of bad things can happen — you can trip and fall! Most important, you can get the job done without walking … and therefore you SHOULD eliminate walking for kids getting to school. QED.

    This is academia for you – a prime example of how Computer Science works hard to make sure that programs are hard to write, understand and deliver, all in the name of achieving the opposite.

    The debate about structured programming

    There was no debate about the utility of the conditional branching, controlled looping and block structures that rapidly became part of any productive software language. They were there and programmers used them, then and now. The debate was about "structured programming," which by its academic definition outlawed the use of the GOTO statement. That wasn't all. It also outlawed having more than one exit from a routine, breaks from loops and other productive, transparent and generally useful constructs.
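
    To make the stakes concrete, here are two tiny C routines (invented for this sketch) that use exactly the constructs the strict dogma outlawed: a second exit from a routine, and a break from a loop. Both read perfectly clearly.

```c
/* A second exit from the routine: return as soon as the key is found. */
int find_index(const int *a, int n, int key) {
    for (int i = 0; i < n; i++) {
        if (a[i] == key)
            return i;   /* early exit, forbidden by the purists */
    }
    return -1;          /* not found */
}

/* A break from a loop: stop summing at the first negative value. */
int sum_until_negative(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] < 0)
            break;      /* loop break, also forbidden */
        sum += a[i];
    }
    return sum;
}
```

Writing these with a single exit and a loop flag is possible, but longer and harder to read.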

    I remember clearly as a programmer in the 1980's having a non-technical manager type coming to me and quizzing me about whether I was following the rigors of structured programming, which was then talked about as the only way to write good code. I don't remember my answer, but since I knew the manager would never go to the trouble of actually — gasp! — reading code, my answer probably didn't matter.

    The most important thing to know about the leader of the wonderful movement to purify programming is his lack of interest in actually writing code:

    [Image: Dijkstra quote]

    Fortunately, there are sane people in the world, including the incomparable Donald Knuth (an academic Computer Scientist who's actually great!) and a number of others.

    An alternative viewpoint is presented in Donald Knuth's Structured Programming with go to Statements, which analyzes many common programming tasks and finds that in some of them GOTO is the optimal language construct to use.[9] In The C Programming Language, Brian Kernighan and Dennis Ritchie warn that goto is "infinitely abusable", but also suggest that it could be used for end-of-function error handlers and for multi-level breaks from loops.[10] These two patterns can be found in numerous subsequent books on C by other authors;[11][12][13][14] a 2007 introductory textbook notes that the error handling pattern is a way to work around the "lack of built-in exception handling within the C language".[11] Other programmers, including Linux Kernel designer and coder Linus Torvalds or software engineer and book author Steve McConnell, also object to Dijkstra's point of view, stating that GOTOs can be a useful language feature, improving program speed, size and code clarity, but only when used in a sensible way by a comparably sensible programmer.[15][16] According to computer science professor John Regehr, in 2013, there were about 100,000 instances of goto in the Linux kernel code.[17]
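
    The end-of-function error-handler pattern that Kernighan and Ritchie endorse looks like this in C (a sketch; the file-copying routine and its names are invented for illustration):

```c
#include <stdio.h>

/* On any failure, jump forward to a single cleanup point, so every
   exit path closes exactly the resources that were opened. */
int copy_file(const char *src, const char *dst) {
    int ok = -1;                    /* pessimistic default */
    FILE *in = fopen(src, "rb");
    if (!in)
        goto done;
    FILE *out = fopen(dst, "wb");
    if (!out)
        goto close_in;

    int c;
    while ((c = fgetc(in)) != EOF)
        if (fputc(c, out) == EOF)
            goto close_out;         /* write error */
    ok = 0;                         /* success */

close_out:
    fclose(out);
close_in:
    fclose(in);
done:
    return ok;
}
```

Without goto, each failure site needs its own copy of the cleanup code, which is exactly the kind of repetition that breeds bugs.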

    Any programmer can make mistakes. Any statement type can be involved in that mistake. For example, I think nearly everyone accepts that cars are a good thing. But over 30,000 people a year DIE in car accidents! So where's the movement to eliminate cars because of this awful outcome? It makes as much sense as outlawing the GOTO because sometimes it's used improperly. Like every other statement type.

  • Software Programming Language Evolution: Structures, Blocks and Macros

    In prior posts I’ve given an overview of the advances in programming languages, described in detail the major advances and defined just what is meant by “high” in the phrase high-level language. In this post I’ll dive into the additional capabilities added to 3-GL’s that brought them to a peak of productivity.

    History

    Let’s remember what high-level languages are all about: productivity! They are about the amount of work it takes to write the code and how easy the code is to read.

    The first major advance, from machine language to assembler, was largely about eliminating the grim scut-work of taking the statements you wanted to write and making the statements “understandable” to the machine by expressing them in binary. Ugh.

    The second major advance, to 3-GL’s like FORTRAN and COBOL, was about eliminating the work of translating from your intention to the assembler statements required to express that intention. A single line of 3-GL code can easily translate into 10 or 20 lines of assembler code. And the 3-GL line of code often comes remarkably close to what you actually want to “say” to the computer, both writing it and reading it.

    FORTRAN achieved this goal to an amazing extent.

    “with the first FORTRAN compiler delivered in April 1957.[9]:75 This was the first optimizing compiler, because customers were reluctant to use a high-level programming language unless its compiler could generate code with performance approaching that of hand-coded assembly language.[16]

    “While the community was skeptical that this new method could possibly outperform hand-coding, it reduced the number of programming statements necessary to operate a machine by a factor of 20, and quickly gained acceptance.”

    The reduction in the amount of work was the crucial achievement, but just as important was the fact that each set of FORTRAN statements was understandable, in that they came remarkably close to expressing the programmer’s intent, what the programmer wanted to achieve. No scratching your head when you read the code thinking to yourself “I wonder what he’s trying to say here??” This meant code that was easier to write, had fewer errors and was easier to read.

    Enhancing conditional branching and loops

    The very first iteration of FORTRAN was an amazing achievement, but it’s no surprise that it wasn’t perfect. At an individual statement level it was nearly perfect. When reading long groups of statements there were situations where the code was clear, but the intention of the programmer not clearly expressed in the code itself – it had to be inferred from the code.

    The first FORTRAN had a couple of intention-expressing statements: a primitive IF statement and a DO loop. Programmers soon realized that more could be done. The next major version was FORTRAN 66, which cleaned up and refined the early attempts at structuring. Along with the appropriate use of in-line comments, it was nearly as clear and intention-expressing as any practical programmer could want.

    The final milestone in the march to intention-expressing languages was C.

    It’s an amazing language. While FORTRAN was devised by people who wanted to do math/science calculations and COBOL by people who wanted to do business data processing, C was devised to enable computer people to write anything – above all “systems” software, like operating systems, compilers and other tools. In fact, C was used to re-write the first Unix operating system so that it could run on any machine without re-writing. C remains the language that is used to implement the vast majority of systems software to this day.

    I bring it up in this context because C added important intention-expressing elements to its language that have remained foundational to this day. It enhanced conditional statements, providing the full IF-ELSE form that has been common ever since. This meant you could say IF <condition is true> <a statement>. The statement would only be executed if the condition was true. You could tack on ELSE <another statement>. This isn’t dramatic, but the only other way to express this common thought is with a GOTO statement – which certainly can be understood, but takes some figuring. In addition, C added the ability to use a delimited block of statements wherever a single statement could be used. When there are a moderate number of statements in a block, the code is easy to read. When there would be a large number, a good programmer creates a subroutine instead.
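
    In C's notation the intention reads directly off the page (the clamping function here is invented for illustration):

```c
/* Keep v within [lo, hi]; each branch is a delimited block. */
int clamp(int v, int lo, int hi) {
    if (v < lo) {
        v = lo;         /* runs only when the condition is true */
    } else if (v > hi) {
        v = hi;         /* the ELSE arm, tacked on */
    }
    return v;
}
```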

    C added a couple more handy intention-expressing statements. A prime example is the SWITCH CASE BREAK statement. This is used when you have a number of conditions and something specific to do for each. The SWITCH defines the value being tested, and CASE <value> BREAK pairs define what to do for each possible <value> of the SWITCH.
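
    A small C sketch of the pattern (the grading function is invented for illustration); the BREAK ends each CASE so control doesn't fall through to the next one:

```c
/* One specific action for each possible value of the SWITCH expression. */
const char *grade_label(char grade) {
    const char *label;
    switch (grade) {
    case 'A': label = "excellent"; break;
    case 'B': label = "good";      break;
    case 'C': label = "adequate";  break;
    default:  label = "failing";   break;  /* everything else */
    }
    return label;
}
```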

    Many later languages have added more and more statements to handle special cases, but the cost of a more complex language is rarely balanced by the benefit in ease and readability.

    The great advance that's been ignored: Macros

    C did something about all those special cases that goes far beyond the power of adding new statements to a language, and vastly increases not just the power and readability of the language but, more importantly, the speed and accuracy of making changes. This is the macro pre-processor.

    I was already very familiar with macros when I first encountered the C language, because they form a key part of a good assembler language – to the extent that an assembler that has such a facility is usually called a macro-assembler. Macros enable you to define blocks of text including argument substitution. They resemble subroutine calls, but they are translated at compile time to code which is then compiled along with the hand-written code. A macro can do something simple like create a symbol for a constant that is widely used in a program, but which may have to be changed. When a change is needed, you just change the macro definition and – poof! – everywhere it’s used it has the new value. It can also do complex things that result in multiple lines of action statements and/or data definitions. It is the most powerful and extensible tool for expressing intention and enabling rapid, low-risk change in the programmer’s toolbox. While the C macro facility isn’t quite as powerful as the best macro-assemblers, it’s a zillion times better than not having one at all, like all the proud but pathetic modern languages that wouldn’t know a macro if it bit them in the shin.

    The next 50 years of software language advances

    The refrain of people who want to stay up to date with computers is, of course, "What's new?" Everyone knows that computers evolve more quickly than anything else in human existence, by a long shot. The first cell phones appeared not so long ago, and they were "just" cell phones. We all know that now they're astounding miniature computers complete with screens, finger and voice input, cameras and incredible storage and just about any app you can think of. So of course we want to know what's new.

    The trouble is that all that blindingly-fast progress is in the hardware. Software is just along for the ride! The software languages and statements that you write are 99% the same as in the long-ago times before smart phones. Of course there are different drivers and things you have to call, just as in any hardware environment. But the substance of the software and the lines of code you use to write it are nearly the same. Here is a review of the last 50 years of "advances" in software. Bottom line: it hasn't advanced!

    Oh, an insider might argue: what about the huge advances of the object-oriented revolution? What about Google's new powerhouse, Go? Insiders may ooh and ahh, just like people at a fashion catwalk. The comeback is simple: what about programmer productivity? Claims to this effect are sometimes made, but more often it's that the wonderful new language protects against stupid programmers making errors or something. There has not been so much as a single effort to measure increases in programmer productivity or quality. There is NO experimental data!! See this for more. Calling what programmers do "Computer Science" is a bad joke. It's anything but a science.

    What this means is simple: everyone knows the answer — claims about improvement would not withstand objective experiments — and therefore the whole subject is shut down. If you're looking for a decades-old example of "cancel culture," this is it. Don't ask, don't tell.

    Conclusion

    3-GL's brought software programming to an astounding level of productivity. Using them you could write code quickly. The code you wrote came pretty close to expressing what you wanted the computer to do with minimal wasted effort. Given suitable compilers, the code could run on any machine. Using a 3-GL was at least 10X more productive than what came before for many applications.

    A couple of language features that were good first steps, like conditional branching and controlled looping, were incorporated into the very first languages. The next few years led early programmers to realize that a few more elaborations of conditional branching and controlled looping would handle the vast majority of practical cases. With those extra language features, code became highly expressive. All subsequent languages have incorporated these advances in some form. Sadly, the even more productive feature of macros has been abandoned, but as we'll see in future posts, their power can be harnessed to an even greater extent in the world of declarative metadata.

  • The Dimension of Software Automation Depth Examples

    I have described the concept of automation depth, which goes through natural stages starting with the computer playing a completely supportive role to the person (the recorder stage) and ending with the robot stage in which the person plays a secondary role. I will illustrate these stages with a couple of examples that show the surprising pain and trouble of going from one stage to the next. Unlike with the progression of software applications from custom through parameterized to workbench, customers tend to resist moving to the next stage of automation for various reasons, including the fear of loss of control and power.

    To illustrate this progression, I will use companies that are known to me personally but not widely known. Software history generally is the history of the winners as written by the winners, like most history. However, if you want to guide your own software strategy by taking advantage of the clear patterns of history, it is essential to include examples of companies that are rarely discussed.

    NextPage

    NextPage was active from 1999 to around 2010. It had a distributed document management product that was prominent in service industries, for example lawyers and accountants. The product operated at a “recorder” level in terms of automation depth. It wasn’t directive; it just sat there waiting for a document request, and when it got one, responded with a list of applicable documents and let you view them.

    They and some of their larger customers saw that, in fact, many of their engagements weren’t so very different. As a high-powered, highly paid lawyer, you may think that every job is unique and that your ability to handle the differences is very important, but it was obvious to the people involved that certain transactions involved very similar documents and workflows. It made sense to attempt to capture and automate the similarities.

    With the cooperation of one of the world’s largest law firms, NextPage built a new product on top of the existing distributed document platform; this new product was automation at the “power tool” level. While recognizing that each transaction of a particular type would end up producing a unique set of documents, for example, it operated like a series of factory distribution belts, assuring that all lawyers working on an M&A transaction, for example, worked on the right things in the right order with the right base documents, reviewed in the right order by the right people, etc. etc. Lawyers continued to craft their documents. But instead of treating each (for example) M&A transaction as though it were the first one the firm had ever handled, the new system treated them like a repeatable process, with variations.

    But the lawyers didn’t think of it that way at all! They were used to thinking of themselves as powerful, autonomous, self-directing agents. Now they were to be subservient to an assembly line in a factory, having their work and output measured as though they were hourly workers? No way – I’m a professional – there’s no way I’m going to put my head in that yoke!

    NextPage had a good position in the market, existing customers, a prestigious lead customer for the product, a strong business case, and tangible results. But the resistance to moving to a “power tool” level of application was stronger than the force NextPage was able to bring to bear at that time, and the effort failed to bear fruit. Anyone who looks at the application knows it’s going to happen, the benefits are definitely there. But anyone with sense would be reluctant to predict even the decade when it would be likely to actually happen.

    The pattern of resistance encountered by NextPage was strongly correlated with the prestige and power of the people whose work would be affected. The same resistance is seen elsewhere; for example, how eager are the highly paid and prestigious category of medical doctors to have their work automated?

    Captura/Concur

    My VC firm invested in Captura, which is now part of SAP Concur, the leader in on-line expense management systems. These applications basically automate the process of filling out expense reports, getting them checked and approved, and finally paid.

    If you think about expense reports, you imagine a form that you fill in with your expenses during a trip. You then attach your receipts, drop it off, and eventually get reimbursed for out-of-pocket expenses. So it seems like it would be a “recorder” type application. In fact several companies were started and eventually failed with products that weren’t much more than recorder type applications, with some automatic routing added on. This kind of application didn’t provide enough value to offset the expense of installing and running it.

    Captura went much farther. While parts of this application operate at the “recorder” level, particularly the entry of un-automated expense information, much of it actually operates at the “robot” level, which is why the payback for its use is so high. It has built-in rules for approval levels and documentation requirements and many other things. It knows whether and when and to whom expense reports should be routed for approval from the HR system. It gets feeds from the corporate card system. It kicks out exceptions. It allocates expenses to the right accounts. It feeds directly into the accounts payable system to cut checks. When it needs something it can’t get from a computer, it asks a human for it, and otherwise does its job without help.

    Unlike other “power tool” type applications, this application tended not to threaten people who were powerful in the adopting organizations. The most powerful people who used it tended to regard it as saving them time, and everyone got their reimbursements more quickly and accurately. The administration group saw it as saving thankless clerical work, and the accounting executives saw automatic enforcement of all their standards. Still, this application category didn’t really take off until the cost of implementation was reduced to a painless level; in other words, it had to wait for nearly universal access to the web to be practical and affordable. And it worked much better as a secure service than as an enterprise application, because the burden of installation and maintenance on IT added so much friction that only the most motivated potential customers would go for it.

    ClickSoftware

    ClickSoftware provides a classic example of a “robot” level application. It addresses the problem of a field service operation, in which there are calls for service from companies in different locations, with different equipment to be repaired, with varying levels of urgency and service level agreements, with different hours of operation and times to travel to them, and finally with different service technicians who have varying skills and qualifications and constraints about hours of work. Who gets sent to which location to fix which problem in which order?

    Without something like ClickSoftware, this problem is solved in a rudimentary way by the service manager, who uses some combination of experience and common sense to work out the best solution. However, for a large service organization, the difference between a good solution and the optimal solution can be worth a great deal of money. ClickSoftware takes all those inputs and produces the optimal solution, and asks the human manager for help when the problem simply can’t be solved; in response, for example, the human may ask a critical resource to work overtime, or might call a customer and see if their request can be deferred.

    Click illustrates another characteristic common to robot-type applications. It is typical that “power tool” type applications do not become robot type ones by incremental addition. A robot application requires a whole new approach to the problem, and a level of math programming that is likely to be entirely foreign to the software group that wrote the power tool application.

    Tape backup systems

    In tape backup, whenever a new platform has come out, tape backup software products tend to emerge in the same order, starting with products that just record what the backup operator does, moving to power-tool products that give the operator hints and suggestions and automate the repetitive operations, and ending with true robot products, which are driven by a set of rules and which request operator intervention at only and exactly those times when human intervention is required. For example, Palindrome and Cheyenne (now part of CA) introduced robot-level backup products in the early 1990’s for the PC server platform, at a time when similar products had been in general use on earlier platforms (e.g. IBM mainframe) for decades.

    Having cycled from primitive to robot-level automation several times as new platforms with tape backup came out, the whole category is now dying, frozen in time: the latest computer systems use disks that have become incredibly cheap, and tape has become obsolete.

    Conclusion

    It seems inevitable that an industry would move, step by step, towards robot-level automation. The historical fact is that while the incentives for advancement are always there, advancement happens only when an industry is "ready" to move, which can be decades after it is technically possible and economically practical.

  • The Dimension of Software Automation Depth

    I have discussed the fundamental concept of automation.

    Software automates human effort to varying degrees. In doing so, software emerges that performs this automation to an increasing extent. In this post I'll describe the basic stages through which automation progresses. In later posts I'll give specific examples.

    Automation depth

    Automation depth can be understood in terms of the extent to which human effort, observation, knowledge and control are replaced by the computer system.

    Stages: help a human do something; do something a human would otherwise have had to do; make a decision a human would have made; gather information about all the work that is arriving, in process and completed; make work and resource allocation decisions. There are a few different ways automation depth can be understood. Here are some of them:

    • Stages. As depth increases, the human does things faster and has less work to do.
    • Feedback. Automation depth is correlated with moving from an “open loop” system to a “closed loop” system.
    • Effort. In the earlier stages the computer augments or replaces the person’s efforts. This is like power steering in a car, in which everything is the same except turning the wheel takes less effort.
    • Knowledge. As automation depth increases, less knowledge is in the heads of people and more in the computer.
    • Control. As automation depth increases, the human loses control and eventually becomes controlled by the computer.

    Automation depth has these basic levels:

    Recorder

    The software essentially tracks what people do and records the results of what they create or decide. The most primitive software of this class does this for a single job task. More advanced software does it for multiple tasks, eventually for a person’s whole job.

    Power tool

    The software takes high-level instructions from the operator, and does most of the work. In less skilled environments, the power tool will often select and order the work to be done.

    Robot

    Like an auto-pilot, the software replaces the human for most functions, performing those functions better than a human could, leaving the humans only to set goals and processing rules, and handle exceptions.

    Unlike some of the software evolution progressions I've described, it is generally not a quick win to get to the next level of automation depth. An industry can stay stuck at a level of automation depth for decades. Normally, the people directly involved with an application strongly resist moving to the next level, because they see it as deeply threatening to their autonomy, power or skills.

    If a company tries to go to the next level, it is essential that it understand its position, and have the backing to last out the transition and the flexibility to keep trying until it establishes a beach-head from which it can expand. The benefits have got to be dramatic and tangible. Feel-good stuff tends not to work here. Some people are going to get bent out of shape, and managers of managers have got to impose the solution. The only way that typically happens is if things are arranged for their risk of failure to be low and the rewards great, and indisputable when they actually happen.

    I will give examples of this progression in future posts.

  • The Progression Towards Abstraction on a Software Platform: Examples

    I have described the way that once an application appears on a software platform, there is a consistent way the category of applications evolves on that platform. In this post I'll describe examples of this pattern. Briefly, the sequence starts with a prototype, and goes through these stages:

    • Custom application
    • Basic product
    • Parameterized product
    • Workbench product

    There can be several levels of sophistication at each of these stages. The sequence is important because each step increases the speed and decreases the cost of making a body of code meet a set of customer needs.

    Metasys (bought by Optum), Syntra (Clearcross)

    These were medium-sized software companies, with revenues of tens of millions of dollars a year, which practically no one would have any reason to know existed today. Both these companies had a “custom application” that they were attempting to upgrade to a “basic product.” Both code bases evolved from large projects that were done on a consulting basis for a particular customer. Because both companies marketed what they had as though they were real products and failed to put the intellectual energy and money into properly upgrading their code bases, they were always coming up short on features in sales situations, and when they did make sales, the implementation and customization efforts were harrowing to everyone concerned. The plain fact was, everyone was excited by the vision of how lovely it would be if their code were at the level of “basic product” and built a sales effort as though it were, but in the end, what each had was a “custom application” that had been severely hacked and “marketectured.”

    Aurum

    Aurum had a “basic product” for the emerging CRM market. At the time I looked at them with my VC firm, they were trying to bring it to the parameterized level. Because of the need for extensive customization in the CRM market, most of their implementations involved source code modifications, which made support and upgrading to new releases a major challenge. We initially chose not to invest because the company failed to recognize this. Later, with new management, the company at least recognized the issue, took steps to correct it and made business arrangements to minimize the impact on customers. We then tried to invest, but didn’t have the opportunity. The company went public and was a good win for its investors, but failed to get its product to the next level, and ended up selling to what at the time was one of the major ERP companies, Baan.

    Paysys VisionPLUS

    This credit card processing product was a “parameterized product,” and met all the relevant criteria. The company gained ground against the competition before and during the time I was CTO of the company because it fully met the criteria for “parameterized product,” while the competition was somewhere between that and “basic product.” Customers were able to make the product perform many more functions without source code changes due to this fact, an advantage the market appreciated. If the parameters that are made available correspond to the kind of changes you want to make, a parameterized product is ideal, and worlds better than a basic product.

    The VisionPLUS product had its origin in two earlier products that were created by the company. The first of these products was CardPac, a fairly parameterized product for processing bank cards, like Visa and MasterCard. The product enjoyed a good deal of success at a time when most such card processing was performed by custom applications, which only the largest institutions could afford to write. CardPac was sold to smaller banks, and parameters were introduced to reduce the cost of customization, maintenance and support.

    A couple of retail institutions approached the company and asked it to modify CardPac so that it could process retail (closed-loop) cards. After a couple of failed attempts, the company produced a completely new product for this purpose, Vision21. This product enjoyed considerable success among high-end retailers.

    Finally, there was a very large processor, Household International, that was running multiple copies of both products, kept separate because they had been customized for a variety of reasons, for example to support methods of credit unique to a market (such as “hire-purchase” in South Africa). While Paysys had failed to create a generic bank/retail product when confronted with the generic problem, unifying multiple bodies of related code into a single, highly parameterized code base proved to be a far more tractable problem, particularly with a single important customer who insisted that these variations were the only ones to worry about. The unified product was called VisionPLUS.

    The industry quickly rallied to this new product that could be directed at so many different problems with such relative ease. While “parameterized product” may sound like an abstract concept, it translates directly into business advantage compared to more primitive product types, by enabling the product to be customized, installed, upgraded and maintained with less labor, less time and lower risk of error.

    Paysys was bought by a major card processor, First Data, when First Data needed to expand into international markets with tough requirements. First Data's code base at the time was written in assembler language and was pretty much at the basic product level with minimal parameterization. Buying the Paysys COBOL code with its parameterized power was a major step forward for them. The number of accounts managed by VisionPLUS expanded by a factor of four under its new owner, to over 600 million.

    Pivotal

    Pivotal was early in the CRM market with a workbench-type product. Not all markets value this implementation method equally. CRM is one of the markets that values it most highly, because nearly every installation needs to undergo substantial customization. The benefits to customers were the ability to migrate their applications to new platforms as they emerged (e.g., Pivotal automatically converted Windows apps to browser apps) and the ability to extensively alter data, screens and application logic without source code changes. These characteristics took a good deal of effort on the company’s part to explain to customers. Their early efforts were much too technical, and didn’t resonate. Once they reached a level of market penetration, however, reference selling was the key. A potential customer would talk with an existing Pivotal customer who had first installed someone else’s application that seemed similar to Pivotal’s, which in fact it probably was, as it came out of the box. Then the reference would talk about all the time, effort and money to make simple-sounding customizations. Out would go the competing product, and in would come Pivotal. Suddenly customizations were fast, low-effort and low-cost. End of sales effort. Take order.

    This implementation worked in a text-book manner for Pivotal, and is typical of the pattern. The company went public in 1999. However, its further growth was greatly limited by the fact that they chose to operate exclusively within the confines of the Microsoft product suite.

    ERP products

    The major ERP products have been at various stages of parameterized and workbench for a number of years. At one point, SAP seemed to be on the way to dominating the field, and armies of consultants were engaged in years-long projects to use SAP’s proprietary language, ABAP4, to modify and customize the base system to meet customer needs. Then along came PeopleSoft, originally just a vendor of HR software, with a complete ERP suite. While the software was not as richly functional out of the box as SAP’s, everyone knew by this point that no one used the software out of the box anyway – you had to wait for years while endless customization took place. What PeopleSoft had that was different was a set of development tools, PeopleTools, which was a generation ahead of SAP’s. SAP’s ABAP4 was at the time your basic 4GL, and you could program nearly anything with it, but you did have to spend a great deal of time programming. The PeopleSoft alternative, while no technical break-through, was still miles ahead of SAP in terms of ease of use and programmer productivity; they actually had a screen painter!

    PeopleSoft convinced many buyers to care more about the implementation level of the product than the base functionality, and convinced those buyers that they were farther along the scale to a truly workbench product (although that term was not used) than the alternatives. As a result, they enjoyed years of outstanding growth. They were bought by Oracle for over $10 billion.

    Conclusion

    While not the only factor in success, companies whose products are higher up the tree of abstraction enjoy a clear strategic advantage over their competitors. As products evolve on a platform, the ones that appear at or migrate to new levels of product abstraction tend to be more successful than ones at a lower level. While the discussion within a company is often about this or that customer or feature, raising the discussion to a new level of abstraction and acting on it can provide a huge competitive boost to a company's fortunes.

     

  • Software Evolution: Functionality on a New Platform: Transaction Monitors

    This is the fourth in a series of examples to illustrate the way that functionality that had been implemented on an older platform appears on a newer platform.

    See this post for a general introduction with example and explanation of this peculiar pattern of software evolution. This earlier post contains an example in security services software, this earlier post describes an example in remote access software and this earlier post describes an example in market research.

    Unlike the prior examples, this one is well known and I had no personal involvement.

    Example: BEA systems

    Old platform

    IBM mainframe MVS

    Old function

    Transaction application execution enhancement: CICS

    New platform

    UNIX

    New function

    Essentially the same, with special support for UNIX-oriented applications, databases and UI’s

    Outcome

    Slow growth at the beginning and some market education required, but it became virtually a standard, with just a couple of competitors. It was acquired by Oracle in 2008 when it had about $1.5 billion in revenue.

    This is a classic example of the pattern of functionality emerging on a technology platform, and then emerging in pretty much the same way in the same order on other platforms.

    IBM mainframes were originally used in a batch processing mode. They had operating systems that were really good at it. Groups of users increasingly wanted on-line access, and they wanted transaction processing control and security. The operating system didn’t provide it. So they built what the operating system thought of as an application, but which special applications that ran with it thought of as a specialized kind of operating system, a “transaction processing monitor.” Over time, the value of the main IBM TP monitor, CICS, became recognized, and it evolved to quasi-operating system status.

    UNIX gets developed by a bunch of smart people at Bell Labs, and eventually becomes widely deployed on many kinds of computers. It was originally developed with interactive use in mind, and the early UNIX people would have been offended by the idea that it would need to be augmented by a primitive thing like a TP monitor. But that’s exactly what happened, and the UNIX-oriented TP monitors were built along very similar lines and for similar reasons as CICS!

    The original authors of Unix ignored TP monitors when they built it in the 1970's, but AT&T had its own version of a TP monitor that ran on IBM mainframes to control its network. AT&T decided it would be cheaper to use its own computers instead of IBM mainframes to run its control software. Those computers ran the Unix operating system, but transactions wouldn't work properly without a transaction monitor. So AT&T set about building its own, Tuxedo, specifically to run on top of Unix, starting in 1983. It was successful and became widely used.

    The founders of BEA were ex-Sun Microsystems employees who were amazingly prescient in seeing this need. But they didn't actually build anything! They started by buying Tuxedo, the Unix-centric TP monitor that had been owned by Novell since 1993. They played with other middleware companies but most importantly bought WebLogic, a TP monitor built specifically for Sun's Java language.

    It was the availability of Unix-based TP monitors that helped enable the massive growth of the Unix platform during the internet explosion in the late '90's and early 2000's. I was active during this period, both with CICS applications and internet applications. The whole internet tech world, techies and investors, became convinced that certain things were required to build an application with that all-important characteristic, "scalability." It had to run on Sun computers, with the Sun version of Unix. It had to use the Oracle DBMS. And it had to be written in the Java language using the J2EE (usually spoken as jay-two-double-E) library, intended to ride on … a TP monitor! Oracle ended up being the winner; it first acquired BEA in 2008 and shortly after acquired Sun.

    The key, just as it was for CICS, was the integrity of transactions when many people are accessing a shared set of data. UNIX simply did not provide this capability, and just like on the mainframe, the capability was added on top of the operating system, with an application that the operating system thought of as a normal application, but which the applications that made use of it thought of as a specialized kind of operating system.
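    That core service can be sketched in a few lines of Python. This is a toy model, not how CICS or Tuxedo actually work: a program the operating system sees as an ordinary application, which other code treats as a mini operating system that serializes transactions against shared data so that no update is lost.

    ```python
    import threading

    class TxnMonitor:
        """Toy transaction monitor: runs each transaction atomically."""
        def __init__(self):
            self._lock = threading.Lock()
            self.data = {}

        def run(self, txn):
            # All of a transaction's reads and writes happen under one lock,
            # so concurrent read-modify-write sequences can't interleave.
            with self._lock:
                txn(self.data)

    mon = TxnMonitor()
    mon.data["acct"] = 100

    def debit_10(data):
        # Unsafe if two copies interleave between the read and the write;
        # safe when the monitor serializes them.
        data["acct"] = data["acct"] - 10

    threads = [threading.Thread(target=mon.run, args=(debit_10,)) for _ in range(5)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(mon.data["acct"])  # 50: all five debits applied, none lost
    ```

    Real TP monitors add much more (logging, recovery, distributed two-phase commit, workload routing), but the reason they exist is this guarantee, which the base operating system did not provide.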

    You might well think that this whole set of functionality is dead today. Who talks about TP monitors? Many companies showed that you don't need all the complexity and overhead of a TP monitor to build scalable applications.

    In reality, it's a clear example of another strong, repeating pattern: bad ideas in software rarely die; they just fade for a bit and then re-emerge with a new name in slightly different form. Here's a good example of the morphing of data warehouse into Big Data.

    I won't go into detail here, but while J2EE and classic Unix TP monitors are on life-support, the same idea of distributed applications for scalability lives on today in the form of microservices woven together with an enterprise message bus. Even though, just like 20 years ago, they're not needed for building scalable applications!

  • Software Programming Language Evolution: 4GL’s and more

    Not long after third-generation computer languages (3-GL’s) got established, ever-creative software types started inventing the next generation. In a prior post, I’ve covered two amazing programming environments that were truly an advance. They were both widely used in multiple variations, and programs written using them continue to perform important functions today – for example powering more hospitals than any competing system. But they were pretty much stand-alone unicorns; the academic community ignored them entirely and nearly all the leading figures, experts and trend-setters in software ignored them and looked elsewhere.

    Experts “in the know” directed their attention to what came to be called fourth-generation languages (4-GL’s) and object-oriented (O-O) 3-GL’s. These were supposed to be the future of software. Let’s see what happened with 4-GL's.

    The background of 4-GL’s

    The earlier posts in this series give background that is helpful to understand the following discussion.

    In prior posts I’ve given an overview of the advances in programming languages, described in detail the major advances and defined just what is meant by “high” in the phrase high-level language. I’ve described two true advances beyond 3-GL’s. And then there were 4-GL’s, supposedly a whole generation beyond the 3-GL’s. Let’s take a look at them.

    The best way to understand 4-GL’s is to look at the context in which they were invented. First, the academic types were busy at work creating languages that essentially ignored how data got into and out of the program. The first of these was Algol, followed by others. The academic community got all excited by this class of languages, but they were ignored by the large community of programmers who had to get things done with computers. That was in the background. In the foreground, modern DBMS’s were invented and commercialized.

    4-GL's!

    Seemingly everywhere, new languages sprang to life, created inside, around and on top of DBMS's. It's a revolution, a once-in-a-lifetime opportunity to become a major milestone in software history! My name could be right up there with von Neumann and Turing!

    All the major DBMS vendors created their own languages, usually with snappy names like Informix 4GL and Oracle's PL/SQL. How could they fail to respond to this massive opportunity for market expansion?

    Brand-new vendors popped up to take advantage of the hunger for DBMS's along with the new hardware configuration of client-server computing, in which an application ran on a group of Microsoft Windows PC's, all connected with and sharing a DBMS running on a server. One startup that powered to great commercial success was a company called PowerSoft, which created a product called PowerBuilder. The PowerBuilder development environment enabled you to work directly with a DBMS schema and create a program that would interact with a user and the data. The central feature of the system was an interactive component called a DataWindow, which enabled you to visually select data from the database and create a UI for it supporting standard CRUD (create, read, update and delete) functions without writing code. This was a real time-saver.
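    As a rough illustration of what a DataWindow-style tool automated (this is invented Python over SQLite, not PowerBuilder), the idea was to derive the CRUD operations from the table's schema instead of hand-writing each statement:

    ```python
    import sqlite3

    class DataWindow:
        """Toy sketch: CRUD derived from a table's schema at runtime."""
        def __init__(self, conn, table):
            self.conn, self.table = conn, table
            # Read the column names from the schema instead of hand-coding them.
            self.cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]

        def create(self, row):
            ph = ",".join("?" * len(self.cols))
            self.conn.execute(f"INSERT INTO {self.table} VALUES ({ph})", row)

        def read(self):
            return list(self.conn.execute(f"SELECT * FROM {self.table}"))

        def update(self, key, col, value):
            # Treat the first column as the key, for simplicity.
            self.conn.execute(
                f"UPDATE {self.table} SET {col}=? WHERE {self.cols[0]}=?", (value, key))

        def delete(self, key):
            self.conn.execute(
                f"DELETE FROM {self.table} WHERE {self.cols[0]}=?", (key,))

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    dw = DataWindow(conn, "customers")
    dw.create((1, "Acme"))
    dw.update(1, "name", "Acme Corp")
    print(dw.read())  # [(1, 'Acme Corp')]
    ```

    The 4-GL products went further, generating the screens as well as the SQL, but the productivity pitch was exactly this: the schema drives the code, so there is far less code to write.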

    The 3-GL's Respond

    Vendors of 3-GL's couldn't ignore the tumult raging outside their comfy offices. Before long support was added to most COBOL systems to embed SQL statements right in the code. Sounds simple, right? It was anything but. COBOL programs had data definitions which the majority of lines of COBOL code used. The way to handle the mis-match between SQL tables and COBOL record definitions wasn't uniform, but in many cases a single COBOL Read statement was replaced with embedded SQL with additional new COBOL code to map between the DBMS results and the data structures already in the COBOL. Ditto when data was being updated and written. Then there's the little detail that DBMS performance was dramatically worse than simple COBOL ISAM performance, since DBMS's were encumbered with huge amounts of functionality not needed by COBOL programs but which couldn't be circumvented or turned off.
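    A hypothetical sketch of that mapping burden (the record layout and names below are invented for illustration): everywhere a plain ISAM Read once filled a fixed-format record, new glue code had to translate between DBMS rows and the layout the rest of the program expected.

    ```python
    # Invented stand-in for a COBOL copybook's fixed-width record layout:
    #   CUST-ID    PIC 9(6)       columns 0-5
    #   CUST-NAME  PIC X(20)      columns 6-25
    #   BALANCE    PIC 9(7)V99    columns 26-34, stored as digits, implied decimal

    def row_to_record(row):
        """Map a (cust_id, name, balance) DBMS row into the fixed-width record
        the rest of the program still expects -- glue a plain ISAM Read never needed."""
        cust_id, name, balance = row
        return (str(cust_id).zfill(6)
                + name.ljust(20)[:20]
                + str(int(round(balance * 100))).zfill(9))

    def record_to_row(rec):
        """The reverse mapping, needed on every update or write."""
        return (int(rec[0:6]), rec[6:26].rstrip(), int(rec[26:35]) / 100)

    rec = row_to_record((42, "ACME SUPPLY", 1234.56))
    print(repr(rec))           # '000042ACME SUPPLY         000123456'
    print(record_to_row(rec))  # back to (42, 'ACME SUPPLY', 1234.56)
    ```

    Multiply this by every record type in a large COBOL system and the scale of the conversion work becomes clear; it was anything but a drop-in replacement for a Read statement.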

    Net result: the 3-GL's were worse off, by quite a bit.

    What Happened?

    Naturally the programming landscape is dominated by 4-GL's today, right? Or maybe their successors? How could it be otherwise? Just as each new generation of languages represented a massive advance in productivity from the earlier one and became the widely-accepted standard, why wouldn't this happen again?

    It didn't happen. 4-GL's are largely of historic interest today, mostly confined to legacy code that no one can be bothered to re-write. Even systems like PowerBuilder that genuinely provided a productivity advantage faded into stasis, rarely used to build new programs.

    There is a great deal to be said about this fact. One of the factors is certainly the rise to dominance of object-oriented orthodoxy, which in spite of supposedly being centered on data definitions (classes) is nonetheless highly code-centric and has NO productivity gain over non-O-O languages. Where have you read that before? Nowhere? Probably the same place you haven't read all the studies showing in great detail how it achieves productivity gains. What can I say? Computer Non-Science reigns supreme.

    Conclusion

    I won't be writing a follow-up blog post on 5-GL's. Yes, they existed and were the hot thing at the time. I remember vividly all the hand-wringing in the US over the massive effort in Japan with the government funding research into fifth-generation languages. The US would be left in the dust by Japan in software, just like they're beating us in car design and manufacturing! When was the last time you heard about that? Ever?

    Computers are objective things. Software either works or it doesn't. Unlike perfume, clothes or novels, it's not a matter of taste or personal preference; it's more like math. So what is it with the mis-match between enthusiasm and reality in software? It would be nice to understand it, but what's most important is to understand that much of what goes on in software is NOT based on objective right-or-wrong things like math but on fashion trends and the equivalent of Instagram influencers. Don't know anything about computer history? If you want to be accepted by the experts and elite, that's a good thing. If you want to get things done, quickly and well, ignore it at your peril.

     

  • Software Programming Language Evolution: Credit Card Software Examples 1

    In prior posts I've discussed the nature of programming languages and their evolution. I have given an overview of the so-called advances in programming languages made in the last 50 years. Most recently I described a couple of major advances beyond the 3-GL's. The purpose of this post is to give a couple real-life examples of how amazing new 4-GL’s and O-O languages have worked out in practice.

    I was CTO of a major credit card software company in the late 1990’s. Because of that I had a front-row seat in what turned out to be a rare clinical trial of the power and productivity of the two major new categories of programming languages that were supposed to transform the practice of programming. Of course no one, in academia or elsewhere, has written about this real-world clinical trial or any of the similar ones that have played out over the years.

    Bank One and 4-GL's

    Bank One, based in Columbus, Ohio, was a major force among banks in the 1990’s. They were growing and projected a strong image of innovation. During the 1990’s the notion that applications should be based on a DBMS was becoming standard doctrine, and the companies that valued productivity over Computer Science (and internet) purity were united behind one form or another of 4-GL as the tool of choice to get things done. Together with Andersen Consulting, one of the giant consulting companies at the time, Bank One proceeded on a huge project to re-write all their credit card processing code into a 4-GL.

    After spending well north of $50 million (I heard nearly $100 million) and taking over 3 years, the project was quietly shelved, though industry insiders all heard the basic story. No one had an explanation. 4-GL’s are amazing, so much better than ancient things like COBOL – and card processing is just simple arithmetic, right, with a bit of calculating interest charges thrown in. How hard could it be? Harder than a 4-GL wielded by a crack team of one of the country’s top tech consulting firms could pull off with years of time and a giant budget, I guess.

    On top of everything else, they had a clear and unambiguous definition of what the 4-GL program needed to do in the form of the existing system. They had test cases and test data. This already eliminates a huge amount of work and uncertainty in building new software. Compared with most software projects, the work was simple: just do what the old program did, using existing data as the test case. This fact isolated the influences on the outcome so that the power and productivity of the 4-GL was the most important factor. Fail.
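    In modern terms, they had the ingredients for what is now called characterization (or golden-master) testing; a minimal sketch, with invented stand-in functions for the two systems:

    ```python
    # Sketch only: the legacy system's outputs serve as the oracle for the re-write.
    # Both functions here are invented stand-ins, not Bank One's actual code.

    def legacy_interest(balance, apr_percent):
        """Stand-in for the old system's monthly interest calculation."""
        return round(balance * apr_percent / 100 / 12, 2)

    def rewritten_interest(balance, apr_percent):
        """Stand-in for the new implementation under test."""
        return round(balance * apr_percent / 1200, 2)

    # Replay historical data through both systems; any mismatch is a defect in
    # the re-write, with no argument about what "correct" means.
    historical_cases = [(1000.00, 18.0), (250.50, 23.99), (0.0, 18.0)]
    mismatches = [
        case for case in historical_cases
        if legacy_interest(*case) != rewritten_interest(*case)
    ]
    print(mismatches)  # [] -> the re-write matches the old system on this data
    ```

    With an oracle like this available, the project's failure can't be blamed on fuzzy requirements; the remaining variable was the tooling itself.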

    Word of this should have gotten out. There should have been headlines in industry publications. The burgeoning 4-GL industry should have been shattered. Computer Science professors who actually cared about real things should have swarmed all over and figured out what the inherent limitations of 4-GL's were, whether they could be fixed, or whether the whole idea was nothing but puffery and arm-waving. None of this happened. Do you need to know anything else to conclude that Computer Science is based on less rationality than Anna Wintour and Vogue?

    Capital One and Java

    Capital One was the card division of a full-service bank that was spun out in 1994, becoming an unusual bank whose only business was to issue credit cards. In just a couple years the internet boom started, and with it enthusiasm for the most prominent object-oriented language for the enterprise, Java. Capital One management was driving change in the card world and presumably felt they needed a modern technology underpinning to do it fully. So they authorized a massive project to re-write their entire existing card software from COBOL to Java. I remember reading at the time that they expected incredible flexibility and the power to evolve their business rapidly from the unprecedented power of Java.

    The project took a couple years and was funded to the tune of many tens of millions of dollars; the amounts were never made public. As time went on, we heard less about it. Then there was a small ceremony and the project was declared a success, a testimony to the forward-looking executive management and pioneering tech team at the company. Then silence. I poked around with industry friends and discovered that the code had indeed been put into production – but just in Canada, which was a new market for the company at the time, handling a tiny number of cards. Why? It didn’t have anywhere close to the features and processing power that the existing COBOL system had to handle the large US card base. Just couldn't do it and company management decided to stop throwing good money after bad.

    Conclusion

    Executives and tech teams at major corporations bought into the fantasy that the latest 4-GL's and O-O languages would transform the process of writing software. They put huge amounts of money with the best available teams to reap the benefit for their business. And they failed.

    These projects and their horrible outcomes should have made headlines in industry publications and been seared in the minds of academics. Software experts should have changed their tune as a result, or found what went wrong and fixed it. None of this happened. It tells you all you need to know about the power and productivity gains delivered by 4-GL's and object-oriented languages. Nothing has changed in the roughly twenty years since these events took place except for further evidence for the same conclusion piling up and the never-ending ability of industry participants and gurus to ignore the evidence.

     

  • Software Evolution: User Interface

    User interfaces have gone through massive evolution since their first appearance in the 1950's. Lots of people talk about this. But few separate the two main threads of UI evolution: technical and conceptual.

    The technical thread is all about the tools and techniques. Examples of elements in the technical thread are the mouse, function keys, menus, and graphical windowing systems. Advances in the technical thread of UI evolution are created by researchers, systems people and systems makers, both hardware and software. People who build actual UI’s generally have to use the tools they’ve been given.

    The conceptual thread of UI evolution is about the thoughts in the heads of application builders about what problem they’re trying to solve and how they’re supposed to go about solving it. Application builders are generally taught the base concepts they are supposed to use, and then usually apply those concepts throughout their careers. But not all application builders have the same thoughts in their heads. The thoughts they have exhibit a clear progression from less evolved to more evolved. It is interesting that the way application builders think about what job they are supposed to do is almost completely independent of the tools they have, i.e., the technical thread. Yes, they can and do use the tools available to them, but this conceptual thread of UI evolution rides “above” the level of the technical tools.

    The evolution of UI on the technical side is widely discussed and understood. As hardware has gotten better and less expensive, the richness of the interaction between computer and human has increased, with the computer able to present more information to the user more quickly, and with immediate reaction on the computer’s part to user requests. For the most part, this is a good thing, although people who think only about user interfaces can make serious product design mistakes when they fail to put the user interface in the broader context of product design. For example, generally speaking, pointing at a choice with a mouse is better than entering a code on a keyboard, and giving users lots of control through a rich user interface is better than giving them no control. However, in situations where there are repetitive tasks and efficiency is very important, the keyboard beats the mouse any day of the week, and in situations where tasks performed by humans can be automated, it is far better to have the computer do it — quickly, effectively and optimally — rather than depending on and using the time of a human being, regardless of how wonderful his UI may be. This post goes into detail with examples on this subject.
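The keyboard-versus-mouse claim above can be made concrete with a back-of-the-envelope calculation in the style of the Keystroke-Level Model. This is a minimal sketch: the operator times and the task shape (500 repetitive entries, 3-character codes) are illustrative assumptions, not measurements.

```python
# Rough Keystroke-Level-Model-style comparison of mouse vs. keyboard
# for a repetitive data-entry task. Operator times are illustrative
# assumptions, not measured values.
KEYSTROKE = 0.2   # seconds per keypress for a practiced typist
POINT = 1.1       # seconds to point at a target with the mouse
HOMING = 0.4      # seconds to move a hand between keyboard and mouse

def mouse_time(items):
    # Each item: move hand to mouse, point at a menu, point at a choice.
    return items * (HOMING + 2 * POINT)

def keyboard_time(items, code_len=3):
    # Each item: type a short code plus Enter; hands stay on the keyboard.
    return items * (code_len + 1) * KEYSTROKE

items = 500  # one repetitive data-entry session
print(f"mouse:    {mouse_time(items):.0f} s")   # 1300 s
print(f"keyboard: {keyboard_time(items):.0f} s")  # 400 s
# Automating the task entirely costs 0 s of human time.
```

Even with generous assumptions for the mouse, the keyboard wins by roughly 3x on this kind of task, and automation wins outright, which is the point of the paragraph above.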

    Conceptual UI evolution, by contrast to evolution on the technical side, is not widely discussed and not generally understood. Understanding this evolution enables you to build superior software by creating software that enables tasks to be accomplished with less human effort and greater accuracy.

    UI Concepts

    The conceptual level of user interfaces is most easily understood by asking two questions: (1) whose perspective is the primary one in the mind of the application UI builder – the computer’s or the user’s; and (2) to what extent is the user relied upon to operate the software correctly and optimally? The most primitive UI’s “look” at things from the computer’s point of view, and, somewhat paradoxically, rely almost entirely on the user to get optimal results from the computer. The most advanced UI’s “look” at things from the user’s point of view, while at the same time imposing as little burden of intelligence and decision-making as possible on the user.

    When you state it this way – a UI should be user-centered and should help the user to be successful – you may well assume that building UI’s in this way would be standard operating procedure, and that building UI’s in any other way would be considered incompetent. Sadly, this is not the case. Like all the patterns I describe in my series on software evolution, most people, companies and even industries tend to be “at” a particular stage of evolution in the subject areas I describe here; companies gain comparative advantage by taking the “next” step in the pattern evolution earlier than others, and exploit it for gain more vigorously than others.

    Some of the patterns I've observed in software evolution just tend to repeat themselves historically with minor variations. Other patterns, of which this is an example, don't seem to be as inevitable or time-based. This pattern is much like the pattern of increasing abstraction in software applications, described in detail here. Competitive pressures and smart, ambitious people tend to drive applications to take the next step on the spectrum of goodness.

    For UI, the spectrum can be measured. The UI that requires the least time and effort by a user to get a given job done is the best. That's it!

    Do UI experts think this way? Is this a foundational part of their training and expertise? Of course not! Just because computers are involved, no one should be under the illusion that we live in a numbers-driven world. For all the talk of numbers, people are more influenced by the culture they're part of, and generally want validation from that culture. Doing something further up the UI optimization curve than is customary in their milieu is nearly always an act of rebellion, and most people just don't do it.

  • Software Programming Language Evolution: Beyond 3GL’s

    In prior posts I’ve given an overview of the advances in programming languages, described in detail the major advances and defined just what is meant by “high” in the phrase high-level language. In this post I’ll dive into the amazing advances made in expanding programmer productivity beyond the basic 3-GL’s. What's most interesting about these advances is that they were huge, market-proven advances, and have subsequently been ignored and abandoned by academia and industry in favor of a productivity-killing combination of tools and technologies.

    From the Beginning to 3-GL's

    The evolution of programming languages has been different from most kinds of technical evolution. In most tech development, there’s an isolated advance. The advance is then copied by others, sometimes with variations and additions. There follows a growing number of efforts concentrating on some combination of commercialization, enhancement and variation. This resembles biological evolution in that once mammals were “invented” there followed a growing number of varied mammalian species with ever-growing variations and enhancements.

    If you glance at the evolution of programming languages, it can easily seem as though the same kind of evolution has taken place. It makes sense: software languages are for computers, and don’t computers get faster, smaller and cheaper at an astounding rate?

    Let’s start by reviewing the evolution of programming languages up to what are commonly called 3-GL’s. For details see this.

A first-generation language is the native language of a particular computer, expressed in the form the computer executes it: binary. A program in a 1-GL, normally called machine language, is a big block of 1’s and 0’s. If you understand it, you can break the numbers up into data and instructions, and the instructions into command codes and arguments. Necessary for a machine, but a nightmare for humans.

    A program in a 2-GL, normally called assembler language, is a text version of a 1-GL program, with nice additions like labels for locations of instructions and data. 2-GL’s were a night-and-day advance over machine language.

A program in a 3-GL, for example COBOL, FORTRAN or C, is written in a machine-independent text language that can be translated (compiled or interpreted) to run on any computer. There are statements for defining data and for defining the actions that will be taken on the data. The action statements normally constitute the vast majority of the lines. For many programs, 3-GL’s were 5 to 10 times more productive than assembler language, with the added advantage that they could run on any machine.
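The generational gap described above can be sketched with one tiny task: summing a list of numbers. The assembler mnemonics in the comments are hypothetical, standing in for what a 2-GL programmer would write by hand; the function body shows the equivalent 3-GL statement the compiler expands for you.

```python
# Illustrative sketch of the 1-GL / 2-GL / 3-GL productivity gap.
#
# 1-GL (machine language): raw binary the hardware executes, e.g.
#   10110000 00000000 ...        (opaque to humans)
#
# 2-GL (assembler; hypothetical mnemonics for illustration):
#   LOAD  R1, #0         ; accumulator = 0
#   LOAD  R2, #ADDR      ; pointer to the data
#   LOOP: ADD R1, (R2)   ; add current element
#   INC   R2             ; advance the pointer
#   CMP   R2, #END
#   JNE   LOOP           ; repeat until done
#
# 3-GL (high-level, machine-independent): a few readable lines,
# where the compiler -- not the programmer -- handles registers,
# addresses and jumps.
def total(values):
    result = 0
    for v in values:
        result += v
    return result

print(total([3, 1, 4, 1, 5]))  # 14
```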

We’re done, right? In some sense, we are – there are still vast bodies of production code written in those languages. No later language can create a program that has greater execution speed. But maybe we’re not done. As I’ve described, the “high” of high-level isn’t about the efficiency of the computer; it’s about the efficiency of the human – the time and effort it takes a human programmer to write a given program using the language. There have been a host of languages invented since the early days of 3-GL’s that claim to do this.

Let’s look at a couple of languages that no one talks about, that don’t have a category name, that were wildly popular in their day, and that live on today, unheralded and ignored. I’ll use two examples.

    The Way-beyond-3-GL's: MUMPS

The first of these languages I’ll describe is MUMPS, developed at Mass General Hospital for medical data processing. Have you ever heard of it? I didn’t think so.

    In modern terms, MUMPS is definitely a programming language; it has all the capability and statement types that 3-GL’s have. But MUMPS goes way beyond the boundaries of all 3-GL’s to encompass the entire environment needed for building and running a program. Normally with a 3-GL someone needs to pay lots of attention to things “outside” the language to achieve an effective solution, particularly in the areas of data access, storage and manipulation, but also in the operating system. A MUMPS program is inherently multi-user and multi-tasking. It has the ability to reference data without the potential danger of pointers. It has the power and flexibility of modern DBMS technology built in – not just relational DBMS but also key-value stores and array manipulation features that are still missing from most subsequent languages. In other words, you can build a comprehensive software application in a single programming environment without external things like databases, etc.
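The heart of what MUMPS builds in is its "globals": sparse, hierarchical, multi-subscript arrays that are also the database, with no separate DBMS layer. Here is a minimal sketch of that idea in Python, for illustration only; the class and method names are hypothetical, and the comments show the rough MUMPS analogue (`SET` and `$GET`), not real MUMPS syntax.

```python
# Minimal Python sketch (illustration only) of the idea behind MUMPS
# "globals": persistent key-value arrays addressed by any number of
# subscripts, built into the language itself rather than supplied by
# an external database.

class Global:
    """Emulates a MUMPS-style sparse, multi-subscript array."""
    def __init__(self):
        self._store = {}

    def set(self, *args):
        # Roughly analogous to MUMPS: SET ^G(s1,s2,...)=value
        *subscripts, value = args
        self._store[tuple(subscripts)] = value

    def get(self, *subscripts):
        # Roughly analogous to MUMPS: $GET(^G(s1,s2,...))
        return self._store.get(tuple(subscripts))

patient = Global()
patient.set(123, "name", "SMITH,JOHN")
patient.set(123, "labs", "2024-01-05", "HGB", 13.9)

print(patient.get(123, "name"))                       # SMITH,JOHN
print(patient.get(123, "labs", "2024-01-05", "HGB"))  # 13.9
```

The key design point is that the hierarchical data structure, its persistence, and the program logic all live in one environment, which is exactly why a single MUMPS programmer could do work that now requires a separate database expert.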

The result of this wide variety of powerful features all available in one place, implemented as an integral part of the language, was definitely outside the mainstream of programming languages – but wildly productive. MUMPS had strong uptake in the medical community. For example, the hospital software with dominant market share today is Epic, which was originally written in MUMPS (whose modern implementation is called Caché) and remains so today. An amazing number of other leading medical systems are written in the language, as are major systems in the financial sector.

    Net-net: MUMPS is truly a beyond-3-GL high level language in that the total amount of human effort required to reach a given programming result is much less. Even better, all the skills are normally in a single person, while modern languages require outside skills to achieve a given result, for example a database expert.

    The Way-beyond-3-GL's: PICK

    PICK is another beyond-3-GL that delivered a huge up-tick in programmer productivity. PICK, like MUMPS, is largely forgotten today. It’s an afterthought in any discussion of programming language history, ignored by academics, and generally erased. The title of its entry in Wikipedia is even wrong – it’s called an operating system! Of course, it is an operating system – AND a database AND a dictionary AND a full-featured programming language AND a query system AND a way to manage users and peripherals — everything you need to build and deliver working software, all in one place. PICK was a key driving factor in fueling the minicomputer industry during its explosive growth in the 1970’s and 80’s, while also running on mainframes and PC’s.
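One of PICK's integrated pieces was its dictionary-driven query language (ACCESS/ENGLISH), which let users query files in near-English without writing a program. The sketch below is Python, for illustration only; the data, the `list_with` helper, and the simplified syntax are all hypothetical stand-ins, not real PICK.

```python
# Illustration-only sketch of PICK's dictionary-driven query idea:
# because the system's dictionary describes each file's fields, a user
# can issue a near-English query such as
#     LIST CUSTOMERS WITH BALANCE > 100
# without writing a program. Names and syntax here are simplified.

CUSTOMERS = [
    {"NAME": "ACME", "BALANCE": 150},
    {"NAME": "BETA", "BALANCE": 40},
]

def list_with(records, field, op, value):
    """Evaluate a tiny WITH-clause against a file of records."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    return [r["NAME"] for r in records if ops[op](r[field], value)]

# The PICK-style query "LIST CUSTOMERS WITH BALANCE > 100" becomes:
print(list_with(CUSTOMERS, "BALANCE", ">", 100))  # ['ACME']
```

In PICK this capability came bundled with the operating system, database and programming language, which is the "everything in one place" point made above.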

Wikipedia says: "By the early 1980s observers saw the Pick operating system as a strong competitor to Unix. BYTE in 1984 stated that 'Pick is simple and powerful, and it seems to be efficient and reliable, too … because it works well as a multiuser system, it's probably the most cost-effective way to use an XT.'"

    PICK was the brainchild of a modest guy named Dick Pick. During the early 1980’s I worked at a small company in the Boston area that attempted to build a competitor to PICK, which seemed to be everywhere at the time. As you might imagine, programmer humor emerged on the subject, including such gems as

    If Dick Pick picked a pickle, which pickle would Dick Pick pick?

    PICK lived on in many guises and with multiple names. But it has zero mind-share in Computer Science and among most people building new applications today.

    Conclusion

    All-encompassing programming environments like MUMPS and PICK should have become the dominating successors to the 3-GL languages, particularly as the total effort to develop working systems based on 3-GL’s like Java exploded with the arrival of independent DBMS’s, multi-tier hardware environments and the orthodoxy of achieving scalability via distributed applications. Yet another step on the peculiar path of software evolution.

I remember the frenzy during the internet explosion of the late 1990’s and early 2000’s: money flowing in, and a universal view among investors and entrepreneurs about how applications must be written in order to be successful. I encountered this personally when my VC partner introduced me to what appeared to be a promising young medical practice management system company that was having some trouble raising money because investors were concerned that the young programmer doing much of the work and leading the effort wasn’t using Java. I interviewed the fellow, Ed Park, and quickly determined that he was guided in his technical decision-making by smart, independent thinking rather than the fashionable orthodoxy. I endorsed investing. The company was Athena Health, which grew to become a public company with a major market share in its field. And BTW it achieved linear scalability while avoiding all the productivity-killing methods everyone at the time insisted were needed.

    The history of amazing, beyond-3-GL's like MUMPS and PICK that deliver massive programmer productivity gains demonstrates beyond all doubt that software and all its experts are driven by fashion trends instead of objective results, and that Computer Science is a science in name only.

  • Software Evolution: Functionality on a New Platform: Market Research

    This is the third in a series of examples to illustrate the way that functionality that had been implemented on an older platform appears on a newer platform.

    See this post for a general introduction with example and explanation of this peculiar pattern of software evolution. This earlier post contains an example in security services software and this earlier post describes an example in remote access software.

    This example is known to me personally because my VC firm was an investor, and I was involved with them through the life of the investment. 

    Example: Knowledge Networks

    Old platform

    Telephone, mail, focus groups

    Old function

    Conducting surveys for market and opinion research

    New platform

    Internet

    New function

    Essentially the same, with much greater knowledge of the activities of panel members before and after taking a survey, and the ability to conduct interactive surveys

    Outcome

    The premise was valid, but the company was ahead of the market. It was acquired in 2011.

    Organizations that depend a great deal on the opinions and actions of a large number of people sometimes conduct market research to help them shape the details of a product, an advertising campaign, a political campaign or other relevant effort. This kind of research long pre-dates computers. It normally starts using informal methods for selecting the people to ask and evaluating the results. But then it moves in stages towards increasing amounts of scientific control and analysis in order to reduce costs and improve accuracy.

    Market research was already well-established on the prior technology “platform” of the telephone when the internet started to spread quickly in the second half of the 1990’s, when a substantial and growing fraction of the US population got internet access. People in the field were familiar with the issues of selection bias, people without telephones and random digit dialing methods of assuring statistically valid panels. But when the web (the new platform) started spreading quickly, did market research transfer its knowledge, methods and techniques to the new platform? It didn’t (I think you probably guessed the answer), because brand-new people put together the first web-based market research systems. It was quick, easy and had the advantage of being inexpensive – but it was as scientifically primitive as telephone-based surveys were prior to the introduction of statistical methods.

    Knowledge Networks was started by a couple of professors with stature in market research in the pre-internet world, with the goal of keeping the cost and speed advantages of the internet, but bringing it up to pre-internet scientific standards.

    While there has definitely been a migration of internet-based market research to higher levels of scientific standards, which Knowledge Networks has both led and benefited from, their experience is an example of the danger of getting too far ahead of the pattern. One of the key facts about the “emergence of functionality on a new platform” pattern is that the functionality emerges in the same order as it did on the earlier platforms – but it doesn’t skip steps or leap right to the end! These professors knew that internet-based market research would evolve to greater scientific integrity – and they were right! – but they didn’t fully appreciate that the market would get there in its own sweet time, and that it would insist on dawdling on intermediate steps. By insisting that Knowledge Networks provide only the best, highest-integrity market research methods, up to the standards of the best available on earlier technology platforms, the original leaders of the company caused it to be “out of step” with the market. They were “ahead” of the market, which is a great place to be if you want to be a business “visionary,” but is rarely a good place to be if your goal is to build a substantial business.

    I have to say that this is a really hard one to get right in practical situations. I personally was involved with Knowledge Networks at the time some crucial decisions were made, but I didn’t know enough about or appreciate the power of the pattern to help make the company as successful as it could have been. In fact, I was probably part of the problem. The professors who started the company were really smart, and they were on top of all the issues of market research. I knew this, appreciated it, and was excited by the possibilities of benefiting by translating the best practices from traditional market research to the internet. What’s painful is that I also knew in general terms the dangers of being ahead of a market. But that’s exactly what we were, and yet again, for the umpteenth time, I didn’t see it and didn’t call it. Arghhh!

    Lesson: being too early is just as bad as being too late.

  • Software Evolution: Functionality on a New Platform: Remote Access

    This is the second in a series of examples to illustrate the way that functionality that had been implemented on an older platform appears on a newer platform.

    See this post for a general introduction with example and explanation of this peculiar pattern of software evolution. This earlier post contains an example in security services software.

    This example is known to me personally because my VC firm was an investor, and I was involved with them through the life of the investment.

    Example: Aventail

    Old platform

    Dedicated IP-SEC VPN

    Old function

    Remote access to internal LAN resources

    New platform

    Web server

    New function

    Use existing Web infrastructure and https to provide old functionality, enhanced by application-level security, reducing costs and increasing flexibility and security.

    Outcome

    Some market education required in the early years, but strong position vis-à-vis the competition and good growth. The company was acquired by SonicWall in 2007.

    Aventail built functionality for remote access that has been implemented over and over again, each time a new technology platform has emerged. But they rode what was at the time the latest wave (internet protocols and SSL encryption), and so were participating in a growing market.

I remember using teletype paper terminals running at 110 baud in the late 1960’s for remote access to computers. Whenever a new platform would come out, the new technology wouldn’t support remote access, but for some strange reason, people would want it! So, focused entirely on getting something working in the new environment, and either ignoring or simply being ignorant of earlier solutions to the same problem, someone would build a remote access solution. But then inadequacies would be found, and a release two would come out. All in what appears to be ignorance of solutions built on prior platforms, blind to their lessons learned.

    A good example is the identification and access control system for remote access. The system you want to connect to has some system for user ID’s and passwords, and then some method of access control based on user groups. The remote access is normally first built in the simplest possible way, having its own system administration, user identification and access control. As the use of the system grows, this parallel administration is a burden, and so some level of integration with the core security system is then implemented. The pattern is that the separate system is normally built first; the need for integration is “discovered;” the integrated control systems are supplied in a later release.
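The release-one/release-two pattern above can be sketched in a few lines. This is an illustrative Python sketch with hypothetical class names, not any real product's design: first the remote-access product ships its own parallel user store, then a later release delegates to the core security system.

```python
# Sketch of the access-control pattern described above. Release 1 ships
# with its own user store; release 2 integrates with the core security
# system. All names here are hypothetical, for illustration only.

class StandaloneAuth:
    """Release 1: parallel user administration, separate from the core."""
    def __init__(self):
        self.users = {}            # must be administered by hand,
                                   # duplicating the core user list
    def add_user(self, name, password):
        self.users[name] = password

    def check(self, name, password):
        return self.users.get(name) == password

class IntegratedAuth:
    """Release 2: delegate to the existing core directory."""
    def __init__(self, core_directory):
        self.core = core_directory  # single point of administration

    def check(self, name, password):
        return self.core.verify(name, password)
```

The "discovery" in release two is simply that `IntegratedAuth` removes the duplicated administration that `StandaloneAuth` imposed, which earlier platforms had already learned the hard way.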

    When you see this pattern of stupidity and ignorance for the first time, you scratch your head. These are programmers and experienced product people! How could they have missed such an obviously valuable feature of the same functionality built on an earlier platform? Well, that's the pattern, as I described in detail in the first post in this series. It's a wonderful pattern — it enables anyone who understands it to predict the future with great accuracy and precision!
