Category: Microservices

  • Micro-Services: the Forgotten History of Failures

    Micro-services are one of today's leading software fashions; see this for more on software fashions in general. I have treated in depth the false claims of micro-services to make applications "scalable," and separately treated in depth the false claim that micro-services enhance programmer productivity.

    Where the heck did such a crappy idea that has taken such a large mind-share in the software and management community come from?

    I don't know. Who knows why skirts are short or long this year? Or whether shirts are tucked or not tucked? What I do know is that bogus fashion trends infect the software industry, some of them delivering decades of perniciousness. Looking at the spectrum of software fashions, I've noticed similarities among some of them. Some fashions well up, spread and peter out, only to recur later with minor changes and a new name. The new name appears to be important, so that everyone is fooled into thinking the hot fashion is truly "new," not tarnished by the abject failures of its prior incarnations. I describe a clear example of an old idea with a new name here.

    The History of Micro-services

    Why do we have micro-services? How did this bad idea get to be so popular? Was it someone's recent "great idea"? There are probably people who think it's their idea.

    Do micro-services have a history? Of course they do! Nearly every major practice in software has a history of some kind. The number of software practices that are newly invented is truly tiny and shrinking rapidly. So why do we hear about software advances and the "new thing" so often? The whole field of software ignores its own history in truly radical ways. It even ignores what software people in other fields and domains are doing.

    Nonetheless, anyone who knows a broad range of software practices over a period of decades can easily recognize similarities and patterns — patterns that surely constitute "history" whether or not the actors recognize the precedents of what they do.

    What are Services?

    Let's step back and understand what "micro-services" are. By the name, it's obvious that a micro-service is an itty-bitty "service." So what's a service? Once you peel back all the layers, a service is a plain old subroutine call with a bunch of fancy, time-consuming stuff stuck in between the code that calls and the code that is called. See this for my explanation of the fundamental concept of the subroutine.

    Subroutines (or functions or methods or whatever) are an essential aspect of programming. Every programming language has its syntax, statements and other things that people get all wrapped up in. Every programming language also has an associated library — as in a function/subroutine library. If you look at a manual for a language and one for the associated library, the library is nearly always larger. The richness of a library is a key factor in the utility of a language. In modern terms, an associated framework serves essentially the same purpose; an example is the Rails framework for Ruby, without which Ruby would just be one line on the ever-growing list of mostly-forgotten languages.

    When you're writing a program, knowing and making great use of the associated subroutine library is essential. But what if you see that you're doing the same kind of thing over and over in your program? It makes sense to create a subroutine to perform that common function as part of the overall application you're building. As time goes on, most well-structured applications come to consist largely of nested subroutines.

    What if there's a program that can only run on another computer and you want to call it as a subroutine — give it some data to work on and get the results back? This problem was addressed and solved in many variations decades ago. It's most often called an RPC — Remote Procedure Call. To make a call to a distant subroutine you implement a version of it locally that looks and acts the same, but does some networking to send the fact of the call and the parameters to a cooperating routine on the distant computer. That routine gets the network call, retrieves the data, and makes a local call to the subroutine. When the return happens, the data is sent back by the same mechanism. It's just like writing a memo and putting it in your out-box. The clerk who picks up the outgoing memos delivers the ones that are local, and puts the ones that aren't into an envelope, addresses it and puts the memo with return instructions into the mail. And so on. The calling code and the called code act like they normally do and the RPC makes it happen.
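    The stub-and-dispatcher division of labor described above can be sketched in a few lines of Python. This is a toy illustration, not a real RPC library: the "network" is faked with an in-process byte channel, and all the names (`rpc_stub`, `remote_dispatch`, the `add` routine) are purely illustrative.

```python
import json

def add(a, b):           # the "distant" subroutine, living on the far side
    return a + b

REGISTRY = {"add": add}  # routines the remote side knows how to call

def remote_dispatch(wire_bytes):
    """Pretend server: unmarshal the request, run the real routine,
    marshal the result back."""
    request = json.loads(wire_bytes)
    result = REGISTRY[request["name"]](*request["args"])
    return json.dumps({"result": result}).encode()

def rpc_stub(name):
    """Build a local function that looks and acts like the remote one,
    but does marshalling plus a 'network' hop in between."""
    def call(*args):
        wire = json.dumps({"name": name, "args": args}).encode()  # marshal
        reply = remote_dispatch(wire)  # in a real RPC this crosses the network
        return json.loads(reply)["result"]
    return call

add_remotely = rpc_stub("add")
print(add_remotely(2, 3))  # behaves exactly like add(2, 3); prints 5
```

    The calling code never sees the marshalling or the hop — which is the whole point of the pattern, and also where all the hidden overhead lives.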

    Along comes the internet. People notice that HTTP is supported all over the place. Why not leverage it to implement the mechanics of the RPC? Voila! Now we have SOAP — Simple Object Access Protocol, which uses HTTP and XML to send and return the data. A subroutine with a SOAP interface was called a "service." This all got standardized roughly 20 years ago. Then along came REST, a simpler style to use, which is still in widespread use.

    If you need to call a subroutine that can't or shouldn't run on your computer but on some other computer, having some form of RPC is extremely useful. The time penalty can easily amount to a factor of thousands or even millions compared to a normal local subroutine call, but you'll gladly pay the price if there's no other way.

    Now let's recall what all this amounts to. A "service" is a subroutine call with a HUGE amount of overhead. There is no reason to ever take on that overhead unless the subroutine you want to call lives on a different computer than the one doing the calling — a computer that can only be reached by some kind of networking.

    Prior Incarnations of Service Insanity

    There have been a couple waves of service fashion mania. One of the waves took place roughly twenty years ago during the internet bubble. People were desperate to be able to support unprecedented growth in the use of their applications. Expert opinion rapidly agreed that building a "distributed application" was the way to accomplish this. The idea was that the load on the computer would greatly exceed the capacity of even the largest, most capable computer to handle it. To anticipate this, the application should be architected to be able to run on multiple computers that could be changed at will. Instead of a "central" application, the application would be "distributed." This was further elaborated by replacing the simple RPC mechanism with queues called "service buses," as in a bus to route large numbers of service calls from one place to another. The bus itself had to be robust so that messages weren't dropped, so it evolved into an "enterprise service bus," something that still exists today. A huge, complex mechanism for accomplishing this became part of Java: J2EE (Java 2 Enterprise Edition). See this for the complex, reincarnating history of the transaction monitor.

    It turned out that building a "distributed application" was vastly more expensive and time-consuming than just building a plain old application, the kind that today is sneeringly dismissed as being "monolithic." The computational cost and elapsed running time of such an application were also huge. Hardly any applications had loads so large that they needed to be distributed. Even worse, the few applications that ended up with huge loads that required many computers found vastly simpler, more effective ways to get the job done. It's worth noting that articles with titles like "Why were we all fooled into thinking building distributed applications was a good idea" never appeared. The whole subject simply faded away. Here's a description from another angle of the distributed computing mania.

    Another incarnation of this nutty idea took place largely inside large enterprises. Many of these places had highly diverse collections of applications running the business, accumulated through acquisitions, multiple departments building applications for themselves, etc. Instead of doing the sensible thing of reducing the total number of applications by evolving the best versions to handle more diverse tasks, the idea of a SOA, service-oriented architecture, became the focus. All these applications could be turned into services! Instead of building new applications, people would now build services. Tools were built to manage all these new services. There were directories. There was tracking and control. All sorts of new demands landed on the IT budget.

    SOA never got to the fever pitch of distributed applications, but it was big. It went the same way — fading as it turned out to be a lot of time and trouble with little benefit.

    Now we come to the present. Remember that the original motivation of the RPC in any form is that a subroutine you want to call is ONLY running on some other computer. An RPC, whatever the form or cost, is better than nothing. Sensible then and sensible now. Then came ways of building programs so that their parts COULD BE run on different computers, if the computational requirements became huge. This also turned out to be rarely needed, and when it was needed, there were always better, faster, cheaper ways than a service approach. Now, with micro-services, we have reincarnated these proven-to-be-bad ideas, claiming that not only will micro-services effortlessly yield that wonderful virtue "scalability," but they will even make programmers more productive. Wrong and wrong.

    Conclusion

    I like the phrase "oldie but goodie." Sometimes it's applicable. In computing when a hot new tech trend comes along that "everyone" decides is the smart way to go, it rarely is. It most often is an "oldie but baddie" with a new name and new image. You would think that software people were so facts-oriented that they wouldn't be subject to this kind of emotionally-driven, be-part-of-the-group, get-with-the-program-man kind of thinking. But most people want to belong and want increased status. If believing in micro-services is the ticket to membership in the ranks of the software elite, a shocking (to me) number of people are all in.

  • How Micro-Services Boost Programmer Productivity

    There's a simple way to understand the impact of micro-services on programmer productivity: they make it worse. Much worse. How can that be?? Aren't monolithic architectures awful nightmares making applications unable to scale and causing a drain on programmer productivity? No. No. NO! Does this mean that every body of monolithic code is wonderful, supporting endless scaling and optimal programmer productivity? Of course not. Most of them have problems on many dimensions. But the always-wrenching transition to micro-services makes things worse in nearly all cases. Including reducing programmer productivity.

    Micro-services for Productivity

    Micro-services are often one of the first buzzwords appearing on the resumes of fashion-forward CTOs. They are one of today's leading software fashions; see this for more on software fashions. I have treated in depth the false claims of micro-services to make applications "scalable."

    I recently discussed the issue with a couple of CTOs using micro-services who admitted that their applications will never need to get up to hundreds of transactions a second, a tiny fraction of the capacity of modern cloud DBMSs. In each case, they have fallen back on the assertion that micro-services are great for programmer productivity, enabling their teams to move in fast, independent groups with minimal cross-team disruption.

    The logic behind this assertion has a couple major aspects. The basic assertion that small teams, each concentrating on a single subject area, are more productive than large, amorphous teams is obviously correct. This is an old idea.  It has nothing to do with micro-services. It takes a fair amount of effort for members of a team to develop low-friction ways of working together, and it also takes time to understand a set of requirements and a body of existing code. Why not leverage that investment, keeping the team intact and working on the same or similar subject areas? Of course you should! No-brainer!

    Here's the fulcrum point: given that it's good to have a small team "owning" a given set of functionality and code, what's the best way to accomplish this? The assertion of those supporting micro-services is that the best way is to break the code into separate, completely distinct pieces, and to make each piece a separate, independent executable, deployed in its own container, and interacting with other services using some kind of service bus, queuing mechanism or web API interface shared by the other services. The theory is that you can deploy as many copies of the executables as you want and change each one independently of the others, resulting in great scalability and team independence. In most cases, each micro-service even has its own data store for maximum independence.

    I covered the bogus argument about scalability here. You can deploy all the copies of a monolith that you want to. Having separate DBMSs introduces an insane amount of extra work, since there's no way (with rare exceptions) each service database would be truly independent of the others, and shipping around the data controlled by other services multiplies the work. The original service has to get the data and not only store it locally, but also send it to at least one other service, which then has to receive it and store it. That's 4X the work (store, send, receive, store again versus just store) to build in the first place, and 4X again every time you need to make changes. And sending things from one service to another is thousands of times slower than simply making a local call.

    Now we're down to productivity. Surely having the group concentrate on its own body of code is a plus, isn't it? YES! It is! But how does having a separation of concerns in a large body of code somehow require that the code devoted to a particular subject be built and deployed as its own executable???

    Let's look at a "monolithic" body of code. Getting into detail, it amounts to a hierarchy of directories containing files. Some of the files will have shared routines (classes or whatever) and others won't be shared. For most groups, going to micro-services means taking those directories and converting each into a separate codebase, built and deployed by the team that owns it. Anyone who's tried this and/or knows a large body of code well knows that there's a range of independence, with some routines being clearly separable, some being clearly shared and others some of each.

    A sensible person would look at this set of directories as a large group of routines organized into files and directories as it was built. They would see that as changes were made and things evolved, the code and the organization got messy. There were bits of logic that were scattered all over the place that should be put in one place. There were things that were variations on a theme that should be coded once, with the variations handled in parameters or metadata. There were good routines that ended up in the wrong place. The sensible person realizes that not only is this messy, but it makes things harder to find and change, and makes it more likely that something will break when you change it.

    The sensible person who sees redundancy and sloppy organization wants to fix it. One long-used way to organize code to avoid the problem is to create what are called "components," which are sort of junior versions of micro-services. Here is a detailed description of the issues with components, and how sensible people respond to the hell-bent drive towards components. The metaphor of a kitchen with multiple cooks is an apt one.

    Then there's the generic approach of technical debt. While you can go crazy about this, the phrase "paying down technical debt" is a reasonable one. For my take on this tricky subject, see this. Here's a simple way to understand the process and value, and here's a more far-reaching explanation of the general principle of Occamality.

    The sensible person now has things organized well, with most of the redundancy squeezed out. Why is this important? Simple. What do you mostly do to code? Change it. When you look for the thing that needs changing, where would you like it to be? In ONE PLACE. Not many places in different variations. Concerns about basically the same thing should be in a single set of code. You can build it, deploy it and test it easily.

    What's the additional value of taking related code and putting it in its own directory tree, with its own build and deployment? None! First, it's extra work to do it. Second, there are always relationships between the "separate" bodies of code — that's why, when they're separate services, you've got enterprise service buses, cross-service calls, etc. Extra work! And HUGE amounts of performance overhead. Even worse, if you start out with a micro-services approach instead of converting to one, your separate teams will certainly confront similar problems and code up solutions to them independently, creating exactly the kind of similar-but-different code cancer that sensible people try to avoid and/or eliminate!

    Separate teams with separate executables also have extra trouble testing. No extra testing trouble you say? Because you do test-driven development and have nice automated tests for everything? I'm sorry to have to be the one to tell you, but if you really do all this obsolete stuff, your productivity is worse by at least a factor of two than if you used modern comparison-based testing. Not to mention that your quality as delivered is worse. See this and this.

    Bottom line: separation of concerns in the code is a good thing. Among other things, it enables small groups to mostly work on just part of the code without being an expert in everything. Each group will largely be working on separate stuff, except when there are overlaps. All of this has nothing to do with separate deployment of the code blocks as micro-services.  Adding micro-services to the code clean-up is a LOT of extra work that furthermore requires MORE code changes, an added burden to testing and ZERO productivity benefit.

    Conclusion

    The claim that small teams working closely together on a sensible subset of a larger code base is a productivity-enhancing way to organize things is true. Old news. The claim that code related to a subject should be in one place also makes sense, kind of the way that pots and pans are kept in the kitchen where they're used instead of in bedroom closets. Genius! Going to the extreme of making believe that a single program should be broken into separate little independent programs that communicate with each other and that the teams and programs share nothing but burdensome communications methods is a productivity killer. It's as though instead of being rooms in a house, places for a family to cook, eat, sleep and relax each had its own building, requiring travel outside to get from one to the other. Anyone want to go back to the days of outhouses? That's what micro-services are, applied to all the rooms of a house.

  • Why is a Monolithic Software Architecture Evil?

    Why is a monolithic software architecture evil? Simple. There is no need to explain “why,” because monolithic is not evil. Or even plain old bad. In fact it’s probably better than all the alternatives in most cases. Here’s the story.

    The Cool, Modern Programmers Explain

    The new, modern, with-it software people come in and look at your existing code base. While admitting that it works, they declare it DOA. They say DOA, implying “dead on arrival.” But since the software apparently works, it can’t be “dead,” except in the eyes of the cool kids, as in “you’re dead to me.” So it must be “disgusting,” “decrepit,” “disreputable,” or something even worse.

    Why is it DOA (whatever that means)? Simple: … get ready … it’s monolithic!! Horrors! Or even better: quelle horreur!!

    Suppose you don’t immediately grimace, say the equivalent of OMG, and otherwise express horror at the thought of a code base that’s … monolithic!! … running your business. Suppose instead you maintain your composure and ask in even, measured tones: “Why is that bad?” Depending on the maturity level of the tech team involved, the response could range from “OK, boomer,” to a moderate “are you serious, haven’t you been reading,” all the way up to a big sigh, followed by “OK, let me explain. First of all, if an application is monolithic, it’s so ancient it might as well be written in COBOL or something people who are mostly dead now wrote while they were sort of alive. But whatever the language, monolithic applications don’t scale! You want your business to be able to grow, right? Well that means the application has to be able to scale, and monolithic applications can’t scale. What you need instead is a micro-services architecture, which is the proven model for scalability. With micro-services, you can run as many copies of each service on as many servers as you need, supporting endless scaling. Even better, each micro-service is its own set of code. That means you can have separate teams work on each micro-service. That means each team feels like they own the code, which makes them more productive. They’re not constantly stepping on the other teams’ toes, running into them, making changes that break other teams’ work and having their own code broken by who-knows-who else? With monolithic, nobody owns anything and it’s a big free-for-all, which just gets worse as you add teams. So you see, not only can’t the software scale when it’s a monolith, the team can’t scale either! The more people you add, the worse it gets! That’s why everything has to stop and we have to implement a micro-service architecture. There’s not a moment to lose!”

    After that, what can a self-respecting manager do except bow to the wisdom and energy of the new generation of tech experts, and let them have at it? All it means is re-writing all the code, so how bad can it be?

    One of the many signs that “computer science” does little to even pretend to be a science is the fact that this kind of twaddle is allowed to continue polluting the software ecosphere. You would think that some exalted professor somewhere would dissect this and reveal it for the errant nonsense it is. But no.

    Some Common Sense

    In the absence of a complete take-down, here are a few thoughts to help people with common sense resist the crowd of lemmings rushing towards the cliff of micro-services.

    Here's a post from the Amazon Prime Video tech team about a quality-checking service they had written using a classic microservices architecture that … couldn't scale!! The architecture that solves scaling can't scale? How is that possible? Even worse is how they solved the problem. They converted the code, re-using most of it, from microservices to … wait, try to guess … yes, it's your worst nightmare, they converted it to a monolith. The result? "Moving our service to a monolith reduced our infrastructure cost by over 90%. It also increased our scaling capabilities."

    Here's the logic of it. Let’s acknowledge that modern processor technology has simply unbelievable power and throughput. Handling millions of events per second is the norm. The only barrier to extreme throughput and transaction handling is almost always the limits of secondary systems such as storage.

    Without getting into too many details, modern DBMS technology running on fairly normal storage can easily handle thousands of transactions per second. This isn’t anything special – look up the numbers for RDS on Amazon’s AWS for example. Tens of thousands of transactions per second with dynamic scaling and fault tolerance are easily within the capacity of the AWS Aurora RDBMS;  with the key-value DynamoDB database, well over 100,000 operations per second are supported.

    Keeping it simple, suppose you need to handle a very large stream of transactions – say for example 30 million per hour. That’s a lot, right? Simple arithmetic tells you that’s less than ten thousand transactions per second, which itself is well within the capacity of common, non-fancy database technology. What applications come even close to needing that kind of capacity?
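    The simple arithmetic is easy to check (the 30-million-per-hour figure is the hypothetical from the paragraph above, not a real workload):

```python
# 30 million transactions per hour, expressed per second
transactions_per_hour = 30_000_000
per_second = transactions_per_hour / 3600
print(round(per_second))  # 8333 — comfortably under ten thousand per second
```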

    The database isn't the problem, you might think, it's the application! OK, there is a proven, widely used solution: run multiple instances of your code. As many as you need to handle the capacity and then some — you know, kinda like microservices! It's more than kinda. Each transaction that comes in gets sent to one of what could be many copies of the code. The transaction is processed to completion, making calls to a shared database along the way, and then the instance waits for another transaction to come in. Since all the code required to process the transaction resides in the same code instance, all the time and computational overhead of using the queuing system for moving stuff around among the crowd of services is eliminated. Both elapsed time and compute resources are likely to be much better, often by a factor of 2 or more.
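    The pattern can be sketched in a few lines of Python: many identical copies of the whole application, each processing a transaction to completion with plain local calls plus one shared database. This is a stand-in sketch only — the queue plays the role of a load balancer, a dict plays the role of the DBMS, and all names are illustrative.

```python
import queue
import threading

incoming = queue.Queue()   # stand-in for the load balancer / front door
shared_db = {}             # stand-in for the single shared DBMS
db_lock = threading.Lock()

def handle(txn):
    """All the logic is ordinary local calls inside one process."""
    with db_lock:
        shared_db[txn["id"]] = txn["amount"]

def app_instance():
    """One full copy of the 'monolith': take a transaction, process it
    to completion, then wait for the next one."""
    while True:
        txn = incoming.get()
        if txn is None:        # shutdown sentinel
            break
        handle(txn)            # no service hops, no queuing between steps

# Spin up 4 identical copies — add more if the load demands it.
workers = [threading.Thread(target=app_instance) for _ in range(4)]
for w in workers:
    w.start()
for i in range(100):
    incoming.put({"id": i, "amount": i * 10})
for _ in workers:
    incoming.put(None)
for w in workers:
    w.join()
print(len(shared_db))  # 100 — every transaction handled by some copy
```

    Scaling here means starting more copies of the same executable, with no rewrite of the application into separate services.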

    OK, what if something extreme happens? What if you need more, and somehow it’s your code that’s the barrier? Here the micro-services groupies have it right – to expand throughput, the right approach is sometimes to spin up another copy of the code on another machine. And another and another if needed. I talk about how to scale with a shared-nothing architecture here. Why is this only possible if the code has been re-written into tiny little slivers of the whole, micro-services?

    The micro-service adherent might puke at the thought of making copies of the whole HUGE body of code. Do the numbers. Do you have a million lines of code? Probably not, but suppose you do, and suppose each line takes 100 bytes, which would be a lot. That’s 100 MB of code. I’m writing this on a laptop that’s a couple years old. It has 8GB of RAM — 80 times the space required for that million lines of code, and any server you’d deploy on probably has far more. Oh, you have ten million lines? The RAM is still 8 times larger. No problem. And best of all, there’s no need to rewrite your code to take advantage of running it on as many processors as you care to allocate to it.
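    The back-of-envelope numbers from the paragraph above, worked through:

```python
# One million lines at a generous 100 bytes per line
lines_of_code = 1_000_000
bytes_per_line = 100
code_bytes = lines_of_code * bytes_per_line   # 100 MB of code
laptop_ram = 8 * 1_000_000_000                # 8 GB of RAM
print(laptop_ram // code_bytes)   # 80 — the RAM is 80x the size of the code
print(laptop_ram // (10 * code_bytes))  # 8 — still 8x at ten million lines
```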

    I can see the stubborn micro-services cultist shaking his head and pointing out that micro-services isn’t only about splitting up the code into separate little services, but making each service have its own database. Hah! With each service having its own database, everything is separate, and there are truly no limits to growth!

    The cultist is clearly pulling for a “mere” tens of thousands of transactions a second not being nearly enough. Think of examples. One might be supporting the entire voting population of California voting in an online system at nearly the same time. There are fewer than 20 million registered voters in that state. Less than 60% of them vote, usually far fewer. Suppose for the sake of argument that voter turnout was 100% and that they all voted within a single hour, a preposterous assumption. A monolithic voting application running on a single machine with a single database would be able to handle the entire load with capacity to spare. Of course in practice you’d have active-active versions deployed in multiple data centers to assure nothing bad happened if something failed, but you’d have that no matter what.

    Suppose somehow you needed even more scaling than that. Do you need micro-services then?

    First of all, there are simple, proven solutions to scaling that don’t involve the trauma of re-writing your application to micro-services.

    The simplest is a technique called database sharding, applicable in the vast majority of cases. This is where you make multiple copies of not just your code but also the database, with each database having a unique subset of the data. The exact way to shard varies depending on the structure of the data, but for example could be by the state of the mailing address of the customer, or by the last digit of the account, or something similarly simple. In addition, most sharding systems also have a central copy of the database for system-wide variables and totals, which usually requires a couple of simple code changes.
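    The routing idea can be sketched in a few lines of Python. The in-memory dicts here are purely illustrative stand-ins for real databases, and the account numbers are hypothetical; a real system would route connections, not dictionary lookups.

```python
# Ten shards, each holding the FULL schema but only a SUBSET of the rows,
# chosen here by the last digit of the account number.
NUM_SHARDS = 10
shards = {i: {} for i in range(NUM_SHARDS)}   # stand-ins for 10 databases

def shard_for(account_id: int) -> dict:
    """Route by last digit of the account number."""
    return shards[account_id % NUM_SHARDS]

def save_customer(account_id, record):
    shard_for(account_id)[account_id] = record

def load_customer(account_id):
    return shard_for(account_id)[account_id]

save_customer(1234567, {"name": "Alice"})   # lands on shard 7
save_customer(2024308, {"name": "Bob"})     # lands on shard 8
print(load_customer(1234567)["name"])       # Alice
```

    Note that the application code barely changes: every copy of the code still sees the whole schema, and only the routing decision is new.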

    Sharding keeps the entire database schema in each copy of the code, but arranges things so that each copy has a subset of all the data. Micro-services, in contrast, usually involve creating a separate database schema for each micro-service, and attempting to arrange things so that the code in each service has ALL the tables it needs in its subset of the overall schema, and ONLY the tables it needs. In practice, this is impossible to achieve. The result is that micro-services end up calling each other to get the fields they don’t store locally and to update them as well. This results in a maze of inter-service calling, with the attendant errors and wasted elapsed time. If all the code and the entire schema were in one place, none of this would be needed.

    I am far from the only person who has noticed issues like this. There was even a list of problems in Wikipedia last time I looked.

    The need to scale your application doesn’t arise often, but when it does you should definitely be ready for it. The answer to the question of how best to architect a software application to be scalable from day one is simple: assure that it’s monolithic! Architect your application so it’s not database centric – this has been a reasonable approach for at least a decade; think it might be worth a look-see? If you do have an RDBMS, design your database schema to enable sharding should it be needed in the future. Make sure each software team “owns” a portion of the code; if you work towards eliminating redundancy and have a meta-data-centric attitude, you’ll have few issues with team conflict and overlap.

    Do yourself and your team and your customers and your investors a BIG favor: stubbornly resist the siren call to join the fashion-forward micro-services crowd. Everything will be better. And finally, when you use the term “monolithic,” use it with pride. It is indeed something to guard, preserve and be pleased with.

  • What Software Experts think about Blood-letting

    Software experts do NOT think about blood-letting. But ALL medical doctors thought about blood-letting and considered it a standard and necessary part of medical practice until well into the 1800's. They continued to weaken and kill patients with this destructive "therapy," even as the evidence against it piled high.

    The vast majority of software experts strongly resemble medical doctors from those earlier times. The evidence is overwhelming that the "cures" they promote make things worse, but since all the software doctors give nearly the same horrible advice, things continue.

    Blood-letting

    Blood-letting is now a thoroughly discredited practice. But it was standard, universally-accepted practice for thousands of years. Here is blood-letting on a Grecian urn:

    [image: blood-letting depicted on a Grecian urn]

    Consider, for example, the death of George Washington, a healthy man of 67 when his final illness struck.

    [image: the death of George Washington]

    Washington rode his horse around his estate in freezing rain for 5 hours. He got a sore throat. The next day he rode again through snow to mark trees he wanted cut down. He woke early the next morning, having trouble breathing and with a worsening sore throat. Leaving out the details, by the time of his death, after treatment by multiple doctors, about half the blood in his body had been purposely bled in an attempt to "cure" him of his sickness!!! If he hadn't been sick before, losing half the blood in his body would have killed him.

    If you are at an accident and you or someone else is bleeding badly, what do you do? You stop the bleeding, because if you don't, the person will bleed to death. That's now. Then? You bleed the sick person because it's the universally accepted CURE for a wide variety of sicknesses.

    Blood-letting was first disproved by William Harvey in 1628: it had no effect. Yet it remained the primary treatment for over 100 diseases. Leeches were a good way to keep the blood flowing. France imported over 40 million leeches a year for medicinal purposes in the 1830's, and England imported over 6 million leeches from France in the following decade.

    While blood-letting faded in the rest of the 1800's, it was still practiced widely, and recommended in some medical textbooks in the early 1900's. We are reminded of it today by the poles on barber shops — the red was for blood and the white for bandages; barbers were the surgeons who did the cutting prescribed by doctors.

    Blood-letting in software

    By any reasonable criteria, software is at the state medicine was in 1799, when everyone, all the experts, agreed that removing half the blood from George Washington's body was the best way to cure him.

    If you think this is an extreme statement, you either don't have broad exposure to the facts on the ground or you haven't thought about what is taken to be "knowledge" in software compared to other fields.

    I hope we all know and accept that the vast majority of what we learn and come to believe is based on authority and general acceptance. This is true in all walks of life. Of course not everyone believes the same thing — there are different groups to which you may belong that have widely varying belief systems. But if you're a member of a group, chances are very high that you accept most things that most members of that group believe.

    This is no less true in science-based fields than others. The difficulty of changing widely-held beliefs in science has been deeply studied, and the resistance to change is strong. See for a start The Structure of Scientific Revolutions. I have described this resistance in medical-related subjects, and in particular showed how the history of scurvy parallels software development methods all too well.

    But at least, to its great credit, medicine has gone through the painful transition to demanding facts, trials and real evidence to show that a method does what it's supposed to do, without awful side-effects. That's why we hear about evidence-based medicine, for example, while there is no such thing in software!

    I hear from highly-qualified and experienced software CTO's that they are going to lead a transition of their code base so it conforms to some modern cool fashion. One of the strong trends this year has been the drive to convert a "monolithic code base" (presumed to be a bad thing) to a "micro-service-based architecture." When I ask "why" the initial response ranges from surprise to a blank stare — they never get such a question! It's always smiling and nodding — my, that CTO is with-it, no question about it.

    Eventually I get the typical list of virtues, including things like "we've got a monolithic code base and have to do something about it" and "we've got to be more scalable," none of which solves problems for the company. When I press further, it becomes obvious that the CTO has ZERO evidence in favor of what will be a huge and consequential investment, and has never seriously considered the alternatives.

    As is typical in cases like this, when you scan the web, you see all sorts of laudatory paeans to the micro-service thing, very little against it. Most important, you find not a shred of evidence! No double-blind experiments! No evidence of any kind! No science of any kind! What you also don't find is stories of places that have embarked on the micro-service journey and discovered by experience all the problems no one talks about, all the problems it's supposed to solve but doesn't, and the all-too-frequent declarations of success accompanied by a quiet wind-down of the effort and moving on to happier subjects. Because of my position working with many innovative companies, this is exactly the kind of thing I do hear about — quietly.

    Conclusion

    We've got a long way to go in software. While software experts don't wear white coats, the way they dress, act and talk exudes the authority of 19th century doctors, dishing out impressive-sounding advice that is meekly accepted by the recipients as best practice. No one dares question the advice, and the few who demand explanations generally just accept the meaningless string of words that usually results — empty of evidence of any kind. It's just as well; the evidence largely consists of "everyone does it, it's standard practice." And that's true!

    Software experts don't think about blood-letting. But they regularly practice the modern equivalent of it in software, and have yet to make the painful but necessary transition to scientific, evidence-based practice.

     
