Category: Computer security

  • Russia Hacks DNC, Podesta Email: Fake News

    The US government has declared that the Russian government hacked important US entities, and has retaliated against the Russian government in response. It has now issued its official report providing the evidence of hacking.

    The "evidence" is a joke. It proves nothing but the incompetence and/or duplicity of the agencies that issued it. The near-certain declaration that the Russian government was behind this and related hacks is fake news. The majority of the US press echoes the fake news, supporting it with whatever is left of their credibility.

    Cybersecurity background

    Most large organizations have a big computer security problem. They just don't know how to get it done and don't seem to care, as repeated massive breaches have demonstrated. Government agencies are just as helpless. They issue regulations that tell corporations how to achieve security, but the regulations make things worse, and are ineffective for the government itself. There are solutions, but no one is interested.

    The Hacks

    The overall results of the hacks are well-known. In July, Wikileaks released 44,053 emails from officials of the DNC. In October, it released a large batch of Hillary Clinton campaign chairman John Podesta's email. Many important people immediately accused the Russians of performing the hack and providing the documents to Wikileaks.

    The Official Evidence

    The government's long-awaited official report of evidence that the Russians performed the hack was released last week by this government agency:

    US-CERT

    Here is how the report is described:

    US-CERT 1

    The report is 13 pages long, with a couple of linked files. The first thing that struck me was that, starting on page 5 and going to the end, the content had literally nothing to do with hacks or Russians — it was just a list of generic nostrums about how to be cyber-secure. One has to wonder where all this supposedly powerful wisdom was while the US government Office of Personnel Management (OPM) hack took place; this hack resulted in the loss of highly sensitive data on over 22 million people. People who live in glass houses…

    What about the "evidence" contained on the first few pages?

    I have personally dealt with computers for a long time. I've had to fix serious problems, evaluate reports of problems and recommend solutions. There is a clear pattern of good work:

    • The person and group that did the work are clearly identified.
    • There is some kind of narrative that describes the problem and the path of discovery that leads to the conclusion.
    • Full details about the computers and software affected are provided. Is it a personal computer or a server? What version of what operating system is installed? If an application is relevant, what is the name and version of the application?
    • Full details about event data are provided, for example log files.
    • If there are anomalies, full details about them are provided, including where and how they were found.
    • Enough data is provided so you can double-check any conclusions that may be drawn.
    • If more than one event is involved, this information is provided for each event, with all the information (for example, servers and operating systems) clearly associated with the corresponding event.

    None of this standard information was provided in the report!  Any conclusions that are drawn, given the total lack of real, professional evidence, are therefore baseless.

    Details of the non-evidence

    The report provides no separate information about the DNC or Podesta hacks. It says nothing about whether an email server was hacked or a client. Nothing! What the report does have is a little information with generic diagrams, a very techie listing of part of a script, and a list of IP addresses. The contents of what they provided have been competently analyzed by a security firm. Here is their summary:

    Wordfence

    Let's look at the Podesta hack for a bit.

    I looked at a broad sample of the emails on Wikileaks. Podesta had a gmail account, john.podesta@gmail.com. While some of the emails were sent to another address, podesta@law.georgetown.edu, a quick look at the source of the emails (kindly provided by Wikileaks) shows that this was set up as a forwarding address, i.e., automatically forwarded to the gmail account. The source code I examined was all typical, i.e., not faked.

    No one claims Google was hacked. So it was Podesta's email account and/or the computer he used to access it. The report, of course, doesn't say. The hack could have been accomplished by any number of techniques, and certainly doesn't require sophistication.

    The list of IP addresses given is completely irrelevant for this kind of hack. If the hackers got his user name and password, all they needed to do was log in — no "attack vectors" required.

    Turning to the DNC, the report implies (but doesn't state) that the DNC server was attacked. It talks about how the hacker:

    Escalation

    Which is quite impressive. How exactly did the malware "escalate privileges?" That's like saying that a lieutenant in the army suddenly became a general! By making it happen himself! It's only possible if there's a bug in the system that was hacked. Was it Microsoft Exchange? What's the bug? We'd like to know!

    Going into this made me more suspicious, because the Wikileaks site lists exactly 7 senior officials whose emails were hacked. Here's what they say:

    DNC

    All that's needed to accomplish this is a bent insider, like a junior Edward Snowden, or some good social engineering. In other words, more of the same approach that worked on Podesta. Otherwise, why would the hack be limited to exactly those 7 and no more?

    In other words, an examination of what was hacked leads to the strong suspicion that the "evidence" provided by the government has nothing to do with how the hacking was actually accomplished, or by whom.

    Conclusion

    Cyber-security is incredibly important. I don't care one way or the other that the DNC and Podesta were hacked. Shame on them for not caring about security when the world is full of bad guys. But I do care that many of our most important institutions such as our government and healthcare institutions fail to take it seriously, and when they do, are incapable of getting the job done. It hurts many of us, and someday could hurt us really badly.

  • Apple can help fight crime while maintaining privacy

    Apple can and should maintain the privacy of the information their customers have on Apple devices. But what if the owner is a criminal or terrorist, and the relevant law enforcement agency has a court-ordered warrant? Apple should bend over backwards to help the agency fight crime and terrorism. It can do this without "back doors" or any of the awful things that some people talk about.

    The government

    The government scares me. I don’t want them anywhere near my private information. They have way too much power. If any little thing goes wrong, someone in government can trample all over me. My fear is equal opportunity. If Republicans are in charge, some of them will be corrupt and will decide to use my private information to trample on my rights. If Democrats are in charge, same thing. And bureaucrats of whatever stripe … I shudder. I want to be able to have my private information encrypted and secure, so that no one – including the institutions who are supposed to be keeping us safe – has access to it. PERIOD.

    Sadly, the government already has whole huge piles of my private information all over the place in their files and computers. Moreover, the government appears to be incompetent at keeping private information private. The IRS has been hacked. The White House itself has been hacked. Even that biggest and baddest of security agencies, the NSA, had a massive insider breach. This is not the sort of thing that’s going to be fixed, because they don’t even have the theory of information security right, much less the practice. Details here.

    On the other hand…

    There are bad guys out there!

    Bad guys are bad. They want to steal things. Some of them want to hurt me. They have all sorts of reasons. Some are crazy, some are sociopaths, some are evil, some are driven by a religious and/or political ideology that leads them to commit acts of violence; sometimes we call them terrorists. People in various institutions have the job of keeping law-abiding people safe from the depredations of criminals, crazies and terrorists, and/or tracking them down after they’ve done one of the heinous things they are wont to do. These protectors include various branches of the military and other parts of the government, including the CIA, FBI, NSA and others. Like any normal, sane person, I want to be safe. I want someone to keep me safe from the bad guys, and when bad things happen, I want someone to track down the bad guys to prevent them from doing more bad, and to send a message to other bad guys that they probably won’t get away with whatever bad thing they have in mind.

    This means…

    The government needs to keep out of the private business of the citizens. We are part of a country ruled by a Constitution. There is a Bill of Rights, the fourth amendment in particular. HOWEVER: The government's job includes keeping us citizens safe while protecting our rights. Part of the job.

    The people who keep us safe and dig into crimes when prevention has failed need to be able to do their jobs. If the courts agree to issue a subpoena, they need to be able to search for evidence. Under the fourth amendment and codified in long-standing procedure, there is a process for ensuring that the privacy of law-abiding citizens is maintained, while at the same time ensuring that, with proper judicial approval, searches and seizures can be performed to maintain the safety of citizens.

    Under the right circumstances and controls, sane people want government law enforcement agents to do their jobs, protect us and catch wrong-doers.

    What about Apple?

    Prior to iOS 8 and the current brouhaha, Apple responded as it should have to requests of this kind: thousands of normal requests per year and hundreds per year involving national security. See here for details. Suddenly they changed. Here is the choice they made.

    Currently Apple has a well-deserved reputation as a criminal’s friend and supporter of terrorists. Do you think the bad guys don't pay attention? They do.

    What Apple should do

    Apple should become:

    • the best friend of law-abiding citizens who want to maintain the privacy that is their right under the Fourth Amendment, while at the same time becoming
    • the scourge of criminals and terrorists.

    Specifically, Apple should strengthen and grow the facility they already operate on their Cupertino campus to receive and crack the devices of criminals and others, under strict subpoena and court order control. As they do today. They can and should extend this valuable, safety-maintaining service to iOS 8 and all future hardware and software.

    Would this be expensive? What if it cost, say, $20 million a year? That amounts to less than 0.01% of the CASH that Apple has on hand. It would be a rounding error at ten times the cost.
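
    To sanity-check that claim, here is a quick sketch. The $216 billion cash figure is my assumption, based on the widely reported number for early 2016; the article itself gives no figure:

```python
# Sanity check on the "less than 0.01% of cash" claim.
# ASSUMPTION: Apple's cash on hand is taken as about $216 billion,
# the widely reported figure for early 2016; the article gives no number.
center_cost = 20_000_000       # hypothetical $20M/year for the cracking center
apple_cash = 216_000_000_000   # assumed cash on hand

fraction = center_cost / apple_cash
print(f"{fraction:.4%}")       # 0.0093% -- well under 0.01%
print(f"{10 * fraction:.3%}")  # 0.093% -- still a rounding error at 10x the cost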

    Apple could brand the center as the scourge of criminals and terrorists, and make their phones something that bad guys actively avoid using. That way, anyone who uses an iPhone is proclaiming that they’re a good guy – and they’re also proclaiming that Apple keeps their private information safe and secure, unlike (I’m sad to say) most government agencies.

    Is this possible? Yes. Apple has wisely avoided claiming that they are incapable of cracking a phone that is in their physical possession. Which are the only phones they should be cracking anyway. Should they give their tools to anyone else? NO WAY!

    What about phones that are in the field? Could Apple remotely hack them? Of course they could! Strictly under court order, strictly from the Cupertino Bat-cave, and solely on the phone identified in the warrant.

    Apple's ability to crack phones under these strictly limited circumstances has NOTHING to do with creating dangerous "back doors" or somehow defeating amazing encryption. It's about hardware and the software that runs on it, both of which are entirely of Apple's design and under their control.

    Apple has the opportunity to protect the privacy of its customers much more effectively than the government does, while at the same time helping law enforcement protect us against criminals and terrorists. I hope they'll step up and do the right thing.

  • Apple’s Cancer Prevention Strategy

    The CEO of Apple declared that he has joined the ranks of the nation's oncologists, and is working to prevent the government from forcing Apple to create a new form of cancer and "expose hundreds of millions of people to issues."

    ABC Cook

    The CEO of Apple is anxious to prevent future "issues."

    Let's look at the case of Brittney Mills,

    Mills pic

    This is an example of an "issue" that took place in April of 2015 in Baton Rouge, LA, long before the Apple CEO got worried about cancer. Here's the "issue" that Ms. Mills experienced:

    Mills killed

    Investigators still haven't been able to find out who killed her and her unborn child. They've tried hard.

    Mills phone

    They went to Apple for help. Apple refused to help the police get the evidence that might lead them to the person who killed Brittney Mills and her unborn child. The local district attorney wrote to the US Senate Judiciary committee about the case:

    Mills letter

    His pleas and those of Brittney Mills' family were ignored. The case of Brittney Mills isn't the only one:

    Mills many

    Law enforcement getting information from a dead person's cell phone is similar to getting information from their wallet: not something anyone would normally do — but when the person is dead, the only way to proceed.

    Apple's refusal to help Baton Rouge law enforcement catch the person who murdered Brittney Mills is being repeated in thousands of cases all over the US:

    Vance

    Apple's response? An escalating war of words. A half hour's worth in ABC's "exclusive" interview with the CEO.

    ABC Safety is important

    While declaring how important safety is, the Apple CEO says that "doing this," i.e., helping get information from the cell phones of murdered pregnant women, "could expose people to incredible vulnerabilities." Does this mean the Apple CEO is concerned about future "incredible vulnerabilities" that are worse than being murdered?

    And then we have the old slippery slope argument:

    ABC turn on camera

    OOOhhhh: law enforcement might turn on the camera!! I guess the Apple CEO thinks that's worse than being a pregnant woman living alone, opening your door at night for someone you know, getting shot and dying. And not being able to find out who did it.

    Now we get to what Apple is being asked by the courts to do, which is the equivalent of creating cancer:

    ABC cancer

    I demonstrated in my prior post that Apple has cooperated with law enforcement in the past, and given out private information in literally tens of thousands of cases, including at least a thousand cases a year involving national security. Apple was able to provide this information because it had written, for earlier releases of iOS, a much stronger version of what is needed for iOS 8. Apple has written it. It wasn't cancerous before. How would it be cancerous now?

    ABC expose people to issues
    Similarly, when he claims that helping the court would "expose hundreds of millions of people to issues," he assumes this software would somehow escape from Apple's control, when the prior versions did not.

    Apple does know a way to avoid the problem. And it has years of experience, over tens of thousands of cases, showing that the method is safe and effective.

    The issue is simple. Apple refused to provide the help needed to identify the murderer of Brittney Mills and her unborn child. Apple says providing that help is like unleashing a plague of cancer. I say to Apple: please unleash that cancer.

  • Apple’s Approach to Privacy, Terrorists and Criminals

    Apple is locked in a public battle with the prosecutors of the San Bernardino terrorist case about helping the FBI. Tim Cook has been in full public-relations mode asserting how this "unprecedented" request is like distributing a "master key" that will make everything on iPhones public. 

    The government's request (as opposed to how it's described in the media) is reasonable; it is a simple extension to iOS 8 of part of a service that Apple already provides to government agencies for tens of thousands of Apple devices. By refusing to continue providing the service, Apple prevents local police from returning stolen iPhones to their rightful owners. Apple prevents law enforcement from solving crimes of murder, sex abuse of children, sex trafficking, robbery and other crimes. And Apple prevents the FBI from keeping us safe from terrorists.

    The awful things Cook claims will happen if he complies are already enabled by horribly buggy and security-hole-ridden Apple software. Nothing the government has requested will make things worse.

    Apple’s official privacy policy

    What was Apple’s privacy policy before the recent war of words on the subject? The policy is clearly stated on the Apple website. There are lots of words about how Apple loves and respects its customers, and how Apple is wonderful. The words lead to this conclusion:

    Apple privacy policy

    That sounds pretty stark! No back door and no server access. Ever! That sure sounds like my information is secure, no matter what!

    Apple’s actions on privacy

    As it turns out, those are weasel words. Which you can find out by a little digging. All you have to do is go to their “government information requests” page. There they admit that they respond to subpoenas and search warrants. But they “limit our response to only the data law enforcement is legally entitled to for the specific investigation.” Well, maybe it’s not so bad…

    Scanning down the page, in HUGE type, is this assurance that practically no one is affected by all this:

    Less than 0.00673%

    An amazingly tiny fraction of “customers” have been affected by this grudging acceptance of government coercion.

    How much does that tiny, tiny fraction amount to? Being super-conservative about the calculation, I took the quarterly sales of iPhones only for the last 3 years (2013 to 2015) as reported publicly by Apple. Truncating each reported result to the lower million, the total is 546 million iPhones. The real number, including iPads and going back further in time, is probably more than twice that. But the arithmetic even for that number is interesting. Using Apple’s own 0.00673% number, the total is 36,745 customers.
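
    The arithmetic is easy to double-check with a short script, using the 546 million total and Apple's 0.00673% figure described above:

```python
# Double-check the affected-customer arithmetic described in the text.
iphones_sold = 546_000_000     # truncated total of Apple's publicly
                               # reported quarterly iPhone sales, 2013-2015
affected_rate = 0.00673 / 100  # Apple's own 0.00673% figure, as a fraction

affected = int(iphones_sold * affected_rate)  # truncate, as in the text
print(f"{affected:,} customers")              # 36,745 customers
```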

    That number does not include “national security” requests, which, according to the same page, amounted to more than 750 requests in the first half of 2015:

    2015 Apple security

    To summarize rhetoric and reality about Apple and privacy:

    Rhetoric: We don’t create backdoors and “have never allowed any government to access our servers. And we never will.”

    Reality: We dish out customer data as required, and do so by the tens of thousands. But we pout while we’re doing it.

    What Apple really, really does

    Dig a bit further, and you can download the details of what and how customer information is handled at Apple, in this document:

    Apple legal process

    Here’s a bit of the table of contents:

    Information from Apple

    You can see that the range and scope of information available goes way beyond anything you might imagine from scanning Apple's website pages.

    The document also declares that Apple can provide an incredible amount of information from any iOS device prior to 8.0, but “will not” perform data extractions from 8.0 or later. The extraction “…can only be performed at Apple’s Cupertino, California headquarters…”

    What the government wants

    The government’s request is short and to the point.

    They want help defeating iOS 8’s PIN brute-force avoidance mechanisms:

    Feds request 1

    Here’s what they suggest an acceptable means of providing the help would be, a piece of loadable software:

    Feds request 2

    They specifically request software that works for only that phone:

    Only on that device

    They don’t demand possessing the software; it’s OK if Apple physically has the device and keeps the developed software on site, without even requiring that government agents be present:

    Remote access

    And if Apple can think of a different way to accomplish the same results, it’s OK with the court:

    Other means OK

    In summary, the court will provide Apple with the terrorist’s government-issued iPhone, and wants Apple to create software that will enable the government to do the hard work of figuring out the iPhone’s PIN code so that the government can access the data on the phone. The government is willing to let Apple do this work with the phone at Apple’s offices, with no government agents present, wants the software to work only for the iPhone in question, and does not request a copy of the software.

    Tim Cook’s response

    Apple hacks phones and gives the government the private data of tens of thousands of customers. Probably a thousand times a year for national security issues. It does this in its facilities, using software it developed for the purpose.

    The feds are investigating a terrorist attack on US soil in which 14 innocent people were murdered. The phone in question wasn’t personally owned by Syed Farook; it was owned by the government agency for which he worked, and whose employees he murdered. Breaking years of Apple practice, Tim Cook refuses to help. He explains himself on the Apple website:

    Message to customers

    He declares the request “unprecedented.” Sure, if you ignore the tens of thousands of other requests Apple had no trouble satisfying.

    He says the order “threatens the security of our customers.” And the possibility of future terrorist attacks doesn’t?

    He says the order “has implications far beyond the legal case at hand.” Yes it does. But not the way he means it.

    A little further down, he gets to the crux of the matter:

    Cook build backdoor

    He claims he doesn’t have what the government wants. Everyone knows that, and it’s implied in the court order. But he had the equivalent for earlier versions of iOS.

    He claims it’s “too dangerous to create.” While he blathers about encryption and about how Apple can’t get at your data, here he makes no claim that the software is impossible to write – and it’s not! He’s just saying he won’t create it, because he’s too moral or something, and the software would be too "dangerous." Although more powerful versions of the requested software were built by Apple for prior versions of iOS, and they somehow weren't dangerous.

    He claims the request is for a “backdoor to the iPhone.” Wow. You can review the actual request above. It’s no such thing. It’s a piece of software that circumvents the iOS 8 defense against brute-force PIN-breaking. Apple gets to create the software and use it at their offices on the provided phone.

    Cook goes on:

    New iOS

    “The FBI wants us to make a new version of the iPhone operating system.” Maybe that sounds technical and accurate to someone who didn’t read the documents, but it simply isn’t true.

    “In the wrong hands, this software…” How exactly is it going to get in the wrong hands, Mr. Cook? Apple employees have full and unfettered access to the source code of Apple software, including iOS. Any time one of them felt like it, they could make an unauthorized version and spirit it to some off-site server, and do all sorts of evil with it. That was true yesterday, is true today, and will remain true regardless of what happens here. The current situation doesn’t change the chances of malicious software being used for bad purposes one iota.

    “…would have the potential to unlock any iPhone in someone’s physical possession.” BZZZTTT! What this software would do would be exactly and only what the government is asking for: make it possible to brute-force hack the PIN code, which has one million possible combinations for the default 6-digit PIN. For normal humans, this means you would have to:

    • Acquire someone’s iPhone
    • Get and load the hacking software onto it, assuming it has somehow wafted out of Apple
    • Then, by hand, try 6 digit PIN codes until you got to the one that worked
    • On average, this would occur after entering half the possible codes, a total of 3 million digits. This would take more than 34 days of continuous one digit per second attempts.
    • Or, if you really are a super-hacker, you could automate the process. Which I won’t go into here.
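
    The timing estimate above works out as follows; the one-digit-per-second typing rate is the assumption stated in the list:

```python
# Back-of-the-envelope timing for manually brute-forcing a 6-digit PIN.
pin_length = 6
possible_codes = 10 ** pin_length                # 1,000,000 combinations
average_codes_tried = possible_codes // 2        # on average, half of them
digits_typed = average_codes_tried * pin_length  # 3,000,000 digits

days = digits_typed / (24 * 60 * 60)             # at one digit per second
print(f"{digits_typed:,} digits, about {days:.1f} days")
# 3,000,000 digits, about 34.7 days
```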

    Cook then gets wilder:

    Cust letter Master key

    Yes, the software, once created, could, would and should be used on "any number of devices." Devices that were provided to Apple at their offices with proper documentation and court orders. Most of these devices, as today, would have been lost by their owners, and Apple is helping the owners identify them so they can be recovered. Many of these devices, as today, would be evidence in criminal proceedings. And hundreds of these devices per year will be related to national security issues, as they are today.

    I am very concerned about the FBI being blocked from tracking and stopping terrorists before they kill. But I'm equally concerned about the "merely" criminal aspects of this. For example:

    Post Vance

    Cook has more:

    Hack everything

    Because Apple built software, used by Apple on specific phones delivered with court orders to Apple facilities, the government will now be able to listen through your microphone or watch through your camera. How exactly does this leap happen?

    The fact is, Apple software was, is and will be chock full of security holes and other problems. Here is Apple's own list of the dozens of security problems that were fixed in iOS 7. After fixing all those problems, iOS should be secure, right? Apple then found more bugs, refused to fix them in users' devices, and instead released iOS 8 with no less than 53 additional fixes to security flaws. So how did iOS 8 go, with all those fixes? Not so well, according to Wired:

    Buggiest

    Finally, Tim Cook once more:

    Conclude

    Apple products have been buggy and filled with security holes in every release. They're riddled with back doors, side doors and bottom doors, all because of Apple's ineptness. It's not getting better. Mr. Cook wants us to fear that the mean government will force us to walk around without privacy. Well, we already are! And it's Apple software that's responsible! Extending Apple's existing practice to iOS 8 will not create a new situation — it will maintain Apple's historic cooperation with the legitimate law enforcement operations of government, protecting us from terrorists and criminals.

    What is this really about?

    I wish I knew. But it's hard not to think of money and market positioning. There is a large portion of the public that thinks that Wall Street and Big Corporations are evil. Meanwhile, Apple makes products that are used by millions of people who think this way. Apple wants to market itself as being for the 99%.

    But it has a problem. It's one of the richest, most valuable corporations in the world. It charges top dollar for its products, which are entirely made in cheap-labor countries. It plays games to avoid paying taxes. It's bigger and richer than Wall Street! It's even richer than the US Treasury:

    Apple cash reserves

    It's quite reasonable to imagine that Tim Cook is following in the Steve Jobs tradition of marketing magic to divert customers from looking at the numbers. Numbers that show that Apple is a corporate behemoth whose sales are slowing, whose new product initiatives have failed, and which is desperate to bolster its brand and hold onto customer trust (and revenue) it does not deserve.

  • What Baseball can teach us about OPM, Anthem and other Cyber-Thefts

    In baseball, teams play against each other. Each half inning, one team does its best to attack the other and score, while the other does its best to stop them. The teams are similarly staffed, and they alternate playing offense and defense. In computer security, teams also play against each other. The "home" team always plays defense, while the "away" team comes to town and tries to score against their hosts.

    Tiny, often remote "visiting teams" in cyber-war score massive victories against huge, well-funded organizations like OPM and Anthem. These are rarely quick "hit and run" attacks — they are more often months-long penetrations, during which massive amounts of information gold are marched out of the "well-guarded walls" of the clueless behemoth. What's worse, most people don't seem to care — imagine if a single gold bar were secreted out of Fort Knox: heads would roll! How can this happen? Why does no one seem to care?

    Baseball and Cyber-war

    First and foremost, baseball is visible. We can see it and understand it. Loads of fans come to stadiums to watch it.

    Yankeestadiumepa_450x300

    Cyberwar? It's largely invisible. It's as though the stadiums were empty.

    Yankees-268

    In baseball, we can actually see the team at bat competing against the defenders.

    Why MLB teams are shifting on defense more than ever - SI.com 2015-06-22 12-12-12

    It's pretty exciting! For the vast majority of people, there is no equivalent in cyber-war.

    The fans and managers understand the game; those closest to it have normally played it. They have strong opinions, for example, about the defensive shift maneuver, which is sometimes used against a pull hitter. Even if you've never heard of it, a simple diagram makes it easy to understand.

    Ortiz-shift
    In cyber-war there are also strong opinions, but the way most managers think about cyber-defense is simply inappropriate and ineffective. Not only is there no defensive shift, there is a complete lack of awareness when the enemy has been inside your walls for weeks, ransacking away. Because no one understands what's going on, including those in charge, the ineffective methods continue to be standard practice, even when better approaches are available. Retail stores, which actually care about loss prevention, generally have better theft prevention measures.

    Above all, there's this. The people who play baseball care about it.

    450897118_216591036
    So do the people who watch baseball. Cyber-war is way more than a game, but people just don't take it seriously. They don't even give the passion to it that they give to games! The individual computer users don't know or care, and neither do the managers.

    Conclusion

    Nothing will change in Cyber-war until we understand it, start caring about it and apply methods that work. In a fight between the smart and motivated against the clueless and unmotivated, the outcome is preordained.

  • Systemic Issues Behind the Cyber-Security Disasters at OPM, Citi, Anthem, etc.

    Our personal data is stored in the computers at large corporations and government organizations. We now have abundant proof that these large organizations are incapable of protecting our data. This is not a string of bad luck that will soon pass. These large organizations never had good security — they just weren't being attacked. Unfortunately, the security flaws are a direct outcome of the dysfunctional technical and management practices that lead to large-organization IT failures across the spectrum.

    Recent Security Disasters

    The security disaster at the government Office of Personnel Management (OPM) has been in the news recently. Here is a summary, and here is a timeline. OPM knew all about security and tried its darndest to be secure, spending over $4.5 billion on systems to prevent breaches, including a recent $218 million upgrade to the security system known as Einstein. All for naught. 

    In the private sector, there was the breach at Anthem, preceded by a string of security disasters at major banks and retailers involving tens of millions of consumer records.

    The Response to the Attacks

    We're seeing the usual responses to the problems.

    First and foremost, try to avoid letting anyone know there's a problem.

    Second, try to draw attention to all the attacks that were thwarted. The OPM is actually bragging about all the attacks it defends against! That's like a bank that has been totally cleaned out bragging about how many robbery attempts were thwarted.

    Finally, talk about how much you care, offer completely counter-productive services to consumers, and spend even more money on the stuff that didn't, doesn't and won't work. Ignore the fact that the incentives are all wrong, that in fact no one cares.

    No one is losing their job. No significant changes are being made. No one is running around like their hair's on fire. Ho-hum, it's business as usual.

    Systemic Issues are behind the Disasters

    Security in large organizations is broken. But that's just a side effect of the fact that IT in large organizations is broken. Not in detail — in principle. When the foundation of a building is made of jello instead of concrete, you don't fix it by adding more jello, trying a new flavor of jello, or getting everyone to walk slowly and carefully. You replace it with reinforced concrete, pronto! When the foundations are made of the wrong stuff, new foundations of jello will never help. Even if it's jello that costs billions of dollars.

    The Systemic Issues

    This is a subject that is long and deep. All the problems come down to two simple core thoughts: (1) computers are just like all the other things to which management techniques are applied, so standard-issue "good management" will solve any problems; and (2) computer security is just like all the other computer issues, and can be managed using the same standard techniques.

    Wrong and wrong.

    Computers and software in general are radically different from anything else we encounter in our normal lives, and they evolve faster by orders of magnitude than anything else in human experience. Managing a software project as though it were a home-building project leads to results that are, at best, 10X worse than optimal methods would yield, and at worst, complete disaster.

    Computer security in particular is not just another issue to be managed using standard techniques, which in any case yield horrible results. In computer security, we're dealing with smart, motivated attackers who are at war with us and naturally use the latest "weapons" in a rapidly evolving arsenal. While our attackers are at war with us, we plod along at a peace-time pace, scheduling security issues like just another item in a prioritized list. When the armed gang breaks through the back door of the warehouse, we eventually discover the break-in and schedule a response for sometime in the next couple of months. By the time we've installed new alarms, the gangs are already on their third generation of tools for defeating them.

    Computers are different than the other things we manage

    Computers evolve at a pace that is completely unprecedented in human experience.

    Most of what managers do to manage computers is modeled on what they do for everything else, and it makes things worse.

    Computers are incredibly complex! But somehow we imagine that people with no actual experience with computers can manage them, when we would never let someone who has never seen a baseball game manage a team, or someone who has never written an article manage writers.

    The vendors of hardware, software and services have evolved to provide incredibly expensive, ineffective products and services that are packaged to make top managers feel great.

    Computer security requires war-time actions, not peace-time ones

    Translating from physical security, managers insist that security is about walls, guards and Kevlar vests: the bad guys are out there, and our job is to keep them out. Wrong. The vast majority of security breaches result from the conscious or unknowing cooperation of insiders. Including at OPM.

    The bad guys are at war with us. By the time we've figured out that we've been robbed, the bad guys are long gone. By the time we're just wrapping up the requirements documents for our response, the bad guys have cleaned us out again.

    Once we finally deploy our best defense, the art of war has advanced and our defenses are useless, just like the Maginot Line in World War II.

    Conclusion

    We all know that the definition of insanity is repeating the same actions and expecting different results. In that sense, the approach that large organizations, private and public, take to computer security is insane. All the people in charge propose is doing what they've always done, only somehow harder and better. The alternative approach, while radically different from the current one, is simple, clear and actionable. The people in charge actively resist it today. They've got to embrace it if there is to be any chance at all of improvement in cyber-security.

  • Internet Driver’s Licenses Needed for Users

    We give kids sex education. We give them driver education, and require a driver test and license before driving. But we let any fool onto the internet to wreak whatever havoc they can on themselves and others without a second thought. It's time for a change!

    Education for Meaningful Use

    Education on the basics of how the internet and associated technologies work, and on how to control, respond to and interpret what you see, is totally neglected. There are no significant efforts that I know of to make people educated consumers of this important, ubiquitous service. But there is a more important issue…

    Education for Safety

    By far the most important subject for internet education is safety. Maintaining internet safety has some similarities to general safety, but is different in important ways.

    Internet "driving" safety

    The most important aspects of safety while driving are avoiding driving while impaired in any way and paying sharp attention to the road and other vehicles at all times. Impairment by drugs or alcohol, and distraction by texting or talking, Image-3-4
    are recognized risk factors.

    So imagine how hazardous internet driving must be when people don't even know how to read the road signs (the URLs) and can't tell that they've wandered onto a road constructed by criminals specifically to steal your car, drive it to your bank and make a big withdrawal! That's exactly what much of the internet is like. Here's an example of a more brazen attack (image from a good guy, Yoo Security), one that demands you send the money yourself: ICE
    Unfortunately, there are criminals out there who have grown far beyond simple smash-and-grab operations. These sophisticated criminals with a long-term view trick you into "driving" onto their criminally-constructed "road" for the sole purpose of making your car an instrument for stealing from other people or organizations. They can make your computer into a zombie that participates in botnets, and it can serve that purpose for minutes or years without your awareness. Is the problem big? You betcha. More computers have been hi-jacked into botnets (maybe yours!) than most people realize:

    Botnets
    Sometimes, of course, the criminals are stupid, greedy or malicious — I guess those are the drop-outs from the "criminals should be good citizens" certification program. So your hi-jacked device could slow to a crawl, do weird things, look over your shoulder as you type until the criminals have what they need to drain your bank account or max out your credit card, or even (just because it's fun!) wipe out your machine while leaving a cute "It was me! Have a nice life!" message on your screen.
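    Learning to read the road signs can actually be taught. Here's a sketch, in Python, of the check an educated "driver" does in their head when looking at a URL. (This is illustrative only: it naively treats the last two labels of the hostname as the owning domain; real code should consult the Public Suffix List, and the chase.com examples are just stand-ins.)

    ```python
    from urllib.parse import urlparse

    def registered_domain(url: str) -> str:
        """Naively take the last two labels of the hostname as the owning
        domain. A sketch only; real code should use the Public Suffix List."""
        host = urlparse(url).hostname or ""
        parts = host.lower().split(".")
        return ".".join(parts[-2:]) if len(parts) >= 2 else host

    def looks_like(url: str, expected_domain: str) -> bool:
        """Does this URL really belong to the domain you think it does?"""
        return registered_domain(url) == expected_domain.lower()

    # The classic trick: the real bank's name appears only as a *subdomain*
    # of a criminal's domain, so the sign reads right but the road is wrong.
    print(looks_like("https://www.chase.com/login", "chase.com"))               # True
    print(looks_like("https://chase.com.secure-login.ru/login", "chase.com"))   # False
    ```

    The point of the exercise: the part of the road sign that matters is the end of the hostname, not the beginning, which is exactly what uneducated drivers get wrong.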

    Internet E-mail fraud

    How often do you get a paper letter purporting to be from your bank, asking you to mail back your account number just so they can verify that everything's OK? Never, most likely. And if you got one, do you think you'd respond as requested? Apparently almost no one would: criminals are the supreme capitalists, and they abandon unprofitable efforts before long.

    But how about letters on the internet, i.e., e-mail? Along with everyone I know, I get an amazing number of criminal solicitations every day, ranging from the laughable (at least to me) to the amazingly credible. Data-driven capitalists that they are, the criminals would not persist unless enough of these schemes worked to cover the costs and trouble of running them, and certainly to beat getting a legal job. I've seen fewer solicitations from Nigeria lately, but the slack has been taken up by Libya.

    Here's one of the new breed from Libya:

    Libya

    Here is a somewhat more plausible one from a place that really could be your bank:

    Chase
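    Part of what an educated e-mail reader does with a note like the one above is compare where a link *says* it goes with where it *actually* goes. Here's a Python sketch of that one heuristic (the domains are made up for illustration; real phishing filters use many more signals than this):

    ```python
    from html.parser import HTMLParser
    import re

    class LinkAuditor(HTMLParser):
        """Flag links whose visible text names a domain that differs from
        where the href actually points. One phishing tell among many."""
        def __init__(self):
            super().__init__()
            self.href = None        # href of the <a> we're currently inside
            self.text = ""          # visible text accumulated for that <a>
            self.suspicious = []    # (visible text, real destination) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.href = dict(attrs).get("href", "")
                self.text = ""

        def handle_data(self, data):
            if self.href is not None:
                self.text += data

        def handle_endtag(self, tag):
            if tag == "a" and self.href is not None:
                # Domains mentioned in the text the victim actually reads.
                shown = re.findall(r"[\w-]+\.(?:com|net|org)", self.text.lower())
                if shown and not any(d in self.href.lower() for d in shown):
                    self.suspicious.append((self.text.strip(), self.href))
                self.href = None

    auditor = LinkAuditor()
    auditor.feed('<a href="http://evil.example.ru/x">Log in at chase.com</a>')
    print(auditor.suspicious)  # [('Log in at chase.com', 'http://evil.example.ru/x')]
    ```

    A link whose visible text honestly matches its destination passes this check; the criminal's letters usually don't, because their whole business is the mismatch.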

    Conclusion

    Uneducated internet users cause billions of dollars of harm to themselves and others every year. You'd think this would result in an outcry for education from those users and the people who know them. You might think it would merit a bit of attention from the institutions that so assiduously and expensively educate, authorize, license and otherwise keep us on the straight and narrow. When I'm in Central Park in New York, there are rangers watching my every move; they set me straight when I ride my bike where I'm not supposed to, or walk in one of the ever-changing restricted areas. The conclusion is obvious: every move I make in the Park is more worthy of watchful restriction by people in uniforms than the millions of actions on the internet that seem, at least to me, far more destructive. I must be missing something.

  • How to Achieve Cybersecurity: Motivation

    The problem is big. It's getting bigger. Here's one summary of what's been happening:

    Hack Attacks

    What's the problem here? Is it really so hard to achieve cybersecurity?

    I suggest that the issue is clear and simple: the people in charge of keeping your information safe are not motivated to keep it safe. The consequences to them personally of failing to keep it safe are minimal, and so they simply don't take the trouble to do it.

    Motivation and consequences

    Whether we like it or not, people are motivated on the positive side by rewards and on the negative side by punishments. If you see people acting in a certain way, ask: what incentive is encouraging that behavior? The incentive could be positive (you get something good) or negative (you avoid something bad). A great deal of human behavior can be explained by personal incentives: rewards and punishments.

    Incentives in Cybersecurity

    So what happens to people in these companies when one of the big data thefts happens? Are the front-line drudges punished but the executives given a free pass? Do the people where the buck supposedly stops lose their jobs, while the worker bees who were just executing a bad plan are let off lightly? Answer: there's some bad publicity, but no one loses their job, no one's pay is docked, nothing!

    If no one at the company had even gone through the motions of trying to keep your data secure, the publicity might be truly bad. But that's what regulations are for: CYA. The company claims it was following all the regulations that are supposed to keep data secure. So how is it their fault if, in spite of all their excellent, by-the-book efforts, the data walked out the door anyway? Case closed. The company and all its employees, from top to bottom, are off the hook!

    Incentives and Motivations

    When a company loses money and market share, the CEO is likely to lose his job. When a person in accounting delivers bad data, they're likely to lose their job. When a department does really well, the people in charge are frequently given bonuses or promotions. They get better jobs and make more money. In most industries, sales people are incentivized by commissions — if they sell more, they make more money. It's everywhere. To encourage good behavior, reward it. To discourage bad behavior, punish it.

    Everyone says they're concerned about protecting your data. They use as evidence the fact that they conform to all relevant regulations and spend lots of money on security. So if, in spite of all this, the data is lost, it can't possibly be their fault!

    Does that mean the regulations themselves are bad or ineffective? No one is claiming that (except for me and a few other voices in the wilderness), but think about this: when has any regulator lost anything because they were doing a bad job at regulating? The very notion boggles the mind!

    Bottom line: they have no incentive to protect your data! We know this because, when people are properly motivated to get a job done, they somehow find a way to get it done. The fact that they are unmotivated and have bad theories practically guarantees failure.

    Conclusion

    Lack of motivation.

    No incentives.

    Ineffective regulations.

    Therefore, cyberthefts will continue unabated until this changes. Q.E.D.

  • Methods for Effective CyberSecurity

    The methods for achieving effective cybersecurity for a large class of applications are simple and obvious, but almost never implemented. If the methods were implemented, they would prevent the kind of massive, high-profile data loss that has been increasingly in the news. The methods make common sense to most normal people – but as we all know, computer “experts” are anything but normal. The industry needs to get it together, stop spending massive amounts of money on futile efforts to secure consumer data, and start implementing common-sense measures that work!

    The current approaches to CyberSecurity are fundamentally flawed

    That’s why they don’t work! It’s as if you’re playing pool, missing a lot of your shots, and spending lots of effort gesturing, jumping and grunting as each shot fails to achieve its objective. Do you think your problem is that you’re not jumping vigorously enough or grunting loudly enough? That’s what most enterprise responses to cyber-insecurity amount to. Increasing the money spent on things that don’t work won’t suddenly make them start working.

    The basics

    No matter what methods we use, if we continue to deploy large numbers of security guards who are nearing retirement against small, smart, fast-moving ninja bad guys, we’ll lose. If we continue fighting the last war, we’ll lose. If we continue to think that this game is all about how high and thick the walls of the castle are, we’ll lose.

    New approaches, new methods

    They’re not really new – like most good ideas, they’ve been thoroughly proven in other domains. We know they work. It’s a matter of adapting them so they apply to our computer systems.

    A lot of smart computer people have worked on the security problem for a long time. The issue isn’t something abstruse like better encryption algorithms. It’s simple!

    First, realize that anybody who walks in the door could be a bad guy.

    Second, monitor and track the valuable stuff that you don’t want walking out the door.

    Both of which, believe it or not, we fail to do today inside computer systems!

    How retailers do it

    Retailers with lots of low-value goods like grocery stores have store monitors and checkout areas. Anyone could be a thief, so people are assigned to monitor actions accordingly. Some goods may be valuable and easy to hide, like razor blades. Those are often displayed, but require a store employee with a key to let you get them.

    Clothing stores frequently have security tags on every single item. The tags are removed using a special tool during the check-out process. If you try to walk out of the store with an item that is still tagged, alarms ring and security people grab you.

    Stores with very high value goods, like jewelry stores, have locked cases and a heavily human approach to security. Basically, at least one person watches each customer (and sales person!) who is handling jewels at all times. Staff are disciplined about carefully limiting the number of items outside a locked case. While the guards appear to watch the customers (i.e., the potential thieves), what they really do is watch the jewelry: they track each item until it’s been bought or safely returned to its case.

    The retail approach to securing valuable items is clear: using whatever combination of automated and human means makes sense, track every valuable item, and assure that when an item goes out the door, it has been cleared to go out with the person carrying it.

    Applying Cybersecurity methods to retail

    What would retail look like if we used the kind of methods used by computer experts?

    First, every store would be surrounded by thick, high walls. No display windows! There would be strictly controlled ways of getting in – think TSA security at an airport. Further imagine that the world is awash with fake and stolen ID’s, so that getting into the store, while odious for legitimate customers, is not too hard for a skilled bad guy.

    Now imagine that once you’re in, there is no one watching the goods, there are no security tags on the clothes, no security cameras and no guards. You can grab a string of shopping carts, pile them high with goods, and wind slowly through the aisles. At check-out – well there is no check-out! You’ve been thoroughly vetted on the way in, after all, so you must be OK. When you’re done “shopping,” you can just leave! With your mountains of goods!

    Of course, most visitors to this imaginary store are legitimate. They put up with the horrible entrance gauntlet because all stores have something like it. They get what they need and somehow arrange with the store to pay for it. But there’s nothing to stop thousands of bad-guy visitors from walking out with thousands or millions of items each, or millions of visitors from walking out with normal-sized shopping carts full. Whatever works.

    You might think I’m exaggerating. I wish I were.

    Applying Retail methods to Cybersecurity

    It’s a bit more technical and less visual to see how retail methods can be applied to computer systems, but the basic concepts are clear. While current cybersecurity focuses on perimeter defense (like TSA security for stores), the retail approach would be a bit looser. After all, if the bad guys get in but can’t get away with anything valuable, they haven’t accomplished much, have they? How proud is a bank robber who’s broken into the safe but can’t leave with the dough? How fruitful is his career of crime if, every time he passes the demand note to the teller, she just smiles and says “next customer, please?”

    Applying the retail method to computers requires a completely new approach to tracking what visitors do when they’re inside the computer. While tracking their actions is important, what really needs to be done is to track the “goods”: the valuable data items. The retail approach would differ according to the value of the items. If they’re like clothing, each item would be checked on the way out to make sure it’s authorized to leave. If they’re like jewels (for example, personal information), each item is watched like a hawk the moment it’s “picked up” by a “customer” (program). Does the customer have a couple of jewels? That could be OK, but we’re more alert. Does the customer have ten or more? Quietly circle the customer, watch the doors, and make sure there’s no escape.

    The method needs to be extended to apply to the unique circumstances of the computer. Computer bad guys can easily assemble thousands of confederates to do their bidding. The bad guys can dress and act however the boss wants them to. However, they are unlikely to act just like normal shoppers. But I don’t want to take this too far in a blog post – we’re coming up to the edge of methods I’d rather not disclose.
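    To make the jewelry-store analogy concrete, here’s a minimal Python sketch of a tracker that counts how many “jewels” (sensitive records) each session has picked up and escalates accordingly. The thresholds and names are hypothetical illustrations of the idea, not anyone’s actual product, and a real system would weigh far more than a raw count.

    ```python
    from collections import defaultdict

    # Hypothetical thresholds, echoing the analogy above: a couple of jewels
    # is fine, a few means watch closely, ten or more means circle the customer.
    WATCH_THRESHOLD = 3
    BLOCK_THRESHOLD = 10

    class JewelTracker:
        """Count sensitive records 'picked up' per session and escalate."""
        def __init__(self):
            self.counts = defaultdict(int)

        def record_access(self, session_id: str, record_id: str) -> str:
            """Called each time a session touches a sensitive record."""
            self.counts[session_id] += 1
            n = self.counts[session_id]
            if n >= BLOCK_THRESHOLD:
                return "block"   # watch the doors, make sure there's no escape
            if n >= WATCH_THRESHOLD:
                return "watch"   # could be OK, but we're more alert
            return "allow"

    tracker = JewelTracker()
    actions = [tracker.record_access("sess-42", f"ssn-{i}") for i in range(12)]
    print(actions[0], actions[4], actions[11])  # allow watch block
    ```

    Note the design point: nothing here depends on how the visitor got in the door. Even a “customer” with perfect credentials gets circled once they pile up too many jewels, which is exactly what perimeter-only defenses never do.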

    Conclusion

    Computer systems, corporate and government, will continue to be breached at an alarming rate, which is of course much higher than is publicly disclosed. More money will be spent and people hired. More standards will be set, regulations promulgated and enforced. As should be obvious by now, most of the money will be wasted, most of the people will accomplish nothing, and the regulations will increase costs while making things worse. Unless something changes.

    The problem of cybersecurity can be solved. But it can only be solved if: we acknowledge we’re at war and act accordingly; we apply within the guts of our systems common-sense methods whose principles are clear, obvious and proven in other domains; and we start acting as though we actually want to solve the problem, as opposed to the current strategy of denial, cover-up and blame-shifting.

  • My Anthem Account was Hacked

    I get my health insurance through Anthem. Corporate Anthem was hacked, and the company has made a mess of its customer relations since the hacking, as I've described after receiving their "help." I now see evidence that my personal information was accessed, and Anthem has never told me.

    Anthem and HIPAA

    Anthem is really committed to HIPAA. Here's how they explain it on their website.

    Anthem hipaa

    It's clear from this that Anthem is very committed to privacy and security. Both! Here's some of what they say about privacy.

    Anthem privacy

    And here's some of what they say about security.

    Anthem security

    Anthem clearly had all the bases covered. Except they didn't. What's mind-blowing to me is that, in spite of all the security-privacy-lah-de-dah, someone walked off with the personal information of tens of millions of customers — and no alarm even went off! The breach was actually discovered by an alert grunt in the trenches.

    Anthem sys admin
    Hacking David Black

    Anthem has communicated to its members that it would let them know when it discovered whether any particular member was among those who had been hacked. I haven't heard a thing from them. But I now know that it's likely that my information was stolen.

    I went into the standard Anthem consumer portal a little while ago.

    Anthem header

    I poked around a little, and discovered this little bombshell:

    Anthem last visit

    In other words, "I" had logged in at quarter after one in the morning on Saturday, Jan 31, 2015. However, I personally wasn't logged into Anthem at that time. I was asleep.

    The Good News

    There's good news here! I already knew that Anthem either didn't know whether I'd been hacked or had decided to not tell me, so no change there. My opinion of Anthem was already subzero, so it didn't get noticeably lower. Furthermore, in spite of all this, Anthem executive management will continue to rake in millions, and they're pretty sure that profits won't be harmed:

    Anthem won't hurt earn
    What a relief!

    Conclusion

    Nothing new here. Big corporations comply with all the burdensome regulations, and tens of millions of private records somehow get stolen. The result: lots of face-saving talk that does no one any good, and increased competition-stifling regulation that does nothing to solve the problem. Nothing to see here, people … move along…

     

  • The Anthem of Cyber-Insecurity

    I'm hoping that people will start writing songs about cyber-insecurity, and that a good one will emerge that will be acclaimed as the "Anthem of Cyber-Insecurity." It will be sung quietly by groups of computer users who hold hands as they hear the details of yet another massive computer breach. While singing, some of the much-abused users will be silently praying that their "protectors" get bombed by Facebook friend requests by identity-thieved replicas of themselves, while others will pray for the end of "help" that isn't.

    The Anthem Attack

    I'm one of those praying users, because I'm a member of Anthem, the company that "lost" the personal information of "tens of millions" of its members sometime in 2014; they're not sure how many, whose records were "lost," or when it happened. Here's a personalized communication I received from Anthem:

    Anthem When

    Anthem has made a priority of communicating with its customers about the attack. When you're in the glare of publicity like this, great care surely goes into each statement on the case. That's probably why I have received more than one missive with the same date, each spinning things a different way. For example, the Feb 13 note above refers simply to "cyberattackers" who "tried to get" private information, raising the possibility that their efforts were foiled by the valiant workers at Anthem.

    Check out the identically-dated but substantially different Feb 13 note below.

    Anthem 1
    In this second attempt, Anthem tells us about "cyber attackers" (now two words instead of one) who executed a "sophisticated attack," and "obtained personal information" "relating to" their customers. I guess it was successful? But maybe not, because the behavior of these guys isn't a felony, it's merely "suspicious activity" that "may have occurred." Furthermore, they carefully state that the personal information wasn't the customer's actual personal information, but merely "related to" said personal information. Hmmm….

    What "May Have Been" Lost

    So what information may have been lost during this incident that may have occurred at some unknown time? A fair amount.

    Anthem 2

    Again, what's clear is that Anthem isn't clear. The information "accessed" (wasn't it stolen?) "may have included names, …" But maybe not, we are led to believe. If the information that may have been accessed may have included my Social Security number, why isn't it possible that all sorts of other information was also accessed? We are supposed to be reassured that "there is no evidence at this time" that this actually took place — a nearly ideal way of phrasing something that is supposed to sound like reassurance, but provides full CYA.

    Anthem Provides Protection

    Anthem has a whole website set up to let its members know what's going on, and to let customers know how they can get protection against the possible unauthorized access of their personal information.

    Anthem header

    Here's what Anthem will do: they'll pay a third party to help you out.

    Anthem protections

    If you get in trouble, you can call the service, and they'll help you out. Meanwhile, your personal information may be in the hands of people who were not authorized to access it. If they are the kind of people who do "unauthorized" things, who knows what perfidy they'll stoop to?

    Anthem's Additional Protection

    The basic service you get isn't protection at all, as they make clear. Nonetheless, "For additional protection…" — on top of the non-protection they already provide — you can sign up for more. What exactly is this more? Quite a bit! Here's some of it:

    Allclear features

    Wow, and all for free! Let's sign up!

    So you enter your e-mail, and get a code, go to the website, enter the code, and finally get to register for protection.

    What happens next? Here's the page:

    Allclear register

    Wow, this is amazing!

    I have a chance to enter into a website a good fraction of the private, personal information entrusted to a giant insurance company which, while under their stewardship, "may have been accessed" by "unauthorized" entities.

    The security geniuses who kept my information secure want me to give it again to a company that they endorse as being wonderful security experts. Anthem was just terrific at keeping my information secure — it goes without saying that their endorsement of the security of this partner they've just picked is rock-solid.

    These guys are bureaucrats. Read this about bureaucratic security cred. And for more, this.

    Summary

    Anthem's revenues are greater than $60 billion. They can afford to keep customer data secure.

    Anthem's executives are paid enough to do their jobs well. Last year, the CEO made over $16 million and the CFO over $7 million.

    And yet…

    It took a guy at the bottom rung of the ladder to pay attention and notice something was wrong; had he not cared, the outflow of personal data would still be going on, as it had been for an indeterminate amount of time before the alert employee's observation.

    No system or procedure established by the rich, giant entity had anything to do with noticing the breach, much less preventing it.

    Everything about what they've done since exhibits the same lack of attention to detail and I-don't-care attitude that made the breach possible. What they mostly seem to want is to dash off letters riddled with errors and assurances, focused above all on their public image.

    Their offer of "protection" is a cruel joke, exposing the gullible who accept the offer to further dissemination of their private information.

    Conclusion

    I'm waiting for that anthem as I sit, holding hands in a circle with my fellow users, thinking dark thoughts. And I'm as likely to enter my personal data into the Anthem authorized "protection" service as I am to publish it on this blog.

  • Cyber-Insecurity and the Maginot Line

    The French built the famous Maginot Line after WW I as the perfect defense against another German attack. We all know how that worked out; it became the textbook example of “fighting the last war.” With computers, the speed of evolution is literally hundreds of times faster than with armaments. That’s partly why in cyber warfare, the vast majority of money and effort is spent fighting the last war, which partly explains why we are so cyber-insecure and why it’s so important to get way smarter about cybersecurity than we are.

    The Maginot Line

    According to the history books, the French (among others) “won” World War I. The French certainly thought so. The French generals definitely thought so.

    The French decided that they wanted to “learn the lessons” of the war and apply them to preparing for the next war with the Germans.

    They knew that the technology of war evolves. They were well aware that, once they recovered from their post-war deprivations, the Germans would continue to advance the weapons of war. They were confident that heavily armored vehicles (tanks) would evolve from their nascent status during the “Great War.” To make a long story short, after considerable deliberation, they designed and built the Maginot Line as the ultimate defense against German attack.

    The name Maginot “Line” implies that the Maginot whatever was line-like in nature. The reality is richer and more interesting. As this diagram indicates, Maginot line
    it was a rich complex of systems, stretching more than 10 miles from the border posts to the back.

    Here, for example, is an element in the Maginot line. Hochwald_historic_photo
    Things like this would contain machine guns and/or anti-tank guns.

    It was built over about 10 years, from 1930 to 1940, and was extolled as a “work of genius” by military experts.

    The Maginot Line at War

    The Germans attacked on May 10, 1940. By May 21, the Germans had the Allied armies trapped by the sea on the northern coast of France. German forces arrived at an undefended Paris on June 14, and forced the French into an armistice on June 22. France, victors in World War I and creators of that work of genius, the Maginot Line, fell in about six weeks.

    How did it happen? In retrospect, it’s pretty simple: the Germans read the French script for how the war was to be played, and refused to play the part written for them. Their tanks simply bypassed the invincible Line, and the French planes were inferior in design and number to the German planes.

    Bundesarchiv_Bild_101I-401-0240-20,_Flugzeug_Heinkel_He_111

    Even though the English fed the French details of German operations obtained by breaking the Enigma code, their inferiority was so great that they still lost!

    And how could the French possibly have won when the Germans had generals who looked like this?

    Bundesarchiv_Bild_146-1987-121-30A,_Hugo_Sperrle

    Looking back on the Maginot Line

    It’s hard to find a better example of “fighting the last war” than the Maginot Line. But surely everyone learned the lessons of how bad it is to fight the last war, right? Nope. That’s one of the reasons why the Maginot Line serves so well as a metaphor, going well beyond its role in history. It serves as an oft-ignored beacon for what you should not do.

    The Maginot Line and Cyber Insecurity

    We can make ourselves feel comfortable by calling it cyber-security, but the reality is that anyone involved with computers is somehow involved in cyber-warfare, whether as a civilian (most people, the “users”) or as a professional. Most computer professionals like to think they have civilian jobs in the computer industry, but the fact is, they’re involved in cyber-warfare no less than the people who transport military supplies to the soldiers are involved in warfare. Everything they do makes a contribution to either winning the war or losing it.

    How’s the cyber-warfare going? How do most wars go when the leaders refuse to acknowledge they’re at war? Yup, that well. We act in every way like we're at peace, and insist on peacetime software development methods, while on the other side, hosts of bad guys fully acknowledge they're at war, and it's a war they intend to win.

    The leaders of our computer systems insist that they’re doing everything they can to maintain cyber-security. Their words are often backed by money. It’s not unusual for 10% of a company’s IT budget to be spent on cyber-security. Unfortunately, the vast majority of the money and the efforts go to building the computer version of Maginot Lines, systems that the people in charge are convinced are brilliant, but which are in fact generations behind the bad guys who are constantly attacking them.

    There is a natural tendency to fight the last war, no matter what you’re doing or where you work. Many people are aware of this tendency and try to avoid it, just as the people who built the Maginot Line tried to avoid it. They genuinely tried their best to take into account the advances that would take place, and plan for that future state. But the Germans were more advanced than the French planned for, and more clever.

    So what do you think happens in a field where the technology advances faster than in any other domain of human experience? If fighting the last war is hard to avoid in domains where patterns and practices evolve slowly, how hard is it in a domain that advances hundreds of times more quickly than anything in history?

    That, in a nutshell, is why the vast majority of the billions of dollars spent on cyber-security has the net effect of wasting money and making us cyber-insecure.

     

  • Cyber Security and Cyber Insecurity

    People talk about “cyber security” as though it’s something we have; they say we’d better be careful (i.e., spend more money), because awful things might happen if we become cyber insecure.

    Sorry, but that train has left the station. Our computers and networks, government, corporate and personal, are already unbelievably overrun by bad guys of all sorts. Not just attacked – overrun; the bad guys are already on the inside, doing stuff that would horrify most people if they could see it or understand it. There are millions of mostly-electronic, mostly-invisible (to most people) instances of thefts and vandalism every year.  And it’s getting worse.

    How Bad is our Cyber Security?

    It’s really bad. While hard to estimate accurately, there is good evidence that over a quarter of all web traffic is generated by bad guys. Think about it: it’s as though every street you walked or drove along were crowded with terrorists, gang members, and other truly frightening people; not just people who looked scary, but people who were genuinely bad, out to do damage for their own benefit or just for “fun.”

    A lot of this seems to be low-level crime that many people don’t notice, like bad bots.

    Grandma
    But even in that case, real money is involved!
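    The “bad bots” claim is easy to explore on your own server logs. Below is a deliberately naive Python sketch of estimating the bot share of traffic by User-Agent string. Real bot detection is far harder, since serious bots lie about their User-Agent, and the signature list and sample log here are invented for illustration:

```python
# Substrings that self-identifying bots commonly include in their
# User-Agent. Both the list and the sample log are invented.
BOT_SIGNATURES = ("bot", "crawler", "spider", "scraper", "curl", "python-requests")

def bot_fraction(user_agents):
    """Fraction of requests whose User-Agent matches a bot signature."""
    if not user_agents:
        return 0.0
    hits = sum(1 for ua in user_agents
               if any(sig in ua.lower() for sig in BOT_SIGNATURES))
    return hits / len(user_agents)

log = [
    "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0",
    "python-requests/2.31.0",
    "Mozilla/5.0 (compatible; Scrapybot/1.0)",
    "Mozilla/5.0 (Macintosh) Safari/605.1.15",
]
print(bot_fraction(log))  # -> 0.5
```

    The published estimates come from far more sophisticated classifiers, but the counting idea is the same.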

    Not only is every internet “street” crowded with smart thugs, they are far more effective than the famous robbers of the past. Willie Sutton was a famous bank robber. Over his 40-year career, he got away with an estimated $2 million, and spent decades in prison.

    Willie_Sutton

    According to a 2009 LexisNexis study, consumers lost over $4 billion that year; banks lost $11 billion, and retailers lost an astounding $190 billion, all from just one source of fraud: credit cards. Before long, we’ll be talking serious money here…

    Slick Willie Sutton is probably rolling in his grave, seething with jealousy.

    There are also really scary things like cybercrime directed at things that can make big explosions and kill loads of people. It’s on its way. I’m not going to talk about it any more here, but suffice it to say that I’m not feeling great about it.

    If it’s that bad, why isn’t it front-page news?

    What kind of visual are you going to have on the evening news for another cyber-theft? What is the chance for the news babe to stick a microphone in some grieving person’s face and ask “how did it feel when [the bad guy] [did that awful thing] to [you or your close relative or your neighbor]?”

    By contrast, even a single awful thing happening to one person can make a great news story, with visuals and perhaps an interview with a person in distress. No cybercrime (so far) has generated anything close to the compelling visuals of a single carjacking in Chicago, for example.

    Carjacking 1
    In addition, the juicy targets for cybercrime are big organizations. When these organizations are hit, they tend to go to great lengths to cover it up. Most of the big hits (and they’re getting bigger and bigger) don’t make the news – partly because they don’t make “great news” (see above), and partly because the big organizations that get hit keep it real quiet – after all, there are no plumes of smoke, explosions or bleeding people to draw attention to the disaster.

    Conclusion

    Cybersecurity is way down on the priority lists of most people, for a variety of reasons, among them that it’s mostly invisible and hard to understand. This in spite of the fact that the fruits of cyber-crime from credit cards alone are thousands of times greater than those of physical robbery, with a fraction of the conviction rate. Cyber-crime is safe and profitable for those proficient in it! Cyber-crime, along with every other aspect of cyber-insecurity, is already at unprecedented levels and getting worse, while most of us, including those in charge who should know better, stroll along whistling, as though everything were just fine. It’s NOT!

  • Bureaucracy, Regulation and Computer Security

    There always seems to be a bureaucracy ready to tell you how to keep your computer systems secure; or, worse, to tell you what you must do to be in compliance with the regulations promulgated by the bureaucracy. "It's for your own good," they say.

    If you are forced to comply with some regulation or other, you'd better comply. But you're a fool if you confuse compliance with keeping the assets of your business actually, you know, secure.

    Bureaucrats can't keep simple physical things secure

    Computers are complicated. Construction sites? Not so much. Fences, cameras, sensors, guards and an alert, well-managed staff should do the trick. But when bureaucrats are in charge? Forget it.

    David Velazquez was in charge of security at the World Trade Center construction site. Mr. Velazquez is a Columbia University graduate and had a 31-year career at the FBI, ending as head of the Newark field office. You might think well of the FBI, I don't know, but what I do know is that it's a giant government bureaucracy, and Mr. Velazquez appears to have applied the lessons he learned there to his new job.

    Here is one of the crack guards "on duty" at the work site:

    Sleeping guard
     

    That may explain why a group of guys was able to get to the top and jump off, recording video all the way down:

      Base jumper

    Then a kid slipped through a fence and made it all the way to the roof, unnoticed by the sleeping guards:

    Security kid

    The biggest, baddest bureaucrats of all can't keep their own computers secure

    Alright, maybe the FBI are amateurs. Let's go to the best of the best, the scariest cybersecurity experts of all, the NSA.

    NSA

    These guys are in charge of keeping us secure from the worst of the worst. A cover story in Wired Magazine told us all about it.

    Wired cover

    Loads of people using piles and piles of super-secret cyber magic are on the case:

    Wired story 1

    If anyone can achieve cyber-security, surely these guys are it:

    Wired story 3

    But we all know how that turned out. It took just one moderately clever person with bad intentions, and all the vaunted cyber-wonderfulness was for naught. Among Mr. Snowden's myriad revelations was the previously secret "black budget" of the US intelligence bureaucracy, an astounding $52 billion. Do you think that if they doubled the budget they could have done a better job? Hmmmm.

    Bureaucrats and Security

    Why should you listen to someone who can't do it themselves? If you want to stop smoking, do you eagerly take the advice of someone who smokes? If you want to get rich, do you take advice from poor people? Bureaucrats are sure they're right — because they have no competition, and there's no one who has the power to tell them otherwise.

    Why this matters

    The laughable ineffectiveness of bureaucratic security in general, and cybersecurity in particular, can matter a great deal to you. Here's why:

    • If you do what the bureaucrats tell you to do, you'll spend a lot of money.
    • Following the regulations makes everything slower and less efficient. You'll hurt your business.
    • If you get conned into thinking that following the regulations means that you're secure, you're in big trouble. You will be more vulnerable to a business-damaging breach than ever before.

    What you should do is simple: establish effective and efficient security by the best means available, which will typically be unrelated to what the authorities solemnly declare. Then, do as much regulation-following as you need to do, whether it's PCI or any of the rest of the alphabet soup, to avoid punishment.

    Is this cynical? Of course! But it's also real life.

  • Edward Snowden, Daniel Ellsberg: Ineffective Security, then and now

    In 1971, the New York Times started publishing excerpts from the closely guarded, highly top-secret Pentagon Papers. It was an explosive public exposure of long-held secrets about the Vietnam War, and a huge controversy. In 2013, the Guardian started publishing excerpts of closely guarded, highly top-secret NSA operations. It was an explosive public exposure of the top-secret operations of the best-funded, most computer-savvy security organization in the US. There is every reason to believe that security breaches will continue to happen, because the "experts" in charge of security just don't know how to get it done. They didn't know how 42 years ago, they don't know how now, and they show no signs of even being interested in learning how to provide effective security.

    The RAND Corporation

    The RAND Corporation was one of the original top-secret research institutes. It was started after World War II to provide a place for top brains to figure things out that would help the military. In contrast to most places with top secret information at the time, the atmosphere inside RAND was purposefully academic and collegial. There were often open seminars and presentations anyone could attend, so that cross-disciplinary fertilization could take place. You had to have a very high level of background checking and security clearance to be admitted — but once you were in, you could go anywhere and talk with anyone, since everyone knew that if you were there, you had the appropriate clearances.

    People at RAND did truly pioneering work in econometrics, operations research, game theory and computing.

    The secrets at RAND needed to be faultlessly secure. While it looked like an ordinary office building close to the beach in Santa Monica, in fact it was a heavily fortified and guarded fortress, with armed guards at every entry point.

    Daniel Ellsberg

    Daniel-ellsberg-resized
    The story of Daniel Ellsberg and the Pentagon Papers is well known. Mr. Ellsberg was a RAND employee, with degrees in economics from Harvard and a stint in the Marine Corps. He was involved in secret studies concerning the Vietnam War in the 1960s, and had access to what became known as the Pentagon Papers while at RAND around 1969. He made copies of literally thousands of pages at RAND … and walked out the door with them. Fortress RAND and all the armed guards kept the "normal" bad guys at bay, while letting the former Marine with a PhD, dressed in a coat and tie and carrying a briefcase, walk calmly out with what they were supposed to be protecting.

    David Black

    1971 09 Harvard student ID card
    I was a scruffy-looking Harvard undergrad in 1970, and had gotten a summer job at RAND to work on the early ARPAnet, the predecessor of today's internet. Before starting work, I had to undergo a thorough security-clearance investigation; agents actually visited many of my friends and asked probing questions. By the time I started work in July 1970, I had my SECRET clearance and was pending for TOP SECRET. I had a great time solving pioneering problems with the computers. RAND had an early IBM 360, the first non-DEC machine to be connected to the ARPAnet, so we had to overcome a host of very basic issues, like resolving the conflicting character-coding schemes (EBCDIC vs. ASCII), byte lengths (8-bit vs. 6-bit) and word lengths (32-bit vs. 36-bit), in addition to everything else.
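    The character-set clash mentioned above is easy to demonstrate today. This is an illustrative sketch, not the original RAND code; Python ships codecs for EBCDIC variants (cp500 is used here as one common variant), so the translation that once required hand-built tables is now a one-liner:

```python
def ebcdic_to_ascii(data: bytes) -> str:
    """Translate EBCDIC-encoded bytes into a text string."""
    return data.decode("cp500")  # cp500 = EBCDIC "International"

def ascii_to_ebcdic(text: str) -> bytes:
    """Translate a text string into EBCDIC bytes."""
    return text.encode("cp500")

msg = "HELLO, ARPANET"
wire = ascii_to_ebcdic(msg)          # bytes as an IBM 360 would store them
assert wire != msg.encode("ascii")   # the two encodings genuinely differ
assert ebcdic_to_ascii(wire) == msg  # a round trip recovers the text
```

    On the wire, of course, the two hosts also had to agree on which encoding was in effect, which is a protocol problem rather than a translation problem.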

    I was also amazed at everything else you could learn at RAND. While protests raged on the streets, inside the protected walls of RAND you could find out what was really going on in Vietnam and Cambodia, from people who had just returned from those places.

    In retrospect, I realize that I got a personal demonstration of how to conduct ineffective security that summer at RAND. The protestors had no chance of breaking into RAND and stealing its secrets. In fact, none did. The guards waved through most of the employees coming through the employee entrance; except for the one who looked too much like the "hippies" outside. I got stopped and triple-checked every time. On the way out, all the clean-cut, well-dressed, briefcase-carrying employees like Daniel Ellsberg were similarly waved through; no danger there! But that tall, gangly, scruffy Harvard kid? Better stop him and search him thoroughly. He's just the kind of person who would steal our secrets. While they were doing everything but strip-searching me, Ellsberg was shopping the 7,000 pages of secrets he had already brazenly walked out with, under the friendly eyes of the clueless guards.

    The NSA leak of 2013

    The NSA is more of a fortress than RAND ever was. No way anyone could break in and come out alive. Cyber attack? Unlikely, for the same reason. A clean-cut employee-equivalent? Same story as RAND. Once on the inside, have fun! Do what you want, take what you want — we're too busy guarding against those scary outsiders to bother with you — you've got a clearance, you're OK! Except, like Ellsberg, Snowden was not OK.

    Ineffective then, Ineffective now

    I've previously discussed the standard methods for securing important things like bank and medical records. These methods have two fatal flaws.

    First, they take a fortress approach to security. They assume the attacks will come from outside the "walls" by outsiders. They ignore insider attacks, which are the most damaging ones by far.

    Second, they take a procedural, legalistic approach to security, assuming that if enough lawyers write enough regulations and procedures, and enough enforcement takes place through audits and certifications, the problem will be solved. They assume that complex, step-by-step procedures spelling out how to implement security are intrinsically better than simple definitions of what must be secured, with penalties for failures. The trouble is, no one executes the procedures perfectly, the procedures themselves are flawed, and the bad guys are always figuring out new ways to be bad.

    Either of these flaws is sufficient to explain our never-ending security crises, and our ever-spiralling costs for trying to be secure. Together, bad results are guaranteed.

    Summary

    Our security systems are straight from the time of castles and knights: we imagine that the threat is from the scary guys in armor charging around on big horses "out there." Then, with the wrong threat in mind, we … get the lawyers on the case! We bury ourselves in policies, procedures, regulations, certifications and audits, all of which take time and money, and most of which are completely useless. Then the bad guy cleans up his act enough to get hired, ransacks the place, flees laughing all the way … and we're shocked?? The only shocking thing is that, 42 years after the Pentagon Papers, we're pouring even more time and money into ramparts and moats, when the main threat has always been the traitor inside the walls.

     

  • Cyber Security Standards are Ineffective against Insiders like Edward Snowden

    The case of Edward Snowden, the fellow who ran off with a big pile of secrets from the super-secret NSA, illustrates a problem with the mainstream approach to computer security: it's expensive, it's burdensome, and it just doesn't work! Strengthening existing standard security measures, which is what usually happens after embarrassing episodes like this, will just make things worse.

    Securing what should be secure

    Other people can argue about what various agencies should or should not be doing and whether they should be secret. Putting all that aside, there are lots of things most of us want to be kept secret, for example our health and financial records, and for sure we want to prevent unauthorized use of that information. How hard is this to accomplish?

    Apparently it's pretty hard. There are huge security compromises that take place all too often, and smaller ones with great frequency. Security breaches resemble car crash deaths: there are so many of them (tens of thousands a year in the US!), that only the most gruesome of them make the news. If an agency with a secret budget probably in the billions, whose whole mission is about secrecy, can't stop an amateur like Edward Snowden, how is it that anything stays secret?

    Approaches to Security

    The vast majority of our thinking about security threats rests on a couple of crucial assumptions.

    Our thinking assumes that the threat comes from an outsider, and that the outsider attacks from the outside. The outsider (we think) probes to find a weakness in our defenses, and when he finds one, smashes in and grabs what he wants.

    Regardless of the source of the threat, we assume that we can establish a procedure that will thwart any breach of security. We assume that if we are rigorous in our requirements for process, documentation, testing and much else, we can eliminate security threats.

    As the NSA case demonstrates, these assumptions are false. Regardless of your feelings about whether Snowden is a hero or a traitor, he clearly demonstrates the fact that our current approach to security is a waste of time.

    Insiders are the real threat

    The first assumption is the "bad guys out there" assumption. Huge amounts of money are spent on "intrusion detection," firewalls, and endless other things that amount to building a castle wall high and thick so that our secrets can be protected.

    Here's what happens. The marauding knights come sauntering along and see those high walls. Naturally they check them out. They're impressed by everything about your wonderful castle: the moat, the guards, the mean-looking guys on the ramparts, the whole bit. So if you were a sensible bad guy, what would you do?

    You'd go to the nearest town, trade in your bad-guy clothes for a respectable suit or workman's clothes, or whatever the castle is looking to hire. Then you'd walk up to the employee entrance and apply for a job! Once you were inside, you'd keep your nose clean and figure out the lay of the land. Once you had it scoped, one day you'd leave at the end of your shift a much richer person than you were before, so rich that, well, you didn't bother to report to work at the castle any more.

    I was first educated about this by Paul Proctor, who gave me a copy of his 2001 book, The Practical Intrusion Detection Handbook. Most of the book is about what people want to buy, which is based on the "bad guys are out there" theory. But he has a whole chapter on "host-based intrusion detection," in which he spells out the methods and importance of detecting and thwarting bad guys who have managed to get a job working for you. This is what everyone should be doing, and all these years later, we're not!
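    Proctor's host-based idea can be illustrated with a toy example: instead of watching the perimeter, watch what authenticated insiders actually do, and flag behavior far outside their own baseline. Everything here (the event format, the thresholds, the numbers) is invented for the sketch; real host-based systems are vastly more elaborate:

```python
from statistics import mean, stdev

def flag_anomalous_users(history, today, sigma=3.0):
    """history: {user: [daily file-access counts]}, today: {user: count}.
    Flags users whose activity today exceeds mean + sigma * stdev
    of their own past behavior."""
    flagged = []
    for user, count in today.items():
        past = history.get(user, [])
        if len(past) < 2:
            continue  # not enough baseline yet to judge this user
        if count > mean(past) + sigma * stdev(past):
            flagged.append(user)
    return flagged

history = {"analyst": [40, 55, 38, 47, 50], "contractor": [30, 28, 35, 31, 33]}
today = {"analyst": 52, "contractor": 4800}  # someone is bulk-copying files
print(flag_anomalous_users(history, today))  # -> ['contractor']
```

    The point is the vantage point, not the math: the alarm is keyed to what an insider does after the badge reader has already let him in.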

    Tell me what to do, not how to do it!

    The second assumption is that we can define step-by-step procedures that will prevent security breaches. Hah! Not true! The vast majority of our security procedures have been written by people who are lawyers; if they're not, they're sure acting like they are!

    What we should do is tell you what to accomplish in simple terms, like "Don't murder anyone. No matter how mad or drunk you are, just don't do it. If you do, we'll execute you or put you in jail for a long time. So there." That's all you need, when you're telling someone what to accomplish.

    The equivalent for HIPAA would be something like: "Don't give anyone's health records to anyone except that person or their designated representative, like a parent if they're a kid."

    The equivalent for NSA would be: "Hey, everything we're doing here is real important stuff regarding national security, like what our name says. So don't let anyone who doesn't also work for NSA have it. Period. Ever. Otherwise, you're a traitor, and we'll nail you."
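    The difference between "what" and "how" can even be seen in code. A declarative rule like the hypothetical health-records one above fits in a few lines, checked at every access, with no procedure manual in sight (the record format and names here are invented):

```python
def may_read_health_record(requester, record):
    """Allow access only to the patient or a designated representative.
    This IS the policy; there is no separate procedure to follow."""
    return (requester == record["patient"]
            or requester in record.get("representatives", []))

record = {"patient": "alice", "representatives": ["alice_mom"]}
assert may_read_health_record("alice", record)
assert may_read_health_record("alice_mom", record)
assert not may_read_health_record("bob_the_marketer", record)
```

    A rule this small can be enforced everywhere and audited at a glance; a hundred pages of procedure can be neither.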

    Instead, what companies and agencies are required to do is conform to an ever-growing collection of detailed methods for supposedly getting secure. Except you spend so much time conforming to the regulations that some guy walks out the door with all your secrets!

    Here's the bad news: Snowden wasn't an exception; he's simply a particularly famous typical case in security-regulated organizations.

    Conclusion

    Edward Snowden is the tip of a security-breach iceberg. Credit cards are being stolen in spite of onerous security regulations. Health records are being compromised, in spite of increasingly onerous regulations. Our approach to security is flawed, fundamentally and by assumption. It's like we're in the water and we're trying to swim by blowing on the water. It's not working, and the solution is not to try blowing even harder. The solution is to take an aggressive, non-regulatory approach to the most likely perpetrators, insiders.

     

  • Chase’s Exemplary Handling of Data Theft

    I think that if Chase had really tried, they could have done a worse job of telling customers about the recent security breach.

    Background

    Apparently incapable of performing the requisite fairly simple processing and analysis on their own, Chase and other giant financial institutions give their customers' data to Epsilon (among others!) for marketing-related processing. Despite (I assume) conforming to all the odious rules and regulations for keeping the data secure, Epsilon somehow suffered a major data breach; in order to protect the guilty, the details have not been released.

    Chase's e-mail

    Naturally, Chase and others rushed to assure their customers that everything was really OK, while providing them with helpful hints about avoiding getting scammed by all the crooks who now have the data. Here's the one I received.

    Chase

    Why Chase deserves an award for Badness

    Chase provides a wealth of examples not to follow if you want to treat your customers with respect. Here are a few of the highlights.

    • Timing. The breach reportedly took place on March 30. It was made public the following day. I received Chase's e-mail on the evening of April 4. Boy, Chase sure fell over themselves getting the word out to their customers, didn't they?
    • What was stolen. Epsilon's own press release admits that not only customer e-mail addresses, but also names were stolen. If you read Chase's tardy missive word for word, you notice that they carefully omit to tell their customers that their names were also stolen, while repeating that no "customer account or financial information" was stolen. Surely a customer's name is part of that customer's account! If not, exactly what is it? Why couldn't they just be honest, and tell me that my name was stolen too?
    • What was stolen, continued. Epsilon's second press release emphasizes how they have absolutely, definitely, no-kidding determined that nothing but names and e-mails was stolen. I'm sorry, but this can't possibly be true. Chase isn't using Epsilon just to do e-mail blasts; they are using them for analysis based on detailed customer information. According to Epsilon itself, this data includes "Comprehensive income, credit, debt and asset data." It is simply not credible to claim that this data could not be deduced by the thieves from what they took. Neither Chase nor Epsilon bothers to mention all the customer-specific information they've got, which also includes "age, marital status, occupation, ethnicity and changes such as a new child, a move, changes in household income or a new driver."
    • Disastrous advice. Look at the list of recommendations in the e-mail. Do they once, even once, describe, mention or warn against phishing, which is the real danger of having this information out there? They do not! What do they warn against? Repeatedly, they tell you not to put sensitive information into an e-mail, or to respond to a spam e-mail. When the real danger is phishing!
    • Unwanted spam. I can't help pointing out that Chase gives me the incredibly insightful advice to "be on the lookout for unwanted spam." As opposed to the spam I want? After I've identified my spam and put it into "wanted" and "unwanted" piles, exactly what should I do? Since I was told to be "on the lookout" for it, I guess I should spend some time looking at it.
    • Follow up. Chase promises to tell me "everything we know as we know it, and will keep you informed…" Simply put, there has been no follow-up. If you're not going to do it, don't say that you will.
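    Since the real danger is phishing, it's worth seeing how simple one classic tell is to check: a link whose visible text names one domain while the underlying href points somewhere else. This is a minimal heuristic sketch, not a real mail filter, and the example links are invented:

```python
from urllib.parse import urlparse

def looks_like_phish(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names a domain different from
    the domain the link actually points to."""
    shown = urlparse(display_text if "//" in display_text
                     else "https://" + display_text).hostname
    actual = urlparse(href).hostname
    if not shown or "." not in shown or not actual:
        return False  # visible text isn't a domain; nothing to compare
    return shown.lower() != actual.lower()

assert looks_like_phish("www.chase.com", "https://chase-login.example.net/")
assert not looks_like_phish("www.chase.com", "https://www.chase.com/account")
```

    A single sentence of warning built around this one tell would have done Chase's customers more good than the entire list of recommendations they actually sent.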

    Summary

    It is clear that Chase

    • notified its customers tardily,
    • demonstrably lied about what was stolen,
    • gave terrible and/or laughable advice about what the customer should do,
    • and finally made promises they failed to keep.

    Could they have done worse? Probably. Meanwhile, let's use this as an anti-role-model for how to handle situations of this kind.
