MSNBC Chris Hayes’s preciously naive belief in an internal Stuxnet leak investigation.

I was unaware that MSNBC had a policy of hiring virgins:



Transcript of the relevant bit from MSNBC’s Chris Hayes, trying to lemonade the James Rosen raid:

Here’s the other wrinkle to this; and the other shoe to drop on this, is the Stuxnet leak investigation. A big story in the New York Times about this triumphant thing that the administration did. It was this incredible virus that took out the nuclear program. It’s an incredible story, David Sanger in Iran. It is very clear that information was classified and it is very, very clear from the articles that it was leaked by senior administration officials. There’s just no way it wasn’t.

So the question is, okay, fair’s fair: if you’re going to go after some low-lying guy in the State Department for leaking something, then you should be going after whoever the senior administration officials are. And the other shoe to drop in all this is there is no way in my humble opinion that the New York Times was not subject to the same kinds of warrants. We haven’t seen them yet. But I would bet they’re going to come out.

…AHAHAHA. :wiping eyes: Oh, my. Oh my, oh my, oh my.  Really, it’s best to think of MSNBC as being some sort of demented nature preserve: not a true ecosystem, but at least you can see rare critters that simply cannot survive in the wild…

Moe Lane

PS: No, Chris.  There was no internal investigation of the Stuxnet leak.

14 thoughts on “MSNBC Chris Hayes’s preciously naive belief in an internal Stuxnet leak investigation.”

  1. Wait a sec.
    .
    The MS-NBC guy credited the administration with a .. virus?
    .
    A virus that hit *MS* Windows machines?
    .
    I think, Moe, the brain damage may be worse than you think.
    .
    Mew

      1. The way this virus spreads has been discussed quite a lot, on various tech boards. (http://slashdot.org for example)
        .
        The general consensus seems to be that it couldn’t have been developed without access to the source code.
        .
        This leads to two ugly realities – either the folks who wrote the virus for the U.S. government *also* spied on a U.S. corporation, or a U.S. corporation knew they’d released buggy, vulnerable code *and did not fix it*.
        .
        There may, of course, be a “third way” out of this.
        .
        Mew

Nah, they didn’t need access to the source code – just a team to look for the vulnerabilities that exist in every OS. And in any case, MS at least used to allow universities and researchers access to most of the source code anyway.

          Writing secure code is really hard, even senior guys will screw it up from time to time, and there are a lot of non-senior guys working on it as well.

          1. For a normal virus, Skip, that’s true. Lazy infects all areas of endeavor, including systems admins, veterinarians, etc.
            .
However, while a normal virus finds one hole and spreads because patching is delayed because .. lazy .. Stuxnet was not normal – for one thing, it used several different and previously unidentified holes to spread itself.
            .
            So .. if lazy affects all fields, why didn’t it affect these virus-writers? Why did they keep looking after finding their first hole?
            .
            That, and the volume of time needed to find and develop exploits to the holes involved, leads this cat to believe they had the source code.
            .
            That still means the government exploited bugs in a U.S. company’s product. It also has .. interesting legal ramifications .. depending on just when Microsoft became aware of Stuxnet.
            .
            “Yes, your honor, we were aware of bug # 52142 prior to 2009. Yes, that is prior to the computers at Al’s House of Pie crashing, costing Al $$$ in lost sales and ultimately leading to the closing of Al’s House of Pie. No, we didn’t patch it, we were asked not to.”
            .
            Mew
            .
            .
            .
            .
            p.s. Yes, this way lies madness.

  2. completely OT: Moe, I hope you remembered XCOM is $10 on Amazon right now. DO IT!

  3. @Acat, I wasn’t implying it was normal, just that you don’t really need source code access to accomplish it (and in some respects it wouldn’t be particularly helpful over just loading things in a kernel debugger). And laziness doesn’t really have anything to do with it either. Complex systems are complex, and it’s just a really difficult problem.

As for using multiple exploits, that’s actually pretty common. Something like this requires three parts. One, a way to deliver arbitrary code to a system. Two, a way to get that code executed. Three, a way to elevate privileges to system level. Once you have all three in a way that can be used together, you own the system. In particular, Stuxnet used one execution-only exploit, which would most likely have required carrying in an infected thumb drive; two execution-and-delivery exploits, both of which would have been stopped by most firewalls at a network’s edge, but not internally; and two privilege-escalation exploits to own the machine once there. Probably one or two people on a team found all of these.
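The three-part chain described above can be sketched abstractly. This is a toy model only – the stage names, the dictionary representing the target, and the “payload” string are all made up for illustration and bear no resemblance to actual exploit code:

```python
# Toy model of the delivery -> execution -> escalation chain.
# All names are illustrative; nothing here resembles real exploit code.

def deliver(system, payload):
    """Stage 1: get arbitrary code onto the target (e.g. via removable media)."""
    system["staged_code"] = payload
    return system

def execute(system):
    """Stage 2: trigger execution of the staged code at user privilege."""
    if "staged_code" in system:
        system["running"] = system["staged_code"]
        system["privilege"] = "user"
    return system

def escalate(system):
    """Stage 3: elevate the running code from user to system privilege."""
    if system.get("running"):
        system["privilege"] = "SYSTEM"
    return system

target = {}
owned = escalate(execute(deliver(target, "payload")))
print(owned["privilege"])  # SYSTEM
```

The point of the composition is the one made in the comment: each stage is useless on its own, and the system is only “owned” once all three link up.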

    1. Skip, no offense, but it looks like you’re obfuscating a bit here.
      .
      What’s unusual isn’t that a virus-writer has to have a way to deliver plus a way to execute, plus a way to elevate to system.
      .
What’s unusual is that Stuxnet didn’t *just* have one way to do all of the above .. it had, as you detail, *several*… and further, they were all zero-day, i.e. “never seen before” exploits.
      .
      This is clearly a team effort, and clearly the team was the U.S. government – who do, equally clearly, have access to MS source code.
      .
      As for combing source code vs. using an online debugger, I will just point out that identifying a known defective function or a coding “bad habit” (laziness, as you previously identified) and grepping for all spots where it’s called is a hell of a lot faster at suggesting areas to poke than a debugger would be.
      .
      Mew
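The “grep for the bad habit” approach mentioned above can be sketched in a few lines. The risky-function list here is illustrative (classic C offenders like `strcpy`), not anyone’s actual audit list:

```python
# Minimal sketch of scanning a source tree for calls to functions with a
# known-risky pattern -- the "grep for all spots where it's called" idea.
# The RISKY_CALLS list is a stock example, not a real audit checklist.
import os
import re

RISKY_CALLS = re.compile(r"\b(strcpy|sprintf|gets)\s*\(")

def find_risky_calls(root):
    """Return (path, line number, line text) for each risky-looking call."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".c", ".h", ".cpp")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if RISKY_CALLS.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits
```

As the comment says, a textual scan like this is fast at *suggesting* areas to poke; it proves nothing by itself, which is why it complements rather than replaces a debugger.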

No. I didn’t suggest laziness, I explicitly rejected it as a cause. The reason it’s not laziness is that it’s frickin’ hard not to write exploitable code. Give the task of writing a non-trivial piece of code for a chunk of an OS that has external interfaces to a thousand developers, and at least 950 of them will turn in code that’s exploitable, and mostly in totally non-obvious ways.

Take, for example, the LNK file exploit that was the primary infection vector. LNK files are Windows shortcuts, so when Explorer sees one in a directory it needs to go look at the target to figure out what kind of icon to display. Sometimes it’s a preview of an image, or the file type will give it a generic image. Doesn’t sound like there’s a lot of room for an exploit here on a perfectly validly-formed file. No buffer overflows, no using undocumented fields, etc. But the problem comes from the fact that Windows displays certain things as files in a filesystem that aren’t actually files, and you can create shortcuts to those things as well. So when you create a shortcut to a control panel entry, what it actually does is flag it as that type, and then make the file it points to actually be the system file that runs the control panel entry. So Explorer, when it sees one of these LNK files, passes it off to the control panel system to get the icon to display. That system has to load the file into memory to get the icon out. And here’s the problem – unless you explicitly pass in a flag telling it not to, the startup code in that file gets run. Code that developers don’t ever, really, look at, because almost 100% of the time it’s boilerplate.

So in this case developer 1 writes the code that handles figuring out what icon to display. This probably happened back in the Windows 1.0 days. Developer 2 writes the control panel, which originally was an app that didn’t appear to be part of the file system. He writes code that figures out the icon to display for all the registered control panels. There’s no reason to avoid running the startup code, and in fact you want to run it, because the control panel is likely to need to have the code available. Developer 3 writes the code that makes the control panel just look like any other part of the file system, so he adds support in the LNK files for special file types that aren’t really files, and he just calls into the code that already exists in the control panel applet to figure out the display icon.

There’s no laziness involved here at all. Re-using code like this is what we’re supposed to be doing, and in fact not re-using code creates more exploit vectors, because when a fix is found only one copy of the code gets patched.

This exploit existed at least as far back as Windows 2000, and as far as we know sat unfound for more than a decade. It probably existed in Windows 95. And it would have been completely non-obvious from the source code, as all of the code would have appeared kosher.
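The “loading runs startup code” detail above has a rough analogue in other environments: importing a Python module executes its top-level statements as a side effect, much as loading a DLL runs `DllMain` unless the loader asks for a data-only load (`LOAD_LIBRARY_AS_DATAFILE` on Windows). A minimal sketch, with a made-up module name and contents:

```python
# Rough analogue of "loading runs startup code": importing a module
# executes its top-level statements. The module name and contents here
# are invented purely for illustration.
import importlib
import os
import sys
import tempfile

code = 'SIDE_EFFECT = []\nSIDE_EFFECT.append("startup code ran")\n'
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "innocent_icon_lib.py"), "w") as f:
    f.write(code)

sys.path.insert(0, tmpdir)
importlib.invalidate_caches()
mod = importlib.import_module("innocent_icon_lib")
print(mod.SIDE_EFFECT)  # the "startup" statements executed on load
```

The parallel is loose, but it shows why “just read the icon out of the file” can quietly become “run code from the file” when the loader’s default behavior is to execute on load.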

        1. IMO, Skip, in your example, the behaviour of developers 2 and 3 *IS LAZY* because they re-used code without understanding the risks.
          .
          The LNK thing is a nifty vector, but .. it is the responsibility of the coder to understand what the code being re-used does, and to look for potential vulnerabilities.
          .
          This comes through, loud and clear, in both the Microsoft Press book “Writing solid code” (been on my bookshelf for years) and, of course, in Dilbert.
          .
          All that is an aside to the main point; if the government wrote a virus that infected any computers owned by companies or private individuals and that infection cost them time or money .. is the government just as guilty as Robert Morris? More so?
          .
          Mew
