Network forensics investigator, Unix skald, other geekage
@maradydd @georgiaweidman I totally believe you! What I mean is that ppl won't realize $person = THIEF if they don't know who $person is.
@maradydd I support @georgiaweidman but it's hard to know the dude's rep when few of us know who he is (albeit probably for the best)
ARGH quit telling me "no he wouldn't do that". And then when I lay out the facts, "it's not that bad, just trust him".
@dave_rel1k yeah, I have zero intention of ever having anything to do with that one. Wish I could do more about their shameful response.
@ErrataRob Ugh. You here for a bit? I'm downtown.
@cr1ysys well to be fair I did have pizza and beer first ;)
@JoelWeever @ericjhuber that said, absolutely 100% on "people > tech", contrary to what many execs want to believe.
@JoelWeever @ericjhuber missed this post before. part of the solution is to turn them into hunters rather than waiting on a call.
@PogoWasRight that's not to say that FISA orders aren't completely overdone, but that PRISM itself isn't the issue.
@PogoWasRight they didn't suffer brand damage before then, and I suspect the reality about PRISM is far more pedestrian.
@NeverwinterGame good news everyone!
@PogoWasRight probability is low but Google is standing on principle here.
RT @Crypt0s: @bbaskin Sadder now that Defcon is cancelled as well. I had to cancel my flights and I didn't have flight insurance.
@bbaskin cost reasons, security concerns, or something else?
@JGamblin when you're the smartest guy in the room, it's time to find a better room.
Bourbon AND dessert. For once, the first isn't standing in for the latter.
@HackerHuntress or Maybe He is German
I’d been considering getting involved in CTF365, which is sort of an online MMORPG for hackers. That’s a bit of an oversimplification – it’s a persistent joint Red/Blue CTF with gamification, so “hacker MMORPG” sounds easier, heh.
But then today I facepalmed at this gem from their Twitter account:
Look, people tell jokes, sometimes inappropriate ones. But context matters: if a handful of friends go out for drinks and this sort of thing comes up, that’s human nature. But if a well-known network security community project tweets this, then it sort of puts up a “NO GIRLS ALLOWED” sign. CTF365 is wrong to tweet this because it’s the wrong place to make a joke at the expense of women.
But you’re being too sensitive! If the joke had been turned around and made fun of men, would you be offended?
Actually, yeah, I would, but that’s not really the point here. Hackers don’t really have a history of making it difficult for men to get involved, particularly straight white men like me. As Louis CK says, “you can’t even hurt my feelings” – this is, like, the canonical example of male privilege. We don’t have to worry about whether people are objectifying us and whether that impacts our goals, because hey, “objectify me all you want baby”, right?
Is this really worth worrying about? Aren’t you just seeing boogeymen in places where they don’t exist, risking a backlash to create a truly hostile environment?
I largely agree with the principle that there are times when we should let things go. I believe that this particular issue, however, deserves our attention because it tells women (particularly those newer to the community) that this is a boys’ club where they will be treated as “scene whores” only interested in what they can get out of other people, rather than full community members whose participation is as valued as that of anybody else. And this isn’t the first time they’ve made such jokes, not surprisingly.
And while they did make an attempt to apologize, “sorry if you felt that way” is not an apology because it implies the problem lies with the person offended, not with the person who did the wrong thing in the first place.
I certainly hope CTF365 reconsiders its approach to building a community.
Independent security researchers often have a reputation as narcissistic vulnerability pimps (true or not), but the environment which has evolved around information security largely drives this. This came to a head for me tonight in a Twitter discussion kicked off by Steve Werby:
CTFs are awesome, but if we can teach students how to articulate recommendations in a way non-technical staff can understand, even better.
— Steve Werby (@stevewerby) March 08, 2013
Creating an exploit can often pay anywhere between $1k and $100k (or possibly more in specific circumstances), depending on the researcher’s choice of market and product (or technology). This even affects areas that many users believe unrelated, like mobile OS jailbreaks, which essentially consist of exploits that gain root control despite the operating system’s best efforts to the contrary.
No equivalent market exists for threat-related research. Freelance malware analysts don’t have similar economic drivers because organizations with an interest in this information generally do the research themselves. You can’t monetize malware or attribution the same way. Put another way, nobody believes that Krebs and Danchev get rich from what they do. I don’t think we can “fix” this with the market, although I’d welcome discussion of ideas or evidence to the contrary. But we need to recognize this when thinking about issues around software security and threat identification.
Believing that security, on its own, adds value often turns into a form of the broken windows fallacy. And creating artificial demand for threat intelligence could lead to all sorts of perverse incentives. Some of the same organizations interested in purchasing vulnerabilities and exploits might have an interest in highly-focused intelligence, such as espionage on particular threat groups, but at this point the line between “offense” and “defense” becomes very fuzzy.
I’d love to hear alternate viewpoints and suggestions on where this can go.
You've just ordered pizza from our site
[snipped yummy but long listing of pizzas and drinks including crappy beer]
If you haven’t made the order and it’s a fraud case, please follow the link and cancel the order.
CANCEL ORDER NOW!
If you don’t do that shortly, the order will be confirmed and delivered to you.
However, I wasn’t really worried about the fraud possibility, so I ignored the spam and instead took the opportunity to run the URL through thug. It performed spectacularly well: it grabbed the page, found the exploits (at least some of them, anyway), and kept everything neat, orderly, and secure.
hxxp://sweety-angel[.]de/local.htm redirects to hxxp://gimalayad[.]ru:8080/forum/links/column.php, which loaded a Java applet, a Flash file, and two PDF documents. At the time I ran them, VirusTotal hadn’t seen them before but a few engines identified the PDFs and the Flash file as part of the Black Hole Exploit Kit. I found the use of old Adobe Reader vulnerabilities (2010 vintage) a little humorous. Contact me via Twitter or email if you’d like the actual files. I published the IOCs as a Google Doc for reference.
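The indicators above are written defanged (hxxp, [.]) so nobody clicks them by accident. Converting between the defanged and live forms is easy to automate; here is a minimal Python sketch (the helper names are mine, not from any particular tool):

```python
import re

def refang(ioc: str) -> str:
    """Restore a defanged indicator to its live form."""
    ioc = re.sub(r"\bhxxp", "http", ioc, flags=re.IGNORECASE)
    return ioc.replace("[.]", ".").replace("(.)", ".")

def defang(ioc: str) -> str:
    """Neutralize an indicator so it won't resolve or hyperlink."""
    ioc = re.sub(r"\bhttp", "hxxp", ioc, flags=re.IGNORECASE)
    scheme, sep, rest = ioc.partition("://")
    if sep:
        # Only bracket the dots in the host portion, not the path.
        host, slash, path = rest.partition("/")
        return scheme + sep + host.replace(".", "[.]") + slash + path
    return ioc.replace(".", "[.]")

print(refang("hxxp://sweety-angel[.]de/local.htm"))
# http://sweety-angel.de/local.htm
```

Publishing IOCs in the defanged form and letting consumers refang them programmatically keeps the sharing step safe without losing machine readability.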
I’ve seen several people talk about lacking ideas for research projects, often around DFIR or network security. Personally, I have the opposite problem: endless ideas for projects, often with the barest hint of a start, but not enough time to pursue them all. So I thought I’d publish a bit of a brain dump. I actually have made good progress on a few of these, and I have concrete plans around others (beyond just “wouldn’t it be cool if…”), but in any case I’d love to see other people pick them up and run with them.
If you do happen to get interested in any of the following, I wouldn’t mind a quick note to touch base about possibilities for collaboration, or at least an acknowledgement in whatever you publish. Don’t interpret that as any sort of requirement, though; ideas have no value without execution, and all the hard work remains to be done.
- Classification across a large corpus
- Automated IOC extraction and publication
- Threat Actors
- Profiling systems, particularly based on OSINT
- Underanalyzed crime groups (e.g. drug cartels’ involvement in malware, spam, and fraud)
- Hacktivism motivations and methods
- Cracking lab setups
- Useful entropy calculations
- Quantitative analysis of incidents
- DDoS attacks (hard to get numbers on these)
- Defacements and low-level leaks
- Active Defense
- Honeypots and honeyclients
- Vocabulary or taxonomy on various methods
- Callback Trojans in documents
- C2 / RAT vulnerability research
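To make one of the items above concrete: “useful entropy calculations” usually means flagging packed or encrypted payloads by their byte entropy. A minimal Shannon-entropy sketch in Python (the function name and the rule-of-thumb thresholds are mine, not from the list):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: 0.0 for constant data, up to 8.0 for uniform random bytes."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Plaintext tends to land around 4-5 bits/byte; packed or encrypted
# content pushes toward 8.0, which is why ~7.0 is a common cutoff.
print(shannon_entropy(b"AAAA"))            # 0.0
print(shannon_entropy(bytes(range(256))))  # 8.0
```

The interesting research question is what counts as “useful” beyond this: sliding-window entropy over file sections, for instance, distinguishes a packed region from an appended overlay far better than one number for the whole file.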
As part of some research into “active defense”, I decided to review the actual text of the Computer Fraud and Abuse Act (CFAA). This law has a number of well-documented problems, which I don’t plan to address in this post, partly because IANAL and partly because I want to focus on how the Act describes a “protected computer”:
the term “protected computer” means a computer—
(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or
(B) which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States
(Emphasis mine.) Specifically, I want to think about the implications related to a “computer located outside the United States”. Assuming that such a system doesn’t affect US commerce or communications (whether or not that activity takes place within the US), would it fall under the definition of a protected computer? For example, if a US person gains access to a command-and-control system in another country and takes some action that would otherwise certainly violate the CFAA were the C2 in the United States, perhaps the CFAA does not apply. Or maybe somebody accesses an exploit server or malware host to gather additional information: does the CFAA cover this? (Other statutes, particularly in the host country, may apply, so don’t do anything that might get you thrown in prison, kids. We’re just thinking about what the law may cover.)
…managed to gain access to a computer in Taiwan that it suspected of being the source of the attacks. Peering inside that machine, company engineers actually saw evidence of the aftermath of the attacks, not only at Google, but also at at least 33 other companies, including Adobe Systems, Northrop Grumman and Juniper Networks, according to a government consultant who has spoken with the investigators.
(Emphasis mine again.) So, according to this story, Google somehow accessed a system that presumably did not belong to them. Depending on that system’s function, perhaps this didn’t violate the CFAA. Certainly, the USSS or the Department of Justice or Secretary Clinton did not publicly express concern about this. As far as we know, they didn’t shut down the system or otherwise damage it, so while they could have concerns about Taiwanese law if they actually did any of this, they might not have to worry about the CFAA.
This post does not advocate so-called hack back retaliation, but my initial non-lawyerly analysis makes me wonder if other people already depend on this interpretation for various sorts of activities.
I took an idea from my buddy Scott Thomas and now have a page listing my upcoming speaking engagements. At the moment it’s a bit light, but I’ve submitted quite a few CFP responses for events that haven’t closed yet, and I expect work-related travel will fill it up quickly as well. Some of those events will be private, but I’ll at least try to list the city in case anybody wants to get together for a drink or something.
Although I work for a competitor, I believe Mandiant did the right thing here. Some may disagree to an extent for good reasons, while others simply went too far in their assumptions and criticisms. (And some folks just need to take off the tinfoil hats.) I don’t really care that much about what makes the sekrit skwirl cabal happy, and in fact it tickles me when they get frustrated by “outsiders” (inasmuch as Mandiant is one, anyway) not playing by their rules. In any case, healthy skepticism regarding someone else’s conclusions keeps them honest, but don’t miss the big picture out of myopia. The prevalence of espionage and APT activity relative to ordinary criminal activity remains an open research question and a valid area of debate, but I’ve seen some really smart people this week falling into the cliché of missing the forest for the trees.
Instead, this means the adversary can’t dictate the pace and terms of the conflict, whether or not they completely retool. By driving up the cost to the attacker over time, you start to make headway. That works both ways, of course, and at the moment that balance leans decidedly in their favor. Releasing the IOCs will also allow defenders to discover additional compromises. Remember that opponents make mistakes, and so we can capitalize on the opportunity for ongoing intel gathering as they transition to new infrastructure (assuming they even bother).
Sharing information has more than just tactical value. In my view (obviously not one shared by Congress), this points out that we don’t need the government to get in the way with CISPA or other information-sharing that stays behind walls of overclassification or possibly creates additional privacy and civil rights issues. We can do this the right way and improve things. Partisan politics lies way outside the scope of this blog, but I certainly see this as “we’re from the government and we’re here to help” territory.
As usual, these represent my opinions only, and even those are only good for today: I may change my mind as new facts come to light or as I think about these topics more thoroughly.
With the release this week of President Obama’s executive order on Improving Critical Infrastructure Cybersecurity and the accompanying detail in Presidential Policy Directive 21, lots of people have commented on the implications. Jack Whitsitt appears to have some solid commentary coming.
However, a piece by Richard Stiennon on Forbes caught my eye, not because of the information in it, but because of the FUD it contains.
First, he attacks the concept of risk management:
But risk management does not work in unpredictable environments. Risk management is the framework that most banks, hedge funds and trading desks use when addressing financial risks like those present in the real estate, commodities or derivatives markets. We know how well that worked. Management consultants and bureaucrats love risk management. It foists responsibility away from individuals and onto a process.
Here’s a hint: yes, it does work in “unpredictable environments”, when performed properly by responsible managers. (Whether the DHS can provide this is a separate question, and one on which I suspect Stiennon and I would likely agree.) This stems from the concept of uncertainty from statistics and related sciences. And simply saying ‘risk management is bad because bankers’ (obviously a paraphrase) isn’t wry sniping, as Stiennon later commented, but FUD.
How will an uber-map of critical infrastructure be kept out of the hands of the very threat actors that are targeting these systems? PPD 21 will, in effect, create yet another critical information asset that will end up at the top of the list of critical vulnerable assets.
I don’t know what this means. By this logic, we shouldn’t ever create an inventory of our assets. Does he not keep financial records? Would he have counseled the government during the Cold War not to keep track of nuclear launch sites? Yes, of course the documents detailing these things require appropriate controls, but to conclude that the government should not analyze and sort critical infrastructure because adversaries would love to have this information doesn’t make any sense.
Centralized information collection and dissemination is a natural requirement for risk management. It is akin to the economic data collection and analysis that command economies resort to in place of free markets.
Yes, he basically just said that centralized databases are communism. I have nothing to add here because it speaks for itself.
Stiennon concludes this way:
PPD 21 makes previous unfunded mandates seem simple by comparison. Its breath and scope is a giant overlay on top of the existing system of Federal agencies that, if executed as directed, will turn what was a of collection of connected puddles of government regulatory bodies into a single giant quagmire. It is a top down solution that expresses the frustration of good intentions to “do something.” Even if all the hurdles of implementing an over arching risk management framework were overcome there would still be the errant tree branch or targeted malware that could shut down the power grid.
Yes, bad things will still happen. That is not an excuse to do nothing. Stiennon proposes no alternatives here, other than the implied idea of leaving the “collection of connected puddles of government regulatory bodies” as they are. The current system doesn’t work that well, and while I’m not convinced that PPD 21 will actually accomplish anything, I also believe that we as professionals and citizens should find ways to improve things rather than simply shoot down anything that isn’t perfect ‘because reasons’.
As I continued to hack on mwcrawler over the last month, I found that it didn’t really meet my needs for various reasons: slowness, difficulty of maintaining and adding sources, repeated grabbing of the same URL, and lack of response from the original author. So I’ve rewritten it and released Maltrieve, which (as the name indicates) retrieves malware directly from the sources listed at a number of sites. Improvements listed in the README include:
- Proxy support
- Multithreading for improved performance
- Logging of source URLs
- Multiple user agent support
- Better error handling
Right now, Maltrieve only looks at four meta-sources because two of the six in mwcrawler appear offline. But I have at least four more on deck, and mwcrawler didn’t parse all of its meta-sources correctly in any case. I also know of a few bugs that I haven’t figured out how to squash yet, but the core functionality works and it needs a broader audience to bang on it. Thus, I’ve tagged this version “beta-1”. Please don’t rely on it for serious production use.
If you use it, please let me know, just so I can bask in the warm glow of productivity. The project itself remains under the GPL, of course. Suggestions, bug reports, etc. would also make me happy, whether via issues and pull requests on GitHub, contacting me on Twitter, or comments here.
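None of the following is Maltrieve’s actual code; it’s just a hedged sketch, with all names invented, of how the README’s feature list (proxy support, multithreading, user-agent setting, dedup of repeated URLs, error handling that logs instead of crashing) might fit together:

```python
import hashlib
import logging
import urllib.request
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO)

USER_AGENT = "Mozilla/5.0"  # one of several rotatable agents
PROXY = None                # e.g. {"http": "http://127.0.0.1:8118"}

def build_opener():
    """Apply optional proxy support and a configurable user agent."""
    handlers = [urllib.request.ProxyHandler(PROXY)] if PROXY else []
    opener = urllib.request.build_opener(*handlers)
    opener.addheaders = [("User-Agent", USER_AGENT)]
    return opener

def sample_name(payload: bytes) -> str:
    """Content-addressed filename, so re-grabbing a URL dedupes for free."""
    return hashlib.sha256(payload).hexdigest()

def fetch(opener, url, seen):
    if url in seen:             # skip URLs we've already retrieved
        return None
    seen.add(url)
    try:
        payload = opener.open(url, timeout=30).read()
    except Exception as exc:    # log the failure and move on
        logging.warning("failed %s: %s", url, exc)
        return None
    logging.info("retrieved %s -> %s", url, sample_name(payload))
    return payload

def retrieve_all(urls):
    """Multithreaded retrieval of a batch of candidate malware URLs."""
    opener, seen = build_opener(), set()
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda u: fetch(opener, u, seen), urls))
```

Content-addressed filenames are the key design choice here: since malware distribution sites rotate URLs far faster than payloads, hashing the body rather than the URL collapses duplicates automatically.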
What a week: disclosures of compromises at the New York Times, Wall Street Journal, and Washington Post. A Java update released on a Friday evening, 18 days early, due to active exploitation. A Twitter compromise affecting 250k users, including me. I may have more to say about the Twitter compromise later.
I’ve assumed for some time that state-sponsored attackers have long targeted major media outlets, especially those who regularly report on national security issues. While we don’t need to start putting on tinfoil hats, the ill-fated Wikileaks partnership with the NYT should have provided a pretty obvious starting point for people to think about these issues. Even more obviously, at least to me, journalists have had to take OPSEC seriously for a very long time, whether due to drug cartels or US presidents unhappy with political and legal revelations. I wouldn’t characterize these incidents as an assault on our way of life, exactly, because the Fourth Estate has always had conflicts with power. We should become far more suspicious when governments don’t concern themselves with the press, because that says something about their relationships with it or, perhaps, their views of popular opinion.
Others have criticized the reporting and the completeness of the stories. For what it’s worth, as noted above, I certainly don’t think claiming that governments have tried to attack journalists really presents an extraordinary claim. And I have seen enough evidence first-hand to believe that Chinese-based actors actively exploit networks around the world. Combining the two, we know how the Chinese government regards free speech and a free press.
But if you want us to believe that this represents the greatest transfer of wealth in history and all the other hyperbole that surrounds discussion of “the APT” and “China” and “cyberwar”, you need to present evidence. Declassify it, make it public, show it to the American people. If you’re a news outlet dedicated to informing the public, give us the facts. When the government wants to make a case for war, it discusses specific incidents and presents intelligence. If we face such a great threat, don’t just assert the threat, prove it. (Note: I don’t actually expect any of this to happen.)
Whether the intelligence will amount to proof, however, remains to be seen.