Internet Flaw Highlights More Than Just Technical Problems

When Dan Kaminsky released a cryptic announcement that one of the core technologies tying the Internet together (DNS, the Domain Name System) was vulnerable to a critical weakness, it gained the attention of many people. Attention was heightened by the fact that the vendors responsible for the vulnerable software had come together to address the problem, and by Kaminsky's decision to withhold the details until early August, at the Black Hat conference in Las Vegas.

The secrecy could only ever be partial: if you don't want anyone else to work out a vulnerability for you, then you don't announce that you have found one. The lack of openness about the issue led many to start speculating, and eventually Halvar Flake hit upon the correct answer. Once Kaminsky himself had challenged others to look into the security of DNS and consider what might have been missed, that outcome was almost guaranteed. Indeed, since the vulnerability was correctly guessed, exploit code has been publicly released through a number of websites and mailing lists.

Since the vulnerability was correctly guessed, the general response has been one of panic. Those who have read and understood the technical details have largely been left scratching their heads - there is not really anything new there; it demonstrates a corner case of a previously known issue. Certainly it is an issue that should have been fixed properly the first time, but for whatever reason it wasn't.

What is more interesting is the vitriol that has emerged as people realise the information is out there. Some of the most serious claims have been levelled against the team at Matasano Chargen for being the ones who actually spilled the beans, as Halvar Flake had only speculated about the details. Pulling the post did more to make people sit up and take notice than leaving it in place would have, and the fact that Matasano had declared themselves part of the trusted few who had the details confirmed by Dan Kaminsky only further validated, for many people, what had been posted.

Part of the problem is that once data has been published on the Internet it is awfully hard to retract it completely, even if it has only been there for a couple of hours. Because the retracted post at Matasano Chargen promised technical details of the vulnerability, it was quickly snapped up by the lucky few who were able to see it and then reproduced on numerous other sites.

Information Security has egg on its face over this issue. It shows how immature the industry can be and how poor many people's skills are at managing the release and coordination of information. To his credit, Dan Kaminsky did find something that hadn't been fixed. Whether it is an old problem or not is irrelevant for the time being, as it affected a significant portion of the Internet's DNS servers and required a coordinated effort by vendors to do something about it.

The whole incident has left a sour taste in many mouths.

Is Black Hat or DefCon really the place to release the full details of a vulnerability? After the debacle surrounding David Maynor and Jon Ellch's Black Hat OS X wireless vulnerability demonstration in 2006, perhaps people who are looking to release sensitive vulnerability information with some flair should reconsider the pre-release media blitz. It runs a very high risk of turning what might be a valid issue into a circus and leaving all involved worse off for the experience.

Richard Bejtlich suggests that the incident might have been better handled if initial and full disclosure had been managed by an impartial third party, with the conference used for post-disclosure discussion and for the details of how the vulnerability was found. The problem then is finding someone who can be regarded as an impartial third party.

The open discussion that followed the initial announcement turned up a more serious problem, one that will continue to cause trouble for users long after most systems are updated to address the vulnerability. NAT, a very common technology that allows multiple systems to sit behind a single network connection, was not considered in the vulnerability equation. It was soon realised that the protection being rolled out (source port randomisation) breaks down when traffic passes through most NAT devices, which rewrite source ports and can undo the randomisation, leaving effectively zero protection against the vulnerability.
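A minimal sketch of why NAT can undo the fix, assuming a device that hands out external ports sequentially (a common behaviour in home and small-office gear of the era); the addresses and port numbers are invented for illustration:

```python
# Toy NAT that rewrites randomised source ports into sequential external ports.
# The sequential allocation policy is the assumption that undoes the fix.

import random

class SequentialNat:
    def __init__(self, start: int = 1024):
        self.next_port = start
        self.table = {}                     # (internal_ip, internal_port) -> external_port

    def translate(self, internal_ip: str, internal_port: int) -> int:
        key = (internal_ip, internal_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return self.table[key]

nat = SequentialNat()
resolver_ip = "10.0.0.53"

# The resolver dutifully randomises its source port for every query...
external_ports = [nat.translate(resolver_ip, random.randint(1024, 65535))
                  for _ in range(5)]

# ...but what the outside world (and the attacker) observes is predictable again.
print(external_ports)                       # e.g. [1024, 1025, 1026, 1027, 1028]
```

The entropy the resolver carefully adds is thrown away at the network boundary, which is why systems behind such devices gain little from the patch.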

The whole idea of responsible disclosure, most famously set out by Rain Forest Puppy, broke down in this case. Those who were not briefed on the details of the vulnerability felt that security by obscurity was the game plan, and watching how the incident played out in the media, and how those in the know were (mis)managing the information, only reinforced that view. As for those who did know the details, they saw withholding the information as a necessary step to prevent widespread attack before updated systems could be put in place. The problem was that this left everyone else having to guess at the severity of the vulnerability, or having to trust claims made by people who weren't releasing enough information to back them up.

The problem with the approach taken was that the carrot being dangled was too tempting for everyone to leave alone until Black Hat. When the details finally emerged, they didn't seem to make a lot of sense - surely the vulnerability wasn't as simple as that? With the way a number of people in the know had been talking, it sounded like the world was about to end.

So, what is the vulnerability?

Historically, it was possible to guess fairly quickly the transaction IDs used by DNS queries and responses, and so to insert fake responses that poison a DNS cache and point requests for legitimate sites to servers under an attacker's control. Improved random number generators (to increase the entropy of the IDs) and randomised source ports made this particular attack far more difficult to carry out (though not impossible).
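As a rough illustration of why the 16-bit transaction ID alone is weak protection, the following sketch (plain Python, with the spoofing rate and response window purely assumed figures) estimates an off-path attacker's chance of landing a matching forged response before the legitimate answer arrives:

```python
# Back-of-the-envelope estimate of blind DNS response spoofing.
# The spoof rate and response window are illustrative assumptions, not measurements.

TXID_SPACE = 2 ** 16          # 16-bit transaction ID
SPOOF_RATE = 50_000           # forged responses per second (assumed)
WINDOW = 0.1                  # seconds before the real answer arrives (assumed)

forged_per_query = SPOOF_RATE * WINDOW
p_hit_per_query = min(1.0, forged_per_query / TXID_SPACE)

# Probability of at least one successful poisoning across many triggered queries.
def p_success(queries: int) -> float:
    return 1 - (1 - p_hit_per_query) ** queries

for q in (1, 10, 100, 1000):
    print(f"{q:5d} queries -> success probability {p_success(q):.3f}")
```

Even with modest assumptions the attacker does not need many attempts, which is why the ID space alone was never considered sufficient.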

Within the structure of a DNS response it is possible for amplifying data to be returned about a domain - the authority and additional records, sometimes called glue - so that subsequent requests for that domain or its subdomains can be handled more efficiently, either by identifying the correct authoritative server to query or by supplying the data directly to the requesting system so that it doesn't need to poll the server again.

It is this particular feature that is the key to Kaminsky's discovery. While it should not be possible (poor implementations of the specification aside) for this amplifying data to change the details of unrelated domains, it is possible for it to change the details of the parent domain. This means that a poisoned response for poisoned.example.com can change the cached details for example.com.
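To make that concrete, here is a deliberately naive toy resolver cache in Python (all names, addresses, and the response format are invented) showing how a forged response for a throwaway subdomain can smuggle in new name-server data for the parent domain, as long as the resolver only checks that the extra records fall under the same zone as the question:

```python
# Toy model of a resolver cache accepting "in-bailiwick" extra records.
# Names, IPs, and the response format are invented for illustration only.

cache = {
    "example.com": {"NS": "ns1.example.com"},
    "ns1.example.com": {"A": "192.0.2.10"},          # legitimate name server
}

def in_bailiwick(record_name: str, question: str) -> bool:
    """Accept extra records only if they sit at or below the queried zone."""
    zone = ".".join(question.split(".")[-2:])        # crude: last two labels
    return record_name == zone or record_name.endswith("." + zone)

def process_response(question: str, answer: dict, extras: dict) -> None:
    cache[question] = answer
    for name, rdata in extras.items():
        if in_bailiwick(name, question):             # the only check performed
            cache[name] = rdata                      # overwrites parent data!

# Forged response for a random, never-used subdomain of example.com.
process_response(
    question="a1b2c3.example.com",
    answer={"A": "203.0.113.66"},
    extras={
        "example.com": {"NS": "ns1.example.com"},
        "ns1.example.com": {"A": "203.0.113.66"},    # attacker-controlled address
    },
)

print(cache["ns1.example.com"])   # now points at the attacker's server
```

A real resolver does considerably more than this, but the essential point - that records for the parent can ride in on a response for a disposable child name - is what the attack relies on.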

Without source port randomisation, it has been shown that the message ID randomisation can be overcome and a fake response injected that poisons the entry for the parent domain in around 10 seconds on a fast modern system. To achieve this, numerous requests are made for fake subdomains until the right combination of ID and timing is found to slip the forged response in. Adding randomisation to the source ports used in making the requests gives the attacker another layer of complexity to overcome, one which is enough for the moment.
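A rough sketch of why port randomisation helps, in the same back-of-the-envelope style as above (the usable port-pool size and packet rate are assumptions, not measurements):

```python
# Rough comparison of the attacker's search space with and without
# source port randomisation. Pool size and packet rate are assumptions.

TXIDS = 2 ** 16                 # possible transaction IDs
PORTS = 60_000                  # roughly the usable ephemeral port range (assumed)
SPOOF_RATE = 50_000             # forged responses per second (assumed)

def expected_seconds(search_space: int, rate: int) -> float:
    """Expected time to hit the right combination by blind guessing."""
    return (search_space / 2) / rate

print("TXID only:         ", expected_seconds(TXIDS, SPOOF_RATE), "seconds")
print("TXID + source port:", expected_seconds(TXIDS * PORTS, SPOOF_RATE) / 86400, "days")
```

The point is not the absolute figures but the multiplier: tens of thousands of times more combinations to guess blindly, which is why the fix is considered good enough for now rather than forever.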

Is it a band-aid type solution? Only time will tell, but it might prove good enough for the next few years at least. Perhaps a better solution would be for every domain to include a wildcard subdomain entry that identifies the legitimate main server as authoritative for all subdomains of that domain. Sending this wildcard information in the DNS response would increase network traffic, but it would also largely neutralise a spoofing attack (unless the attacker is lucky enough to hit the right combination of ID, timing, and source port and beat the legitimate response to the end user). It might break some business models that rely on selling or marketing subdomains, and it would mean more authoritative DNS servers need to be set up, but that may be what is necessary to neutralise the vulnerability completely.
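One reading of that proposal, continuing the toy-cache style from above (again with invented names), is that a cached wildcard answer means queries for throwaway subdomains never leave the resolver, so there is no upstream query for an attacker to race:

```python
# Toy illustration of the wildcard idea: if the resolver already holds a
# wildcard answer for the zone, queries for throwaway subdomains are served
# from cache and no spoofable upstream query is ever generated.
# Names and addresses are invented for illustration.

cache = {
    "*.example.com": {"A": "192.0.2.10"},     # hypothetical cached wildcard entry
    "example.com": {"A": "192.0.2.10"},
}

def resolve(name: str):
    if name in cache:
        return cache[name], "cache"
    wildcard = "*." + name.split(".", 1)[1]
    if wildcard in cache:
        return cache[wildcard], "cache (wildcard)"
    return None, "upstream query needed"      # this is the attacker's window

print(resolve("a1b2c3.example.com"))          # answered locally, nothing to race
print(resolve("victim.another-example.org"))  # would still require an upstream query
```

Whether real deployments could be pushed in this direction is another question, but it shows why the extra traffic might be considered a price worth paying.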

At the end of the day it still only appears to be domain-specific poisoning; that is, you can't poison results for a domain that requests aren't already being made for (poisoning the result for google.com while making requests for yahoo.com, for example). But with the various IFRAME and JavaScript tricks that exist, it isn't hard to trigger those requests transparently, so that the user never knows their browser has been looking up the target site - and by that stage it is too late and their system is compromised. With exploit code readily available, this is going to become a real problem for many people in a short period of time.

24 July 2008
