DNS devastation: Top websites whacked offline as Dyn dies again

An extraordinary, focused attack on DNS provider Dyn continues to disrupt internet services for hundreds of companies, including online giants Twitter, Amazon, AirBnB, Spotify and others. The worldwide assault started at approximately 11am UTC on Friday. It was a massive denial-of-service blast that knocked Dyn's DNS anycast …

  1. Barry Rueger

    Inevitable

    Arguably this sort of "bring the Internet to its knees" attack was pretty much inevitable.

    And arguably that has been the case for most of a decade.

    Right now a lot of companies must be quaking in their boots wondering what's next.

    1. raving angry loony

      Re: Inevitable

      Or salivating at the profits to be had from marketing "protection" to those who feel vulnerable. The whole thing smells of a protection racket in the making.

    2. John Smith 19 Gold badge
      Unhappy

      "Right now a lot of companies must be quaking in their boots wondering what's next."

      Certainly those who understood what just happened.

      While the suppliers of the IoT products that enabled it should be ashamed.

    3. jobst

      Re: Inevitable - but not because of DNS

      It seems a lot of people are talking about DNS at the moment ... but we must not forget who really is to blame - it's the stupid security programmed into many IoT devices because of greed. The cause of this problem is not the inherent problems of the DNS system(s) but the stupidity of device manufacturers like Dlink, Netgear, Avtech and so on.

    4. Anonymous Coward
      Anonymous Coward

      Re: Inevitable

      Not as inevitable as someone shouting "Blockchain - that's the answer!"

  2. Anonymous Coward
    Anonymous Coward

    not the only one

    noip appeared to be offline at times on Wednesday.

  3. Anonymous Coward
    Anonymous Coward

    I guess commercial considerations have replaced the "internet routes round damage" idea.

    1. Yes Me Silver badge
      Unhappy

      "routes round damage" idea.

      Routing works just fine during a DNS outage; the problem is that you can't find the addresses that you want to route to. DDoS against DNS authoritative servers has always been scary. If only every ISP supported ingress filtering, as they're supposed to, tracking down and killing DDoS bots would be that much easier.
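
      For the curious: ingress filtering (BCP 38) just means an ISP dropping a customer's packets whose source addresses don't belong to that customer, which is what stops source-address spoofing at the edge. A minimal sketch of the check in Python - the prefix is an illustrative example, not anyone's real allocation:

          import ipaddress

          # Prefixes actually delegated to this customer port (example values)
          ALLOCATED = [ipaddress.ip_network("203.0.113.0/24")]

          def permit_source(src_ip: str) -> bool:
              """BCP 38 / ingress filtering: forward a packet only if its
              source address falls within the customer's own prefixes."""
              addr = ipaddress.ip_address(src_ip)
              return any(addr in net for net in ALLOCATED)

          print(permit_source("203.0.113.7"))   # True  - legitimate source
          print(permit_source("198.51.100.9"))  # False - spoofed, drop it

      Real deployments do this in router hardware via ACLs or uRPF rather than in software, but the logic is the same.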

    2. ecofeco Silver badge

      Actually it did. Many sites were affected but not all. I could easily reach most websites I use during the day.

      It worked exactly like it should.

    3. Frank Oz

      Well, yeah ...

      Bottom line is that it's the current DNS process, which is in place to ensure that the domain name issuers get their little piece of the pie, which makes attacks like this possible. In the good old days the ENTIRE DNS data files/database was mirrored on a number of servers around the planet ... so a DDoS would have to hit them all to cause things to fall over this badly.

      These days, all the hacker has to do is find out which commercial domain name provider provided which mega-huge internet presence with its domain names ... and just hit that server. If anything, this attack should result in a number of Dyn clients going elsewhere (presumably to a less visible DNS provider).

      What should happen is that ICANN points out that the current DNS verification and validation processes (which are only in place to protect the IP of the DNS provider) actually make it easier for the Ungodly ... and that perhaps total replication of the database across multiple providers and locations might be a good idea to revert to.

      But that's unlikely - because nowadays ICANN represents the DNS providers.

      1. John Brown (no body) Silver badge

        "In the good old days the ENTIRE DNS data files/database was mirrored on a number of servers around the planet ... so a DOOS would have to hit them all to cause things to fall over this badly."

        I was wondering the same thing. Where are the master root servers these days? I take it they either no longer exist or people like Dyn don't bother with them.

        1. Danny 14

          Most companies run their own DNS caching as part of their proxies (I imagine; we do, to cut down on requests) so this will hit the general public more. In fact I only noticed the issues when I switched from wi-fi to data on my phone.

    4. Steven Jones

      It's not a routing issue...

      The Internet does route around damage, but this isn't an attack on routing. It's an attack on a network service. That's rather a different thing.

      However, it's certainly true that far too little effort has been put into fundamentally hardening network services of all sorts against these sorts of attacks. Unfortunately far too many Internet protocols and services are built around assumptions of good behaviour.

      1. Anonymous Coward
        Anonymous Coward

        Re: It's not a routing issue...

        "Routing" has many more meanings than "IP routing". The Internet was designed as a distributed systems - kill a node, it would still keep on working. Now we're turning it into a big centralized system where a few big data centers, the grandsons of mainframes, hold everything. And when they become the easy target of huge distributes attacks like this, they are kaputt... the old saying "when you put all your eggs in one basket...".

        If all those DNS records were widely distributed, good luck taking down all of them.

  4. Dave Pickles

    DNS wouldn't be so vulnerable if folks set really long TTLs on their entries and didn't use DNS for load-balancing - caches around the internet could then ride out any feasible attack.

    1. ParasiteParty
      Facepalm

      I wish this was true...

      Many ISPs - BT to name one, but probably most - simply do not respect published TTLs.

      We've seen issues where we set low TTLs during a site migration but the ISPs simply don't go back for another lookup. You end up having to fanny about with DNS providers until enough time has passed for the provider's DNS to re-do the lookup in its own time.

    2. efinlay

      Except...

      ...with really long TTLs, how do you manage regional or global load balancing? Failovers? Switching records in general? Migrations?

      (admittedly, that last one is generally less frequent)

      Honest question - not snarky, I'm curious.

      1. Anonymous Coward
        Holmes

        Re: Except...

        Good question, that commentard: "with really long TTLs, how do you manage regional or global load balancing".

        https://en.wikipedia.org/wiki/Anycast - Anycast addresses. This is what Google's 8.8.8.8 and 8.8.4.4 and OpenDNS 208.67.222.222 and 208.67.220.220 use.

        Cheers

        Jon

        I suspect I've answered your question as posted but probably not what you intended or the full story.

        1. Nate Amsden

          Re: Except...

          Anycast only covers a subset of load balancing needs (a small subset at that).

        2. patrickstar

          Re: Except...

          Anycast is not suitable for TCP since TCP is dependent on all packets of a connection going to the same place. It might very well work for specific applications and short-lived connections but it's definitely not something you want to deploy on a website that's supposed to be reachable by 100% of all users.

          You can even encounter scenarios where, for some subset of users, it works very poorly if at all - equal cost multipath for example, where every other packet ends up at a different anycast instance.

        3. TeeCee Gold badge
          Facepalm

          Re: Except...

          Except that it isn't a good question, as the original comment did specify that a prerequisite would be that people didn't use DNS for load-balancing.

          People are using something for a purpose it wasn't designed for and have found out that it's not really suitable? Who Could Possibly Have Seen That Coming?

      2. Alister

        Re: Except...

        ...with really long TTLs, how do you manage regional or global load balancing? Failovers? Switching records in general? Migrations?

        You can do it by not using DNS to switch between sites. Instead you have one or more load balancers with fixed IPs which you point the DNS at, and then redirect the traffic to the sites and servers as you want.

        We do DR failover this way, as well as load balancing and migrations between hosting environments.

        There is a slightly increased latency, obviously, but not enough to impact normal traffic.
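
        A rough sketch of that pattern, with placeholder backend addresses: a fixed-IP TCP proxy that tries the primary site first and fails over to the DR site when it can't connect, so the public DNS record never needs to change. A real load balancer adds health checks, draining and TLS on top.

            import socket
            import threading

            # Placeholder backends: primary site first, DR site second
            BACKENDS = [("203.0.113.10", 80), ("198.51.100.10", 80)]

            def pipe(src, dst):
                # Shuttle bytes one way until the sender closes
                try:
                    while (data := src.recv(4096)):
                        dst.sendall(data)
                except OSError:
                    pass
                finally:
                    src.close()
                    dst.close()

            def handle(client):
                for host, port in BACKENDS:        # try primary, then fail over
                    try:
                        upstream = socket.create_connection((host, port), timeout=2)
                        break
                    except OSError:
                        continue
                else:
                    client.close()                 # every backend is down
                    return
                threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
                threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

            with socket.create_server(("0.0.0.0", 8080)) as server:
                while True:
                    conn, _ = server.accept()
                    threading.Thread(target=handle, args=(conn,), daemon=True).start()

        DNS points at the proxy's fixed IP with a long TTL; the failover happens behind it, invisible to resolvers.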

    3. streaky
      Boffin

      TTL really has nothing to do with it. Sites would go offline under sustained attack sooner or later.

      The main issue here is that these large companies are doing DNS wrong on a more fundamental level. We learned years ago that people were attacking DNS providers and this could be leveraged to take out fundamental infrastructure and sites of all sorts of sizes. The fix is obvious and it's something I've recently pointed out to github:

      If you're an attack target do not just use a single DNS provider. Use 2.

      If you do that it's much easier to not be caught in the crossfire. It's also much more difficult for adversaries to take you out via DNS - they have to take out two entirely separate networks to achieve that, requiring double the attack assets.

      The internet was designed very insecurely, but it was built in a way that makes it easy to mitigate attacks like the one today, and everybody running DNS services at the companies that were taken out looks like a complete clown in retrospect. It's like the people who expect AWS zones to be up 100% of the time despite them not being designed to be survivable, and despite Amazon giving people the tools to avoid depending on that.

      Also, FWIW, using anycast to balance large sites is a really bad idea. If anycast were a solution to the problem we wouldn't be sitting here talking about anycasted DNS providers being taken out.
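
      Whether a zone follows that two-provider advice is visible in its NS records. A rough check, assuming the dnspython package and using the registrable domain of each nameserver as a crude stand-in for "provider":

        import dns.resolver  # pip install dnspython

        def ns_providers(zone: str) -> set:
            """Group a zone's NS records by the last two labels of each
            nameserver name - a crude stand-in for 'DNS provider'."""
            answer = dns.resolver.resolve(zone, "NS")
            return {".".join(rr.target.to_text().rstrip(".").split(".")[-2:])
                    for rr in answer}

        providers = ns_providers("example.com")
        print(providers)
        if len(providers) < 2:
            print("Single provider: one sustained DDoS can take the whole zone dark")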

      1. bazza Silver badge

        @Streaky,

        "The internet was designed very insecurely"

        Security wasn't a consideration at all in those days.

        Fundamentally everything we have security-wise is a bodge. No matter what security mechanism one contrives, it always boils down to the following: machines are hopeless at identifying people.

        1. streaky

          The security thing wasn't really a complaint, just a fact of life. We can rebuild it, we have the technology - though I wasn't really arguing for that. I wouldn't mind burning UDP to the ground, though, but that's an entirely separate subject.

          TCP is dependent on all packets of a connection going to the same place

          TCP anycast is a thing (indeed it's how a lot of HTTP DDoS protection works). Doesn't mean it's a sensible use of resources when your DNS provider can do useful things for you; it's all cost/benefit - DNS providers are cheap, anycasted HTTP isn't. As I said, it's not really a solution to the problem; not relying on a single provider's servers, in case they get hit or just plain go down, is.

          1. patrickstar

            Yes, it can be done, but it's not a general solution the way DNS anycast is.

            When anycasting DNS, you can just plop down anycast instances in whatever locations will host you, with no special routing policies in place, no prepending/community games, and having it exported as widely as possible.

        2. Anonymous Coward
          Anonymous Coward

          ...and how much it costs to implement offset against profit and how much the PHB will earn for his/her yearly bonus.

      2. Anonymous Coward
        Anonymous Coward

        > If you're an attack target do not just use a single DNS provider. Use 2.

        If *you* are an attack target, it is *your* infrastructure that is going to be targeted, not the DNS providers (they may throw that into the deal as well, but expect your own infrastructure to become suddenly popular with IP cameras and stuff).

        1. streaky
          Coffee/keyboard

          it is *your* infrastructure that is going to be targeted, not the DNS providers

          Yeah, but TCP attacks the average toddler can deal with: they're blatant, it's easy to identify their source, and they can be mitigated quite quickly. UDP attacks against DNS infrastructure are very difficult to deal with, which is why they're popular for taking out large targets - and regardless of that, "you" as the target can mean you're one of many large US sites and the attacker would be happy to take you out as collateral.

        2. Doctor Syntax Silver badge

          "If *you* are an attack target, it is *your* infrastructure that is going to be targeted,"

          For some values of "you". If "you" means the US internet business community then DNS is part of that infrastructure and, from what's happened, appears to be a single point of failure for quite a large portion of "you".

      3. leenex

        It's mainly an attack on port 53, right?

        What if DNS servers were able to agree on a different, free port as part of the protocol? There would be no telling which port would be used, and an attacker would have to scan 65535 ports to find it, right?

        Any client scanning all ports would be easy to identify.

        (This was a brain fart from someone who can hardly configure a Cisco router.)

        1. streaky

          It's not a question of scanning or whatever, the attack was against a "shared" DNS provider where all these sites were using common infrastructure. It's not as if you can "hide" your DNS servers, because resolvers have to be able to find them, so they have to be pointed at from the parent zone servers.

  5. Pen-y-gors

    Sort out priorities...

    I suspect so long as El Reg and a few other specialist interest sites are unaffected then this readership won't be too worried.

  6. Anonymous Coward
    Anonymous Coward

    The outage is actually doing a fab job at functioning as an ad-blocker.

    1. ecofeco Silver badge

      I noticed that as well.

  7. John Savard

    Helpful Article

    Hopefully, all DNS sites will start caching; I wish my computer would cache the IP address of sites I visit so that I wouldn't even notice a DNS failure - it could even warn me if an IP address changes, to help prevent IP spoofing.

    Anyways, I switched my DNS to that given in your article, and I could connect to the RuneScape servers once more! Quite saved my morning. Also I was reading old issues of U&lc, and that too was restored.
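
    Nothing stops you from hacking that together yourself today. A toy sketch of a local pin cache that remembers each answer, warns when an address changes, and falls back to the last known address when resolution fails - the cache path is made up:

      import json
      import pathlib
      import socket

      CACHE = pathlib.Path.home() / ".dns_pin_cache.json"   # hypothetical location

      def lookup_pinned(host: str) -> str:
          """Resolve a hostname, remember the answer, warn when it changes."""
          pins = json.loads(CACHE.read_text()) if CACHE.exists() else {}
          try:
              addr = socket.gethostbyname(host)
          except socket.gaierror:
              if host in pins:
                  print(f"DNS failed for {host}; using pinned {pins[host]}")
                  return pins[host]     # ride out the outage on the old answer
              raise
          if host in pins and pins[host] != addr:
              print(f"WARNING: {host} moved from {pins[host]} to {addr}")
          pins[host] = addr
          CACHE.write_text(json.dumps(pins))
          return addr

      print(lookup_pinned("example.com"))

    Fair warning: sites that round-robin across many addresses would trip the "changed" warning constantly.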

    1. hmv

      Re: Helpful Article

      DNS already does caching, but those who run amazon.com and github.com have chosen to minimise the caching time to make it easier to switch things around, disregarding the usefulness of caching.

      1. 404
        Facepalm

        Re: Helpful Article

        Ayup... mid to late 90's with NT DNS(!) servers, then later with Solaris x86 Bind DNS servers, when setting up new websites, you always had to stop Internet Exploder, close Netscape, restart your DNS client, just to check to see if the scripts took...

        I'm sorry, I don't know where I was going with this -> wife pops in with 'Oh look, a portable ski lodge!'... thought evaporated.

      2. Anonymous Coward
        Childcatcher

        Re: Helpful Article

        "DNS already does caching"

        Not really - each record has a Time To Live (TTL) in seconds. Your DNS server should honour that but it can go a bit mad when people ignore the standards to fix things.

        For example I've just looked up github.com via 2001:4860:4860::8888 (Google public DNS - IPv6) four times in quick succession and got the following TTLs: 117, 160, 144 and 18. I then looked up the NS records (AWS, four NS records) and looked them up there - now the A records round-robin between two IP addresses and with a TTL of 300.

        The world can be a nasty place

        1. patrickstar

          Re: Helpful Article

          Completely normal, expected, and perfectly TTL-obeying behavior.

          DNS resolvers don't reply with the TTL as originally specified in the response from the authoritative DNS server (i.e. what the guy who set up the domain specified) - they respond with the time remaining until the record expires from their cache.

          Your 4 queries ended up at 4 different servers. Google's DNS servers exist at multiple locations with the same IP address - this is what's known as anycast instances. Each anycast instance then consists of multiple actual servers behind a load balancer. The load balancer just picked a server at random for each of your queries. Nothing mystical about any of this - everyone big does the same thing in almost exactly the same way, including the root servers.

          These 4 servers had cached the record (i.e. received a query for it when it wasn't cached and subsequently looked it up and stored the result in the cache) at different times. Hence different times remaining until expiration in their caches.
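
          Easy to see for yourself with a few lines of Python (assuming the dnspython package; github.com is just the example from the post above):

            import dns.resolver  # pip install dnspython

            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = ["8.8.8.8"]       # anycast: Google public DNS

            for _ in range(4):
                answer = resolver.resolve("github.com", "A")
                # A cache returns the time REMAINING, not the original TTL,
                # and each query may land on a different server behind the
                # anycast address, so these numbers jump around rather than
                # counting down neatly.
                print([r.to_text() for r in answer], "TTL:", answer.rrset.ttl)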

    2. Gary Bickford

      Re: Helpful Article

      > Hopefully, all DNS sites will start caching; I wish my computer would cache the IP address of sites I visit so that I wouldn't even notice a DNS failure - it could even warn me if an IP address changes, to help prevent IP spoofing.

      I have a local DNS server running in cache mode on all my computers - desktops and servers. These are all Linux machines. IDK if Windows has that capability, but I think the default configuration for Ubuntu is to run bind as a caching DNS server if it is turned on. So my net config uses 127.0.0.1 as the DNS source, and my bind configuration uses 8.8.8.8 plus another one.

      One additional benefit is that when I'm on a cable connection this bypasses the cable company's default DNS that it sets up in my cable modem's DHCP config, which they use for various nefarious purposes such as inserting their own ads in websites, selling my traffic info, and "fixing" domain name typos by routing to their own advertising sites. I've seen all of those tricks at various times when visiting people who use Comcast or Optimum.

  8. Scott 26
    Trollface

    No irony at all...

    ... at an article about a DDoS attack that has Twitter affected, with a Twitter screenshot in it....

  9. ma1010
    Mushroom

    ENOUGH!

    You know, this is really enough of this crap. What's the point? It's like the assbags that went out and wrote viruses and sent them around to screw up computers of people they didn't even know. What was that for? And now why try to bugger up the whole Internet?

    I'm not smart enough to figure out a solution (and there may not be one), but it seems to me that something should be possible.

    Technically, we need some geniuses to figure out a way to trace this crap back to its source. Politically, we need international treaties which provide that anyone who screws up the Internet, regardless of where they are, will be arrested and tried for it. Once found guilty, give them a nice, LONG prison sentence. And maybe a permanent, non-dischargeable judgment (for many millions) that follows them around for the rest of their life to make sure they're pauperized to the point they can't AFFORD a computer.

    1. Mark 85

      Re: ENOUGH!

      You raise a good point with: "What's the point?". Perhaps a live fire test of the botnets? A warning? Not sure from here.

      With the IoT crap getting whipped into botnets, this could be a harbinger: "Remember this? Well, pay us or you're next."

      Or a state actor group flexing its muscles as a warning....?

      I just don't think it's being done for fun.

      1. Dan 55 Silver badge
        Black Helicopters

        Re: ENOUGH!

        Or a state actor group flexing its muscles as a warning....?

        Just after the big important IoT meeting in the US...

    2. Martin Summers Silver badge

      Re: ENOUGH!

      Have you never watched films? There's always some evil dude who wants to destroy the planet in some way. It's an ego thing, nothing rational.

      1. Martin-73 Silver badge

        Re: ENOUGH!

        The chap should be easy enough to track down. He'll be covered in long white Persian cat hair.

        1. CrazyOldCatMan Silver badge

          Re: ENOUGH!

          The chap should be easy enough to track down. He'll be covered in long white Persian cat hair.

          Phew! The only cats I have with white hair are all short-hairs. Maybe I'm only a little bit evil?

      2. Haku
        Coat

        Re: ENOUGH!

        "Have you never watched films?"

        I've heard of them. Doesn't Samuel L Jackson always play the black guy in those?

    3. Anonymous Coward
      Anonymous Coward

      Re: ENOUGH!

      Guess eventually the major internet companies or even government agencies will get proactive and start releasing bot-killers targeting vulnerable devices.

      1. bobbear

        Re: ENOUGH!

        ISPs could do a lot more to target bots. I've reported bots innumerable times with clear, comprehensive evidential data and had the reports totally ignored by big-name ISPs that were utterly clueless and didn't want to know, so now I don't even bother... Time for large penalties for hosting bots, methinks.

    4. Anonymous Coward
      Anonymous Coward

      Re: ENOUGH!

      Or we could simply disconnect every AS that doesn't do egress filtering.

      Job done. DDoS fixed.

      1. xehpuk

        Re: ENOUGH!

        If ISPs had proper egress filtering, DDoS should be possible to handle.

        Any site should be able to send a message to the net saying "block all traffic from that IP to me". The ISP closest to the offending node then applies this filtering for an hour or so.

        Even if attacked by a million nodes, sending out a million block messages should be doable.

        1. CrazyOldCatMan Silver badge
          Stop

          Re: ENOUGH!

          Any site should be able to send a message to the net saying block all traffic from that IP to me

          And thus, a million teeny-tiny DDoS methods were born..

      2. patrickstar

        Re: ENOUGH!

        Doing strict uRPF everywhere is simply not possible. Asymmetric routing is far too common.

        And there are perfectly legitimate, if somewhat odd, scenarios where you need the ability to send packets from locations different from where incoming traffic would be routed. Certain load balancer / geographically distributed setups, for example.

        Was this attack even spoofed, by the way? Doesn't really sound like it from the reasonably specific host counts. And a lot of these hosts are probably on various lower-end connections, which is where spoof protection actually is frequently implemented. I remember some DDoS tools actually used to test each host for spoofability on installation and automagically have them do the "right" thing spoof-wise (fully spoofed sources / addresses from the subnet only / only real address).

        If you are approaching a terabit of traffic it likely won't help defense much that it's totally unspoofed. At least not if you can't get long source-address ACLs inserted far upstream.

        The only thing spoof protection truly breaks is amplifier attacks, but for that all you need is one or a few originating hosts and that you'll always be able to find.

    5. Steven Jones

      Re: ENOUGH!

      One big problem is that it's often extremely difficult to trace the originators of a distributed DoS attack mounted through compromised devices controlled by heavily disguised control systems which, themselves, can go through compromised devices. Often this can be triggered by anybody, anywhere, using any old public network. Even when the controlling source (or the source of the compromising agent) can be identified, these are often residents of countries where the rule of western law doesn't hold, or even regimes where this sort of activity serves a purpose of the state (or of agencies in that state not under full control).

      It might be that some really draconian action will be required on ISPs and network operators to manage the security on their devices. I can conceive of ISPs and network operators being compelled to police their own user base for illicit traffic on pain of having some of their service access cut off, which means, by implication, they have to police their users the same way.

      Perhaps also some penalties for manufacturers and suppliers of devices that can be compromised which don't fix security holes. This is one huge issue for the "Internet of Everything".

      Ultimately, the whole infrastructure needs to be hardened, and especially core services, such that they are far more difficult to attack in this way.

      1. Anonymous Coward
        Anonymous Coward

        Re: ENOUGH!

        Yeah. Law enforcement cannot stop this. "Cyberwar" counterattacks won't work either.

        "Draconian self-policing" (throttling/disconnecting infected downstream users) won't work against botnets whose DDoS traffic is effectively indistinguishable from legitimate traffic. End users won't disconnect infected devices that appear to be functioning normally. Government "cybersecurity" regulations will be misguided and ineffectual. Nothing will be done until the internet is unusable.

        What can be done is: 1) cutting back on unnecessary technology, integration, services, features, etc.; 2) keys instead of passwords; 3) standard binary data formats that are less susceptible to serialization attacks than oddball/proprietary formats and the "web soup" of text formats embedded in one another; 4) not just open source, but simple and understandable open systems all the way down to the transistor level.

      2. John Brown (no body) Silver badge

        Re: ENOUGH!

        "It might be that some really draconian action will be required on ISPs and network operators to manage the security on their devices. A can conceive of ISPs and network operators being compelled to police their own user base for illicit traffic on pain of having some of their service access cut off which means, by implication, they have to police their users the same way."

        I agree. And there's already precedent with email blacklists occasionally blocking a whole ISP for allowing outgoing spam. I can easily see interconnect companies and back-haul providers being the "police" in something like this. What about attacks against the various internet exchange hubs? Easy. Shut down connections with the ISPs with the top 3 or top 5 number of attack sources and tell them to sort it or find another interconnect.

        Yeah, I know it's not really that simple, but with all this talk of "big data", security services' "black boxes" on ISP networks, ISPs' own monitoring and record-keeping of users' data held for later analysis, "cloud" processing etc, you'd think it should be a piece of piss to track and block all this shite. Isn't this why all the data is being collected?

      3. Doctor Syntax Silver badge

        Re: ENOUGH!

        "ISPs and network operators being compelled to police their own user base for illicit traffic on pain of having some of their service access cut off which means, by implication, they have to police their users the same way."

        If a large enough number of devices are involved the illicit traffic from any one device might not be easily discoverable. A better variation would be policing their user base for vulnerable internet-exposed devices. Where the device is an ISP-supplied router this would have the immediate effect of requiring the ISPs to be more careful in deciding what kit they supply.

        1. Anonymous Coward
          Anonymous Coward

          Beatings will continue until morale improves

          Great. We'll put Doctor Syntax in charge of vetting routers for the ISPs. If this happens again, we'll put him in jail. ;)

          Seriously, there's no point in punishing people for systemic problems dating back 25-50 years. Nobody's blameless, everyone's in over their head. The system in question is literally the sum total of every living software project that grew from a working prototype to a big ball of mud. BBOM^N.

    6. PrivateCitizen

      Re: ENOUGH!

      "Politically, we need international treaties which provide that anyone who screws up the Internet, regardless of where they are, will be arrested and tried for it. "

      Does this include the countless people / businesses / etc who cut every possible corner to produce cheap IoT-style gadgets because they don't really give a toss about how they could be misused?

      Yes, the skids who launched this attack need to be identified and punished, but then so do the people who fundamentally fucked everything up so much that a bored kid can take down the internet.

      1. Doctor Syntax Silver badge

        Re: ENOUGH!

        "Does this include the countless people / businesses / etc who cut every possible corner to produce cheap IoT style gadgets because they dont really give a toss about how they could be misused?"

        Yes. With extreme prejudice.

    7. Gary Bickford

      Re: ENOUGH!

      > I'm not smart enough to figure out a solution (and there may not be one), but it seems to me that something should be possible.

      What I'd _like_ to suggest, though it's actually a bad idea, is that when one of these hijacked devices is identified, the victim server be allowed to route back to the offending device and reset it, erasing the bogus code and setting a new random password. Then the device would still run, but the owner would be locked out of the admin interface until they reset to factory specs again (and hopefully set the user/pass to something different). Needless to say, this is a bad idea.

      But class-action litigation liability forcing a recall, and/or legislation requiring every device to have a different factory-reset password and to default to disallowing admin access from the WAN side, would solve most of these problems. And I suspect you will see ISPs / cable providers taking an active role and blocking devices that they determine are susceptible. They could do this with a quick login test when a device is first seen by their routers, by detecting the device type and trying the default login. If it works, they block traffic from that device (or port, if on a local NAT setup).

  10. Anonymous Coward
    Mushroom

    IOT FTW

    This isn't quite the glorious IoT armageddon that's been prophesied, but current trends will get us there in no time. It's already throttling Twitter and a bunch of 3rd-party web widgets. Excellent.

    Looking forward to The Day After -->

    1. JeffyPoooh
      Pint

      IOSDC

      Wait until we have the promised Internet of Self-Driving Cars (IOSDC); the ones with all of the magic features which are communications enabled.

      I'm sure that all these promises will come to fruition very soon...

      ...right after the Internet is fully secure.

  11. Anonymous Coward
    Anonymous Coward

    Sigh...

    David Gibson, VP of strategy at Varonis,

    That should be "at" something else. Sounds similar, but more apt. The guy is clueless.

    There are multiple issues here, but if you look at the outage maps you will see that the most affected part is the USA, and especially the Eastern Seaboard.

    The reason for the size of the outage is a combination of Dyn not being a Carrier in its own right and the way USA Internet works. USA Internet is built around a small set of private peerings between large Carriers and there is no public peering as such. If you are providing a "service" like Dyn you cannot peer - you have to buy transit. This limits your upstream diversity to a couple of links. Whack any one of them with a DoS and you go off the radar for a significant portion of the USA Internet. The reason it gets really ugly is that USA providers often have some seriously "funky" routing policies where they override BGP to force traffic not to leave their network. In normal days - fine. If you have your upstream links going up/down because you are being DoS-ed - not so much.

    Compared to that, a Carrier-run DNS can be geographically distributed and located next to peering points, resulting in significant levels of resilience. Similarly, in Europe peering rules are often waived for companies specializing in DNS, and some DNS services are run by peering points themselves, resulting in much higher resilience. There are also a number of decades-old methods of providing load distribution and geographic resilience which work very well if you are a Carrier. If you are just a service provider with limited upstream diversity - not so much.

    As for OpenDNS openly violating DNS semantics and disregarding the TTL reported by source servers if it cannot reach them - no thanks. Protocol specs are designed in a specific way for a reason and if you do stupid sh*** like that all kinds of retarded things can happen.

    1. Dan 55 Silver badge

      Re: Sigh...

      Sorry, but wasn't the stupid shit the DDoS?

    2. patrickstar

      Re: Sigh...

      You are fundamentally wrong. Lots of service/content providers peer. A lot.

      See CloudFlare for an example. Or Akamai. Or Google. Or even entities that basically do only one thing, like Facebook and Netflix.

      Peering is based on mutual benefit, and obviously there often is a benefit for ISPs to have content their users access (or DNS queries for them) go through peering instead of transit.

      1. Voland's right hand Silver badge

        Re: Sigh...

        You are fundamentally wrong.

        Tell that to any one of the USA Carrier oligopoly. They will tell you to contact their sales department to mutually benefit from your money as a paying customer.

        You can do that argument over here, on this side of the pond. That is how the Internet works in Europe. In the USA - not so much.

        So unless you are a part of the merry oligopoly gang anycast will not help you much. Sure - you have HA and resilience, but not upstream link diversity. So once your links start to be hit by 600G+ you are out of the game there and then.

        1. patrickstar

          Re: Sigh...

          While you are correct about Tier1s' peering policy, you aren't correct about the implications.

          The solution is simply to peer with a lot of networks from "Tier2" and down.

          Not a lot of networks have a Tier1 as their only upstream anyway - Tier1s tend not to provide good connectivity for a specific region; you want a national network for that.

          Basically, you only have Tier1s as your primary transit when you have a lot of peering of your own. This is what the whole "Tier" model implies - peering with networks on the same "Tier" and buying transit from those on a lower one.

          The role of Tier1s is basically to provide international connectivity.

          When doing anycast you ideally don't want your traffic hitting the network of a Tier1 ever (or at least not be transported far by it) - that means you have some geographic region not covered by the anycast.

          By the way, OVH has successfully weathered twice that and they are just a hosting provider.

          CloudFlare certainly has weathered comparable attacks as well.

    3. P.Brimacombe

      Re: Sigh...

      "Compared to the situation in the US, a Carrier run DNS can be geographically distributed and located next to peering points resulting in significant levels of resilience. "

      Intuitively it makes sense. I would like to understand this better.

      1. patrickstar

        Re: Sigh...

        A Tier1-run DNS service (i.e. using their AS with their peering policies) would actually be WORSE - precisely because Tier1s don't peer except with other Tier1s.

        Essentially:

        Tier1's don't have end users (well, not a lot of them at least). As a DNS service, you want the best paths possible to end users. Therefore, you don't want a Tier1 to be the DNS service.

  12. wolfetone Silver badge
    Paris Hilton

    Totally thought The S*n had taken down Twitter to shut Gary Lineker up over the whole child immigrant thing.

  13. Anonymous Coward
    Anonymous Coward

    We were warned about this sort of thing just last month...

    https://www.schneier.com/blog/archives/2016/09/someone_is_lear.html

  14. Martin Summers Silver badge

    I dealt with my first DDoS on servers I look after on Tuesday. Seems every man and his dog are having a go at it. Thankfully the skiddie had compromised, and was using, a nice neat block of IPs from one ISP, which made it rather easy to block.

  15. Nate Amsden

    As an enterprise Dyn customer for 7 years

    This is the first DDoS they've had real trouble mitigating. All past DDoS attacks on them never registered on my monitoring since they handle them so well. Obviously today is different.

  16. ItsNotATrap

    Bloody IoT

    I'm so mad at my fridge-freezer right now...

    1. ecofeco Silver badge

      Re: Bloody IoT

      But not your pet feeder?

      1. Mike Pellatt

        Re: Pet feeder

        My lab is looking very content at the moment.

        The cat's throwing up everywhere.

        The goldfish doesn't fit in the bowl any more.

      2. CrazyOldCatMan Silver badge

        Re: Bloody IoT

        But not your pet feeder?

        No - she's done nothing wrong :-)

  17. William Higinbotham

    Look at other countries' internet traffic.

    Interesting that Asia's traffic delay time started at exactly the same time as the US denial-of-service attack. http://www.internettrafficreport.com/asia.htm

  18. Anonymous Coward
    Anonymous Coward

    Has everyone forgotten everything?

    Makes no sense to be down. Everyone knows you never put your DNS servers on the same darn subnet; the same goes for hosting all your DNS servers with one company. Unless you're looking to cut corners.

    Solution is pretty simple: Run your own DNS servers. Of course that does require people.......

    1. Paul Hovnanian Silver badge

      Re: Has everyone forgotten everyting?

      /etc/hosts FTW!

  19. Anonymous Coward
    Anonymous Coward

    They are using a 'backdoor'

    Password = Geoff.

    1. ecofeco Silver badge

      Re: They are using a 'backdoor'

      I see what you did there.

  20. Anonymous Coward
    Anonymous Coward

    Need to update systems!

    No excuse to be using DOS

  21. Anonymous Coward
    Anonymous Coward

    Krebsonsecurity.com

    btw, Doug Madory helped Brian out with his research on DDoS service providers.

  22. Anonymous Coward
    Anonymous Coward

    OpenDNS also provides IPv6 lookups

    Just in case anyone needs it, OpenDNS provides IPv6 lookups at 2620:0:ccc::2 and 2620:0:ccd::2 (you'll find the relevant data and details on their website).

    A pox on unsecured IoT kit providers, and prolonged percussive re-education for those abusing those weaknesses.

  23. Jason Bloomberg Silver badge
    Joke

    Nuke Russia.

    We know it's Russia. Nuke Russia and see if these attacks stop. If not nuke North Korea. And if they are still happening nuke China, then Iran. We know it must be one of the bad guys.

    Joking, but it's probably been seriously considered as the right strategy by some.

    1. Mark 85

      Re: Nuke Russia.

      The "other" media is reporting that Anonymous and Wikileaks supporters are claiming responsibility. Who knows who's doing it. <shrugs>

      1. wolfetone Silver badge

        Re: Nuke Russia.

        "The "other" media is reporting that Anonymous and Wikileaks supporters are claiming responsibility. Who knows who's doing it. <shrugs>"

        This sort of thing would make a great update to Cluedo, wouldn't it?

  24. webxtrakt

    Public DNS Performance

    Check out Public DNS Performance via this hourly updated chart:

    https://webxtrakt.com/public-dns-performance

  25. Anonymous Coward
    Anonymous Coward

    How to solve most hacking / botnet issues for the civilized world.

    Cut the cables to China and Russia... 90% problems solved. Let them have an internet to themselves but keep them out of Europe and NA.

  26. JeffyPoooh
    Pint

    Netflix failure mode was a bit different...

    Netflix popped right up, but the page itself was about a third incomplete.

    My assumption was some elements of the page were failing due to the same DNS problem.

    YMMV.

  27. MarkSitkowski

    How about this?

    Our IDS/IPS automatically inserts a new firewall rule for every incoming hack. It then reports the offending IP address to the ISP owning it. It never sleeps.

    During a DDoS attack on our website in 2014, we were attacked by approximately 7,000 servers from almost every subnet in Brazil and Argentina; the attack lasted a week. Each attacking server was blocked after its first hack query and, over the course of the week, the attack tailed off as each ISP took the offending IP address offline.

    This may not be a perfect solution, but it deactivates each mindless parasite as it removes each attack endpoint. If everyone did this, and ISPs responded fast enough (best: Brazil, Germany, Russia, USA, Indonesia, Israel; worst: China, Mexico, France), the hackers would spend their lives constantly looking for new servers running vulnerable WordPress/Joomla installations as the existing ones were neutralised.
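
    The auto-blocking half of that is essentially what fail2ban does, and a toy version is only a few lines. A sketch assuming a Linux box with iptables, root privileges, and a made-up log path and attack signature:

      import re
      import subprocess

      LOG = "/var/log/nginx/access.log"                  # hypothetical path
      ATTACK = re.compile(r"wp-login\.php|xmlrpc\.php")  # crude signature
      blocked = set()

      def block(ip):
          """Drop all further traffic from the offender (requires root)."""
          subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
                         check=True)
          blocked.add(ip)

      with open(LOG) as log:
          for line in log:
              ip = line.split()[0]          # client IP leads each log entry
              if ip not in blocked and ATTACK.search(line):
                  block(ip)
                  print(f"blocked {ip}; now report it to the owning ISP")

    The reporting half - finding the owning ISP's abuse contact and getting a human to act on it - is the part that, as the league table above suggests, stubbornly resists automation.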

  28. jimdandy

    If the bad guys didn't have free access to millions of open IoT devices and home routers, this would just be a minor annoyance. Fix the problem at the root and all the complex BS solutions discussed would just be that:

    Boole Shite.

    Doesn't mean that better managed DNS services wouldn't help. Doesn't mean that robust defenses and even offenses aren't a good idea. But building the castle walls higher when the effing barbarians are at the gate seems like too little, and too effing late.

  29. lansalot

    Dear whitehats

    Please change all the passwords on those insecure devices to something random.

    Thx

    Everyone-else

  30. Duncan Macdonald

    Use old cache data ?

    If the public DNS servers' algorithm were changed to continue using entries whose TTL had expired when it was not possible to get a reply from a master DNS server, would that have any severe effects?

    (I am thinking of providing responses to users with a 60-second TTL and re-querying the master DNS servers at 60-second intervals until a response is received.)
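
    A minimal sketch of that fallback (dnspython assumed; a real resolver would also cap how long stale entries may be served, since the trade-off is that legitimate record changes get delayed):

      import time

      import dns.exception
      import dns.resolver   # pip install dnspython

      cache = {}            # name -> (address, expiry timestamp)

      def lookup(name):
          now = time.time()
          if name in cache and cache[name][1] > now:
              return cache[name][0]        # still fresh: normal cache hit
          try:
              answer = dns.resolver.resolve(name, "A", lifetime=2.0)
              address = answer[0].to_text()
              cache[name] = (address, now + answer.rrset.ttl)
              return address
          except dns.exception.DNSException:
              if name in cache:            # TTL expired but upstream is down:
                  return cache[name][0]    # serve the stale entry anyway and
              raise                        # retry upstream on the next call

      print(lookup("example.com"))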

  31. jrdld
    Happy

    Amazon

    I find it wonderfully ironic that Amazon was taken out by IoT kit that was probably bought on Amazon.
