Web Performance is a Journey, Not a Destination

Mehdi Daoudi


DNS Outage Was Doomsday for the Internet

What was supposed to be a quiet Friday suddenly turned into a real “Black Friday” for us, and for much of the Internet, when Dyn suffered a major DDoS attack. The widespread damage the outage caused made it the worst Internet disruption I have ever experienced.

At the core of it all, the managed DNS provider Dyn was targeted in a DDoS attack that impacted thousands of web properties, services, SaaS providers, and more.

The chart below shows the DNS resolution time and availability of twitter.com from around the world. There were three clear waves of outages:

  • 7:10 EST to 9:10 EST
  • 11:52 EST to 16:33 EST
  • 19:13 EST to 20:38 EST

[Chart: DNS resolution time and availability of twitter.com from around the world]

The DNS failures were the result of Dyn nameservers not responding to DNS queries for more than four seconds.
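
A minimal synthetic DNS check along these lines can be sketched in Python. The four-second threshold mirrors the figure above; this is only an illustration using the OS resolver, whereas a production monitor would query specific authoritative nameservers from many vantage points:

```python
import socket
import time

# Threshold taken from the incident: nameservers not answering
# within four seconds caused resolution failures downstream.
SLOW_THRESHOLD_SECONDS = 4.0

def check_dns(hostname):
    """Time a DNS lookup via the OS resolver and classify the result."""
    start = time.monotonic()
    try:
        socket.getaddrinfo(hostname, 443)
        resolved = True
    except socket.gaierror:
        resolved = False
    elapsed = time.monotonic() - start
    return {
        "host": hostname,
        "resolved": resolved,
        "seconds": elapsed,
        "healthy": resolved and elapsed < SLOW_THRESHOLD_SECONDS,
    }

print(check_dns("localhost"))
```

Run on a schedule from several locations, a check like this surfaces both hard failures and slow resolution before users complain.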

We were impacted in three ways:

  • Our domain, Catchpoint.com, was unreachable for a solid 30 minutes until we brought in our secondary managed DNS provider, Verisign. We also brought up and publicized to our customers a backup domain that had never been on Dyn, so they could log in to our portal and keep an eye on their online services. All of these were in standby mode prior to the incident.
  • Our nodes could not reliably talk to our globally distributed command-and-control systems until we switched to IP-only mode, bypassing DNS lookups. This was a feature we had developed, tested, and deployed to production, but it was not yet active because our engineering teams had planned one more enhancement. Given the nature of the situation, we deemed activating it without the enhancement to be lower risk than what we were experiencing.
  • Many of the third-party vendors our company relies on stopped working: our customer support and online help solution, CRM, office door badging system, SSO, two-factor authentication services, one of our CDNs, a file sharing solution, and the list goes on and on.
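
The IP-only fallback idea in the second bullet can be sketched roughly as follows; the hostname and cached address here are hypothetical placeholders, and a real system would refresh the cache whenever DNS is healthy:

```python
import socket

# Hypothetical cache of last-known-good IPs for critical endpoints.
# The .invalid TLD is reserved and never resolves, which makes the
# fallback path easy to demonstrate.
LAST_KNOWN_IPS = {
    "control.catchpoint.invalid": "192.0.2.10",
}

def resolve_with_fallback(hostname):
    """Resolve a hostname, falling back to a cached IP when DNS fails.

    Sketches the idea behind an IP-only mode: keep operating on
    known-good addresses while resolver infrastructure is down.
    """
    try:
        return socket.gethostbyname(hostname), "dns"
    except socket.gaierror:
        if hostname in LAST_KNOWN_IPS:
            return LAST_KNOWN_IPS[hostname], "cache"
        raise

ip, source = resolve_with_fallback("control.catchpoint.invalid")
print(ip, source)  # → 192.0.2.10 cache
```

The trade-off is staleness: cached addresses can go bad, so this mode belongs behind a deliberate switch rather than on by default.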

This blog post is not about finger-pointing; the folks at Dyn had a horrible day living through their worst nightmare, and they did an amazing job of dealing with it, from notifications to putting out the fire. This is about how to deal with a worst-case outage, as a company and as an industry.

As with every outage, it’s important to take the time to reflect on what took place and how this can be avoided in the future.

Here are some of my takeaways from Friday, and the must-have solutions:

  • DNS is still one of the weakest links in our Internet infrastructure and digital economy. We have to keep learning and sharing that knowledge with each other. Here are several articles we have written on DNS.
  • A single DNS provider is not an option anymore; no company, small or large, can rely on just one.
  • DNS vendors should create knowledge base articles about how to introduce secondary DNS providers, and they must be easy to find and follow.
  • DNS vendors need to make the setup of automatic zone transfers easier to find. Having to open a ticket in the middle of a crisis to find the IPs of the transfer nameservers is simply not viable.
  • DNS vendors should not set high TTLs (two days) on the authoritative nameserver records they return in DNS responses, and it should be easy to lower or change the TTL. While a long TTL is great for avoiding changes at the TLDs, keeping the old nameservers authoritative for two days becomes a headache when trying to switch to or migrate from a backup solution.
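
To see why a two-day TTL is painful, here is a toy resolver-cache sketch (names are placeholders): resolvers that have cached the old NS set keep using it until the TTL expires, no matter what the zone now says.

```python
import time

class DnsCache:
    """Toy resolver cache illustrating why long NS TTLs slow migrations."""

    def __init__(self):
        self._entries = {}  # name -> (value, expires_at)

    def put(self, name, value, ttl_seconds):
        self._entries[name] = (value, time.monotonic() + ttl_seconds)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._entries[name]  # expired: force a fresh lookup
            return None
        return value

TWO_DAYS = 172_800
cache = DnsCache()
cache.put("example.com/NS", ["ns1.old-provider.example"], TWO_DAYS)
# Even after the zone is repointed at a new provider, a resolver that
# cached the old NS set keeps answering from it until the TTL expires.
print(cache.get("example.com/NS"))  # → ['ns1.old-provider.example']
```

With a short TTL the same cache would start returning None within minutes, forcing resolvers back to the (updated) parent zone.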


Introducing another DNS vendor wouldn’t have achieved 100% of the result on its own; you also have to go into the Dyn configuration and add that other solution into the mix.

Some takeaways from a monitoring standpoint:

I had people tell me, “But Mehdi, I am not seeing a problem in my RUM.” When your site isn’t reachable, RUM won’t tell you anything, because there is no user activity to show. This is why your monitoring strategy must include both synthetic monitoring and RUM.

  • DNS monitoring is critical to understand the “why.”
  • DNS performance impacts web performance.
  • The impact was so widespread that some sites that didn’t rely on Dyn still suffered outages or a bad user experience, because they used third parties that did rely on Dyn.

We interact with many things on a daily basis (cars, cell phones, planes, hair dryers) that have some sort of certification. I urge whoever is responsible to consider the following:

  • A ban on any Internet-connected device that does not force a change of the default credentials on first use. There should be no admin/admin for anything, including cameras, refrigerators, access points, routers, etc.
  • A ban on accessing such devices from just anywhere on the Internet. There should be some limitation: access either through the provider’s interface or from the local network.
  • Consumers should also pressure the industry by not buying products that aren’t safe. Maybe we need an “Internet Safety Rating” from a governmental agency or worldwide organization.
  • A must-have feature on every home and SMB router and access point is the ability to detect abnormal traffic or activity and block or throttle it; sending thousands of DNS requests in a minute is not normal. We should learn from what Microsoft did with Windows XP to limit an infected host.
  • Local ISPs must have capabilities to detect and stop rogue traffic.
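
The abnormal-traffic detection suggested above could be sketched as a simple per-host sliding-window counter; the threshold and host names here are illustrative, and a real device would also inspect destinations and throttle rather than merely flag:

```python
from collections import deque

class QueryRateMonitor:
    """Flag hosts sending abnormally many DNS queries per minute."""

    def __init__(self, max_queries_per_minute=1000):
        self.max_queries = max_queries_per_minute
        self.windows = {}  # host -> deque of query timestamps (seconds)

    def record_query(self, host, timestamp):
        """Record one query; return False when the host looks abnormal."""
        window = self.windows.setdefault(host, deque())
        window.append(timestamp)
        # Drop timestamps that have fallen out of the 60-second window.
        while window and window[0] <= timestamp - 60:
            window.popleft()
        return len(window) <= self.max_queries

monitor = QueryRateMonitor(max_queries_per_minute=5)
for second in range(10):
    ok = monitor.record_query("camera-1", second)
# The sixth query within one minute already trips the limit.
print(ok)  # → False
```

A router doing this at the edge would have blunted much of the Mirai-style traffic before it ever reached a target like Dyn.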

The state of cybersecurity is dire, and I hope this incident serves as a huge wake-up call for everyone. What happened Friday was a Code Blue event; we rely on the Internet for practically everything in society today, and it’s our job to do everything we can to protect it.

Thank you, Dyn, for the prompt response times to the support tickets, to Verisign for last-minute questions, our customers who were very patient and understanding, our entire support organization, and some special friends in major companies who offered a helping hand by providing some amazing advice around DNS.

Mehdi – Catchpoint CEO and Co-Founder

To learn more about how you can handle a major outage like this in the future, join our upcoming Ask Me Anything: OUTAGE! with VictorOps, Target, and Release Engineering Approaches.

The post DNS Outage Was Doomsday for the Internet appeared first on Catchpoint's Blog.


More Stories By Mehdi Daoudi

Catchpoint radically transforms the way businesses manage, monitor, and test the performance of online applications. Truly understand and improve user experience with clear visibility into complex, distributed online systems.

Founded in 2008 by four DoubleClick / Google executives with a passion for speed, reliability and overall better online experiences, Catchpoint has now become the most innovative provider of web performance testing and monitoring solutions. We are a team with expertise in designing, building, operating, scaling and monitoring highly transactional Internet services used by thousands of companies and impacting the experience of millions of users. Catchpoint is funded by top-tier venture capital firm, Battery Ventures, which has invested in category leaders such as Akamai, Omniture (Adobe Systems), Optimizely, Tealium, BazaarVoice, Marketo and many more.