FreshRSS


April’s Patch Tuesday Brings Record Number of Fixes

By BrianKrebs

If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.

Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.

“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”

Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.

Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.

Ben McCarthy, lead cyber security engineer at Immersive Labs, called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can leak the user’s password hash and allow an attacker to authenticate as the user in another Microsoft service.

Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.

“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”

CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one of ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.

“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”

Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-29988), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.

Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.

“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”

For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.

Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.

KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.

“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.

Microsoft Rolls Out Patches for 73 Flaws, Including 2 Windows Zero-Days

By Newsroom
Microsoft has released patches to address 73 security flaws spanning its software lineup as part of its Patch Tuesday updates for February 2024, including two zero-days that have come under active exploitation. Of the 73 vulnerabilities, five are rated Critical, 65 are rated Important, and three are rated Moderate in severity. This is in addition to 24 flaws that have been fixed

VexTrio: The Uber of Cybercrime - Brokering Malware for 60+ Affiliates

By Newsroom
The threat actors behind ClearFake, SocGholish, and dozens of other e-crime outfits have established partnerships with another entity known as VexTrio as part of a massive "criminal affiliate program," new findings from Infoblox reveal. The latest development demonstrates the "breadth of their activities and depth of their connections within the cybercrime industry," the company said,

PixieFail UEFI Flaws Expose Millions of Computers to RCE, DoS, and Data Theft

By Newsroom
Multiple security vulnerabilities have been disclosed in the TCP/IP network protocol stack of an open-source reference implementation of the Unified Extensible Firmware Interface (UEFI) specification used widely in modern computers. Collectively dubbed PixieFail by Quarkslab, the nine issues reside in the TianoCore EFI Development Kit II (EDK II) and could be exploited to

Sea Turtle Cyber Espionage Campaign Targets Dutch IT and Telecom Companies

By Newsroom
Telecommunication, media, internet service providers (ISPs), information technology (IT)-service providers, and Kurdish websites in the Netherlands have been targeted as part of a new cyber espionage campaign undertaken by a Türkiye-nexus threat actor known as Sea Turtle. "The infrastructure of the targets was susceptible to supply chain and island-hopping attacks, which the attack group

Verisign Provides Open Source Implementation of Merkle Tree Ladder Mode

By Burt Kaliski

The quantum computing era is coming, and it will change everything about how the world connects online. While quantum computing will yield tremendous benefits, it will also create new risks, so it’s essential that we prepare our critical internet infrastructure for what’s to come. That’s why we’re so pleased to share our latest efforts in this area, including technology that we’re making available as an open source implementation to help internet operators worldwide prepare.

In recent years, the research team here at Verisign has been focused on a future where quantum computing is a reality, and where the general best practices and guidelines of traditional cryptography are re-imagined. As part of that work, we’ve made three further contributions to help the DNS community prepare for these changes:

  • an open source implementation of our Internet-Draft (I-D) on Merkle Tree Ladder (MTL) mode;
  • a new I-D on using MTL mode signatures with DNS Security Extensions (DNSSEC); and
  • an expansion of our previously announced public license terms to include royalty-free terms for implementing and using MTL mode if the I-Ds are published as Experimental, Informational, or Standards Track Requests for Comments (RFCs). (See the MTL mode I-D IPR declaration and the MTL mode for DNSSEC I-D IPR declaration for the official language.)

About MTL Mode

First, a brief refresher on what MTL mode is and what it accomplishes:

MTL mode is a technique developed by Verisign researchers that can reduce the operational impact of a signature scheme when authenticating an evolving series of messages. Rather than signing messages individually, MTL mode signs structures called Merkle tree ladders that are derived from the messages to be authenticated. Individual messages are authenticated relative to a ladder using a Merkle tree authentication path, while ladders are authenticated relative to a public key of an underlying signature scheme using a digital signature. The size and computational cost of the underlying digital signatures can therefore be spread across multiple messages.
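To make the rehashing step concrete, here is a minimal sketch in Python of generic Merkle authentication-path verification — the mechanism MTL mode builds on. This is an illustration only, not Verisign’s MTL implementation; the tree shape, padding rule, and function names are our own simplifications.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves pairwise up to a single root."""
    level = [h(m) for m in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level = [h(m) for m in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])      # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(message, index, path, root):
    """Rehash a message with its sibling nodes and compare to the root."""
    node = h(message)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

msgs = [b"record %d" % i for i in range(16)]
root = merkle_root(msgs)
assert verify(msgs[7], 7, auth_path(msgs, 7), root)   # message 7 authenticates
```

In MTL mode the root’s role is played by a signed ladder rung, so the verifier needs a fresh underlying signature only when no previously signed rung covers the message of interest.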

The reduction in operational impact achieved by MTL mode can be particularly beneficial when the mode is applied to a signature scheme that has a large signature size or computational cost in specific use cases, such as when post-quantum signature schemes are applied to DNSSEC.

Recently, Verisign Fellow Duane Wessels described how Verisign’s DNSSEC algorithm update — from RSA/SHA-256 (Algorithm 8) to ECDSA Curve P-256 with SHA-256 (Algorithm 13) — increases the security strength of DNSSEC signatures and reduces their size impact. The present update is a logical next step in the evolution of DNSSEC resiliency. In the future, it is possible that DNSSEC may utilize a post-quantum signature scheme. Among the new post-quantum signature schemes currently being standardized, though, there is a shortcoming: if we were to directly apply these schemes to DNSSEC, it would significantly increase the size of the signatures.¹ With our work on MTL mode, the researchers at Verisign have provided a way to achieve the security benefit of a post-quantum algorithm rollover while mitigating the size impact.

Put simply, this means that in a quantum environment, the MTL mode of operation developed by Verisign will enable internet infrastructure operators to use the longer signatures they will need to protect communications from quantum attacks, while still supporting the speed and space efficiency we’ve come to expect.

For more background information on MTL mode and how it works, see my July 2023 blog post, the MTL mode I-D, or the research paper, “Merkle Tree Ladder Mode: Reducing the Size Impact of NIST PQC Signature Algorithms in Practice.”

Recent Standardization Efforts

In my July 2023 blog post titled “Next Steps in Preparing for Post-Quantum DNSSEC,” I described two recent contributions by Verisign to help the DNS community prepare for a post-quantum world: the MTL mode I-D and a public, royalty-free license to certain intellectual property related to that I-D. These activities set the stage for the latest contributions I’m announcing in this post today.

Our Latest Contributions

  • Open source implementation. Like the I-D we published in July of this year, the open source implementation focuses on applying MTL mode to the SPHINCS+ signature scheme currently being standardized in FIPS 205 as SLH-DSA (Stateless Hash-Based Digital Signature Algorithm) by the National Institute of Standards and Technology (NIST). We chose SPHINCS+ because it is the most conservative of NIST’s post-quantum signature algorithms from a cryptographic perspective, being hash-based and stateless. We remain open to adding other post-quantum signature schemes to the I-D and to the open source implementation.
    We encourage developers to try out the open source implementation of MTL mode, which we introduced at the IETF 118 Hackathon, as the community’s experience will help improve the understanding of MTL mode and its applications, and thereby facilitate its standardization. We are interested in feedback both on whether MTL mode is effective in reducing the size impact of post-quantum signatures on DNSSEC and other use cases, and on the open source implementation itself. We are particularly interested in the community’s input on what language bindings would be useful and on which cryptographic libraries we should support initially. The open source implementation can be found on GitHub at: https://github.com/verisign/MTL
  • MTL mode for DNSSEC I-D. This specification describes how to use MTL mode signatures with DNSSEC, including DNSKEY and RRSIG record formats. The I-D also provides initial guidance for DNSSEC key creation, signature generation, and signature verification in MTL mode. We consider the I-D as an example of the kinds of contributions that can help to address the “Research Agenda for a Post-Quantum DNSSEC,” the subject of another I-D recently co-authored by Verisign. We expect to continue to update this I-D based on community feedback. While our primary focus is on the DNSSEC use case, we are also open to collaborating on other applications of MTL mode.
  • Expanded patent license. Verisign previously announced a public, royalty-free license to certain intellectual property related to the MTL mode I-D that we published in July 2023. With the availability of the open source implementation and the MTL mode for DNSSEC specification, the company has expanded its public license terms to include royalty-free terms for implementing and using MTL mode if the I-D is published as an Experimental, Informational, or Standards Track RFC. In addition, the company has made a similar license grant for the use of MTL mode with DNSSEC. See the MTL mode I-D IPR declaration and the MTL mode for DNSSEC I-D IPR declaration for the official language.

Verisign is grateful for the DNS community’s interest in this area, and we are pleased to serve as stewards of the internet when it comes to developing new technology that can help the internet grow and thrive. Our work on MTL mode is one of the longer-term efforts supporting our mission to enhance the security, stability, and resiliency of the global DNS. We’re encouraged by the progress that has been achieved, and we look forward to further collaborations as we prepare for a post-quantum future.

Footnotes

  1. While it’s possible that other post-quantum algorithms could be standardized that don’t have large signatures, they wouldn’t have been studied for as long. Indeed, our preferred approach for long-term resilience of DNSSEC is to use the most conservative of the post-quantum signature algorithms, which also happens to have the largest signatures. By making that choice practical, we’ll have a solution in place whether or not a post-quantum algorithm with a smaller signature size is eventually available. ↩


Microsoft's Final 2023 Patch Tuesday: 33 Flaws Fixed, Including 4 Critical

By Newsroom
Microsoft released its final set of Patch Tuesday updates for 2023, closing out 33 flaws in its software, making it one of the lightest releases in recent years. Of the 33 shortcomings, four are rated Critical and 29 are rated Important in severity. The fixes are in addition to 18 flaws Microsoft addressed in its Chromium-based Edge browser since the release of Patch

Mac Users Beware: New Trojan-Proxy Malware Spreading via Pirated Software

By Newsroom
Unauthorized websites distributing trojanized versions of cracked software have been found to infect Apple macOS users with a new Trojan-Proxy malware. "Attackers can use this type of malware to gain money by building a proxy server network or to perform criminal acts on behalf of the victim: to launch attacks on websites, companies and individuals, buy guns, drugs, and other illicit

Phishers Spoof USPS, 12 Other Nat’l Postal Services

By BrianKrebs

The fake USPS phishing page.

Recent weeks have seen a sizable uptick in the number of phishing scams targeting U.S. Postal Service (USPS) customers. Here’s a look at an extensive SMS phishing operation that tries to steal personal and financial data by spoofing the USPS, as well as postal services in at least a dozen other countries.

KrebsOnSecurity recently heard from a reader who received an SMS purporting to have been sent by the USPS, saying there was a problem with a package destined for the reader’s address. Clicking the link in the text message brings one to the domain usps.informedtrck[.]com.

The landing page generated by the phishing link includes the USPS logo, and says “Your package is on hold for an invalid recipient address. Fill in the correct address info by the link.” Below that message is a “Click update” button that takes the visitor to a page that asks for more information.

The remaining buttons on the phishing page all link to the real USPS.com website. After collecting your address information, the fake USPS site goes on to request additional personal and financial data.

This phishing domain was recently registered and its WHOIS ownership records are basically nonexistent. However, we can find some compelling clues about the extent of this operation by loading the phishing page in Developer Tools, a set of debugging features built into Firefox, Chrome and Safari that allow one to closely inspect a webpage’s code and operations.

Check out the bottom portion of the screenshot below, and you’ll notice that this phishing site fails to load some external resources, including an image from a link called fly.linkcdn[.]to.


A search on this domain at the always-useful URLscan.io shows that fly.linkcdn[.]to is tied to a slew of USPS-themed phishing domains. Here are just a few of those domains (links defanged to prevent accidental clicking):

usps.receivepost[.]com
usps.informedtrck[.]com
usps.trckspost[.]com
postreceive[.]com
usps.trckpackages[.]com
usps.infortrck[.]com
usps.quicktpos[.]com
usps.postreceive[.]com
usps.revepost[.]com
trackingusps.infortrck[.]com
usps.trckmybusi[.]com
tackingpos[.]com
usps.trckstamp[.]com
usa-usps[.]shop
unlistedstampreceive[.]com
usps.stampreceive[.]com
usps.stamppos[.]com
usps.stampspos[.]com
usps.trckmypost[.]com
usps.trckintern[.]com
usps.tackingpos[.]com
usps.posinformed[.]com

As we can see in the screenshot below, the developer tools console for informedtrck[.]com complains that the site is unable to load a Google Analytics code — UA-80133954-3 — which apparently was rejected for pointing to an invalid domain.

Notice the highlighted Google Analytics code exposed by a faulty Javascript element on the phishing website. That code actually belongs to the USPS.

The valid domain for that Google Analytics code is the official usps.com website. According to dnslytics.com, that same analytics code has shown up on at least six other nearly identical USPS phishing pages dating back nearly as many years, including onlineuspsexpress[.]com, which DomainTools.com says was registered way back in September 2018 to an individual in Nigeria.
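This analytics-code pivot is easy to reproduce. Below is a minimal sketch (the URLs are placeholders, and it assumes the third-party requests package); it simply extracts Google Analytics IDs from fetched pages so a suspect site can be compared against the brand’s legitimate one, as described above.

```python
import re
import requests

# Matches legacy Google Analytics property IDs such as UA-80133954-3.
GA_ID = re.compile(r"UA-\d{4,10}-\d{1,3}")

def analytics_ids(url: str) -> set:
    """Fetch a page and return the set of Google Analytics IDs it embeds."""
    html = requests.get(url, timeout=10).text
    return set(GA_ID.findall(html))

# Placeholder URLs: compare a suspect page against the genuine brand site.
for url in ("https://www.usps.com", "http://suspect-page.example"):
    print(url, analytics_ids(url))
```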

A different domain with that same Google Analytics code that was registered in 2021 is peraltansepeda[.]com, which archive.org shows was running a similar set of phishing pages targeting USPS users. DomainTools.com indicates this website name was registered by phishers based in Indonesia.

DomainTools says the above-mentioned USPS phishing domain stamppos[.]com was registered in 2022 via Singapore-based Alibaba.com, but the registrant city and state listed for that domain says “Georgia, AL,” which is not a real location.

Alas, running a search for domains registered through Alibaba to anyone claiming to reside in Georgia, AL reveals nearly 300 recent postal phishing domains ending in “.top.” These domains are either administrative domains obscured by a password-protected login page, or are .top domains phishing customers of the USPS as well as postal services serving other countries.

Those other nations include the Australia Post, An Post (Ireland), Correos.es (Spain), the Costa Rican post, the Chilean Post, the Mexican Postal Service, Poste Italiane (Italy), PostNL (Netherlands), PostNord (Denmark, Norway and Sweden), and Posti (Finland). A complete list of these domains is available here (PDF).

A phishing page targeting An Post, the state-owned provider of postal services in Ireland.

The Georgia, AL domains at Alibaba also encompass several that spoof sites claiming to collect outstanding road toll fees and fines on behalf of the governments of Australia, New Zealand and Singapore.

An anonymous reader wrote in to say they submitted fake information to the above-mentioned phishing site usps.receivepost[.]com via the malware sandbox any.run. A video recording of that analysis shows that the site sends any submitted data via an automated bot on the Telegram instant messaging service.

The traffic analysis just below the any.run video shows that any data collected by the phishing site is being sent to the Telegram user @chenlun, who offers to sell customized source code for phishing pages. From a review of @chenlun’s other Telegram channels, it appears this account is being massively spammed at the moment — possibly thanks to public attention brought by this story.

Meanwhile, researchers at DomainTools recently published a report on an apparently unrelated but equally sprawling SMS-based phishing campaign targeting USPS customers that appears to be the work of cybercriminals based in Iran.

Phishers tend to cast a wide net and often spoof entities that are broadly used by the local population, and few brands are going to have more household reach than domestic mail services. In June, the United Parcel Service (UPS) disclosed that fraudsters were abusing an online shipment tracking tool in Canada to send highly targeted SMS phishing messages that spoofed the UPS and other brands.

With the holiday shopping season nearly upon us, now is a great time to remind family and friends about the best advice to sidestep phishing scams: Avoid clicking on links or attachments that arrive unbidden in emails, text messages and other mediums. Most phishing scams invoke a temporal element that warns of negative consequences should you fail to respond or act quickly.

If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.

Update: Added information about the Telegram bot and any.run analysis.

Chromium’s Impact on Root DNS Traffic

By Duane Wessels

This article originally appeared Aug. 21, 2020 on the APNIC blog.

Introduction

Chromium is an open-source software project that forms the foundation for Google’s Chrome web browser, as well as a number of other browser products, including Microsoft Edge, Opera, Amazon Silk, and Brave. Since Chrome’s introduction in 2008, Chromium-based browsers have steadily risen in popularity and today comprise approximately 70% of the market share.¹

Chromium has, since its early days, included a feature known as the omnibox. This is where users may enter a web site name, URL, or search terms. But the omnibox has an interface challenge. The user might enter a word like “marketing” that could refer to both an (intranet) web site and a search term. Which should the browser choose to display? Chromium treats it as a search term, but also displays an infobar that says something like “did you mean http://marketing/?” if a background DNS lookup for the name results in an IP address.

At this point, a new issue arises. Some networks (e.g., ISPs) utilize products or services designed to intercept and capture traffic from mistyped domain names. This is sometimes known as “NXDomain hijacking.” Users on such networks might be shown the “did you mean” infobar on every single-term search. To work around this, Chromium needs to know if it can trust the network to provide non-intercepted DNS responses.

Chromium Probe Design

Inside the Chromium source code there is a file named intranet_redirect_detector.cc. The functions in this file attempt to load three URLs whose hostnames consist of a randomly generated single-label domain name, as shown in Figure 1 below.

Figure 1: Chromium source code that implements random URL fetches.

This code results in three URL fetches, such as http://rociwefoie/, http://uawfkfrefre/, and http://awoimveroi/, and these in turn result in three DNS lookups for the random host names. As can be deduced from the source code, these random names are 7-15 characters in length (line 151) and consist of only the letters a-z (line 153). In versions of the code prior to February 2014, the random names were always 10 characters in length.

The intranet redirect detector functions are executed each time the browser starts up, each time the system/device’s IP address changes, and each time the system/device’s DNS configuration changes. If any two of these fetches resolve to the same address, that address is stored as the browser’s redirect origin.
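For illustration, here is a hedged re-creation in Python of the probe behavior described above. This is not the Chromium code itself; the names and thresholds simply follow this article’s description.

```python
import random
import socket
import string

def random_probe_name() -> str:
    """7-15 lowercase letters, per the post-2014 behavior described above."""
    return "".join(random.choices(string.ascii_lowercase,
                                  k=random.randint(7, 15)))

def detect_redirect_origin():
    """Resolve three random names; two matching answers imply interception."""
    addrs = []
    for _ in range(3):
        try:
            addrs.append(socket.gethostbyname(random_probe_name()))
        except socket.gaierror:
            pass                      # NXDOMAIN: the expected, honest answer
    for a in set(addrs):
        if addrs.count(a) >= 2:       # two of three agree: likely interception
            return a
    return None

print("redirect origin:", detect_redirect_origin())
```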

Identifying Chromium Queries

Nearly any cursory glance at root name server traffic will exhibit queries for names that look like those used in Chromium’s probe queries. For example, here are 20 sequential queries received at an a.root-servers.net instance:


In this brief snippet of data, we can see six queries (yellow highlight) for random, single-label names, and another four (green highlight) with random first labels followed by an apparent domain search suffix. These match the pattern from the Chromium source code, being 7-15 characters in length and consisting of only the letters a-z.

To characterize the amount of Chromium probe traffic in larger amounts of data (i.e., covering a 24-hour period), we tabulate queries based on the following attributes:

  • Response code (NXDomain or NoError)
  • Popularity of the leftmost label
  • Length of the leftmost label
  • Characters used in the leftmost label
  • Number of labels in the full query name
Figure 2: Sankey graph showing classification of queries matching Chromium probe patterns.

Figure 2 shows a classification of data from a.root-servers.net on May 13, 2020. Here we can see that 51% of all queried names were observed fewer than four times in the 24-hour period. Of those, nearly all were for non-existent TLDs, although a very small number come from existing TLDs (labeled “YXD” on the left). This small sliver represents either false positives or Chromium probe queries that have been subject to domain suffix search appending by stub resolvers or end user applications.

Of the 51% observed fewer than four times, all but 2.86% of those have a first label between 7 and 15 characters in length (inclusive). Furthermore, most of those match the pattern consisting of only a-z characters (case insensitive), leaving us with 45.80% of total traffic on this day that appears to be from Chromium probes.
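A simplified version of this filter can be expressed in a few lines. The sketch below is our own illustration, not a published Verisign tool; it applies just the first-label pattern and the fewer-than-four-occurrences threshold described above to a list of query names.

```python
import re
from collections import Counter

# First label of 7-15 letters a-z, per the Chromium probe pattern.
PROBE_LABEL = re.compile(r"^[a-z]{7,15}$", re.IGNORECASE)

def chromium_like(qnames):
    """Return query names matching the probe pattern and seen < 4 times."""
    counts = Counter(qnames)
    return [q for q in qnames
            if counts[q] < 4
            and PROBE_LABEL.match(q.rstrip(".").split(".")[0])]

sample = ["rociwefoie.", "uawfkfrefre.", "www.example.com.", "ns1.example.com."]
print(chromium_like(sample))   # only the two random single-label names
```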

From there we break down the queries by number of labels and length of the first label. Note that label lengths, on the far right of the graph, have a very even distribution, except for 7 and 10 characters. Labels with 10 characters are more popular because older versions of Chromium generated only 10-character names. We believe that 7 is less popular due to the increased probability of collisions in only 7 characters, which can increase the query count to above our threshold of three.
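The collision intuition for 7-character labels checks out with simple arithmetic. In the sketch below, Q is an assumed daily volume of Chromium probe names reaching the root system, chosen only for illustration; with Q names drawn uniformly from 26^k possibilities, each particular name is expected to recur roughly Q/26^k times.

```python
Q = 30e9   # assumed probe names per day, for illustration only

for k in (7, 10, 15):
    space = 26 ** k                  # count of k-letter a-z labels
    print(f"k={k:2d}: namespace {space:.2e}, expected repeats {Q / space:.5f}")

# k=7 yields roughly 3.7 expected repeats per name -- above the "seen fewer
# than four times" threshold, so many 7-character names drop out of the
# tally, consistent with the dip observed in Figure 2.
```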

Longitudinal Analysis

Next, we turn our attention to an analysis of how the total root traffic percentage of Chromium-like queries has changed over time. We use two data sets in this analysis: data from DNS-OARC’s “Day In The Life” (DITL) collections, and Verisign’s data for a.root-servers.net and j.root-servers.net.

Figure 3: Long-term trend analysis of Chromium-like queries to root name servers.

Figure 3 shows the results of the long-term analysis. We were able to analyze the annual DITL data from 2006-2014, and from 2017-2018, labeled “DITL Full” in the figure. The 2015-2016 data was unavailable on the DNS-OARC systems. The 2019 dataset could not be analyzed in full due to its size, so we settled for a sampled analysis instead, labeled “DITL Sampled” in Figure 3. The 2020 data was not ready for analysis by the time our research was done.

In every DITL dataset, we analyzed each root server identity (“letter”) separately. This produces a range of values for each year. The solid line shows the average of all the identities, while the shaded area shows the range of values.

To fill in some of the DITL gaps we used Verisign’s own data for a.root-servers.net and j.root-servers.net. Here we selected a 24-hour period for each month. Again, the solid line shows the average and the shaded area represents the range.

The figure also includes a line labeled “Chrome market share” (note: Chrome, not Chromium-based browsers) and a marker indicating when the feature was first added to the source code. Note that there are some false positive Chromium-like queries observed in the DITL data prior to introduction of the feature, comprising about 1% of the total traffic, but in the 10+ years since the feature was added, we now find that half of the DNS root server traffic is very likely due to Chromium’s probes. That equates to about 60 billion queries to the root server system on a typical day.

Concluding Thoughts

The root server system is, out of necessity, designed to handle very large amounts of traffic. As we have shown here, under normal operating conditions, half of the traffic originates with a single library function, on a single browser platform, whose sole purpose is to detect DNS interception. Such interception is certainly the exception rather than the norm. In almost any other scenario, this traffic would be indistinguishable from a distributed denial of service (DDoS) attack.

Could Chromium achieve its goal while only sending one or two queries instead of three? Are other approaches feasible? For example, Firefox’s captive portal test uses delegated namespace probe queries, directing them away from the root servers towards the browser’s own infrastructure. While technical solutions such as Aggressive NSEC Caching (RFC 8198), Qname Minimization (RFC 7816), and NXDomain Cut (RFC 8020) could also significantly reduce probe queries to the root server system, these solutions require action by recursive resolver operators, who have limited incentive to deploy and support these technologies.

This piece was co-authored by Matt Thomas, Distinguished Engineer in Verisign’s CSO Applied Research division.


1. https://www.w3counter.com/trends


Verisign Will Help Strengthen Security with DNSSEC Algorithm Update

By Duane Wessels

As part of Verisign’s ongoing effort to make global internet infrastructure more secure, stable, and resilient, we will soon make an important technology update to how we protect the top-level domains (TLDs) we operate. The vast majority of internet users won’t notice any difference, but the update will support enhanced security for several Verisign-operated TLDs and pave the way for broader adoption and the next era of Domain Name System (DNS) security measures.

Beginning in the next few months and continuing through the end of 2023, we will upgrade the algorithm we use to sign domain names in the .com, .net, and .edu zones with Domain Name System Security Extensions (DNSSEC).

In this blog, we’ll outline the details of the upcoming change and what members of the DNS technical community need to know.

DNSSEC Adoption

DNSSEC provides data authentication security to DNS responses. It does this by ensuring any altered data can be detected and blocked, thereby preserving the integrity of DNS data. Think of it as a chain of trust – one that helps avoid misdirection and allows users to trust that they have gotten to their intended online destination safely and securely.

Verisign has long been at the forefront of DNSSEC adoption. In 2010, a major milestone occurred when the Internet Corporation for Assigned Names and Numbers (ICANN) and Verisign signed the DNS root zone with DNSSEC. Shortly after, Verisign introduced DNSSEC to its TLDs, beginning with .edu in mid-2010, .net in late 2010, and .com in early 2011. Additional TLDs operated by Verisign were subsequently signed as well.

In the time since we signed our TLDs, we have worked continuously to help members of the internet ecosystem take advantage of DNSSEC. We do this through a wide range of activities, including publishing technical resources, leading educational sessions, and advocating for DNSSEC adoption in industry and technical forums.

Growth Over Time

Since the TLDs were first signed, we have observed two very distinct phases of growth in the number of signed second-level domains (SLDs).

The first growth phase occurred from 2012 to 2020. During that time, signed domains in the .com zone grew at about 0.1% of the base per year on average, reaching just over 1% by the end of 2020. In the .net zone, signed domains grew at about 0.1% of the base per year on average, reaching 1.2% by the end of 2020. These numbers demonstrated a slow but steady increase, which can be seen in Figure 1.


Figure 1: A chart spanning 2010 through the present shows the number of .com and .net domain names with DS – or Delegation Signer – records. These records form a link in the DNSSEC chain-of-trust for signed domains, indicating an uptick in DNSSEC adoption among SLDs.

We’ve observed more pronounced growth in signed SLDs during the second growth phase, which began in 2020. This is largely due to a single registrar that enabled DNSSEC by default for their new registrations. For .com, the annual rate increased to 0.9% of the base, and for .net, it increased to 1.1% of the base. Currently, 4.2% of .com domains are signed and 5.1% of .net domains are signed. This accelerated growth is also visible in Figure 1.
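For readers who want to check a specific domain, the DS record that Figure 1 counts is easy to query. Here is a minimal sketch using the third-party dnspython package (the domain names are placeholders):

```python
import dns.resolver

def has_ds(domain: str) -> bool:
    """True if the domain publishes a DS record, i.e., is DNSSEC-signed."""
    try:
        dns.resolver.resolve(domain, "DS")
        return True
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

for name in ("verisign.com", "example.net"):   # placeholder domains
    print(name, "-> DS present:", has_ds(name))
```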

As we look forward, Verisign anticipates continued growth in the number of domains signed with DNSSEC. To support continued adoption and help further secure the DNS, we’re planning to make one very important change.

Rolling the Algorithm

All Verisign TLDs are currently signed with DNSSEC algorithm 8, also known as RSA/SHA-256, as documented in our DNSSEC Practice Statements. Currently, we use a 2048-bit Key Signing Key (KSK), and 1280-bit Zone Signing Keys (ZSK). The RSA algorithm has served us (and the broader internet) well for many years, but we wanted to take the opportunity to implement more robust security measures while also making more efficient use of resources that support DNSSEC-signed domain names.

We are planning to transition to the Elliptic Curve Digital Signature Algorithm (ECDSA), specifically Curve P-256 with SHA-256, or algorithm number 13, which allows for smaller signatures and improved cryptographic strength. This smaller signature size has a secondary benefit, as well: any potential DDoS attacks will have less amplification as a result of the smaller signatures. This could help protect victims from bad actors and cybercriminals.
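The size difference is easy to demonstrate. The sketch below, using the third-party Python cryptography package (our illustration, not Verisign’s signing code), signs the same data with a 1280-bit RSA key and a P-256 ECDSA key; DNSSEC stores ECDSA signatures as the raw 64-byte r||s value, so the library’s DER output is decoded first.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding, utils

msg = b"example.  86400 IN A 192.0.2.1"   # stand-in for RRset data

# RSA/SHA-256 (DNSSEC algorithm 8) with a 1280-bit ZSK, as described above.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=1280)
rsa_sig = rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
print("RSA-1280 signature:", len(rsa_sig), "bytes")    # 160 bytes

# ECDSA P-256/SHA-256 (DNSSEC algorithm 13), raw r||s as used on the wire.
ec_key = ec.generate_private_key(ec.SECP256R1())
der_sig = ec_key.sign(msg, ec.ECDSA(hashes.SHA256()))
r, s = utils.decode_dss_signature(der_sig)
raw = r.to_bytes(32, "big") + s.to_bytes(32, "big")
print("ECDSA P-256 signature:", len(raw), "bytes")     # 64 bytes
```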

Support for DNSSEC signing and validation with ECDSA has been well-established by various managed DNS providers, 78 other TLDs, and nearly 10 million signed SLDs. Additionally, research performed by APNIC and NLnet Labs shows that ECDSA support in validating resolvers has increased significantly in recent years.

The Road to Algorithm 13

How did we get to this point? It took a lot of careful preparation and planning, but with internet stewardship at the forefront of our mission, we wanted to protect the DNS with the best technologies available to us. This means taking precise measures in everything we do, and this transition is no exception.

Initial Planning

Algorithm 13 was on our radar for several years before we officially kicked off the implementation process this year. As mentioned previously, the primary motivating properties were the smaller signature size, with each signature being 96 bytes smaller than our current RSA signatures (160 bytes vs. 64 bytes), and the improved cryptographic strength. This helps us plan for the future and prepare for a world where more domain names are signed with DNSSEC.

Testing

Each TLD will first implement the rollover to algorithm 13 in Verisign’s Operational Test & Evaluation (OT&E) environment prior to implementing the process in production, for a total of two rollovers per TLD. Combined, this will result in six total rollovers across the .com, .net, and .edu TLDs. Rollovers between the individual TLDs will be spaced out to avoid overlap where possible.

The algorithm rollover for each TLD will follow this sequence of events:

  1. Publish algorithm 13 ZSK signatures alongside algorithm 8 ZSK signatures
  2. Publish algorithm 13 DNSKEY records alongside algorithm 8 DNSKEY records
  3. Publish the algorithm 13 DS record in the root zone and stop publishing the algorithm 8 DS record
  4. Stop publishing algorithm 8 DNSKEY records
  5. Stop publishing algorithm 8 ZSK signatures

Only when a successful rollover has been done in OT&E will we begin the process in production.
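During the transition, the intermediate states in steps 2-4 above are externally observable. As a sketch (assuming the third-party dnspython package), one can list which DNSSEC algorithms a zone’s DNSKEY RRset currently advertises; both 8 (RSA/SHA-256) and 13 (ECDSA P-256) would appear while old and new keys are published side by side.

```python
import dns.resolver

for zone in ("com.", "net.", "edu."):
    answer = dns.resolver.resolve(zone, "DNSKEY")
    algs = sorted({rr.algorithm for rr in answer})   # DNSSEC algorithm numbers
    print(zone, "DNSKEY algorithms:", algs)
```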

Who is affected, and when is the change happening?

Now that we’ve given the background, we know you’re wondering: how might this affect me?

The change to a new DNSSEC-signing algorithm is expected to have no impact for the vast majority of internet users, service providers, and domain registrants. According to the aforementioned research by APNIC and NLnet Labs, most DNSSEC validators support ECDSA, and any that do not will simply ignore the signatures and still be able to resolve domains in Verisign-operated TLDs.

Regarding timing, we plan to begin to transition to ECDSA in the third and fourth quarters of this year. We will start the transition process with .edu, then .net, and then .com. We are currently aiming to have these three TLDs transitioned before the end of the fourth quarter 2023, but we will let the community know if our timeline shifts.

Conclusion

As leaders in DNSSEC adoption, this algorithm rollover demonstrates yet another critical step we are taking toward making the internet more secure, stable, and resilient. We look forward to enabling the change later this year, providing more efficient and stronger cryptographic security while optimizing resource utilization for DNSSEC-signed domain names.


Next Steps in Preparing for Post-Quantum DNSSEC

By Burt Kaliski

In 2021, we discussed a potential future shift from established public-key algorithms to so-called “post-quantum” algorithms, which may help protect sensitive information after the advent of quantum computers. We also shared some of our initial research on how to apply these algorithms to the Domain Name System Security Extensions, or DNSSEC. In the time since that blog post, we’ve continued to explore ways to address the potential operational impact of post-quantum algorithms on DNSSEC, while also closely tracking industry research and advances in this area.

Now, significant activities are underway that are setting the timeline for the availability and adoption of post-quantum algorithms. Since DNS participants – including registries and registrars – use public-key cryptography in a number of their systems, these systems may all eventually need to be updated to use the new post-quantum algorithms. We also announce two major contributions that Verisign has made in support of standardizing this technology: an Internet-Draft as well as a public, royalty-free license to certain intellectual property related to that Internet-Draft.

In this blog post, we review the changes that are on the horizon and what they mean for the DNS ecosystem, and one way we are proposing to ease the implementation of post-quantum signatures – Merkle Tree Ladder mode.

By taking these actions, we aim to be better prepared (while also helping others prepare) for a future where cryptanalytically relevant quantum computing and post-quantum cryptography become a reality.

Recent Developments

In July 2022, the National Institute of Standards and Technology (NIST) selected one post-quantum encryption algorithm and three post-quantum signature algorithms for standardization, with standards for these algorithms arriving as early as 2024. In line with this work, the Internet Engineering Task Force (IETF) has also started standards development activities on applying post-quantum algorithms to internet protocols in various working groups, including the newly formed Post-Quantum Use in Protocols (PQUIP) working group. And finally, the National Security Agency (NSA) recently announced that National Security Systems are expected to transition to post-quantum algorithms by 2035.

Collectively, these announcements and activities indicate that many organizations are envisioning a (post-)quantum future, across many protocols. Verisign’s main concern continues to be how post-quantum cryptography impacts the DNS, and in particular, how post-quantum signature algorithms impact DNSSEC.

DNSSEC Considerations

The standards being developed in the next few years are likely to be the ones deployed when the post-quantum transition eventually takes place, so now is the time to take operational requirements for specific protocols into account.

For DNSSEC, the operational concerns are twofold.

First, the large signature sizes of current post-quantum signatures selected by NIST would result in DNSSEC responses that exceed the size limits of the User Datagram Protocol, which is broadly deployed in the DNS ecosystem. While the Transmission Control Protocol and other transports are available, the additional overhead of having large post-quantum signatures on every response — which can be one to two orders of magnitude larger than traditional signatures — introduces operational risk to the DNS ecosystem that would be preferable to avoid.

Second, the large signatures would significantly increase memory requirements for resolvers using in-memory caches and authoritative nameservers using in-memory databases.

Figure 1: Size impact of traditional and post-quantum signatures on a fully signed DNS zone. Horizontal bars show the percentage of the zone that would be signature data for two traditional and two post-quantum algorithms; vertical bars show the percentage increase in the zone size due to signature data.

Figure 1, from Andy Fregly’s recent presentation at OARC 40, shows the impact on a fully signed DNS zone where, on average, there are 2.2 digital signatures per resource record set (covering both existence and non-existence proofs). The horizontal bars show the percentage of the zone file that would consist of signature data for the two prevalent current algorithms, RSA and ECDSA, and for the smallest and largest of the NIST PQC algorithms. At the low and high end of these examples, signatures with ECDSA would take up 40% of the zone and SPHINCS+ signatures would take up over 99% of the zone. The vertical bars give the percentage size increase of the zone file due to signatures. Again, comparing the low and high end, a zone fully signed with SPHINCS+ would be about 50 times the size of a zone fully signed with ECDSA.
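The figure’s proportions can be roughly reproduced with back-of-the-envelope arithmetic. In the sketch below, the per-algorithm signature sizes are the schemes’ published values, but the average unsigned RRset size is an assumption chosen for illustration (the presentation’s exact inputs aren’t given here); the results land close to the 40%, ~99%, and ~50x figures quoted above.

```python
SIG_BYTES = {             # published signature sizes, in bytes
    "ECDSA P-256": 64,
    "RSA-2048": 256,
    "Falcon-512": 666,
    "SPHINCS+-128s": 7856,
}
SIGS_PER_RRSET = 2.2      # average signatures per RRset, per the article
RRSET_BYTES = 210         # assumed average unsigned RRset size (illustrative)

for alg, sig in SIG_BYTES.items():
    sig_data = SIGS_PER_RRSET * sig
    share = sig_data / (RRSET_BYTES + sig_data)
    print(f"{alg:14s} signature share of zone: {share:6.1%}")

ratio = (RRSET_BYTES + SIGS_PER_RRSET * SIG_BYTES["SPHINCS+-128s"]) / \
        (RRSET_BYTES + SIGS_PER_RRSET * SIG_BYTES["ECDSA P-256"])
print(f"SPHINCS+ zone is ~{ratio:.0f}x the size of the ECDSA zone")
```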

Merkle Tree Ladder Mode: Reducing Size Impact of Post-Quantum Signatures

In his 1988 article, “The First Ten Years of Public-Key Cryptography,” Whitfield Diffie, co-discoverer of public-key cryptography, commented on the lack of progress in finding public-key encryption algorithms that were as fast as the symmetric-key algorithms of the day: “Theorems or not, it seemed silly to expect that adding a major new criterion to the requirements of a cryptographic system could fail to slow it down.”

Diffie’s counsel also appears relevant to the search for post-quantum algorithms: It would similarly be surprising if adding the “major new criterion” of post-quantum security to the requirements of a digital signature algorithm didn’t impact performance in some way. Signature size may well be the tradeoff for post-quantum security, at least for now.

With this tradeoff in mind, Verisign’s post-quantum research team has explored ways to address the size impact, particularly to DNSSEC, arriving at a construction we call a Merkle Tree Ladder (MTL), a generalization of a single-rooted Merkle tree (see Figure 2). We have also defined a technique that we call the Merkle Tree Ladder mode of operation for using the construction with an underlying signature algorithm.

Figure 2: A Merkle Tree Ladder consists of one or more “rungs” that authenticate or “cover” the leaves of a generalized Merkle tree. In this example, rungs 19:19, 17:18, and 1:16 are collectively the ancestors of all 19 leaves of the tree and therefore cover them. The values of the nodes are the hash of the values of their children, providing cryptographic protection. A Merkle authentication path consisting of sibling nodes authenticates a leaf node relative to the ladder, e.g., leaf node 7 (corresponding to message 7 beneath) can be authenticated relative to rung 1:16 by rehashing it with the sibling nodes along the path 8, 5:6, 1:4 and 9:16. If the verifier already has a previous ladder that covers a message, the verifier can instead rehash relative to that ladder, e.g., leaf node 7 can be verified relative to rung 1:8 using sibling nodes 8, 5:6 and 1:4.

Similar to current deployments of public-key cryptography, MTL mode combines processes with complementary properties to balance performance and other criteria (see Table 1). In particular, in MTL mode, rather than signing individual messages with a post-quantum signature algorithm, ladders comprised of one or more Merkle tree nodes are signed using the post-quantum algorithm. Individual messages are then authenticated relative to the ladders using Merkle authentication paths.

Criterion to Achieve: Public-Key Property for Encryption
  • Initial Design with a Single Process: Encrypt individual messages with public-key algorithm
  • Improved Design Combining Complementary Processes: Establish symmetric keys using public-key algorithm; encrypt multiple messages using each symmetric key
  • Benefit: Amortize cost of public-key operations across multiple messages

Criterion to Achieve: Post-Quantum Property for Signatures
  • Initial Design with a Single Process: Sign individual messages with post-quantum algorithm
  • Improved Design Combining Complementary Processes: Sign Merkle Tree Ladders using post-quantum algorithm; authenticate multiple messages relative to each signed ladder
  • Benefit: Amortize size of post-quantum signature across multiple messages

Table 1: Speed concerns for traditional public-key algorithms were addressed by combining them with symmetric-key algorithms (for instance, as outlined in early specifications for Internet Privacy-Enhanced Mail). Size concerns for emerging post-quantum signature algorithms can potentially be addressed by combining them with constructions such as Merkle Tree Ladders.

Although the signatures on the ladders might be relatively large, the ladders and their signatures are sent infrequently. In contrast, the Merkle authentication paths that are sent for each message are relatively short. The combination of the two processes maintains the post-quantum property while amortizing the size impact of the signatures across multiple messages. (Merkle tree constructions, being based on hash functions, are naturally post-quantum.)
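To see the amortization numerically, consider a hypothetical deployment. All parameters below are assumptions for illustration, not figures from Verisign’s evaluation: a SPHINCS+-128s signature is 7,856 bytes (its published size), a SHA-256 authentication path costs about 32·log2(n) bytes for n messages, and one signed ladder is assumed to serve all n messages.

```python
import math

SPHINCS_SIG = 7856     # bytes: published SPHINCS+-128s signature size
HASH_LEN = 32          # bytes: SHA-256 output

for n in (1_024, 65_536, 1_000_000):          # assumed messages per ladder
    auth_path = HASH_LEN * math.ceil(math.log2(n))
    amortized = auth_path + SPHINCS_SIG / n   # per-message bytes on the wire
    print(f"n={n:>9,}: path {auth_path:3d} B, "
          f"~{amortized:,.0f} B/message vs {SPHINCS_SIG:,} B unamortized")
```

Even at modest n, the per-message cost is dominated by the short authentication path rather than the post-quantum signature, which is the amortization effect described here.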

The two-part approach for public-key algorithms has worked well in practice. In Transport Layer Security, symmetric keys are established in occasional handshake operations, which may be more expensive. The symmetric keys are then used to encrypt multiple messages within a session without further overhead for key establishment. (They can also be used to start a new session).

We expect that a two-part approach for post-quantum signatures can similarly work well in an application like DNSSEC where verifiers are interested in authenticating a subset of messages from a large, evolving message series (e.g., DNS records).

In such applications, signed Merkle Tree Ladders covering a range of messages in the evolving series can be provided to a verifier occasionally. Verifiers can then authenticate messages relative to the ladders, given just a short Merkle authentication path.

Importantly, due to a property of Merkle authentication paths called backward compatibility, all verifiers can be given the same authentication path relative to the signer’s current ladder. This also helps with deployment in applications such as DNSSEC, since the authentication path can be published in place of a traditional signature. An individual verifier may verify the authentication path as long as the verifier has a previously signed ladder covering the message of interest. If not, then the verifier just needs to get the current ladder.

As reported in our presentation on MTL mode at the RSA Conference Cryptographers’ Track in April 2023, our initial evaluation of the expected frequency of requests for MTL mode signed ladders in DNSSEC is promising, suggesting that a significant reduction in effective signature size impact can be achieved.

Verisign’s Contributions to Standardization

To facilitate more public evaluation of MTL mode, Verisign’s post-quantum research team last week published the Internet-Draft “Merkle Tree Ladder Mode (MTL) Signatures.” The draft provides the first detailed, interoperable specification for applying MTL mode to a signature scheme, with SPHINCS+ as an initial example.

We chose SPHINCS+ because it is the most conservative of the NIST PQC algorithms from a cryptographic perspective, being hash-based and stateless. It is arguably most suited to be one of the algorithms in a long-term deployment of a critical infrastructure service like DNSSEC. With this focus, the specification has a “SPHINCS+-friendly” style. Implementers familiar with SPHINCS+ will find similar notation and constructions as well as common hash function instantiations. We are open to adding other post-quantum signature schemes to the draft or other drafts in the future.

Publishing the Internet-Draft is a first step toward the goal of standardizing a mode of operation that can reduce the size impact of post-quantum signature algorithms.

In support of this goal, Verisign also announced this week a public, royalty-free license to certain intellectual property related to the Internet-Draft published last week. Similar to other intellectual property rights declarations the company has made, we have announced a “Standards Development Grant” which provides the listed intellectual property under royalty-free terms for the purpose of facilitating standardization of the Internet-Draft we published on July 10, 2023. (The IPR declaration gives the official language.)

We expect to release an open-source implementation of the Internet-Draft soon, and, later this year, to publish an Internet-Draft on using MTL mode signatures in DNSSEC.

With these contributions, we invite implementers to take part in the next step toward standardization: evaluating initial versions of MTL mode to confirm whether they indeed provide practical advantages in specific use cases.

Conclusion

DNSSEC continues to be an important part of the internet’s infrastructure, providing cryptographic verification of information associated with the unique, stable identifiers in this ubiquitous namespace. That is why preparing for an eventual transition to post-quantum algorithms for DNSSEC has been and continues to be a key research and development activity at Verisign, as evidenced by our work on MTL mode and post-quantum DNSSEC more generally.

Our goal is that with a technique like MTL mode in place, protocols like DNSSEC can preserve the security characteristics of a pre-quantum environment while minimizing the operational impact of larger signatures in a post-quantum world.

In a later blog post, we’ll share more details on some upcoming changes to DNSSEC, and how these changes will provide both security and operational benefits to DNSSEC in the near term.

Verisign plans to continue to invest in research and standards development in this area, as we help prepare for a post-quantum future.


The Risks and Preventions of AI in Business: Safeguarding Against Potential Pitfalls

By The Hacker News
Artificial intelligence (AI) holds immense potential for optimizing internal processes within businesses. However, it also comes with legitimate concerns regarding unauthorized use, including data loss risks and legal consequences. In this article, we will explore the risks associated with AI implementation and discuss measures to minimize damages. Additionally, we will examine regulatory

Verisign Honors Vets in Technology For Military Appreciation Month

By Ellen Petrocci

For Murray Green, working for a company that is a steward of critical internet infrastructure is a mission that he can get behind. Green, a senior engineering manager at Verisign, is a U.S. Army veteran who served during Operation Desert Storm and sees stewardship as a lifelong mission. In both roles, he has stayed focused on the success of the mission and cultivating great teamwork.

Teamwork is something that Laura Street, a software engineer and U.S. Air Force veteran, came to appreciate through her military service. It was then that she learned to appreciate how people from different backgrounds can work together on missions by finding their commonalities.

While military and civilian roles are very different, Verisign appeals to many veterans because of the mission-driven nature of the work we do.

Green and Street are two of the many veterans who have chosen to apply their military experience in a civilian career at Verisign. Both say that the work is not only rewarding to them, but to anyone who depends on Verisign’s commitment in helping to maintain the security, stability, and resiliency of the Domain Name System (DNS) and the internet.

At Verisign, we celebrate Military Appreciation Month by paying tribute to those who have served and recognizing how fortunate we are to work alongside amazing veterans whose contributions to our work provide enormous value.

Introducing Data-Powered Technology

Before joining the military, Murray Green studied electrical engineering but soon realized that his true passion was computer science. Looking for a way to pay for school while exploring and excelling as a programmer analyst, he turned to the U.S. Army.

He served more than four years at the Walter Reed Army Medical Center in Washington as the sole programmer for military personnel, using a proprietary language to maintain a reporting system that supplied data analysis. It was a role that helped him recognize the importance of data to any mission – whether for the U.S. Army or a company like Verisign.

At Walter Reed, he helped usher in the age of client-server computing, which dramatically reduced data processing time. “Around this time, personal computers connected to mini servers were just coming online so, using this new technology, I was able to unload data from the mainframe and bring it down to minicomputers running programs locally, which resulted in tasks being completed without the wait times associated with conventional mainframe computing,” he said. “I was there at the right time.”

His work led him to receive the Meritorious Service Medal, recognizing his expertise in the proprietary programming language that was used to assist in preparation for Operation Desert Storm, the first mobilization of U.S. Army personnel since Vietnam.

In the military, he also came to understand the importance of leadership – “providing purpose, direction, and motivation to accomplish the mission and improve the organization.”

Green has been at Verisign for over 20 years, starting on the registry side of the business. In that role, he helped maintain the .com/.net top-level domain (TLD) name database, which at the time held 5 million domain names. Today, he still oversees this database, managing a highly skilled team that has helped provide uninterrupted resolution service for .com and .net for over a quarter of a century.

Sense of Teamwork Leaves a Lasting Impression

Street had been in medical school, looking for a way to pay for her continued education, when she heard about the military’s Health Professional Scholarship Program and turned to the U.S. Air Force.

“I met some terrific people in the military,” she said. “My favorite experiences involved working with people who cared about others and were able to motivate them with positivity.” But it was the sense of teamwork she encountered in the military that left a lasting impression.

“There’s a sense of accountability and concern for others,” she said. “You help one another.”

While working in the Education and Training department, she collaborated with a support team to troubleshoot a video that wasn’t loading properly and was impressed with how the developers worked to fix the problem. She immediately took an interest in programming and enrolled in night classes at a local community college. After completing her service in the U.S. Air Force, she went back to school to pursue a bachelor’s degree in computer science.

She’s been at Verisign for two years and, while the job itself is rewarding because it taps into so many of her interests – from Java programming to network protection and packet analysis – it was the chemistry with the team that was most enticing about the role.

“I felt as at-ease as one can possibly feel during a technical interview,” she said. “I got the sense that these were people who I would want to work with.”

Street credits the military for teaching her valuable communication and teamwork skills that she continues to apply in her role, which focuses on keeping the .com and .net top-level domains available around the clock, around the world.

A Unique and Global Mission

Both Green and Street encourage service members to stay focused on the success of their personal missions and the teamwork they learned in the military, and to leverage those skills in the civilian world. Use your service as a selling point and understand that companies value that background more than you think, they said.

“Being proud of the service we provide to others and paying attention to details allows us at Verisign to make a global difference,” Green said. “The veterans on our team bring an incredible skillset that is highly valued here. I know that I’m a part of an incredible team at Verisign.”

Verisign is proud to create career opportunities where veterans can apply their military training. To learn more about our current openings, visit Verisign Careers.

The post Verisign Honors Vets in Technology For Military Appreciation Month appeared first on Verisign Blog.

New Decoy Dog Malware Toolkit Uncovered: Targeting Enterprise Networks

By Ravie Lakshmanan
An analysis of over 70 billion DNS records has led to the discovery of a new sophisticated malware toolkit dubbed Decoy Dog targeting enterprise networks. Decoy Dog, as the name implies, is evasive and employs techniques like strategic domain aging and DNS query dribbling, wherein a series of queries are transmitted to the command-and-control (C2) domains so as to not arouse any suspicion. "

Adding ZONEMD Protections to the Root Zone

By Duane Wessels

The Domain Name System (DNS) root zone will soon be getting a new record type, called ZONEMD, to further ensure the security, stability, and resiliency of the global DNS in the face of emerging new approaches to DNS operation. While this change will be unnoticeable for the vast majority of DNS operators (such as registrars, internet service providers, and organizations), it provides a valuable additional layer of cryptographic security to ensure the reliability of root zone data.

In this blog, we’ll discuss these new approaches to root zone operation, as well as ZONEMD itself. We’ll share deployment plans, explain how they may affect certain users, and describe what DNS operators need to be aware of beforehand to ensure little-to-no disruption.

The Root Server System

The DNS root zone is the starting point for most domain name lookups on the internet. The root zone contains delegations to nearly 1,500 top-level domains, such as .com, .net, .org, and many others. Since its inception in 1984, various organizations known collectively as the Root Server Operators have provided the service for what we now call the Root Server System (RSS). In this system, a myriad of servers respond to approximately 80 billion root zone queries each day.

While the RSS continues to perform this function with a high degree of dependability, there are recent proposals to use the root zone in a slightly different way. These proposals create some efficiencies for DNS operators, but they also introduce new challenges.

New Proposals

In 2020, the Internet Engineering Task Force (IETF) published RFC 8806, titled “Running a Root Server Local to a Resolver.” Along the same lines, in 2021 the Internet Corporation for Assigned Names and Numbers (ICANN) Office of the Chief Technology Officer published OCTO-027, titled “Hyperlocal Root Zone Technical Analysis.” Both proposals share the idea that recursive name servers can receive and load the entire root zone locally and respond to root zone queries directly.

But in a scenario where the entire root zone is made available to millions of recursive name servers, a new question arises: how can consumers of zone data verify that zone content has not been modified before reaching their systems?

One might imagine that DNS Security Extensions (DNSSEC) could help. However, while the root zone is indeed signed with DNSSEC, most of the records in the zone are considered non-authoritative (i.e., all the NS and glue records) and therefore do not have signatures. What about something like a Pretty Good Privacy (PGP) signature on the root zone file? That comes with its own challenge: in PGP, the detached signature is easily separated from the data. For example, there is no way to include a PGP signature over DNS zone transfer, and there is no easy way to know which version of the zone goes with the signature.

Introducing ZONEMD

A solution to this problem comes from RFC 8976. Led by Verisign and titled “Message Digest for DNS Zones” (known colloquially as ZONEMD), this protocol calls for a cryptographic digest of the zone data to be embedded into the zone itself. This ZONEMD record can then be signed and verified by consumers of the zone data. Here’s how it works:

Each time a zone is updated, the publisher calculates the ZONEMD record by sorting and canonicalizing all the records in the zone and providing them as input to a message digest function. Sorting and canonicalization are the same as for DNSSEC. In fact, the ZONEMD calculation can be performed at the same time the zone is signed. Digest calculation necessarily excludes the ZONEMD record itself, so the final step is to update the ZONEMD record and its signatures.

A recipient of a zone that includes a ZONEMD record repeats the same calculation and compares its calculated digest value with the published digest. If the zone is signed, then the recipient can also validate the correctness of the published digest. In this way, recipients can verify the authenticity of zone data before using it.
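
To make the verification flow concrete, here is a minimal sketch in Python. It is illustrative only: canonical_wire_format() is a hypothetical helper standing in for the full RFC 8976 canonicalization rules, and a real implementation would also validate the DNSSEC signature over the ZONEMD record itself.

import hashlib

SHA384 = 1  # ZONEMD hash algorithm number assigned to SHA-384

def compute_zone_digest(records, zonemd):
    # Digest calculation excludes the ZONEMD record itself.
    included = [r for r in records if r is not zonemd]
    # canonical_wire_format() is a hypothetical helper that sorts the
    # records and renders them in DNSSEC canonical wire format.
    return hashlib.sha384(canonical_wire_format(included)).digest()

def verify_zone_digest(records, zonemd):
    # A recipient repeats the publisher's calculation and compares
    # the result against the digest published in the zone.
    return compute_zone_digest(records, zonemd) == zonemd.digest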

A number of open-source DNS software products now, or soon will, include support for ZONEMD verification. These include Unbound (version 1.13.2), NSD (version 4.3.4), Knot DNS (version 3.1.0), PowerDNS Recursor (version 4.7.0) and BIND (version 9.19).

Who Is Affected?

Verisign, ICANN, and the Root Server Operators are taking steps to ensure that the addition of the ZONEMD record in no way impacts the ability of the root server system to receive zone updates and to respond to queries. As a result, most internet users are not affected by this change.

Anyone using RFC 8806, or a similar technique to load root zone data into their local resolver, is also unlikely to be affected. Software products that implement those features should be able to fully process a zone that includes the new record type, especially for the reasons described below. Once the record has been added, users can take advantage of ZONEMD verification to ensure root zone data is authentic.

Users most likely to be affected are those that receive root zone data from the internic.net servers (or some other source) and use custom software to parse the zone file. Depending on how such custom software is designed, there is a possibility that it will treat the new ZONEMD record as unexpected and lead to an error condition. Key objectives of this blog post are to raise awareness of this change, provide ample time to address software issues, and minimize the likelihood of disruptions for such users.

Deployment Plan

In 2020, Verisign asked the Root Zone Evolution Review Committee (RZERC) to consider a proposal for adding data protections to the root zone using ZONEMD. In 2021, the RZERC published its recommendations in RZERC003. One of those recommendations was for Verisign and ICANN to develop a deployment plan and make the community aware of the plan’s details. That plan is summarized in the remainder of this blog post.

Phased Rollout

One attribute of a ZONEMD record is the choice of a hash algorithm used to create the digest. RFC 8976 defines two standard hash algorithms – SHA-384 and SHA-512 – and a range of “private-use” algorithms.

Initially, the root zone’s ZONEMD record will have a private-use hash algorithm. This allows us to first include the record in the zone without anyone worrying about the validity of the digest values. Since the hash algorithm is from the private-use range, a consumer of the zone data will not know how to calculate the digest value. A similar technique, known as the “Deliberately Unvalidatable Root Zone,” was utilized when DNSSEC was added to the root zone in 2010.

After a period of more than two months, the ZONEMD record will transition to a standard hash algorithm.

Hash Algorithm

SHA-384 has been selected for the initial implementation for compatibility reasons.

The developers of BIND implemented the ZONEMD protocol based on an early Internet-Draft, some time before it was published as an RFC. Unfortunately, the initial BIND implementation only accepts ZONEMD records with a digest length of 48 bytes (i.e., the SHA-384 length). Since the versions of BIND with this behavior are in widespread use today, use of the SHA-512 hash algorithm would likely lead to problems for many BIND installations, possibly including some Root Server Operators.

Presentation Format

Distribution of the zone between the Root Zone Maintainer and Root Server Operators primarily takes place via the DNS zone transfer protocol. In this protocol, zone data is transmitted in “wire format.”

The root zone is also stored and served as a file on the internic.net FTP and web servers. Here, the zone data is in “presentation format.” The ZONEMD record will appear in these files using its native presentation format. For example:

. 86400 IN ZONEMD 2021101902 1 1 ( 7d016e7badfd8b9edbfb515deebe7a866bf972104fa06fec
e85402cc4ce9b69bd0cbd652cec4956a0f206998bfb34483 )

Some users of zone data received from the FTP and web servers might currently be using software that does not recognize the ZONEMD presentation format. These users might experience some problems when the ZONEMD record first appears. We did consider using a generic record format; however, in consultation with ICANN, we believe that the native format is a better long-term solution.
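
For maintainers of such custom software, the most robust behavior is usually to skip, rather than reject, record types the parser does not recognize. Below is a rough sketch of that idea in Python; the simple line-oriented parser is purely illustrative and, among other simplifications, ignores the multi-line parenthesized form shown above.

KNOWN_TYPES = {"SOA", "NS", "A", "AAAA", "DS", "RRSIG", "NSEC", "DNSKEY"}

def parse_zone_lines(lines):
    records = []
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue  # blank or malformed line
        name, ttl, rrclass, rrtype = fields[0], fields[1], fields[2], fields[3]
        if rrtype not in KNOWN_TYPES:
            continue  # tolerate unknown types such as ZONEMD
        records.append((name, int(ttl), rrclass, rrtype, fields[4:]))
    return records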

Schedule

Currently, we are targeting the initial deployment of ZONEMD in the root zone for September 13, 2023. As previously stated, the ZONEMD record will be published first with a private-use hash algorithm number. We are targeting December 6, 2023, as the date to begin using the SHA-384 hash algorithm, at which point the root zone ZONEMD record will become verifiable.

Conclusion

Deploying ZONEMD in the root zone helps to increase the security, stability, and resiliency of the DNS. Soon, recursive name servers that choose to serve root zone data locally will have stronger assurances as to the zone’s validity.

If you’re interested in following the ZONEMD deployment progress, please look for our announcements on the DNS Operations mailing list.

The post Adding ZONEMD Protections to the Root Zone appeared first on Verisign Blog.

Minimized DNS Resolution: Into the Penumbra

By Zaid AlBanna

Over the past several years, domain name queries – a critical element of internet communication – have quietly become more secure, thanks, in large part, to a little-known set of technologies that are having a global impact. Verisign CTO Dr. Burt Kaliski covered these in a recent Internet Protocol Journal article, and I’m excited to share more about the role Verisign has performed in advancing this work and making one particular technology freely available worldwide.

The Domain Name System (DNS) has long followed a traditional approach of answering queries, where resolvers send a query with the same fully qualified domain name to each name server in a chain of referrals. Then, they generally apply the final answer they receive only to the domain name that was queried for in the original request.

But recently, DNS operators have begun to deploy various “minimization techniques” – techniques aimed at reducing both the quantity and sensitivity of information exchanged between DNS ecosystem components as a means of improving DNS security. Why the shift? As we discussed in a previous blog, it’s all in the interest of bringing the process closer to the “need-to-know” security principle, which emphasizes the importance of sharing only the minimum amount of information required to complete a task or carry out a function. This effort is part of a general, larger movement to reduce the disclosure of sensitive information in our digital world.

As part of Verisign’s commitment to security, stability, and resiliency of the global DNS, the company has worked both to develop qname minimization techniques and to encourage the adoption of DNS minimization techniques in general. We believe strongly in this work since these techniques can reduce the sensitivity of DNS data exchanged between resolvers and both root and TLD servers without adding operational risk to authoritative name server operations.

To help advance this area of technology, in 2015, Verisign announced a royalty-free license to its qname minimization patents in connection with certain Internet Engineering Task Force (IETF) standardization efforts. There’s been a steady increase in support and deployment since that time; as of this writing, roughly 67% of probes were utilizing qname-minimizing resolvers, according to statistics hosted by NLnet Labs. That’s up from just 0.7% in May 2017 – a strong indicator of minimization techniques’ usefulness to the community. At Verisign, we are seeing similar trends with approximately 65% of probes utilizing qname-minimizing resolvers in queries with two labels at .com and .net authoritative name servers, as shown in Figure 1 below.

Figure 1: A domain name consists of one or more labels. For instance, www.example.com consists of three labels: “www”, “example”, and “com”. This chart suggests that more and more recursive resolvers have implemented qname minimization, which results in fewer queries for domain names with three or more labels. With qname minimization, the resolver would send “example.com,” with two labels, instead of “www.example.com” with all three.
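
As a toy illustration of the behavior charted above, the sketch below shows the successively longer names a qname-minimizing resolver asks for as it follows referrals, rather than sending the full name to every name server. This is a simplification; real resolvers add caching, query-type choices, and fallback logic.

def minimized_query_names(fqdn):
    # Yield the successively longer names a qname-minimizing resolver
    # uses as it follows referrals from the root toward the full name.
    labels = fqdn.rstrip(".").split(".")
    for i in range(1, len(labels) + 1):
        yield ".".join(labels[-i:]) + "."

print(list(minimized_query_names("www.example.com")))
# ['com.', 'example.com.', 'www.example.com.']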

Kaliski’s article, titled “Minimized DNS Resolution: Into the Penumbra,” explores several specific minimization techniques documented by the IETF, reports on their implementation status, and discusses the effects of their adoption on DNS measurement research. An expanded version of the article can be found on the Verisign website.

This piece is just one of the latest to demonstrate Verisign’s continued investment in research and standards development in the DNS ecosystem. As a company, we’re committed to helping shape the DNS of today and tomorrow, and we recognize this is only possible through ongoing contributions by dedicated members of the internet infrastructure community – including the team here at Verisign.

Read more about Verisign’s contributions to this area:

Query Name Minimization and Authoritative DNS Server Behavior – DNS-OARC Spring ’15 Workshop (presentation)

Minimum Disclosure: What Information Does a Name Server Need to Do Its Job? (blog)

Maximizing Qname Minimization: A New Chapter in DNS Protocol Evolution (blog)

A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere (blog)

Information Protection for the Domain Name System: Encryption and Minimization (blog)

The post Minimized DNS Resolution: Into the Penumbra appeared first on Verisign Blog.

Verisign’s Role in Securing the DNS Through Key Signing Ceremonies

By Duane Wessels

Every few months, an important ceremony takes place. It’s not splashed all over the news, and it’s not attended by global dignitaries. It goes unnoticed by many, but its effects are felt across the globe. This ceremony helps make the internet more secure for billions of people.

This unique ceremony began in 2010 when Verisign, the Internet Corporation for Assigned Names and Numbers (ICANN), and the U.S. Department of Commerce’s National Telecommunications and Information Administration collaborated – with input from the global internet community – to deploy a technology called Domain Name System Security Extensions (DNSSEC) to the Domain Name System (DNS) root zone in a special ceremony. This wasn’t a one-off occurrence in the history of the DNS, though. Instead, these organizations developed a set of processes, procedures, and schedules that would be repeated for years to come. Today, these recurring ceremonies help ensure that the root zone is properly signed, and as a result, the DNS remains secure, stable, and resilient.

In this blog, we take the opportunity to explain these ceremonies in greater detail and describe the critical role that Verisign is honored to perform.

A Primer on DNSSEC, Key Signing Keys, and Zone Signing Keys

DNSSEC is a series of technical specifications that allow operators to build greater security into the DNS. Because the DNS was not initially designed as a secure system, DNSSEC represented an essential leap forward in securing DNS communications. Deploying DNSSEC allows operators to better protect their users, and it helps to prevent common threats such as “man-in-the-middle” attacks. DNSSEC works by using public key cryptography, which allows zone operators to cryptographically sign their zones. This allows anyone communicating with and validating a signed zone to know that their exchanges are genuine.

The root zone, like most signed zones, uses separate keys for zone signing and for key signing. The Key Signing Key (KSK) is separate from the Zone Signing Key (ZSK). However, unlike most zones, the root zone’s KSK and ZSK are operated by different organizations; ICANN serves as the KSK operator and Verisign as the ZSK operator. These separate roles for DNSSEC align naturally with ICANN as the Root Zone Manager and Verisign as the Root Zone Maintainer.

In practice, the KSK/ZSK split means that the KSK only signs the DNSSEC keys, and the ZSK signs all the other records in the zone. Signing with the KSK happens infrequently – only when the keys change. However, signing with the ZSK happens much more frequently – whenever any of the zone’s other data changes.
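
In rough pseudocode terms, the division of labor looks like the following Python sketch, where sign() and the rrset objects are hypothetical stand-ins for real DNSSEC signing machinery.

def sign_zone(rrsets, ksk, zsk):
    # The KSK signs only the DNSKEY RRset; the ZSK signs everything else.
    signatures = {}
    for rrset in rrsets:
        key = ksk if rrset.rrtype == "DNSKEY" else zsk
        signatures[rrset] = sign(key, rrset)  # sign() is hypothetical
    return signatures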

DNSSEC and Public Key Cryptography

Something to keep in mind before we go further: remember that DNSSEC utilizes public key cryptography, in which keys have both a private and public component. The private component is used to generate signatures and must be guarded closely. The public component is used to verify signatures and can be shared openly. Good cryptographic hygiene says that these keys should be changed (or “rolled”) periodically.
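
The asymmetry is easy to demonstrate. This minimal sketch uses Python’s “cryptography” package with Ed25519 purely for brevity; the root zone itself signs with RSA/SHA-256, and real DNSSEC signatures cover canonical wire-format record sets rather than a bare byte string.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # guarded closely
public_key = private_key.public_key()       # shared openly

data = b"example. 3600 IN A 192.0.2.1"
signature = private_key.sign(data)          # private component signs

try:
    public_key.verify(signature, data)      # public component verifies
    print("signature verified")
except InvalidSignature:
    print("signature invalid")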

In DNSSEC, changing a KSK is generally difficult, whereas changing a ZSK is relatively easy. This is especially true for the root zone where a KSK rollover requires all validating recursive name servers to update their copy of the trust anchor. Whereas the first and only KSK rollover to date happened after a period of eight years, ZSK rollovers take place every three months. Not coincidentally, this is also how often root zone key signing ceremonies take place.

Why We Have Ceremonies

The notion of holding a “ceremony” for such an esoteric technical function may seem strange, but this ceremony is very different from what most people are used to. Our common understanding of the word “ceremony” brings to mind an event with speeches and formal attire. But in this case, the meaning refers simply to the formality and ritual aspects of the event.

There are two main reasons for holding key signing ceremonies. One is to bring participants together so that everyone may transparently witness the process. Ceremony participants include ICANN staff, Verisign staff, Trusted Community Representatives (TCRs), and external auditors, plus guests on occasion.

The other important reason, of course, is to generate DNSSEC signatures. Occasionally other activities take place as well, such as generating new keys, retiring equipment, and changing TCRs. In this post, we’ll focus only on the signature generation procedures.

The Key Signing Request

A month or two before each ceremony, Verisign generates a file called the Key Signing Request (KSR). This is an XML document which includes the set of public key records (both KSK and ZSK) to be signed and then used during the next calendar quarter. The KSR is securely transmitted from Verisign to the Internet Assigned Numbers Authority (IANA), which is a function of ICANN that performs root zone management. IANA securely stores the KSR until it is needed for the upcoming key signing ceremony.

Each quarter is divided into nine 10-day “slots” (for some quarters, the last slot is extended by a day or two) and the XML file contains nine key “bundles” to be signed. Each bundle, or slot, has a signature inception and expiration timestamp, such that they overlap by at least five days. The first and last slots in each quarter are used to perform ZSK rollovers. During these slots we publish two ZSKs and one KSK in the root zone.
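
For the Q4 2022 ceremony whose output appears later in this post, the slot timing can be reconstructed with a few lines of Python. Note that the 21-day validity period used here is inferred from that output, not a published rule.

from datetime import datetime, timedelta

quarter_start = datetime(2022, 10, 1)
for slot in range(9):
    inception = quarter_start + timedelta(days=10 * slot)   # 10-day slots
    expiration = inception + timedelta(days=21)             # inferred validity
    print(f"{slot + 1}  {inception:%Y-%m-%dT%H:%M:%S}  {expiration:%Y-%m-%dT%H:%M:%S}")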

At the Ceremony: Details Matter

The root zone KSK private component is held inside secure Hardware Security Modules (HSMs). These HSMs are stored inside locked safes, which in turn are kept inside locked rooms. At a key signing ceremony, the HSMs are taken out of their safes and activated for use. This all occurs according to a pre-defined script with many detailed steps, as shown in the figure below.

Figure 1: A detailed script outlining the exact steps required to activate HSMs, as well as the initials and timestamps of witnesses.

Also stored inside the safe is a laptop computer, its operating system on non-writable media (i.e., DVD), and a set of credentials for the TCRs, stored on smart cards and locked inside individual safe deposit boxes. Once all the necessary items are removed from the safes, the equipment can be turned on and activated.

The laptop computer is booted from its operating system DVD and the HSM is connected via Ethernet for data transfer and serial port for console logging. The TCR credentials are used to activate the HSM. Once activated, a USB thumb drive containing the KSR file is connected to the laptop and the signing program is started.

The signing program reads the KSR, validates it, and then displays information about the keys about to be signed. This includes the signature inception and expiration timestamps, and the ZSK key tag values.

Validate and Process KSR /media/KSR/KSK46/ksr-root-2022-q4-0.xml...
#  Inception           Expiration           ZSK Tags      KSK Tag(CKA_LABEL)
1  2022-10-01T00:00:00 2022-10-22T00:00:00  18733,20826
2  2022-10-11T00:00:00 2022-11-01T00:00:00  18733
3  2022-10-21T00:00:00 2022-11-11T00:00:00  18733
4  2022-10-31T00:00:00 2022-11-21T00:00:00  18733
5  2022-11-10T00:00:00 2022-12-01T00:00:00  18733
6  2022-11-20T00:00:00 2022-12-11T00:00:00  18733
7  2022-11-30T00:00:00 2022-12-21T00:00:00  18733
8  2022-12-10T00:00:00 2022-12-31T00:00:00  18733
9  2022-12-20T00:00:00 2023-01-10T00:00:00  00951,18733
...PASSED.

It also displays a SHA-256 hash of the KSR file and a corresponding “PGP (Pretty Good Privacy) Word List.” The PGP Word List is a convenient and efficient way of verbally expressing hexadecimal values:

SHA256 hash of KSR:
ADCE9749F3DE4057AB680F2719B24A32B077DACA0F213AD2FB8223D5E8E7CDEC
>> ringbolt sardonic preshrunk dinosaur upset telephone crackdown Eskimo rhythm gravity artist celebrate bedlamp pioneer dogsled component ruffled inception surmount revenue artist Camelot cleanup sensation watchword Istanbul blowtorch specialist trauma truncated spindle unicorn <<
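
The word list alternates between two 256-entry tables: bytes at even positions index a list of two-syllable words, and bytes at odd positions index a list of three-syllable words, which helps catch duplicated or out-of-order words when read aloud. Here is a sketch of the mapping; only the first two entries of each table are shown, and the real tables each contain 256 words.

EVEN_WORDS = ["aardvark", "absurd"]    # ... 254 more two-syllable words
ODD_WORDS = ["adroitness", "adviser"]  # ... 254 more three-syllable words

def to_pgp_words(digest_hex):
    words = []
    for i, byte in enumerate(bytes.fromhex(digest_hex)):
        table = EVEN_WORDS if i % 2 == 0 else ODD_WORDS
        words.append(table[byte])  # requires the full 256-entry tables
    return words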

At this point, a Verisign representative comes forward to verify the KSR. The following actions then take place:

  1. The representative’s identity and proof-of-employment are verified.
  2. They verbalize the PGP Word List based on the KSR sent from Verisign.
  3. TCRs and other ceremony participants compare the spoken list of words to those displayed on the screen.
  4. When the checksum is confirmed to match, the ceremony administrator instructs the program to proceed with generating the signatures.

The signing program outputs a new XML document, called the Signed Key Response (SKR). This document contains signatures over the DNSKEY resource record sets in each of the nine slots. The SKR is saved to a USB thumb drive and given to a member of the Root Zone KSK Operations Security team. Usually sometime the next day, IANA securely transmits the SKR back to Verisign. Following several automatic and manual verification steps, the signature data is imported into Verisign’s root zone management system for use at the appropriate times in the next calendar quarter.

Why We Do It

Keeping the internet’s DNS secure, stable, and resilient is a crucial aspect of Verisign’s role as the Root Zone Maintainer. We are honored to participate in the key signing ceremonies with ICANN and the TCRs and do our part to help the DNS operate as it should.

For more information on root key signing ceremonies, visit the IANA website. Visitors can watch video recordings of previous ceremonies and even sign up to witness the next ceremony live. It’s a great resource, and a unique opportunity to take part in a process that helps keep the internet safe for all.

The post Verisign’s Role in Securing the DNS Through Key Signing Ceremonies appeared first on Verisign Blog.

Serious Security: How dEliBeRaTe tYpOs might imProVe DNS security

By Paul Ducklin
It's a really cool and super-simple trick. The question is, "Will it help?"

Keep Your Grinch at Bay: Here's How to Stay Safe Online this Holiday Season

By The Hacker News
As the holiday season approaches, online shopping and gift-giving are at the top of many people's to-do lists. But before you hit the "buy" button, it's important to remember that this time of year is also the peak season for cybercriminals. In fact, cybercriminals often ramp up their efforts during the holidays, taking advantage of the influx of online shoppers and the general hustle and bustle

Celebrating 35 Years of the DNS Protocol

By Scott Hollenbeck

In 1987, CompuServe introduced GIF images, Steve Wozniak left Apple and IBM introduced the PS/2 personal computer with improved graphics and a 3.5-inch diskette drive. Behind the scenes, one more critical piece of internet infrastructure was quietly taking form to help establish the internet we know today.

November 1987 saw the establishment of the Domain Name System protocol suite as internet standards. This development not only would begin to open the internet to individuals and businesses globally, but also would arguably redefine communications, commerce, and access to information for future generations.

Today, the DNS continues to be critical to the operation of the internet as a whole. It has a long and strong track record thanks to the work of the internet’s pioneers and the collaboration of different groups to create volunteer standards.

Let’s take a look back at the journey of the DNS over the years.

Scaling the Internet for All

Prior to 1987, the internet was primarily used by government agencies and members of academia. Back then, the Network Information Center, managed by SRI International, manually maintained a directory of hosts and networks. While the early internet was transformative and forward-thinking, not everyone had access to it.

During that same time period, the U.S. Advanced Research Projects Agency Network, the forerunner to the internet we know now, was evolving into a growing network environment, and new naming and addressing schemes were being proposed. Seeing that there were thousands of interested institutions and companies wanting to explore the possibilities of networked computing, a group of ARPA networking researchers realized that a more modern, automated approach was needed to organize the network’s naming system for anticipated rapid growth.

Two Request for Comments documents, numbered RFC 1034 and RFC 1035, were published in 1987 by the informal Network Working Group, which soon after evolved into the Internet Engineering Task Force. Those RFCs, authored by computer scientist Paul V. Mockapetris, became the standards upon which DNS implementations have been built. It was Mockapetris, inducted into the Internet Hall of Fame in 2012, who specifically suggested a name space where database administration was distributed but could also evolve as needed.

In addition to allowing organizations to maintain their own databases, the DNS simplified the process of connecting a name that users could remember with a unique set of numbers – the Internet Protocol address – that web browsers needed to navigate to a website using a domain name. By not having to remember a seemingly random string of numbers, users could easily get to their intended destination, and more people could access the web. This has worked in a logical way for all internet users – from businesses large and small to everyday people – all around the globe.

With these two aspects of the DNS working together – wide distribution and name-to-address mapping – the DNS quickly took shape and developed into the system we know today.

The Multistakeholder Model and Rough Consensus

Thirty-five years of DNS development and progress is attributable to the collaboration of multiple stakeholders and interest groups – academia, technical community, governments, law enforcement and civil society, plus commercial and intellectual property interests – who continue even today to bring crucial perspectives to the table as it relates to the evolution of the DNS and the internet. These perspectives have lent themselves to critical security developments in the DNS, from assuring protection of intellectual property rights to the more recent stakeholder collaborative efforts to address DNS abuse.

Other major collaborative achievements involve the IETF, which has no formal membership roster or requirements, and is responsible for the technical standards that comprise the internet protocol suite, and the Internet Corporation for Assigned Names and Numbers, which plays a central coordination role in the bottom-up multistakeholder system governing the global DNS. Without constructive and productive voluntary collaboration, the internet as we know it simply isn’t possible.

Indeed, these cooperative efforts marshaled a brand of collaboration known today as “rough consensus.” That term, originally “rough consensus and running code,” gave rise to a more dynamic collaboration process than the “100% consensus from everyone” model. In fact, the term was adopted by the IETF in the early days of establishing the DNS to describe the formation of the dominant view of the working group and the need to quickly implement new technologies, which doesn’t always allow for lengthy discussions and debates. This approach is still in use today, proving its usefulness and longevity.

Recognizing a Milestone

As we look back on how the DNS came to be and the processes that have kept it reliably running, it’s important to recognize the work done by the organizations and individuals that make up this community. We must also remember that the efforts continue to be powered by voluntary collaborations.

Commemorating anniversaries such as 35 years of the DNS protocol allows the multiple stakeholders and communities to pause and reflect on the enormity of the work and responsibility before us. Thanks to the pioneering minds who conceived and built the early infrastructure of the internet, and in particular to Paul Mockapetris’s fundamental contribution of the DNS protocol suite, the world has been able to establish a robust global economy that few could ever have imagined so many years ago.

The 35th anniversary of the publication of RFCs 1034 and 1035 reminds us of the contributions that the DNS has made to the growth and scale of what we know today as “the internet.” That’s a moment worth celebrating.

The post Celebrating 35 Years of the DNS Protocol appeared first on Verisign Blog.

VPN vs. DNS Security

By The Hacker News
When you are trying to add another layer of cyber protection that would not require a lot of resources, you are most likely choosing between a VPN service and a DNS security solution. Let's discuss both. VPN Explained VPN stands for Virtual Private Network; it hides your IP address and encrypts your traffic by redirecting it through a server run by the VPN host. It establishes a

Unified Threat Management: The All-in-One Cybersecurity Solution

By The Hacker News
UTM (Unified Threat Management) is thought to be an all-in-one solution for cybersecurity. In general, it is a versatile software or hardware firewall solution integrated with an IPS (Intrusion Prevention System) and other security services. A universal gateway allows the user to manage network security with one comprehensive solution, which makes the task much easier. In addition, compared to a

Google Adds Support for DNS-over-HTTP/3 in Android to Keep DNS Queries Private

By Ravie Lakshmanan
Google on Tuesday officially announced support for DNS-over-HTTP/3 (DoH3) for Android devices as part of a Google Play system update designed to keep DNS queries private. To that end, Android smartphones running Android 11 and higher are expected to use DoH3 instead of DNS-over-TLS (DoT), which was incorporated into the mobile operating system with Android 9.0. DoH3 is also an alternative to

Celebrating Women Engineers Today and Every Day at Verisign

By Ellen Petrocci

Today, as the world celebrates International Women in Engineering Day, we recognize and honor women engineers at Verisign, whose own stories have helped shape dreams and encouraged young women and girls to take up engineering careers.

Here are three of their stories:

Shama Khakurel, Senior Software Engineer

When Shama Khakurel was in high school, she aspired to join the medical field. But she quickly realized that classes involving math or engineering came easiest to her, much more so than her work in biology or other subjects. It wasn’t until she took a summer computer programming course called “Lotus and dBase Programming” that she realized her career aspirations had officially changed; from that point on, she wanted to be an engineer.

In the nearly 20 years she’s been at Verisign, she’s expanded her skills, challenged herself, pursued opportunities – and always had the support of managers who mentored her along the way.

“Verisign has given me every opportunity to grow,” Shama says. And even though she continues to “learn something new every day,” she also provides mentorship to younger engineering employees.

Women tend to shy away from engineering roles, she says, because they think that math and science are harder subjects. “They seem to follow and believe that myth, but there is a lot of opportunity for a woman in this field.”

Vinaya Shenoy, Engineering Manager

For Vinaya Shenoy, an engineering manager who has worked for Verisign for 17 years, a passion for math and science at a young age steered her toward a career in computer science engineering.

She draws inspiration from other women who are industry leaders and immigrants from India, and who rose to the top ranks through their determination and leadership skills. She credits their stories with helping her see what all women are capable of, especially in unconventional or unexpected areas.

“Engineering is not just coding. There are a lot of areas within engineering that you can explore and pursue,” she says. “If problem-solving and creating are your passions, you can harness the power of technology to solve problems and give back to the community.”

Tuyet Vuong, Software Engineer

Tuyet Vuong is one of those people who enjoys problem-solving. As a young girl and the child of two physics teachers, she would often build small gadgets – perhaps her own clock or a small fan – from things she would find around the house.

Today, the challenges are bigger and have a greater impact, and she still finds herself enjoying them.

“Engineering is a fun, exciting and rewarding discipline where you can explore and build new things that are helpful to society,” says Tuyet. And sharing the insights and experiences of so many talented people – both men and women – is what makes the role that much more rewarding.

That sense of fulfillment also comes from breaking down stereotypes, such as the attitudes about women only being suitable for a limited number of careers when she was growing up in Vietnam. That’s why she’s a firm believer that mentoring and encouraging young women engineers isn’t just the responsibility of other women.

“The effort should come from both genders,” she says. “The effort shouldn’t come from women alone.”

Looking to the Future

At Verisign, we see the real impact of all our women engineers’ contributions when it comes to ensuring that the internet is secure, stable and resilient. Today and every day, we celebrate Verisign’s women engineers. We thank you for all you’ve done and everything you’re yet to accomplish.


If you’re interested in pursuing your passion for engineering, view our open career opportunities here.

The post Celebrating Women Engineers Today and Every Day at Verisign appeared first on Verisign Blog.

Reimagine Hybrid Work: Same CyberSec in Office and at Home

By The Hacker News
It was first the pandemic that changed the usual state of work. Before, it was commuting, working in the office and coming home for most corporate employees. Then, when we had to adapt to the self-isolation rules, the work moved to home offices, which completely changed the workflow for many businesses. As the pandemic subsided, we realized success never relied on where the work was done. Whether

Iranian Hackers Spotted Using a new DNS Hijacking Malware in Recent Attacks

By Ravie Lakshmanan
The Iranian state-sponsored threat actor tracked under the moniker Lyceum has turned to using a new custom .NET-based backdoor in recent campaigns directed against the Middle East. "The new malware is a .NET based DNS Backdoor which is a customized version of the open source tool 'DIG.net,'" Zscaler ThreatLabz researchers Niraj Shivtarkar and Avinash Kumar said in a report published last week. "

More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator

By Duane Wessels

This blog was also published by APNIC.

With so much traffic on the global internet day after day, it’s not always easy to spot the occasional irregularity. After all, there are numerous layers of complexity that go into the serving of webpages, with multiple companies, agencies and organizations each playing a role.

That’s why when something does catch our attention, it’s important that the various entities work together to explore the cause and, more importantly, try to identify whether it’s a malicious actor at work, a glitch in the process or maybe even something entirely intentional.

That’s what occurred last year when Internet Corporation for Assigned Names and Numbers staff and contractors were analyzing names in Domain Name System queries seen at the ICANN Managed Root Server, and the analysis program ran out of memory for one of their data files. After some investigating, they found the cause to be a very large number of mysterious queries for unique names such as f863zvv1xy2qf.surgery, bp639i-3nirf.hiphop, qo35jjk419gfm.net and yyif0aijr21gn.com.

While these were queries for names in existing top-level domains, the first label consisted of 12 or 13 random-looking characters. After ICANN shared their discovery with the other root server operators, Verisign took a closer look to help understand the situation.

Exploring the Mystery

One of the first things we noticed was that all of these mysterious queries were of type NS and came from one autonomous system network, AS 15169, assigned to Google LLC. Additionally, we confirmed that it was occurring consistently for numerous TLDs. (See Fig. 1)

Figure 1: Distribution of second-level label lengths in queries to root name servers, comparing AS 15169 to others, for several different TLDs.

Although this phenomenon was newly uncovered, analysis of historical data showed these traffic patterns actually began in late 2019. (See Fig. 2)

Figure 2: Daily count of NS queries from AS 15169, showing that the mysterious queries began in late 2019.

Perhaps the most interesting discovery, however, was that these specific query names were not also seen at the .com and .net name servers operated by Verisign. The data in Figure 3 shows the fraction of queried names that appear at A-root and J-root and also appear on the .com and .net name servers. For second-level labels of 12 and 13 characters, this fraction is essentially zero. The graphs also show that there appear to be queries for names with second-level label lengths of 10 and 11 characters, which are likewise absent from the TLD data.

Figure 3: Fraction of queries from AS 15169 appearing on A-root and J-root that also appear on .com and .net name servers, by the length of the second-level label.

The final mysterious aspect to this traffic is that it deviated from our normal expectation of caching. Remember that these are queries to a root name server, which returns a referral to the delegated name servers for a TLD. For example, when a root name server receives a query for yyif0aijr21gn.com, the response is a list of the name servers that are authoritative for the .com zone. The records in this response have a time to live of two days, meaning that the recursive name server can cache and reuse this data for that amount of time.

However, in this traffic we see queries for .com domain names from AS 15169 at the rate of about 30 million per day. (See Fig. 4) It is well known that Google Public DNS has thousands of backend servers and limits TTLs to a maximum of six hours. Assuming 4,000 backend servers each cached a .com referral for six hours, we might expect about 16,000 queries over a 24-hour period. The observed count is about 2,000 times higher by this back-of-the-envelope calculation.
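
For the curious, the arithmetic behind that estimate:

backend_servers = 4_000
ttl_hours = 6
refills_per_day = 24 / ttl_hours                  # each cache entry expires 4x/day
expected = backend_servers * refills_per_day      # = 16,000 queries/day
observed = 30_000_000                             # ~30M .com queries/day seen
print(f"observed/expected = {observed / expected:,.0f}x")  # ~1,875, i.e. ~2000x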

Figure 4: Queries per day from AS 15169 to A/J-root for names with second-level label lengths of 12 or 13 (July 6, 2021).

From our initial analysis, it was unclear if these queries represented legitimate end-user activity, though we were confident that source IP address spoofing was not involved. However, since the query names shared some similarities to those used by botnets, we could not rule out malicious activity.

The Missing Piece

These findings were presented last year at the DNS-OARC 35a virtual meeting. In the conference chat room after the talk, the missing piece of this puzzle was mentioned by a conference participant. There is a Google webpage describing its public DNS service that talks about prepending nonce (i.e., random) labels for cache misses to increase entropy. In what came to be known as “the Kaminsky Attack,” an attacker can cause a recursive name server to emit queries for names chosen by the attacker. Prepending a nonce label adds unpredictability to the queries, making it very difficult to spoof a response. Note, however, that nonce prepending only works for queries where the reply is a referral.

In addition, Google DNS has implemented a form of query name minimization (see RFC 7816 and RFC 9156). As such, if a user requests the IP address of www.example.com and Google DNS decides this warrants a query to a root name server, it takes the name, strips all labels except for the TLD and then prepends a nonce string, resulting in something like u5vmt7xanb6rf.com. A root server’s response to that query is identical to one using the original query name.

The Mystery Explained

Now, we are able to explain nearly all of the mysterious aspects of this query traffic from Google. We see random second-level labels because of the nonce strings that are designed to prevent spoofing. The 12- and 13-character-long labels are most likely the result of converting a 64-bit random value into an unpadded ASCII label with encoding similar to Base32. We don’t observe the same queries at TLD name servers because of both the nonce prepending and query name minimization. The query type is always NS because of query name minimization.
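
Putting those two behaviors together, one plausible reconstruction is sketched below. The exact encoding Google uses is an assumption on our part; the point is that base32-encoding 64 random bits yields a 13-character label once padding is stripped, consistent with the observed label lengths.

import base64
import os

def nonce_label():
    # 64 bits of randomness -> 13 unpadded base32 characters.
    raw = os.urandom(8)
    return base64.b32encode(raw).decode().rstrip("=").lower()

def minimized_root_query(fqdn):
    # Strip everything but the TLD, then prepend the nonce label.
    tld = fqdn.rstrip(".").split(".")[-1]
    return f"{nonce_label()}.{tld}."

print(minimized_root_query("www.example.com"))  # e.g. "u5vmt7xanb6rf.com."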

With that said, there’s still one aspect that eludes explanation: the high query rate (2000x for .com) and apparent lack of caching. And so, this aspect of the mystery continues.

Wrapping Up

Even though we haven’t fully closed the books on this case, one thing is certain: without the community’s teamwork to put the pieces of the puzzle together, explanations for this strange traffic may have remained unknown today. The case of the mysterious DNS root query traffic is a perfect example of the collaboration that’s required to navigate today’s ever-changing cyber environment. We’re grateful and humbled to be part of such a dedicated community that is intent on ensuring the security, stability and resiliency of the internet, and we look forward to more productive teamwork in the future.

The post More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator appeared first on Verisign Blog.

Unpatched DNS Related Vulnerability Affects a Wide Range of IoT Devices

By Ravie Lakshmanan
Cybersecurity researchers have disclosed an unpatched security vulnerability that could pose a serious risk to IoT products. The issue, which was originally reported in September 2021, affects the Domain Name System (DNS) implementation of two popular C libraries called uClibc and uClibc-ng that are used for developing embedded Linux systems. uClibc is known to be used by major

Routing Without Rumor: Securing the Internet’s Routing System

By Danny McPherson

This article is based on a paper originally published as part of the Global Commission on the Stability of Cyberspace’s Cyberstability Paper Series, “New Conditions and Constellations in Cyber,” on Dec. 9, 2021.

The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the internet’s history. Billions of internet-connected devices use DNS continuously to look up Internet Protocol addresses of the named resources they want to connect to — for instance, a website such as blog.verisign.com. Once a device has the resource’s address, it can then communicate with the resource using the internet’s routing system.

Just as ensuring that DNS is secure, stable and resilient is a priority for Verisign, so is making sure that the routing system has these characteristics. Indeed, DNS itself depends on the internet’s routing system for its communications, so routing security is vital to DNS security too.

To better understand how these challenges can be met, it’s helpful to step back and remember what the internet is: a loosely interconnected network of networks that interact with each other at a multitude of locations, often across regions or countries.

Packets of data are transmitted within and between those networks, which utilize a collection of technical standards and rules called the IP suite. Every device that connects to the internet is uniquely identified by its IP address, which can take the form of either a 32-bit IPv4 address or a 128-bit IPv6 address. Similarly, every network that connects to the internet has an Autonomous System Number, which is used by routing protocols to identify the network within the global routing system.

The primary job of the routing system is to let networks know the available paths through the internet to specific destinations. Today, the system largely relies on a decentralized and implicit trust model — a hallmark of the internet’s design. No centralized authority dictates how or where networks interconnect globally, or which networks are authorized to assert reachability for an internet destination. Instead, networks share knowledge with each other about the available paths from devices to destination: They route “by rumor.”

The Border Gateway Protocol

Under the Border Gateway Protocol (BGP), the internet’s de facto inter-domain routing protocol, local routing policies decide where and how internet traffic flows. Each network independently applies its own policies on what actions it takes, if any, with the traffic that passes through its network.

BGP has scaled well over the past three decades because 1) it operates in a distributed manner, 2) it has no central point of control (nor failure), and 3) each network acts autonomously. While networks may base their routing policies on an array of pricing, performance and security characteristics, ultimately BGP can use any available path to reach a destination. Often, the choice of route may depend upon personal decisions by network administrators, as well as informal assessments of technical and even individual reliability.

Route Hijacks and Route Leaks

Two prominent types of operational and security incidents occur in the routing system today: route hijacks and route leaks. Route hijacks reroute internet traffic to an unintended destination, while route leaks propagate routing information to an unintended audience. Both types of incidents can be accidental as well as malicious.

Preventing route hijacks and route leaks requires considerable coordination in the internet community, a concept that fundamentally goes against the BGP’s design tenets of distributed action and autonomous operations. A key characteristic of BGP is that any network can potentially announce reachability for any IP addresses to the entire world. That means that any network can potentially have a detrimental effect on the global reachability of any internet destination.

Resource Public Key Infrastructure

Fortunately, there is a solution already receiving considerable deployment momentum: the Resource Public Key Infrastructure (RPKI). RPKI provides an internet number resource certification infrastructure, analogous to the traditional PKI for websites. RPKI enables number resource allocation authorities and networks to specify Route Origin Authorizations (ROAs) that are cryptographically verifiable. ROAs can then be used by relying parties to confirm that the routing information shared with them comes from the authorized origin.
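
To illustrate how relying parties use ROAs, here is a simplified sketch of route origin validation in Python, loosely following RFC 6811, with each validated ROA payload represented as a (prefix, max_length, origin_asn) tuple. Production validators handle far more, so treat this as a conceptual outline.

from ipaddress import ip_network

def validate_origin(route_prefix, origin_asn, vrps):
    route = ip_network(route_prefix)
    covered = False
    for vrp_prefix, max_length, vrp_asn in vrps:
        if route.subnet_of(ip_network(vrp_prefix)):
            covered = True  # at least one ROA covers this route
            if vrp_asn == origin_asn and route.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"

vrps = [("192.0.2.0/24", 24, 64496)]
print(validate_origin("192.0.2.0/24", 64496, vrps))     # valid
print(validate_origin("192.0.2.0/24", 64511, vrps))     # invalid: wrong origin
print(validate_origin("198.51.100.0/24", 64496, vrps))  # not-found: no ROA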

RPKI is standards-based and appears to be gaining traction in improving BGP security. But it also brings new challenges.

Specifically, RPKI creates new external and third-party dependencies that, as adoption continues, ultimately replace the traditionally autonomous operation of the routing system with a more centralized model. If too tightly coupled to the routing system, these dependencies may impact the robustness and resilience of the internet itself. Also, because RPKI relies on DNS and DNS depends on the routing system, network operators need to be careful not to introduce tightly coupled circular dependencies.

Regional Internet Registries, the organizations responsible for top-level number resource allocation, can potentially have direct operational implications on the routing system. Unlike DNS, the global RPKI as deployed does not have a single root of trust. Instead, it has multiple trust anchors, one operated by each of the RIRs. RPKI therefore brings significant new security, stability and resiliency requirements to RIRs, updating their traditional role of simply allocating ASNs and IP addresses with new operational requirements for ensuring the availability, confidentiality, integrity, and stability of this number resource certification infrastructure.

As part of improving BGP security and encouraging adoption of RPKI, the routing community started the Mutually Agreed Norms for Routing Security (MANRS) initiative in 2014. Supported by the Internet Society, MANRS aims to reduce the most common routing system vulnerabilities by creating a culture of collective responsibility for the security, stability and resiliency of the global routing system. MANRS continues to gain traction, guiding internet operators on what they can do to make the routing system more reliable.

Conclusion

Routing by rumor has served the internet well, and a decade ago it may have been ideal because it avoided systemic dependencies. However, the increasingly critical role of the internet and the evolving cyberthreat landscape require a better approach for protecting routing information and preventing route leaks and route hijacks. As network operators deploy RPKI with security, stability and resiliency, the billions of internet-connected devices that use DNS to look up IP addresses can then communicate with those resources through networks that not only share routing information with one another as they’ve traditionally done, but also do something more. They’ll make sure that the routing information they share and use is secure — and route without rumor.

The post Routing Without Rumor: Securing the Internet’s Routing System appeared first on Verisign Blog.

Observations on Resolver Behavior During DNS Outages

By Verisign

When an outage affects a component of the internet infrastructure, there can often be downstream ripple effects affecting other components or services, either directly or indirectly. We would like to share our observations of this impact in the case of two recent such outages, measured at various levels of the DNS hierarchy, and discuss the resultant increase in query volume due to the behavior of recursive resolvers.

In early October 2021, the internet saw two significant outages, affecting Facebook’s services and the .club top-level domain, neither of which resolved properly for a period of time. Throughout these outages, Verisign and other DNS operators reported significant increases in query volume. We provided consistent responses throughout, with the correct delegation data pointing to the correct name servers.

While these higher query rates do not impact Verisign’s ability to respond, they raise a broader operational question – whether the repeated nature of these queries, indicative of a lack of negative caching, might potentially be mistaken for a denial-of-service attack.

Facebook

On Oct. 4, 2021, Facebook experienced a widespread outage, lasting nearly six hours. During this time most of its systems were unreachable, including those that provide Facebook’s DNS service. The outage impacted facebook.com, instagram.com, whatsapp.net and other domain names.

Under normal conditions, the .com and .net authoritative name servers answer about 7,000 queries per second in total for the three domain names previously mentioned. During this particular outage, however, query rates for these domain names reached upwards of 900,000 queries per second (an increase of more than 100x), as shown in Figure 1 below.

Figure 1: Rate of DNS queries for Facebook’s domain names during the 10/4/21 outage.

During this outage, recursive name servers received no response from Facebook’s name servers – instead, those queries timed out. In situations such as this, recursive name servers generally return a SERVFAIL or “server failure” response, presented to end users as a “this site can’t be reached” error.

Figure 1 shows an increasing query rate over the duration of the outage. Facebook uses relatively low time-to-live (TTL) values on its DNS records, from one to five minutes; the TTL tells resolvers how long they may cache an answer before issuing a new query. This in turn means that, five minutes into the outage, all relevant records would have expired from all recursive resolver caches – or at least from those that honor the publisher’s TTLs. It is not immediately clear why the query rate continued to climb throughout the outage, nor whether it would eventually have plateaued had the outage continued.
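
To see how the loss of caching alone can produce increases of this magnitude, consider a toy model (a rough sketch with hypothetical numbers, not Verisign measurements): while records are cacheable, each resolver refreshes a name roughly once per TTL; once resolution fails, every client query goes upstream, multiplied by any retries against multiple authoritative servers.

    # Toy model of upstream query rates for one domain name, before and
    # during an outage. All numbers are hypothetical, for illustration only.
    resolvers = 10_000     # recursive resolvers querying for the name
    client_qps = 1.0       # client queries/sec arriving at each resolver
    ttl = 300              # seconds a successful answer stays cached
    retries = 4            # upstream attempts per failed resolution

    # Healthy: each resolver refreshes the name about once per TTL.
    healthy = resolvers * (1 / ttl)

    # Outage: nothing can be cached, so every client query goes upstream,
    # and each failed lookup is retried against several name servers.
    outage = resolvers * client_qps * retries

    print(f"healthy: {healthy:,.0f} qps; outage: {outage:,.0f} qps; "
          f"amplification: {outage / healthy:,.0f}x")

Even this simple model produces an amplification factor above 1,000x, without assuming any increase in end-user demand.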

To get a sense of where the traffic comes from, we group query sources by their autonomous system number. The top five autonomous systems, along with all others grouped together, are shown in Figure 2.

Figure 2: Rate of DNS queries for Facebook’s domain names grouped by source autonomous system.

From Figure 2 we can see that, at their peak, queries for these domain names to Verisign’s .com and .net authoritative name servers from the most active recursive resolvers – those of Google and Cloudflare – increased around 7,000x and 2,000x respectively over their average non-outage rates.

.club

On Oct. 7, 2021, three days after Facebook’s outage, the .club and .hsbc TLDs also experienced a three-hour outage. In this case, the relevant authoritative servers remained reachable, but responded with SERVFAIL messages. The effect on recursive resolvers was essentially the same: since they did not receive useful data, they repeatedly retried their queries to the parent zone. During the incident, the Verisign-operated A-root and J-root servers observed a 45x increase in queries for .club domain names, from 80 queries per second before the outage to 3,700 queries per second during it.

Figure 3: Rate of DNS queries to A and J root servers during the 10/7/2021 .club outage.

As in the previous example, the query rate increased over the duration of the outage. In this case, the trend might be explained by the fact that the records for .club’s delegation in the root zone use two-day TTLs. However, the theoretical analysis is complicated by the fact that authoritative name server records in child zones use longer TTLs (six days), while authoritative name server address records use shorter TTLs (10 minutes). Here we do not observe a significant amount of query traffic from Google sources; instead, the increased query volume is largely attributable to the long tail of recursive resolvers in “All Others.”

Figure 4: Rate of DNS queries for .club and .hsbc, grouped by source autonomous system.

Botnet Traffic

Earlier this year Verisign implemented a botnet sinkhole and analyzed the received traffic. This botnet utilizes more than 1,500 second-level domain names, likely for command and control. We observed queries from approximately 50,000 clients every day. As an experiment, we configured our sinkhole name servers to return SERVFAIL and REFUSED responses for two of the botnet domain names.

When configured to return a valid answer, each domain name’s query rate peaks at about 50 queries per second. However, when configured to return SERVFAIL, the query rate for a single domain name increases to 60,000 per second, as shown in Figure 5. Further, the query rate for the botnet domain name also increases at the TLD and root name servers, even though those services are functioning normally and data relevant to the botnet domain name has not changed – just as with the two outages described above. Figure 6 shows data from the same experiment (although for a different date), colored by the source autonomous system. Here, we can see that approximately half of the increased query traffic is generated by one organization’s recursive resolvers.

Figure 5: Query rates to name servers for one domain name experimentally configured to return SERVFAIL.

Figure 6: Query rates to botnet sinkhole name servers, grouped by autonomous system, when one domain name is experimentally configured to return SERVFAIL.

Common Threads

These two outages and one experiment all demonstrate that recursive name servers can become unnecessarily aggressive when responses to queries are not received due to connectivity issues, timeouts, or misconfigurations.

In each of these three cases, we observe significant query rate increases from recursive resolvers across the internet – with particular contributors, such as Google Public DNS and Cloudflare’s resolver, identified on each occasion.

Often in cases like this we turn to internet standards for guidance. RFC 2308 is a 1998 Standards Track specification that describes negative caching of DNS queries. The RFC covers name errors (e.g., NXDOMAIN), no data, server failures, and timeouts. Unfortunately, it states that negative caching for server failures and timeouts is optional. We have submitted an Internet-Draft that proposes updating RFC 2308 to require negative caching for DNS resolution failures.

We believe it is important for the security, stability and resiliency of the internet’s DNS infrastructure that the implementers of recursive resolvers and public DNS services carefully consider how their systems behave in circumstances where none of a domain name’s authoritative name servers are providing responses, yet the parent zones are providing proper referrals. We find it difficult to rationalize the patterns that we are currently observing, such as hundreds of queries per second from individual recursive resolver sources. The global DNS would be better served by more appropriate rate limiting, and by algorithms such as exponential backoff, to address the types of cases highlighted here. Verisign remains committed to leading and contributing to the continued security, stability and resiliency of the DNS for all global internet users.
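
To illustrate the kind of behavior we are advocating, the sketch below caches resolution failures and backs off exponentially between upstream retries. The class, parameter names and TTL values are ours for illustration; neither RFC 2308 nor our Internet-Draft mandates these specifics.

    import time

    class FailureCache:
        """Toy negative cache for resolution failures with exponential backoff.

        After each consecutive failure for a name, the resolver waits twice
        as long before querying the authoritative servers again, up to a cap.
        """

        def __init__(self, base_ttl=5.0, max_ttl=300.0):
            self.base_ttl = base_ttl
            self.max_ttl = max_ttl
            self._entries = {}  # name -> (consecutive_failures, retry_at)

        def should_query(self, name):
            # True if we may send an upstream query for this name right now.
            entry = self._entries.get(name)
            return entry is None or time.monotonic() >= entry[1]

        def record_failure(self, name):
            failures = self._entries.get(name, (0, 0.0))[0] + 1
            delay = min(self.base_ttl * 2 ** (failures - 1), self.max_ttl)
            self._entries[name] = (failures, time.monotonic() + delay)

        def record_success(self, name):
            self._entries.pop(name, None)

With something like this in place, a resolver that fails to resolve a domain name answers subsequent client queries from its failure cache rather than re-asking the parent zone hundreds of times per second.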

This piece was co-authored by Verisign Fellow Duane Wessels and Verisign Distinguished Engineers Matt Thomas and Yannis Labrou.

The post Observations on Resolver Behavior During DNS Outages appeared first on Verisign Blog.

Verisign Q3 2021 The Domain Name Industry Brief: 364.6 Million Domain Name Registrations in the Third Quarter of 2021

By Verisign

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the third quarter of 2021 closed with 364.6 million domain name registrations across all top-level domains, a decrease of 2.7 million domain name registrations, or 0.7%, compared to the second quarter of 2021.1,2 Domain name registrations have decreased by 6.1 million, or 1.6%, year over year.1,2

Figure: Total domain name registrations across all TLDs in Q3 2021.

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the third quarter of 2021.

The Domain Name Industry Brief this quarter also includes an overview of the ongoing community work to mitigate DNS security threats.

To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.


  1. The figure(s) includes domain names in the .tk ccTLD. .tk is a ccTLD that provides free domain names to individuals and businesses. Revenue is generated by monetizing expired domain names. Domain names no longer in use by the registrant or expired are taken back by the registry and the residual traffic is sold to advertising networks. As such, there are no deleted .tk domain names. The .tk zone reflected here is based on data from Q4 2020, which is the most recent data available. https://www.businesswire.com/news/home/20131216006048/en/Freenom-Closes-3M-Series-Funding#.UxeUGNJDv9s.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD Internationalized Domain Names (IDNs), (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Q3 2021 The Domain Name Industry Brief: 364.6 Million Domain Name Registrations in the Third Quarter of 2021 appeared first on Verisign Blog.

Ongoing Community Work to Mitigate Domain Name System Security Threats

By Keith Drazek

For over a decade, the Internet Corporation for Assigned Names and Numbers (ICANN) and its multi-stakeholder community have engaged in an extended dialogue on the topic of DNS abuse, and the need to define, measure and mitigate DNS-related security threats. With increasing global reliance on the internet and DNS for communication, connectivity and commerce, the members of this community have important parts to play in identifying, reporting and mitigating illegal or harmful behavior, within their respective roles and capabilities.

As we consider the path forward on necessary and appropriate steps to improve mitigation of DNS abuse, it’s helpful to reflect briefly on the origins of this issue within ICANN, and to recognize the various and relevant community inputs to our ongoing work.

As a starting point, it’s important to understand ICANN’s central role in preserving the security, stability, resiliency and global interoperability of the internet’s unique identifier system, and also the limitations established within ICANN’s bylaws. ICANN’s primary mission is to ensure the stable and secure operation of the internet’s unique identifier systems, but as expressly stated in its bylaws, ICANN “shall not regulate (i.e., impose rules and restrictions on) services that use the internet’s unique identifiers or the content that such services carry or provide, outside the express scope of Section 1.1(a).” As such, ICANN’s role is important but limited when considering the full range of possible definitions of “DNS Abuse,” and a comprehensive understanding of security threat categories, and of the roles and responsibilities of the various players in the internet infrastructure ecosystem, is required.

In support of this important work, ICANN’s generic top-level domain (gTLD) contracted parties (registries and registrars) continue to engage with ICANN, and with other stakeholders and community interest groups, to address key factors related to effective and appropriate DNS security threat mitigation, including:

  • Determining the roles and responsibilities of the various service providers across the internet ecosystem;
  • Delineating categories of threats: content, infrastructure, illegal vs. harmful, etc.;
  • Understanding the precise operational and technical capabilities of various types of providers across the internet ecosystem;
  • Identifying the relationships, if any, that respective service providers have with the individuals or entities responsible for creating and/or removing the illegal or abusive activity;
  • Defining the role of third-party “trusted notifiers,” including government actors, that may identify and report illegal and abusive behavior to the appropriate service provider;
  • Establishing processes to ensure infrastructure providers can trust third-party notifiers to reliably identify and provide evidence of illegal or harmful content;
  • Promoting administrative and operational scalability in trusted notifier engagements;
  • Determining the necessary safeguards around liability, due process and transparency, so that domain name registrants have recourse when the DNS is used as a tool to police DNS security threats, particularly when related to content; and
  • Supporting ICANN’s important and appropriate role in coordination and facilitation, particularly as a centralized source of data, tools and resources to help and hold accountable those parties responsible for managing and maintaining the internet’s unique identifiers.

Figure 1: The Internet Ecosystem

Definitions of Online Abuse

To better understand the various roles, responsibilities and processes, it’s important to first define illegal and abusive online activity. While perspectives may vary across our wide range of interest groups, the emerging consensus on definitions and terminology is that these activities can be categorized as DNS Security Threats, Infrastructure Abuse, Illegal Content, or Abusive Content, with ICANN’s remit generally limited to the first two categories.

  • DNS Security Threats: defined as being “composed of five broad categories of harmful activity [where] they intersect with the DNS: malware, botnets, phishing, pharming, and spam when [spam] serves as a delivery mechanism for those other forms of DNS Abuse.”
  • Infrastructure Abuse: a broader set of security threats that can impact the DNS itself – including denial-of-service / distributed denial-of-service (DoS / DDoS) attacks, DNS cache poisoning, protocol-level attacks, and exploitation of implementation vulnerabilities.
  • Illegal Content: content that is unlawful and hosted on websites that are accessed via domain names in the global DNS. Examples might include the illegal sale of controlled substances, the distribution of child sexual abuse material (CSAM), and proven intellectual property infringement.
  • Abusive Content: content hosted on websites using the domain name infrastructure that is deemed “harmful,” either under applicable law or norms, such as scams, fraud, misinformation, or intellectual property infringement where illegality has yet to be established by a court of competent jurisdiction.

Behavior within each of these categories constitutes abuse, and it is incumbent on members of the community to actively work to combat and mitigate these behaviors where they have the capability, expertise and responsibility to do so. We recognize the benefit of coordination with other entities, including ICANN within its bylaw-mandated remit, across their respective areas of responsibility.

ICANN Organization’s Efforts on DNS Abuse

The ICANN Organization has been actively involved in advancing work on DNS abuse, including the 2017 initiation of the Domain Abuse Activity Reporting (DAAR) system by the Office of the Chief Technology Officer. DAAR is a system for studying and reporting on domain name registration and security threats across top-level domain (TLD) registries, with the overarching purpose of developing a robust, reliable, and reproducible methodology for analyzing security threat activity, which the ICANN community may use to make informed policy decisions. The first DAAR reports were issued in January 2018 and have been updated monthly since. Also in 2017, ICANN published its “Framework for Registry Operators to Address Security Threats,” which provides helpful guidance to registries seeking to improve their own DNS security posture.

The ICANN Organization also plays an important role in enforcing gTLD contract compliance and implementing policies developed by the community via its bottom-up, multi-stakeholder processes. For example, over the last several years, it has conducted registry and registrar audits of the anti-abuse provisions in the relevant agreements.

The ICANN Organization has also been a catalyst for increased community attention and action on DNS abuse, including initiating the DNS Security Facilitation Initiative Technical Study Group, which was formed to investigate mechanisms to strengthen collaboration and communication on security and stability issues related to the DNS. Over the last two years, there have also been multiple ICANN cross-community meeting sessions dedicated to the topic, including the most recent session hosted by the ICANN Board during its Annual General Meeting in October 2021. Also, in 2021, ICANN formalized its work on DNS abuse into a dedicated program within the ICANN Organization. These enforcement and compliance responsibilities are very important to ensure that all of ICANN’s contracted parties are living up to their obligations, and that any so-called “bad actors” are identified and remediated or de-accredited and removed from serving the gTLD registry or registrar markets.

The ICANN Organization continues to develop new initiatives to help mitigate DNS security threats, including: (1) expanding DAAR to integrate some country code TLDs, and to eventually include registrar-level reporting; (2) work on COVID domain names; (3) contributions to the development of a Domain Generating Algorithms Framework and facilitating waivers to allow registries and registrars to act on imminent security threats, including botnets at scale; and (4) plans for the ICANN Board to establish a DNS abuse caucus.

ICANN Community Inputs on DNS Abuse

As early as 2009, the ICANN community began to identify the need for additional safeguards to help address DNS abuse and security threats, and those community inputs increased over time and have reached a crescendo over the last two years. In the early stages of this community dialogue, the ICANN Governmental Advisory Committee (GAC), via its Public Safety Working Group, identified the need for additional mechanisms to address “criminal activity in the registration of domain names.” In the context of the renegotiation of the Registrar Accreditation Agreement (RAA) between ICANN and accredited registrars, and the development of the New gTLD Base Registry Agreement, the GAC played an important and influential role in highlighting this need, providing formal advice to the ICANN Board, which resulted in new requirements for gTLD registry and registrar operators, and new contractual compliance requirements for ICANN.

Following the launch of the 2012 round of new gTLDs, and the finalization of the 2013 amendments to the RAA, several ICANN bylaw-mandated review teams engaged further on the issue of DNS Abuse. These included the Competition, Consumer Trust and Consumer Choice Review Team (CCT-RT), and the second Security, Stability and Resiliency Review Team (SSR2-RT). Both final reports identified and reinforced the need for additional tools to help measure and combat DNS abuse. Also, during this timeframe, the GAC, along with the At-Large Advisory Committee and the Security and Stability Advisory Committee, issued their own respective communiques and formal advice to the ICANN Board reiterating or reinforcing past statements, and providing support for recommendations in the various Review Team reports. Most recently, the SSAC issued SAC 115 titled “SSAC Report on an Interoperable Approach to Addressing Abuse Handling in the DNS.” These ICANN community group inputs have been instrumental in bringing additional focus and/or clarity to the topic of DNS abuse, and have encouraged ICANN and its gTLD registries and registrars to look for improved mechanisms to address the types of abuse within our respective remits.

During 2020 and 2021, ICANN’s gTLD contracted parties have been constructively engaged with other parts of the ICANN community, and with ICANN Org, to advance improved understanding on the topic of DNS security threats, and to identify new and improved mechanisms to enhance the security, stability and resiliency of the domain name registration and resolution systems. Collectively, the registries and registrars have engaged with nearly all groups represented in the ICANN community, and we have produced important documents related to DNS abuse definitions, registry actions, registrar abuse reporting, domain generating algorithms, and trusted notifiers. These all represent significant steps forward in framing the context of the roles, responsibilities and capabilities of ICANN’s gTLD contracted parties, and, consistent with our Letter of Intent commitments, Verisign has been an important contributor, along with our partners, in these Contracted Party House initiatives.

In addition, the gTLD contracted parties and ICANN Organization continue to engage constructively on a number of fronts, including upcoming work on standardized registry reporting, which will help result in better data on abuse mitigation practices that will help to inform community work, future reviews, and provide better visibility into the DNS security landscape.

Other Groups and Actors Focused on DNS Security

It is important to note that groups outside of ICANN’s immediate multi-stakeholder community have contributed significantly to the topic of DNS abuse mitigation:

Internet & Jurisdiction Policy Network
The Internet & Jurisdiction Policy Network is a multi-stakeholder organization addressing the tension between the cross-border internet and national jurisdictions. Its secretariat facilitates a global policy process engaging over 400 key entities from governments, the world’s largest internet companies, technical operators, civil society groups, academia and international organizations from over 70 countries. The I&JP has been instrumental in developing multi-stakeholder inputs on issues such as trusted notifier, and Verisign has been a long-time contributor to that work since the I&JP’s founding in 2012.

DNS Abuse Institute
The DNS Abuse Institute was formed in 2021 to develop “outcomes-based initiatives that will create recommended practices, foster collaboration and develop industry-shared solutions to combat the five areas of DNS Abuse: malware, botnets, phishing, pharming, and related spam.” The Institute was created by Public Interest Registry, the registry operator for the .org TLD.

Global Cyber Alliance
The Global Cyber Alliance is a nonprofit organization dedicated to making the internet a safer place by reducing cyber risk. The GCA builds programs, tools and partnerships to sustain a trustworthy internet to enable social and economic progress for all.

ECO “topDNS” DNS Abuse Initiative
Eco is the largest association of the internet industry in Europe. Eco is a long-standing advocate of an “Internet with Responsibility” and of self-regulatory approaches, such as the DNS Abuse Framework. The eco “topDNS” initiative will help bring together stakeholders with an interest in combating and mitigating DNS security threats, and Verisign is a supporter of this new effort.

Other Community Groups
Verisign also contributes to the broader anti-abuse, technical and policy communities, continuously engaging with ICANN and an array of other industry partners to help ensure the continued safe and secure operation of the DNS. For example, Verisign is actively engaged in groups such as the Anti-Phishing Working Group (APWG), the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), FIRST and the Internet Engineering Task Force.

What Verisign is Doing Today

As a leader in the domain name industry and DNS ecosystem, Verisign supports and has contributed to the cross-community efforts enumerated above. In addition, Verisign also engages directly by:

  • Monitoring for abuse: Protecting against abuse starts with knowing what is happening in our systems and services, in a timely manner, and being capable of detecting anomalous or abusive behavior, and then reacting to address it appropriately. Verisign works closely with a range of actors, including trusted notifiers, to help ensure our abuse mitigation actions are informed by sources with necessary subject matter expertise and procedural rigor.
  • Blocking and redirecting abusive domain names: Blocking certain domain names that have been identified by Verisign and/or trusted third parties as security threats, including botnets that leverage well-understood and characterized domain generation algorithms, helps us to protect our infrastructure and to neutralize or otherwise minimize potential security and stability threats more broadly by remediating abuse enabled via domain names in our TLDs. For example, earlier this year, Verisign observed a botnet family that was responsible for such a disproportionate amount of total global DNS queries that we were compelled to act to remediate the botnet. This was referenced in Verisign’s Q1 2021 Domain Name Industry Brief, Volume 18, Issue 2.
  • Avoiding disposable domain name registrations: While heavily discounted domain name pricing strategies may promote short-term sales, they may also attract a spectrum of registrants who might be engaged in abuse. Some security threats, including phishing and botnets, exploit the ability to register large numbers of ‘disposable’ domain names rapidly and cheaply. Accordingly, Verisign avoids marketing programs that would permit our TLDs to be characterized as belonging to this class of ‘disposable’ domains, which have been shown to attract miscreants and enable abusive behavior.
  • Maintaining a cooperative and responsive partnership with law enforcement and government agencies, and engagement with courts of relevant jurisdiction: To ensure the security, stability and resiliency of the DNS and the internet at large, we have developed and maintained constructive relationships with United States and international law enforcement and government agencies to assist in addressing imminent and ongoing substantial security threats to operational applications and critical internet infrastructure, as well as illegal activity associated with domain names.
  • Ensuring adherence of contractual obligations: Our contractual frameworks, including our registry policies and .com Registry-Registrar Agreements, help provide an effective legal framework that discourages abusive domain name registrations. We believe that fair and consistent enforcement of our policies helps to promote good hygiene within the registrar channel.
  • Entering into a binding Letter of Intent with ICANN that commits both parties to cooperate in taking a leadership role in combating security threats. This includes working with the ICANN community to determine the appropriate process for, and development and implementation of, best practices related to combating security threats; to educate the wider ICANN community about security threats; and support activities that preserve and enhance the security, stability and resiliency of the DNS. Verisign also made a substantial financial commitment in direct support of these important efforts.

Trusted Notifiers

An important concept and approach for mitigating illegal and abusive activity online is the ability to engage with and rely upon third-party “trusted notifiers” to identify and report such incidents at the appropriate level in the DNS ecosystem. Verisign has supported and been engaged in the good work of the Internet & Jurisdiction Policy Network since its inception, and we’re encouraged by its recent progress on trusted notifier framing. As mentioned earlier, there are some key questions to be addressed as we consider the viability of engaging trusted notifiers, or of building trusted notifier entities, to help mitigate illegal and abusive online activity.

Verisign’s recent experience with the U.S. government (NTIA and FDA) in combating illegal online opioid sales has been very helpful in illuminating a possible approach for third-party trusted notifier engagement. As noted, we have also benefited from direct engagement with the Internet Watch Foundation and law enforcement in combating CSAM. These recent examples of third-party engagement have underscored the value of a well-formed and executed notification regime, supported by clear expectations, due diligence and due process.

Discussions around trusted notifiers and an appropriate framework for engagement are under way, and Verisign recently engaged with other registries and registrars to lead the development of such a framework for further discussion within the ICANN community. We have significant expertise and experience as an infrastructure provider within our areas of technical, legal and contractual responsibility, and we are aggressive in protecting our operations from bad actors. But in matters related to illegal or abusive content, we need and value contributions from third parties to appropriately identify such behavior when supported by necessary evidence and due diligence. Precisely how such third-party notifications can be formalized and supported at scale is an open question, but one that requires further exploration and work. Verisign is committed to continuing to contribute to these ongoing discussions as we work to mitigate illegal and abusive threats to the security, stability and resiliency of the internet.

Conclusion

Over the last several years, DNS abuse and DNS-related security threat mitigation have been important topics of discussion in and around the ICANN community. In cooperation with ICANN, contracted parties, and other groups within the ICANN community, the DNS ecosystem, including Verisign, has been constructively engaged in developing a common understanding and in practical work to advance these efforts, with the goal of meaningfully reducing the level and impact of malicious activity in the DNS. ICANN’s contributions, beyond its contractual compliance functions, have helped advance this work, and ICANN continues to serve a critical coordination and facilitation function that brings the community together on this topic. The ICANN community’s recent focus on DNS abuse has been helpful and significant progress has been made, but more work is needed to ensure continued progress in mitigating DNS security threats. As we look ahead to 2022, we are committed to collaborating constructively with ICANN and the ICANN community to deliver on these important goals.

The post Ongoing Community Work to Mitigate Domain Name System Security Threats appeared first on Verisign Blog.

The Test of Time at Internet Scale: Verisign’s Danny McPherson Recognized with ACM SIGCOMM Award

By Burt Kaliski

The global internet, from the perspective of its billions of users, has often been envisioned as a cloud — a shapeless structure that connects users to applications and to one another, with the internal details left up to the infrastructure operators inside.

From the perspective of the infrastructure operators, however, the global internet is a network of networks. It’s a complex set of connections among network operators, application platforms, content providers and other parties.

And just as the total amount of global internet traffic continues to grow, so too does the shape and structure of the internet — the internal details of the cloud — continue to evolve.

At the Association for Computing Machinery’s Special Interest Group on Data Communications (ACM SIGCOMM) conference in 2010, researchers at Arbor Networks and the University of Michigan, including Danny McPherson, now executive vice president and chief security officer at Verisign, published one of the first papers to analyze the internal structure of the internet in detail.

The study, entitled “Internet Inter-Domain Traffic,” drew from two years of measurements involving more than 200 exabytes of data.

One of the paper’s key observations was the emergence of a “global internet core” of a relatively small number of large application and content providers that had become responsible for the majority of the traffic between different parts of the internet — in contrast to the previous topology where large network operators were the primary source.

The authors’ conclusion: “we expect the trend towards internet inter-domain traffic consolidation to continue and even accelerate.”

The paper’s predictions of internet traffic and topology trends proved out over the past decade, as confirmed by one of the paper’s authors, Craig Labovitz, in a 2019 presentation that reiterated the paper’s main findings: the internet is “getting bigger by traffic volume” while also “rapidly getting smaller by concentration of content sources.”

This week, the ACM SIGCOMM 2021 conference series recognized the enduring value of the research with the prestigious Test of Time Paper Award, given to a paper deemed to be “an outstanding paper whose contents are still a vibrant and useful contribution today.”

Internet measurement research is particularly relevant to Domain Name System (DNS) operators such as Verisign. To optimize the deployment of their services, DNS operators need to know where DNS query traffic is most likely to be exchanged in the coming years. Insights into the internal structure of the internet can help DNS operators ensure the ongoing security, stability and resiliency of their services, for the benefit both of other infrastructure operators who depend on DNS, and the billions of users who connect online every day.

Congratulations to Danny and co-authors Craig Labovitz, Scott Iekel-Johnson, Jon Oberheide and Farnam Jahanian on receiving this award, and thanks to ACM SIGCOMM for its recognition of the research. If you’re curious about what evolutionary developments Danny and others at Verisign are envisioning today about the internet of the future, subscribe to the Verisign blog, and follow us on Twitter and LinkedIn.

The post The Test of Time at Internet Scale: Verisign’s Danny McPherson Recognized with ACM SIGCOMM Award appeared first on Verisign Blog.

Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets

By Verisign

Note: This article originally appeared in Verisign’s Q1 2021 Domain Name Industry Brief.

This article expands on observations of a botnet’s traffic at various levels of the Domain Name System (DNS) hierarchy, as presented at DNS-OARC 35.

Addressing DNS abuse and maintaining a healthy DNS ecosystem are important components of Verisign’s commitment to being a responsible steward of the internet. We continuously engage with the Internet Corporation for Assigned Names and Numbers (ICANN) and other industry partners to help ensure the secure, stable and resilient operation of the DNS.

Based on recent telemetry data from Verisign’s authoritative top-level domain (TLD) name servers, Verisign observed a widespread botnet responsible for a disproportionate amount of total global DNS queries – and, in coordination with several registrars, registries and ICANN, acted expeditiously to remediate it.

Just prior to Verisign taking action to remediate the botnet, upwards of 27.5 billion queries per day were being sent to Verisign’s authoritative TLD name servers, accounting for roughly 10% of Verisign’s total DNS traffic. That amount of query volume in most DNS environments would be considered a sustained distributed denial-of-service (DDoS) attack.

These queries were associated with a particular piece of malware that emerged in 2018, spreading throughout the internet to create a global botnet infrastructure. Botnets provide a substrate for malicious actors to theoretically perform all manner of malicious activity – executing DDoS attacks, exfiltrating data, sending spam, conducting phishing campaigns or even installing ransomware. This is the result of the malware’s ability to download and execute any other type of payload the malicious actor desires.

Malware authors often apply various forms of evasion techniques to protect their botnets from being detected and remediated. A Domain Generation Algorithm (DGA) is an example of such an evasion technique.

DGAs are seen in various families of malware that periodically generate a number of domain names, which can be used as rendezvous points for botnet command-and-control servers. By using a DGA to build the list of domain names, the malicious actor makes it more difficult for security practitioners to identify what domain names will be used and when. Only by exhaustively reverse-engineering a piece of malware can the definitive set of domain names be ascertained.
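
The sketch below illustrates the concept with a deliberately simple, hypothetical DGA; real malware families use their own, often obfuscated, algorithms, seeds and schedules.

    import hashlib
    from datetime import date, timedelta

    def toy_dga(seed_date, count=10, tlds=(".com", ".net", ".cc")):
        # Derive 'count' pseudo-random domain names from the date.
        names = []
        for i in range(count):
            digest = hashlib.sha256(f"{seed_date.isoformat()}:{i}".encode()).hexdigest()
            # Map the first 12 hex digits to letters 'a'..'p' to form a label.
            label = "".join(chr(ord("a") + int(c, 16)) for c in digest[:12])
            names.append(label + tlds[i % len(tlds)])
        return names

    # Both sides can run the algorithm: the malware derives today's
    # rendezvous names, while a defender who has reverse-engineered it can
    # enumerate upcoming names and register, block or sinkhole them early.
    for offset in range(3):
        print(toy_dga(date(2021, 1, 1) + timedelta(days=offset), count=3))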

The choices miscreants make in tailoring malware DGAs directly influence the DGAs’ ability to evade detection. For instance, electing to use more TLDs and a large number of domain names in a given time period makes the malware’s operation more difficult to disrupt; however, this approach also increases the amount of network noise, making it easier for security and network teams to identify anomalous traffic patterns. Likewise, a DGA that uses a limited number of TLDs and domain names will generate significantly less network noise but is more fragile and susceptible to remediation.

Botnets that implement DGAs or otherwise utilize domain names clearly represent an “abuse of the DNS,” as opposed to other types of abuse that are executed “via the DNS,” such as phishing. This is an important distinction the DNS community should consider as it continues to refine the scope of DNS abuse and how remediation of the various abuses can be effectuated.

The remediation of domain names used by botnets as rendezvous points poses numerous operational challenges and offers correspondingly many insights. The set of domain names needs to be identified and investigated to determine their current registration status. Risk assessments must be performed on registered domain names to determine if additional actions should be taken, such as sending registrar notifications, issuing requests to transfer domain names, adding Extensible Provisioning Protocol (EPP) hold statuses, altering delegation records, etc. There are also timing and coordination elements that must be balanced with external entities, such as ICANN, law enforcement, Computer Emergency Readiness Teams (CERTs) and contracted parties, including registrars and registries. Other technical measures also need to be considered, designed and deployed to achieve the desired remediation goal.

After coordinating with ICANN, and several registrars and registries, Verisign registered the remaining available botnet domain names and began a three-phase plan to sinkhole those domain names. Ultimately, this remediation effort would reduce the traffic sent to Verisign authoritative name servers and effectively eliminate the botnet’s ability to use command-and-control domain names within Verisign-operated TLDs.

Figure 1 below shows the amount of botnet traffic Verisign authoritative name servers received prior to intervention, and throughout the process of registering, delegating and sinkholing the botnet domain names.

Figure 1: The botnet’s DNS query volume at Verisign authoritative name servers.

Phase one was executed on Dec. 21, 2020, in which 100 .cc domain names were configured to resolve to Verisign-operated sinkhole servers. Subsequently, traffic at Verisign authoritative name servers quickly decreased. The second group of domain names contained 500 .com and .net domain names, which were sinkholed on Jan. 7, 2021. Again, traffic volume at Verisign authoritative name servers quickly decreased. The final group of 879 .com and .net domain names were sinkholed on Jan. 13, 2021. By the end of phase three, the cumulative DNS traffic reduction surpassed 25 billion queries per day. Verisign reserved approximately 10 percent of the botnet domain names to remain on serverHold as a placebo/control-group to better understand sinkholing effects as they relate to query volume at the child and parent zones. Verisign believes that sinkholing the remaining domain names would further reduce authoritative name server traffic by an additional one billion queries.

This botnet highlights the remarkable Pareto-like distribution of DNS query traffic, in which a few thousand domain names, within namespaces containing more than 165 million domain names, demand a vastly disproportionate amount of DNS resources.

What causes the amplification of DNS traffic volume for non-existent domain names to occur at the upper levels of the DNS hierarchy? Verisign is conducting a variety of measurements on the sinkholed botnet domain names to better understand the caching behavior of the resolver population. We are observing some interesting traffic changes at the TLD and root name servers when time to live (TTL) and response codes are altered at the sinkhole servers. Stay tuned.

In addition to remediating this botnet in late 2020 and into early 2021, Verisign extended its four-year endeavor to combat the Avalanche botnet family. Since 2016, the Avalanche botnet has been significantly impacted by actions taken by Verisign and an international consortium of law enforcement, academic and private organizations. However, many of the underlying Avalanche-compromised machines have still not been remediated, and the threat from Avalanche could increase again if additional actions are not taken. To prevent this from happening, Verisign, in coordination with ICANN and other industry partners, is using a variety of tools to ensure Avalanche command-and-control domain names cannot be used in Verisign-operated TLDs.

Botnets are a persistent issue. And as long as they exist as a threat to the security, stability and resiliency of the DNS, cross-industry coordination and collaboration will continue to lie at the core of combating them.

This piece was co-authored by Matt Thomas and Duane Wessels, distinguished engineers at Verisign.

The post Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets appeared first on Verisign Blog.

Information Protection for the Domain Name System: Encryption and Minimization

By Burt Kaliski

This is the final in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I’ll turn attention to another application that may appear at first to be the most natural, though as it turns out, may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.
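
As an example of what this looks like in practice, the sketch below sends a query over DoT rather than plaintext UDP on port 53, using the third-party dnspython library (2.x APIs assumed); the choice of 1.1.1.1 / cloudflare-dns.com as the DoT service is illustrative only.

    import dns.message
    import dns.query

    # Build an ordinary DNS query for the A record of example.com.
    query = dns.message.make_query("example.com", "A")

    # Send it over TLS on TCP port 853 (DoT) instead of plaintext UDP/53,
    # so an on-path observer sees only an encrypted session.
    response = dns.query.tls(query, "1.1.1.1", port=853, timeout=5.0,
                             server_hostname="cloudflare-dns.com")

    print(response.answer)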

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transfer Protocol Secure (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.

From a cryptographer’s perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk / benefit tradeoff. In the simpler approach, to provide a relying party assurance that a domain name doesn’t exist, a name server would return a response, signed with its private key, “<name> doesn’t exist.”

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This better solution balanced cryptographic and operational considerations. The theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all queries received by Verisign’s .com and .net TLD servers were in a minimized form as of August 2020.
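
A minimal sketch of the idea follows (simplified: a real resolver discovers zone cuts at runtime and handles the edge cases described in RFC 7816):

    def minimized_qname(full_name, zone):
        # Return the minimal name to send to a server authoritative for
        # 'zone': one label more than the zone itself contains.
        labels = full_name.rstrip(".").split(".")
        zone_labels = [] if zone == "." else zone.rstrip(".").split(".")
        keep = len(zone_labels) + 1
        return ".".join(labels[-keep:]) + "."

    # A root server learns only the TLD; the TLD server only the SLD; and
    # only the SLD's own server sees the full domain name.
    assert minimized_qname("www.example.com.", ".") == "com."
    assert minimized_qname("www.example.com.", "com.") == "example.com."
    assert minimized_qname("www.example.com.", "example.com.") == "www.example.com."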

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
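
A simplified sketch of the range-proof check at the heart of aggressive DNSSEC caching is below; a real resolver must use the DNSSEC canonical name ordering of RFC 4034 and validate the NSEC record’s signature, for which plain string comparison stands in here.

    def nsec_covers(owner, next_name, qname):
        # A (validated) NSEC record asserts that no names exist between its
        # owner name and its "next" name in canonical zone order.
        return owner < qname < next_name

    # Suppose an earlier response included this DNSSEC-validated range proof:
    owner, next_name = "apple.example.", "cherry.example."

    # A later query for banana.example. can then be answered NXDOMAIN from
    # cache, without contacting the authoritative servers at all.
    print(nsec_covers(owner, next_name, "banana.example."))  # True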

All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented aggregate interests of the resolver’s many clients1. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below2, and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.

Conclusion

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

1. This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.

2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this note about “root and TLD” and “SLD level and below” should be “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this note.


The post Information Protection for the Domain Name System: Encryption and Minimization appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

By Burt Kaliski

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate in NIST’s post-quantum project.

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby “chained” to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.
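
To make the size challenge concrete, here is a minimal Python sketch of a Lamport one-time signature, the simplest hash-based scheme (Merkle’s 1979 construction combines many such one-time keys under a hash tree). This is an illustration of the principle only, not a production implementation, and each key pair must sign at most one message:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # One pair of 32-byte secrets per bit of the 256-bit message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    # The signature reveals one secret from each pair, chosen by the bit.
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"www.example.com. A 93.184.216.34")
assert verify(pk, b"www.example.com. A 93.184.216.34", sig)
```

Note that the security of this scheme rests entirely on the hash function, but the signature is 256 values of 32 bytes each, or 65,536 bits. Practical schemes such as XMSS and SPHINCS+ are far more refined, yet their signatures remain large by DNS standards.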

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC, while still basing security on hash functions only in the interest of stable post-quantum protections.

One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.
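
The mechanics are compact enough to sketch in a few lines of Python. This minimal version assumes the number of leaves is a power of two and that records are already in a canonical order:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # levels[0] holds the hashed leaves; each higher level hashes sibling pairs.
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                                  # levels[-1][0] is the root

def auth_path(levels, index):
    # The sibling hash at each level, from the leaves up to (excluding) the root.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path) -> bool:
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

records = [b"record-%d" % i for i in range(8)]     # illustrative leaves
levels = build_tree(records)
root = levels[-1][0]                               # would be published as the ZSK
path = auth_path(levels, 5)                        # would serve as the signature
assert verify(root, records[5], 5, path)
```

With a tree depth of 20, `path` would hold 20 hashes of 256 bits each, matching the 5,120-bit figure above.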

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record’s signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.

The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.

In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, the tree will be available sooner; with more changes, the tree will be larger, and the per-record overhead of the signatures at the KSK level will be lower.
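
A back-of-the-envelope calculation illustrates the amortization. The signature size below is the published figure for the SPHINCS+-128s parameter set, used here purely as a placeholder assumption for whichever hash-based scheme might be chosen:

```python
# Per-record share of one KSK-level hash-based signature, by batch size.
KSK_SIG_BYTES = 7_856          # assumed: SPHINCS+-128s signature size

for records_per_tree in (1, 100, 10_000, 1_000_000):
    per_record = KSK_SIG_BYTES / records_per_tree
    print(f"{records_per_tree:>9} records/tree -> {per_record:10.3f} bytes/record")
```

At a million records per tree, the KSK-level signature overhead per record becomes negligible.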

Conclusion

My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon

By Burt Kaliski

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.

This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.

(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)

Post-Quantum Algorithms

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”

The project is now in its third round, with seven finalists, including three digital signature algorithms, and eight alternates.

NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.

DNSSEC Operational Considerations

The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative data servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)

Modeling Assumptions

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign’s root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> are the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include A, Name Server (NS), and DNSKEY.)

Root name servers:

  • Query: DNSKEY record set for root zone
    Typical response: DNSKEY record set including root KSK RSA-2048 public key and root ZSK RSA-2048 public key; root KSK RSA-2048 signature on DNSKEY record set
  • Query: A or NS record set for <TLD> — when <TLD> exists
    Typical response: NS referral to <TLD> name server; DS record set for <TLD> zone; root ZSK RSA-2048 signature on DS record set
  • Query: A or NS record set for <TLD> — when <TLD> doesn’t exist
    Typical response: up to two NSEC records for non-existence of <TLD>; root ZSK RSA-2048 signatures on NSEC records

.com / .net TLD name servers:

  • Query: DNSKEY record set for <TLD> zone
    Typical response: DNSKEY record set including <TLD> KSK RSA-2048 public key and <TLD> ZSK RSA-1280 public key; <TLD> KSK RSA-2048 signature on DNSKEY record set
  • Query: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> exists
    Typical response: NS referral to <SLD>.<TLD> name server; DS record set for <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC); <TLD> ZSK RSA-1280 signature on DS record set (if present)
  • Query: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> doesn’t exist
    Typical response: up to three NSEC3 records for non-existence of <SLD>.<TLD>; <TLD> ZSK RSA-1280 signatures on NSEC3 records

Table 1. Combination of queries that may be sent to Verisign’s root and TLD servers by a typical resolver that implements both DNSSEC validation and qname minimization, and content of associated responses.


For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
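
As a rough sketch of this substitution exercise, the following compares estimated response sizes under the RSA sizes from Table 1 and one hypothetical post-quantum replacement. The Falcon-512 numbers are that candidate’s published parameters, used here only as an assumption, and the fixed per-response overheads are illustrative placeholders:

```python
# Estimated DNSSEC response sizes (bytes) under different algorithms.
ALGS = {
    "RSA-2048":   {"key": 256, "sig": 256},
    "Falcon-512": {"key": 897, "sig": 666},   # assumption: round-3 parameters
}

def dnskey_response(alg, keys=2, sigs=1, overhead=100):
    # DNSKEY response: KSK + ZSK public keys, plus a KSK signature.
    # Note: more keys may be present during a rollover.
    return overhead + keys * ALGS[alg]["key"] + sigs * ALGS[alg]["sig"]

def nxdomain_response(alg, nsec_records=2, overhead=100):
    # Non-existence response: NSEC/NSEC3 records, one signature each.
    return overhead + nsec_records * ALGS[alg]["sig"]

for alg in ALGS:
    print(alg, dnskey_response(alg), nxdomain_response(alg))
```

Even this crude model shows the signature term dominating, since it appears in every signed response rather than only in DNSKEY responses.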

Conclusion

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon appeared first on Verisign Blog.

Verisign Outreach Program Remediates Billions of Name Collision Queries

By Matt Thomas

A name collision occurs when a user attempts to resolve a domain in one namespace, but it unexpectedly resolves in a different namespace. Name collision issues in the public global Domain Name System (DNS) cause billions of unnecessary and potentially unsafe DNS queries every day. A targeted outreach program that Verisign started in March 2020 has remediated one billion queries per day to the A and J root name servers, involving 46 collision strings. After initial contacts with several national internet service providers (ISPs), the outreach effort grew to include large search engines, social media companies, networking equipment manufacturers, national CERTs, security trust groups, commercial DNS providers, and financial institutions.

While this unilateral outreach effort resulted in significant and successful name collision remediation, it is broader DNS community engagement, education, and participation that offers the potential to address many of the remaining name collision problems. Verisign hopes its successes will encourage participation by other organizations in similar positions in the DNS community.

Verisign is proud to be the operator for two of the world’s 13 authoritative root servers. Being a root server operator carries with it many operational responsibilities. Ensuring the security, stability and resiliency of the DNS requires proactive effort on two fronts: protecting the root name servers so that attacks do not disrupt DNS resolution, and monitoring DNS resolution patterns for misconfigurations, signaling telemetry, and unexpected or unintended uses that, without closer collaboration, could have unforeseen consequences (e.g., Chromium’s impact on root DNS traffic).

Monitoring may require various forms of responsible disclosure or notification to the underlying parties. Further, monitoring the root server system poses logistical challenges because any outreach and remediation programs must work at internet scale, and because root operators have no direct relationship with many of the involved entities.

Despite these challenges, Verisign has conducted several successful internet-scale outreach efforts to address various issues we have observed in the DNS.

In response to the Internet Corporation for Assigned Names and Numbers (ICANN) proposal to mitigate name collision risks in 2013, Verisign conducted a focused study on the collision string .CBA. Our measurement study revealed evidence of a substantial internet-connected infrastructure in Japan that relied on the non-resolution of names ending in .CBA. Verisign informed the network operator, who subsequently reconfigured some of its internal systems, resulting in an immediate decline of queries for .CBA observed at the A and J root servers.

Prior to the 2018 KSK rollover, several operators of DNSSEC-validating name servers appeared to be sending out-of-date RFC 8145 signals to root name servers. To ensure the KSK rollover did not disrupt internet name resolution functions for billions of end users, Verisign augmented ICANN’s outreach effort with a multi-faceted technical outreach program of its own: contacting and working with the United States Computer Emergency Readiness Team (US-CERT) and other national CERTs, industry partners, and various DNS operator groups, and performing direct outreach to out-of-date signalers. The ultimate success of the KSK rollover was due in large part to the outreach efforts of ICANN and Verisign.

In resolutions 2017.11.02.29 – 2017.11.02.31, the ICANN Board asked the ICANN Security and Stability Advisory Committee (SSAC) to conduct studies and to present data and points of view on collision strings, including specific advice on three higher-risk strings: .CORP, .HOME and .MAIL. While Verisign is actively engaged in the resulting Name Collision Analysis Project (NCAP) developed by SSAC, we are also reviving and expanding our 2012 name collision outreach efforts.

Verisign’s name collision outreach program is based on the guidance we provided in several recent peer-reviewed name collision publications, which highlighted various name collision vulnerabilities, examined the root causes of leaked queries, and made remediation recommendations. Verisign’s program uses A and J root name server traffic data to identify high-affinity strings related to particular networks, as well as high query volume strings that are contextually associated with device manufacturers, software, or platforms. We then attempt to contact the underlying parties and assist with remediation as appropriate.
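
As a simplified sketch of the kind of traffic analysis involved, the following counts root queries per non-delegated top-level string and flags high-volume candidates. The log format, the abridged set of delegated TLDs, and the threshold are all illustrative assumptions:

```python
from collections import Counter

DELEGATED = {"com", "net", "org", "arpa"}            # abridged for illustration

def collision_candidates(log_lines, threshold=1_000_000):
    counts = Counter()
    for line in log_lines:
        qname = line.split()[-1]                     # assumed: qname is last field
        tld = qname.rstrip(".").split(".")[-1].lower()
        if tld not in DELEGATED:
            counts[tld] += 1                         # query for a non-delegated string
    return [(s, n) for s, n in counts.most_common() if n >= threshold]
```

High-affinity analysis, tying a string to a particular network or product, requires more context than this, but volume ranking is where such an investigation could start.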

While we partially rely on direct communication channel contact information, the key enabler of our outreach efforts has been Verisign’s relationships with the broader collective DNS community. Verisign’s active participation in various industry organizations within the ICANN and DNS communities, such as M3AAWG, FIRST, DNS-OARC, APWG, NANOG, RIPE NCC, APNIC, and IETF1, enables us to identify and communicate with a broad and diverse set of constituents. In many cases, participants operate infrastructure involved in name collisions. In others, they are able to put us in direct contact with the appropriate parties.

Through a combination of DNS traffic analysis and publicly accessible data, as well as the rolodexes of various industry partnerships, across 2020 we were able to achieve effective outreach to the anonymized entities listed in Table 1.

Organization | Queries per Day to A & J | Status | Number of Collision Strings (TLDs) | Notes / Root Cause Analysis
Search Engine | 650M | Fixed | 1 string | Application not using FQDNs
Telecommunications Provider | 250M | Fixed | N/A | Prefetching bug
eCommerce Provider | 150M | Fixed | 25 strings | Application not using FQDNs
Networking Manufacturer | 70M | Pending | 3 strings | Suffix search list
Cloud Provider | 64M | Fixed | 15 strings | Suffix search list
Telecommunications Provider | 60M | Fixed | 2 strings | Remediated through device vendor
Networking Manufacturer | 45M | Pending | 2 strings | Suffix search list problem in router/modem device
Financial Corporation | 35M | Fixed | 2 strings | Typo / misconfiguration
Social Media Company | 30M | Pending | 9 strings | Application not using FQDNs
ISP | 20M | Fixed | 1 string | Suffix search list problem in router/modem device
Software Provider | 20M | Pending | 50+ strings | Acknowledged but still investigating
ISP | 5M | Pending | 1 string | Still investigating at time of writing; confirmed to involve a router/modem device
Table 1. Sample of outreach efforts performed by Verisign.

Many of the name collision problems encountered are the result of misconfigurations and of not using fully qualified domain names. After operators deploy patches to their environments, as shown in Figure 1 below, Verisign often observes an immediate and dramatic traffic decrease at the A and J root name servers. Although several networking equipment vendors and ISPs have acknowledged their name collision problems, the development and deployment of firmware to a large user base will take time.

Figure 1. Daily queries for two collision strings to A and J root servers during a nine-month period.

Cumulatively, the patches operators have deployed account for a reduction of one billion queries per day to the A and J root servers (roughly 3% of total traffic). Although root traffic is not evenly distributed among the 13 authoritative servers, we expect a similar impact at the other 11, implying a system-wide reduction of approximately 6.5 billion queries per day.
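
Spelled out, the extrapolation is simple arithmetic, resting on the stated assumption of comparable per-server impact across the root system:

```python
reduction_a_j = 1_000_000_000            # observed daily reduction at A and J
system_wide = reduction_a_j * 13 / 2     # A and J are 2 of the 13 root servers
print(f"{system_wide:,.0f} queries/day") # 6,500,000,000
```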

As the ICANN community prepares for Subsequent Procedures (the introduction of additional new TLDs) and the SSAC NCAP continues to work to answer the ICANN Board’s questions, we encourage the community to participate in our efforts to address name collisions through active outreach efforts. We believe our efforts show how outreach can have significant impact to both parties and the broader community. Verisign is committed to addressing name collision problems and will continue executing the outreach program to help minimize the attack surface exposed by name collisions and to be a responsible and hygienic root operator.

For additional information about name collisions and how to properly manage private-use TLDs, please visit ICANN’s Name Collision Resource & Information website.


1. The Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), Forum of Incident Response and Security Teams (FIRST), DNS Operations, Analysis, and Research Center (DNS-OARC), Anti-Phishing Working Group (APWG), North American Network Operators’ Group (NANOG), Réseaux IP Européens Network Coordination Centre (RIPE NCC), Asia Pacific Network Information Centre (APNIC), Internet Engineering Task Force (IETF)


The post Verisign Outreach Program Remediates Billions of Name Collision Queries appeared first on Verisign Blog.

Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries

By Burt Kaliski

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query renders a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.

NSEC5

NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there’s also a corresponding public key.) No one can compute the outputs without the private key, hence the “random.”
  2. A VRF has two outputs: a token and a proof. (I’ve adopted the term “token” for alignment with the research that I describe next. NSEC5 itself simply uses “hash.”) Anyone can check that the token is correct given the proof and the public key, hence the “verifiable.”

So, it’s not only hard for an adversary to reverse the VRF – which is also a property the hash function has – but it’s also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of an NSEC5 proof of non-existence based on a verifiable random function.

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.
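
In outline, the resolver-side check might look like the sketch below. Here `vrf_verify` and `zsk_verify` are placeholders for the real cryptographic operations (the NSEC5 drafts specify an elliptic-curve VRF), and token ordering is simplified to raw byte comparison:

```python
from dataclasses import dataclass

@dataclass
class RangeStatement:
    start_token: bytes
    end_token: bytes
    signature: bytes            # ZSK signature over the two endpoints

def nonexistence_holds(vrf_verify, zsk_verify, vrf_pub,
                       qname: str, token: bytes, proof: bytes,
                       stmt: RangeStatement) -> bool:
    # 1. The token must really be the VRF output for the queried name.
    if not vrf_verify(vrf_pub, qname, token, proof):
        return False
    # 2. The range statement must carry a valid ZSK signature.
    if not zsk_verify(stmt.start_token + stmt.end_token, stmt.signature):
        return False
    # 3. The token must fall strictly between the signed endpoints.
    return stmt.start_token < token < stmt.end_token
```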

The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline — not to generate signatures on new range statements, or on new positive responses.

NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client’s information from disclosure.

In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)

Now, consider a domain name that doesn’t exist and suppose that its token is 2718… .

In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.

The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.
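
A toy model of the resolver’s view makes the information flow clear. The token strings echo the illustrations above and are not real VRF outputs; the mapping table is purely hypothetical:

```python
# The resolver's table maps (zone, token) pairs to records; it never needs
# the underlying domain names below the zone.
resolver_table = {
    ("com", "3141..."): "93.184.216.34",   # an existing, tokenized name
}

def resolve(zone: str, token: str):
    # Positive case: the record is returned as usual.
    # Negative case: the resolver learns only the token, not the name.
    return resolver_table.get((zone, token))

assert resolve("com", "3141...") == "93.184.216.34"
assert resolve("com", "2718...") is None   # "doesn't exist"
```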

How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.

It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists, without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology will be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.

Conclusion

The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries appeared first on Verisign Blog.

Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

By Burt Kaliski

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered the handling of “negative” responses – when the domain name doesn’t exist. In this case, as I’ll explain, responding with a signed “does not exist” is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.

Initial Attempts

Consider a domain name like example.arpa that doesn’t exist.

If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).

So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.

However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.

This could work if the .arpa DNS server knows the ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.

NSEC

The first approach, called NSEC, involves no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.

The NSEC resource record is documented in RFC 4034, one of the original DNSSEC specifications, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of an NSEC proof of non-existence (as of the writing of this post).
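
The resolver’s check on such a response is then a simple ordering test. The sketch below simplifies canonical DNS ordering, which per RFC 4034 compares lowercased labels right to left and byte by byte, but it captures the idea:

```python
def covers(owner: str, next_name: str, qname: str) -> bool:
    # Simplified canonical ordering: compare label lists in reverse.
    key = lambda name: list(reversed(name.lower().rstrip(".").split(".")))
    return key(owner) < key(qname) < key(next_name)

# "There are no names between e164.arpa and home.arpa" covers example.arpa:
assert covers("e164.arpa", "home.arpa", "example.arpa")
```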

NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement saying “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…”, the response implies that example.name doesn’t exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of an NSEC3 proof of non-existence based on a hash function (as of the writing of this post).

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
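
For concreteness, here is a sketch of the RFC 5155 hash computation that such a dictionary attack would run offline against candidate names: iterated SHA-1 over the lowercased wire-format name plus salt, encoded in Base32hex (`b32hexencode` requires Python 3.10 or later). A zone publishes its actual salt and iteration count; the empty salt and zero iterations here are only for illustration:

```python
import base64
import hashlib

def nsec3_hash(name: str, salt_hex: str, iterations: int) -> str:
    # Wire format: one length byte per label, then the label, then a zero byte.
    wire = b"".join(
        bytes([len(label)]) + label.lower().encode()
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32hexencode(digest).decode().rstrip("=")

for guess in ("example.name", "foo.name"):
    print(guess, nsec3_hash(guess, "", 0))
```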

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 appeared first on Verisign Blog.

The Domain Name System: A Cryptographer’s Perspective

By Burt Kaliski

This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the earliest protocols in the internet, the DNS emerged in an era in which today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer’s perspective, as I described in my talk at last year’s International Cryptographic Module Conference (ICMC20), there’s so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with the usual occurrence that happens millions of times a second around the world: a client asks a DNS resolver a query like “What is example.com’s Internet Protocol (IP) address?” The resolver in this case answers: “example.com’s IP address is 93.184.216.34”. (This is the correct answer.)

If the resolver doesn’t already know the answer to the request, then the process to find the answer goes something like this:

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS’s 13 root servers, such as the A and J root servers operated by Verisign: “Where is the name server for the .com top-level domain (TLD)?”
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, “Where is the name server for the example.com second-level domain (SLD)?”
  • The TLD server then refers the resolver to the example.com server.
  • Finally, the resolver asks the SLD server, “What is example.com’s IP address?” and receives an answer: “93.184.216.34”.
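
Sketched as code, the walk above is a loop from the root downward. The `query` callable is a placeholder for an actual DNS query over the network, assumed to return either an answer or a referral to the next name server:

```python
def iterative_resolve(query, fqdn: str, rtype: str = "A") -> str:
    server = "a.root-servers.net"      # start at a root server
    while True:
        answer, referral = query(server, fqdn, rtype)
        if answer is not None:
            return answer              # e.g., "93.184.216.34"
        server = referral              # follow the root -> TLD -> SLD delegation
```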

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same “delegation” model from root to TLD to SLD as I’ve described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, examining digital signatures at each level of the DNS hierarchy (or technically, at each “zone” in the delegation process). These digital signatures are generated using public key cryptography, a well-understood process that involves key pairs, one private (used for signing) and one public (used for verification).

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without requiring a change to the other key or to the parent zone’s reference to it.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver returns the “Digital Signature Validated” indicator to the client that initiated the query.

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a “trust chain” from a “trust anchor” to the DNS record of interest, as shown in the figure above. The chain includes “chain links” at each level of the DNS hierarchy. Here’s how the “chain links” work:

The root KSK public key is the “trust anchor.” This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records – the ultimate response to the query – is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
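
The sketch below shows one zone’s chain links in outline. Here `verify_sig` and `hash_key` are placeholders for the actual signature-verification and digest operations, and record fetching is abstracted away; this illustrates the chain-link logic rather than a real validator:

```python
def validate_zone_link(parent_ds_hash, ksk_pub, dnskey_set, dnskey_sig,
                       zsk_pub, records, records_sig,
                       verify_sig, hash_key) -> bool:
    # 1. The KSK must match the hash published in the parent zone
    #    (or the trust anchor itself, at the root).
    if hash_key(ksk_pub) != parent_ds_hash:
        return False
    # 2. The KSK signs the DNSKEY set, linking the KSK and ZSK together.
    if not verify_sig(ksk_pub, dnskey_set, dnskey_sig):
        return False
    # 3. The ZSK signs the records of interest (or the child's DS record,
    #    extending the chain one level down).
    return verify_sig(zsk_pub, records, records_sig)
```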

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I’ve omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for example.com discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned “trust anchor,” was one of the most challenging and successful projects undertaken by the DNS community over the past couple of years. This initiative, the so-called “root KSK rollover,” was difficult because there was no easy way to determine whether resolvers had actually been updated to use the latest root KSK (remember that cryptography and security were added onto the DNS rather than built in), and because the many resolvers that needed to be updated are each independently managed.

The research paper “Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover” details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I’ve focused here on how a resolver validates correctness when the response to a query has a “positive” answer — i.e., when the DNS record exists. Checking correctness when the answer doesn’t exist gets even more interesting from a cryptographer’s perspective. I’ll cover this topic in my next post.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post The Domain Name System: A Cryptographer’s Perspective appeared first on Verisign Blog.

Chromium’s Reduction of Root DNS Traffic

By Verisign

As we begin a new year, it is important to look back and reflect on our accomplishments and how we can continue to improve. One significant positive for the DNS community in 2020 was the receptiveness and responsiveness of the Chromium team in addressing the large volume of DNS queries being sent to the root server system.

In a previous blog post, we quantified that upwards of 45.80% of total DNS traffic to the root servers was, at the time, the result of Chromium intranet redirection detection tests. Since then, the Chromium team has redesigned its code to disable the redirection test on Android systems and introduced a multi-state DNS interception policy that supports disabling the redirection test for desktop browsers. This functionality was released mid-November of 2020 for Android systems in Chromium 87 and, quickly thereafter, the root servers experienced a rapid decline of DNS queries.

The figure below highlights the significant decline of query volume to the root server system immediately after the Chromium 87 release. Prior to the software release, the root server system saw peaks of ~143 billion queries per day. Traffic volumes have since decreased to ~84 billion queries a day. This represents more than a 41% reduction of total query volume.

Note: Some data from root operators was not available at the time of this publication.

This type of broad root system measurement is facilitated by ICANN’s Root Server System Advisory Committee standards document RSSAC002, which establishes a set of baseline metrics for the root server system. These root server metrics are readily available to the public for analysis, monitoring, and research. These metrics represent another milestone the DNS community could appreciate and continue to support and refine going forward.

As ICANN’s Root Name Service Strategy and Implementation publication rightly notes, the root server system currently “faces growing volumes of traffic” not only from legitimate users but also from misconfigurations, misuse, and malicious actors, and “the costs incurred by the operators of the root server system continue to climb to mitigate these attacks using the traditional approach.”

As we reflect on how Chromium’s large impact to root server traffic was identified and then resolved, we as a DNS community could consider how outreach and engagement should be incorporated into a traditional approach of addressing DNS security, stability, and resiliency. All too often, technologists solve problems by introducing additional layers of technology abstractions and disregarding simpler solutions, such as outreach and engagement.

We believe our efforts show how such outreach and community engagement can have a significant impact, both for the parties directly involved and for the broader community. Chromium’s actions will directly aid root operators by easing the operational costs of mitigating attacks. Reducing the root server system load by 41%, with potential further reductions depending on future Chromium deployment decisions, frees computing and network resources that would otherwise be consumed handling this traffic.

In pursuit of maintaining a responsible and common-sense root hygiene regimen, Verisign will continue to analyze root telemetry data and engage with entities such as Chromium to highlight operational concerns, just as Verisign has done in the past to address name collisions problems. We’ll be sharing more information on this shortly.

This piece was co-authored by Matt Thomas and Duane Wessels, Distinguished Engineers at Verisign.

The post Chromium’s Reduction of Root DNS Traffic appeared first on Verisign Blog.

A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere

By Burt Kaliski

Over the past several years, questions about how to protect information exchanged in the Domain Name System (DNS) have come to the forefront.

One of these questions was posed first to DNS resolver operators in the middle of the last decade, and is now being brought to authoritative name server operators: “to encrypt or not to encrypt?” It’s a question that Verisign has been considering for some time as part of our commitment to security, stability and resiliency of our DNS operations and the surrounding DNS ecosystem.

Because authoritative name servers operate at different levels of the DNS hierarchy, the answer is not a simple “yes” or “no.” As I will discuss in the sections that follow, different information protection techniques fit the different levels, based on a balance between cryptographic and operational considerations.

How to Protect, Not Whether to Encrypt

Rather than asking whether to deploy encryption for a particular exchange, we believe it is more important to ask how best to address the information protection objectives for that exchange — whether by encryption, alternative techniques or some combination thereof.

Information protection must balance three objectives:

  • Confidentiality: protecting information from disclosure;
  • Integrity: protecting information from modification; and
  • Availability: protecting information from disruption.

The importance of balancing these objectives is well-illustrated in the case of encryption. Encryption can improve confidentiality and integrity by making it harder for an adversary to view or change data, but encryption can also impair availability, by making it easier for an adversary to cause other participants to expend unnecessary resources. This is especially the case in DNS encryption protocols where the resource burden for setting up an encrypted session rests primarily on the server, not the client.

The Internet-Draft “Authoritative DNS-Over-TLS Operational Considerations,” co-authored by Verisign, expands on this point:

Initial deployments of [Authoritative DNS over TLS] may offer an immediate expansion of the attack surface (additional port, transport protocol, and computationally expensive crypto operations for an attacker to exploit) while, in some cases, providing limited protection to end users.

Complexity is also a concern because DNS encryption requires changes on both sides of an exchange. The long-installed base of DNS implementations, as Bert Hubert has observed in parable form, can only sustain so many more changes before one becomes the “straw that breaks the back” of the DNS camel.

The flowchart in Figure 1 describes a two-stage process for factoring in operational risk when determining how to mitigate the risk of disclosure of sensitive information in a DNS exchange. The process involves two questions.

Figure 1. Two-stage process for factoring in operational risk when determining how to address information protection objectives for a DNS exchange.

This process can readily be applied to develop guidance for each exchange in the DNS resolution ecosystem.

Applying Guidance

Three main exchanges in the DNS resolution ecosystem are considered here, as shown in Figure 2: resolver-to-authoritative at the root and TLD level; resolver-to-authoritative below these levels; and client-to-resolver.

Figure 2 Three DNS resolution exchanges: (1) resolver-to-authoritative at the root and TLD levels; (2) resolver-to-authoritative at the SLD level (and below); (3) client-to-resolver. Different information protection guidance applies for each exchange. The exchanges are shown with qname minimization implemented at the root and TLD levels.

1. Resolver-to-Root and Resolver-to-TLD

The resolver-to-authoritative exchange at the root level enables DNS resolution for all underlying domain names; the exchange at the TLD level does the same for all names under a TLD. These exchanges provide global navigation for all names, benefiting all resolvers and therefore all clients, and making the availability objective paramount.

As a resolver generally services many clients, information exchanged at these levels represents aggregate interests in domain names, not the direct interests of specific clients. The sensitivity of this aggregated information is therefore relatively low to start with, as it does not contain client-specific details beyond the queried names and record types. However, the full domain name of interest to a client has conventionally been sent to servers at the root and TLD levels, even though this is more information than they need to know to refer the resolver to authoritative name servers at lower levels of the DNS hierarchy. The additional detail in the full domain name may be considered sensitive in some cases. Therefore, this data exchange merits consideration for protection.

The decision flow (as generally described in Figure 1) for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Yes. Techniques such as qname minimization, NXDOMAIN cut processing, and aggressive DNSSEC caching — which we collectively refer to as minimization techniques — can reduce the information exchanged to just the resolver’s aggregate interests in TLDs and SLDs, removing the further detail of the full domain names and meeting the principle of minimum disclosure (see the sketch after this list).
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? No. The information exchanged is just the resolver’s aggregate interests in TLDs and SLDs after minimization techniques are applied, and any further protection offered by encryption wouldn’t justify the operational risk of a protocol change affecting both sides of the exchange. While some resolvers and name servers may be open to the operational risk, it shouldn’t be required of others.
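
As a rough illustration of the first technique, the sketch below shows how qname minimization (RFC 9156) limits what each level of the hierarchy sees. For simplicity it reveals one label per step rather than following actual zone cuts, and the function name is my own, not drawn from any particular resolver implementation:

```python
# A minimal sketch of qname minimization (RFC 9156): the resolver reveals
# one additional label at each step down the hierarchy, so the root and
# TLD servers never see the full domain name.

def minimized_qnames(full_name: str) -> list[str]:
    """Return the successively longer query names a minimizing resolver sends."""
    labels = full_name.rstrip(".").split(".")
    # Start with the TLD alone and add one label per step toward the full name.
    return [".".join(labels[i:]) + "." for i in range(len(labels) - 1, -1, -1)]

for qname in minimized_qnames("www.example.com"):
    print(qname)
# com.              <- all the root server sees
# example.com.      <- all the .com TLD server sees
# www.example.com.  <- sent only to example.com's authoritative server
```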

Interestingly, while minimization itself is sufficient to address the disclosure risk at the root and TLD levels, encryption alone isn’t. Encryption protects against disclosure to outside observers but not against disclosure to (or by) the name server itself. Even though the sensitivity of the information is relatively low for reasons noted above, without minimization, it’s still more than the name server needs to know.

Summary: Resolvers should apply minimization techniques at the root and TLD levels. Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges.

Note that at this time, Verisign has no plans to implement DNS encryption at the root or TLD servers that the company operates. Given the availability of qname minimization, which we are encouraging resolver operators to implement, and other minimization techniques, we do not currently see DNS encryption at these levels as offering an appropriate risk/benefit tradeoff.

2. Resolver-to-SLD and Below

The resolver-to-authoritative exchanges at the SLD level and below enable DNS resolution within specific namespaces. These exchanges provide local optimization, benefiting all resolvers and all clients interacting with the included namespaces.

The information exchanged at these levels also represents the resolver’s aggregate interests, but in some cases, it may also include client-related information such as the client’s subnet. The full domain names and the client-related information, if any, are the most sensitive parts.

The decision flow for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Partially. Minimization techniques may apply to some exchanges at the SLD level and below if the full domain names have more than three labels (for example, for “host.dept.example.com,” a resolver could initially send just “dept.example.com” to the SLD’s name server). However, they don’t help as much with the lowest-level authoritative name server involved, because that name server needs the full domain name.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? In some cases. If the exchange includes client-specific information, then the risk may be justified. If full domain names include labels beyond the TLD and SLD that are considered more sensitive, this might be another justification. Otherwise, the benefit of encryption generally wouldn’t justify the operational risk, and for this reason, as in the previous case, encryption shouldn’t be required, except in the specific cases noted.

Summary: Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information. Otherwise, they should not be required to implement DNS encryption.

3. Client-to-Resolver

The client-to-resolver exchange enables navigation to all domain names for all clients of the resolver.

The information exchanged here represents the interests of each specific client. The sensitivity of this information is therefore relatively high, making confidentiality vital.

The decision flow for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Not really. Minimization techniques don’t apply here because the resolver needs the full domain name and the client-specific information included in the request, namely the client’s IP address. Other alternatives, such as oblivious DNS, have been proposed, but are more complex and arguably increase operational risk. Note that some clients use security and confidentiality solutions at different layers of the protocol stack (e.g., a virtual private network (VPN)) to protect DNS exchanges.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? Yes. (A minimal example of what this encryption looks like in practice follows this list.)
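
For example, a client can encrypt this exchange with DNS over TLS (RFC 7858). The sketch below uses the dnspython library (2.x); the resolver address 1.1.1.1 is just an example public DoT endpoint, not a recommendation:

```python
# A minimal sketch of an encrypted client-to-resolver query over
# DNS over TLS (RFC 7858), using the dnspython library.

import dns.message
import dns.query

# Build an ordinary A-record query for an example name.
query = dns.message.make_query("www.example.com", "A")

# Send it over a TLS session on port 853. On-path observers see only
# encrypted traffic; the resolver, of course, still sees the full name,
# which it needs in order to resolve it.
response = dns.query.tls(query, where="1.1.1.1", port=853)

for rrset in response.answer:
    print(rrset)
```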

Summary: Clients and resolvers should implement DNS encryption on this exchange, unless the exchange is otherwise adequately protected, for instance as part of the network connection provided by an enterprise or internet service provider.

Summary of Guidance

The following table summarizes the guidance for how to protect the various exchanges in the DNS resolution ecosystem.

In short, the guidance is “minimize at root and TLD, encrypt when needed elsewhere.”

DNS Exchange: Resolver-to-Root and TLD
Purpose: Global navigational availability
Guidance:
• Resolvers should apply minimization techniques
• Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges

DNS Exchange: Resolver-to-SLD and Below
Purpose: Local performance optimization
Guidance:
• Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information
• Otherwise, resolvers and SLD servers (and below) should not be required to implement DNS encryption

DNS Exchange: Client-to-Resolver
Purpose: General DNS resolution
Guidance:
• Clients and resolvers should implement DNS encryption on this exchange, unless adequate protection is otherwise provided, e.g., as part of a network connection

Conclusion

If the guidance suggested here were followed, we could expect to see more deployment of minimization techniques on resolver-to-authoritative exchanges at the root and TLD levels; more deployment of DNS encryption, when needed, at the SLD level and below; and more deployment of DNS encryption on client-to-resolver exchanges.

In all these deployments, the DNS will serve the same purpose as it already does in today’s unencrypted exchanges: enabling general-purpose navigation to information and resources on the internet.

DNS encryption also brings two new capabilities that make it possible for the DNS to serve two new purposes. Both are based on concepts developed in Verisign’s research program.

  1. The client can authenticate itself to a resolver or name server that supports DNS encryption, and the resolver or name server can limit its DNS responses based on client identity. This capability enables a new set of security controls for restricting access to network resources and information. We call this approach authenticated resolution. (A toy illustration of this capability follows this list.)
  2. The client can confidentially send client-related information to the resolver or name server, and the resolver or name server can optimize its DNS responses based on client characteristics. This capability enables a new set of performance features for speeding access to those same resources and information. We call this approach adaptive resolution.
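
To give a flavor of the first capability, here is a purely hypothetical sketch of a resolver-side access check keyed to an authenticated client identity. The policy table, function name, and client identifier format are all my own illustrative assumptions, not drawn from Verisign’s design:

```python
# A purely hypothetical sketch of the "authenticated resolution" idea:
# a resolver-side policy that limits responses based on a client identity
# established over the encrypted session. All names here are invented
# for illustration only.

ACCESS_POLICY = {
    # authenticated client id -> domain suffixes it may resolve
    "alice@example.net": (".example.com.", ".example.org."),
}

def may_resolve(client_id: str, qname: str) -> bool:
    """Allow the query only if the client is entitled to the namespace."""
    allowed = ACCESS_POLICY.get(client_id, ())
    return any(qname.endswith(suffix) for suffix in allowed)

print(may_resolve("alice@example.net", "www.example.com."))  # True
print(may_resolve("alice@example.net", "www.other.test."))   # False
```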

You can read more about these two applications in my recent blog post on this topic, Authenticated Resolution and Adaptive Resolution: Security and Navigational Enhancements to the Domain Name System.

Verisign CTO Burt Kaliski explains why minimization techniques at the root and TLD levels of the DNS are a key component of a balanced DNS information protection strategy.

The post A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere appeared first on Verisign Blog.

DNSTrails v1.0 – DNS intelligence database

By MaxiSoler
DNSTrails is an intelligence database, featuring IP and Domain related data such as current and historical DNS records, current and historical WHOIS, technologies used, subdomains and the ability to...
