Celebrating Women Engineers Today and Every Day at Verisign

By Ellen Petrocci

Today, as the world celebrates International Women in Engineering Day, we recognize and honor women engineers at Verisign, whose own stories have helped shape dreams and encouraged young women and girls to take up engineering careers.

Here are three of their stories:

Shama Khakurel, Senior Software Engineer

When Shama Khakurel was in high school, she aspired to join the medical field. But she quickly realized that classes involving math or engineering came easiest to her, much more so than her work in biology or other subjects. It wasn’t until she took a summer computer programming course called “Lotus and dBase Programming” that she realized her career aspirations had officially changed; from that point on, she wanted to be an engineer.

In the nearly 20 years she’s been at Verisign, she’s expanded her skills, challenged herself, pursued opportunities – and always had the support of managers who mentored her along the way.

“Verisign has given me every opportunity to grow,” Shama says. And even though she continues to “learn something new every day,” she also provides mentorship to younger engineering employees.

Women tend to shy away from engineering roles, she says, because they think that math and science are harder subjects. “They seem to follow and believe that myth, but there is a lot of opportunity for a woman in this field.”

Vinaya Shenoy, Engineering Manager

For Vinaya Shenoy, an engineering manager who has worked for Verisign for 17 years, a passion for math and science at a young age steered her toward a career in computer science engineering.

She draws inspiration from other women industry leaders, immigrants from India who rose to the top ranks through their determination and leadership skills. She credits their stories with helping her see what all women are capable of, especially in unconventional or unexpected areas.

“Engineering is not just coding. There are a lot of areas within engineering that you can explore and pursue,” she says. “If problem-solving and creating are your passions, you can harness the power of technology to solve problems and give back to the community.”

Tuyet Vuong, Software Engineer

Tuyet Vuong is one of those people who enjoys problem-solving. As a young girl and the child of two physics teachers, she would often build small gadgets – perhaps her own clock or a small fan – from things she would find around the house.

Today, the challenges are bigger and have a greater impact, and she still finds herself enjoying them.

“Engineering is a fun, exciting and rewarding discipline where you can explore and build new things that are helpful to society,” says Tuyet. And sharing the insights and experiences of so many talented people – both men and women – is what makes the role that much more rewarding.

That sense of fulfillment also comes from breaking down stereotypes, such as the attitudes about women only being suitable for a limited number of careers when she was growing up in Vietnam. That’s why she’s a firm believer that mentoring and encouraging young women engineers isn’t just the responsibility of other women.

“The effort should come from both genders,” she says. “The effort shouldn’t come from women alone.”

Looking to the Future

At Verisign, we see the real impact of all our women engineers’ contributions when it comes to ensuring that the internet is secure, stable and resilient. Today and every day, we celebrate Verisign’s women engineers. We thank you for all you’ve done and everything you’re yet to accomplish.


If you’re interested in pursuing your passion for engineering, view our open career opportunities here.


More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator

By Duane Wessels

This blog was also published by APNIC.

With so much traffic on the global internet day after day, it’s not always easy to spot the occasional irregularity. After all, there are numerous layers of complexity that go into the serving of webpages, with multiple companies, agencies and organizations each playing a role.

That’s why when something does catch our attention, it’s important that the various entities work together to explore the cause and, more importantly, try to identify whether it’s a malicious actor at work, a glitch in the process or maybe even something entirely intentional.

That’s what occurred last year when Internet Corporation for Assigned Names and Numbers staff and contractors were analyzing names in Domain Name System queries seen at the ICANN Managed Root Server, and the analysis program ran out of memory for one of their data files. After some investigating, they found the cause to be a very large number of mysterious queries for unique names such as f863zvv1xy2qf.surgery, bp639i-3nirf.hiphop, qo35jjk419gfm.net and yyif0aijr21gn.com.

While these were queries for names in existing top-level domains, the first label consisted of 12 or 13 random-looking characters. After ICANN shared their discovery with the other root server operators, Verisign took a closer look to help understand the situation.

Exploring the Mystery

One of the first things we noticed was that all of these mysterious queries were of type NS and came from one autonomous system network, AS 15169, assigned to Google LLC. Additionally, we confirmed that it was occurring consistently for numerous TLDs. (See Fig. 1)

Figure 1: Distribution of second-level label lengths in NS queries to root name servers, comparing AS 15169 to others, for several different TLDs.

Although this phenomenon was newly uncovered, analysis of historical data showed these traffic patterns actually began in late 2019. (See Fig. 2)

Figure 2: Daily count of NS queries from AS 15169, showing that the mysterious traffic began in late 2019.

Perhaps the most interesting discovery, however, was that these specific query names were not also seen at the .com and .net name servers operated by Verisign. The data in Figure 3 shows the fraction of queried names that appear at A-root and J-root and also appear on the .com and .net name servers. For second-level labels of 12 and 13 characters, this fraction is essentially zero. The graphs also show that there appear to be queries for names with second-level label lengths of 10 and 11 characters, which are likewise absent from the TLD data.

Figure 3: Fraction of queries from AS 15169 appearing on A-root and J-root that also appear on .com and .net name servers, by the length of the second-level label.

The final mysterious aspect to this traffic is that it deviated from our normal expectation of caching. Remember that these are queries to a root name server, which returns a referral to the delegated name servers for a TLD. For example, when a root name server receives a query for yyif0aijr21gn.com, the response is a list of the name servers that are authoritative for the .com zone. The records in this response have a time to live of two days, meaning that the recursive name server can cache and reuse this data for that amount of time.

However, in this traffic we see queries for .com domain names from AS 15169 at the rate of about 30 million per day. (See Fig. 4) It is well known that Google Public DNS has thousands of backend servers and limits TTLs to a maximum of six hours. Assuming 4,000 backend servers each cached a .com referral for six hours, we might expect about 16,000 queries over a 24-hour period. The observed count is about 2,000 times higher by this back-of-the-envelope calculation.
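The back-of-the-envelope arithmetic is simple enough to write down explicitly. A minimal sketch, using the assumed figures above (4,000 backend servers, a six-hour TTL cap, roughly 30 million observed queries per day):

```python
# Expected vs. observed .com referral queries from AS 15169, per the
# back-of-the-envelope reasoning above. All inputs are the stated assumptions.
backend_servers = 4_000                 # assumed Google Public DNS backends
ttl_cap_hours = 6                       # Google Public DNS's maximum TTL
refreshes_per_day = 24 / ttl_cap_hours  # each backend re-queries 4x per day

expected = backend_servers * refreshes_per_day   # 16,000 queries/day
observed = 30_000_000                            # queries/day seen at the roots

print(f"expected: {expected:,.0f}/day")          # expected: 16,000/day
print(f"ratio: {observed / expected:,.0f}x")     # ratio: 1,875x -- about 2,000x
```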

Figure 4: Queries from AS 15169 to A-root and J-root for names with second-level label lengths of 12 or 13, measured over a 24-hour period (July 6, 2021).

From our initial analysis, it was unclear if these queries represented legitimate end-user activity, though we were confident that source IP address spoofing was not involved. However, since the query names shared some similarities to those used by botnets, we could not rule out malicious activity.

The Missing Piece

These findings were presented last year at the DNS-OARC 35a virtual meeting. In the conference chat room after the talk, the missing piece of this puzzle was mentioned by a conference participant. There is a Google webpage describing its public DNS service that talks about prepending nonce (i.e., random) labels for cache misses to increase entropy. In what came to be known as “the Kaminsky Attack,” an attacker can cause a recursive name server to emit queries for names chosen by the attacker. Prepending a nonce label adds unpredictability to the queries, making it very difficult to spoof a response. Note, however, that nonce prepending only works for queries where the reply is a referral.

In addition, Google DNS has implemented a form of query name minimization (see RFC 7816 and RFC 9156). As such, if a user requests the IP address of www.example.com and Google DNS decides this warrants a query to a root name server, it takes the name, strips all labels except for the TLD and then prepends a nonce string, resulting in something like u5vmt7xanb6rf.com. A root server’s response to that query is identical to one using the original query name.
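Putting the two behaviors together, a resolver's root query might be constructed roughly as follows. This is a minimal sketch: the unpadded Base32 encoding of a 64-bit value is our assumption (explored in the next section), not a documented detail of Google's implementation.

```python
import base64
import secrets

def minimized_nonce_query(qname: str) -> str:
    """Strip all labels except the TLD, then prepend a random nonce label,
    approximating the Google Public DNS behavior described above."""
    tld = qname.rstrip(".").split(".")[-1]        # "www.example.com" -> "com"
    nonce = secrets.token_bytes(8)                # a 64-bit random value
    # Unpadded, lowercased Base32 (assumed encoding): 64 bits -> 13 characters
    label = base64.b32encode(nonce).decode("ascii").rstrip("=").lower()
    return f"{label}.{tld}"

print(minimized_nonce_query("www.example.com"))   # e.g., "u5vmt7xanb6rf.com"-style
```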

The Mystery Explained

Now, we are able to explain nearly all of the mysterious aspects of this query traffic from Google. We see random second-level labels because of the nonce strings that are designed to prevent spoofing. The 12- and 13-character-long labels are most likely the result of converting a 64-bit random value into an unpadded ASCII label with encoding similar to Base32. We don’t observe the same queries at TLD name servers because of both the nonce prepending and query name minimization. The query type is always NS because of query name minimization.

With that said, there’s still one aspect that eludes explanation: the high query rate (about 2,000x the expected volume for .com) and the apparent lack of caching. And so, this aspect of the mystery continues.

Wrapping Up

Even though we haven’t fully closed the books on this case, one thing is certain: without the community’s teamwork to put the pieces of the puzzle together, explanations for this strange traffic might have remained unknown to this day. The case of the mysterious DNS root query traffic is a perfect example of the collaboration that’s required to navigate today’s ever-changing cyber environment. We’re grateful and humbled to be part of such a dedicated community that is intent on ensuring the security, stability and resiliency of the internet, and we look forward to more productive teamwork in the future.


Routing Without Rumor: Securing the Internet’s Routing System

By Danny McPherson

This article is based on a paper originally published as part of the Global Commission on the Stability of Cyberspace’s Cyberstability Paper Series, “New Conditions and Constellations in Cyber,” on Dec. 9, 2021.

The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the internet’s history. Billions of internet-connected devices use DNS continuously to look up Internet Protocol addresses of the named resources they want to connect to — for instance, a website such as blog.verisign.com. Once a device has the resource’s address, it can then communicate with the resource using the internet’s routing system.

Just as ensuring that DNS is secure, stable and resilient is a priority for Verisign, so is making sure that the routing system has these characteristics. Indeed, DNS itself depends on the internet’s routing system for its communications, so routing security is vital to DNS security too.

To better understand how these challenges can be met, it’s helpful to step back and remember what the internet is: a loosely interconnected network of networks that interact with each other at a multitude of locations, often across regions or countries.

Packets of data are transmitted within and between those networks, which utilize a collection of technical standards and rules called the IP suite. Every device that connects to the internet is uniquely identified by its IP address, which can take the form of either a 32-bit IPv4 address or a 128-bit IPv6 address. Similarly, every network that connects to the internet has an Autonomous System Number, which is used by routing protocols to identify the network within the global routing system.

The primary job of the routing system is to let networks know the available paths through the internet to specific destinations. Today, the system largely relies on a decentralized and implicit trust model — a hallmark of the internet’s design. No centralized authority dictates how or where networks interconnect globally, or which networks are authorized to assert reachability for an internet destination. Instead, networks share knowledge with each other about the available paths from devices to destinations: They route “by rumor.”
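To make the “rumor” dynamic concrete, here is a minimal sketch with an invented three-network topology and private-use ASNs. Real BGP layers policy, preferences and filtering on top of this, but the core information flow is the same: each network learns paths only from what its neighbors tell it.

```python
# Routing "by rumor": each AS learns a path from a neighbor, prepends itself,
# and repeats the rumor onward. Topology and ASNs are purely illustrative.
neighbors = {64500: [64501], 64501: [64500, 64502], 64502: [64501]}

def propagate(origin_asn: int) -> dict:
    """Flood an origin's announcement hop by hop; each AS keeps the first
    path it hears -- with no way to verify the rumor is authorized."""
    rib = {origin_asn: [origin_asn]}     # AS -> AS path toward the origin
    frontier = [origin_asn]
    while frontier:
        nxt = []
        for asn in frontier:
            for peer in neighbors[asn]:
                if peer not in rib:
                    rib[peer] = [peer] + rib[asn]
                    nxt.append(peer)
        frontier = nxt
    return rib

print(propagate(64500))
# {64500: [64500], 64501: [64501, 64500], 64502: [64502, 64501, 64500]}
```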

The Border Gateway Protocol

Under the Border Gateway Protocol (BGP), the internet’s de facto inter-domain routing protocol, local routing policies decide where and how internet traffic flows, but each network independently applies its own policies on what actions it takes, if any, with traffic that transits its network.

BGP has scaled well over the past three decades because 1) it operates in a distributed manner, 2) it has no central point of control (nor failure), and 3) each network acts autonomously. While networks may base their routing policies on an array of pricing, performance and security characteristics, ultimately BGP can use any available path to reach a destination. Often, the choice of route may depend upon personal decisions by network administrators, as well as informal assessments of technical and even individual reliability.

Route Hijacks and Route Leaks

Two prominent types of operational and security incidents occur in the routing system today: route hijacks and route leaks. Route hijacks reroute internet traffic to an unintended destination, while route leaks propagate routing information to an unintended audience. Both types of incidents can be accidental as well as malicious.

Preventing route hijacks and route leaks requires considerable coordination across the internet community, a concept that fundamentally goes against BGP’s design tenets of distributed action and autonomous operations. A key characteristic of BGP is that any network can potentially announce reachability for any IP addresses to the entire world. That means any network can potentially have a detrimental effect on the global reachability of any internet destination.

Resource Public Key Infrastructure

Fortunately, a solution is already gaining considerable deployment momentum: the Resource Public Key Infrastructure (RPKI). RPKI provides an internet number resource certification infrastructure, analogous to the traditional PKI for websites. RPKI enables number resource allocation authorities and networks to specify Route Origin Authorizations (ROAs) that are cryptographically verifiable. ROAs can then be used by relying parties to confirm that the routing information shared with them comes from the authorized origin.
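Conceptually, the origin-validation check a relying party performs with ROA data is small; RFC 6811 defines the valid / invalid / not-found semantics sketched below. The ROA entries here are illustrative; a real validator retrieves and cryptographically verifies ROAs via the RIR trust anchors.

```python
import ipaddress

# Illustrative ROA set: (authorized prefix, max prefix length, origin ASN).
ROAS = [(ipaddress.ip_network("192.0.2.0/24"), 24, 64500)]

def origin_validation(prefix: str, origin_asn: int) -> str:
    """Classify a BGP announcement against the ROA set (RFC 6811 semantics)."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_asn in ROAS:
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True                 # some ROA covers this prefix
            if origin_asn == roa_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not found"

print(origin_validation("192.0.2.0/24", 64500))     # valid
print(origin_validation("192.0.2.0/25", 64500))     # invalid: exceeds max length
print(origin_validation("192.0.2.0/24", 64501))     # invalid: unauthorized origin
print(origin_validation("198.51.100.0/24", 64500))  # not found: no covering ROA
```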

RPKI is standards-based and appears to be gaining traction in improving BGP security. But it also brings new challenges.

Specifically, RPKI creates new external and third-party dependencies that, as adoption continues, ultimately replace the traditionally autonomous operation of the routing system with a more centralized model. If too tightly coupled to the routing system, these dependencies may impact the robustness and resilience of the internet itself. Also, because RPKI relies on DNS and DNS depends on the routing system, network operators need to be careful not to introduce tightly coupled circular dependencies.

Regional Internet Registries, the organizations responsible for top-level number resource allocation, can potentially have direct operational implications on the routing system. Unlike DNS, the global RPKI as deployed does not have a single root of trust. Instead, it has multiple trust anchors, one operated by each of the RIRs. RPKI therefore brings significant new security, stability and resiliency requirements to RIRs, updating their traditional role of simply allocating ASNs and IP addresses with new operational requirements for ensuring the availability, confidentiality, integrity, and stability of this number resource certification infrastructure.

As part of improving BGP security and encouraging adoption of RPKI, the routing community started the Mutually Agreed Norms for Routing Security initiative in 2014. Supported by the Internet Society, MANRS aims to reduce the most common routing system vulnerabilities by creating a culture of collective responsibility towards the security, stability and resiliency of the global routing system. MANRS is continuing to gain traction, guiding internet operators on what they can do to make the routing system more reliable.

Conclusion

Routing by rumor has served the internet well, and a decade ago it may have been ideal because it avoided systemic dependencies. However, the increasingly critical role of the internet and the evolving cyberthreat landscape require a better approach for protecting routing information and preventing route leaks and route hijacks. As network operators deploy RPKI with security, stability and resiliency, the billions of internet-connected devices that use DNS to look up IP addresses can then communicate with those resources through networks that not only share routing information with one another as they’ve traditionally done, but also do something more. They’ll make sure that the routing information they share and use is secure — and route without rumor.


Observations on Resolver Behavior During DNS Outages

By Verisign

When an outage affects a component of the internet infrastructure, there can often be downstream ripple effects affecting other components or services, either directly or indirectly. We would like to share our observations of this impact in the case of two recent such outages, measured at various levels of the DNS hierarchy, and discuss the resultant increase in query volume due to the behavior of recursive resolvers.

In early October 2021, the internet saw two significant outages, affecting Facebook’s services and the .club top-level domain; neither resolved properly for a period of time. Throughout these outages, Verisign and other DNS operators reported significant increases in query volume. We provided consistent responses throughout, with the correct delegation data pointing to the correct name servers.

While these higher query rates do not impact Verisign’s ability to respond, they raise a broader operational question – whether the repeated nature of these queries, indicative of a lack of negative caching, might potentially be mistaken for a denial-of-service attack.

Facebook

On Oct. 4, 2021, Facebook experienced a widespread outage, lasting nearly six hours. During this time most of its systems were unreachable, including those that provide Facebook’s DNS service. The outage impacted facebook.com, instagram.com, whatsapp.net and other domain names.

Under normal conditions, the .com and .net authoritative name servers answer about 7,000 queries per second in total for the three domain names previously mentioned. During this particular outage, however, query rates for these domain names reached upwards of 900,000 queries per second (an increase of more than 100x), as shown in Figure 1 below.

Figure 1: Rate of DNS queries for Facebook’s domain names during the 10/4/21 outage.

During this outage, recursive name servers received no response from Facebook’s name servers – instead, those queries timed out. In situations such as this, recursive name servers generally return a SERVFAIL or “server failure” response, presented to end users as a “this site can’t be reached” error.

Figure 1 shows an increasing query rate over the duration of the outage. Facebook uses relatively low Time-to-Live (TTL) values on its DNS records – from one to five minutes – a setting that tells DNS resolvers how long to cache an answer before issuing a new query. This in turn means that, five minutes into the outage, all relevant records would have expired from all recursive resolver caches – or at least from those that honor the publisher’s TTLs. It is not immediately clear why the query rate continues to climb throughout the outage, nor whether it would eventually have plateaued had the outage continued.
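The expiry dynamic is easy to model. Below is a minimal sketch of resolver-style positive caching; the record data shown is purely illustrative.

```python
import time

class TTLCache:
    """Toy resolver cache: entries expire when the publisher's TTL elapses."""
    def __init__(self):
        self._store = {}

    def put(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # TTL elapsed: must re-query upstream
            del self._store[key]
            return None
        return value

cache = TTLCache()
cache.put(("facebook.com", "A"), "203.0.113.10", ttl=300)  # five-minute TTL
# Five minutes into the outage this entry is gone, so every client lookup
# becomes a (failing) upstream query -- one driver of the surge in Figure 1.
```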

To get a sense of where the traffic comes from, we group query sources by their autonomous system number. The top five autonomous systems, along with all others grouped together, are shown in Figure 2.

Figure 2: Rate of DNS queries for Facebook’s domain names grouped by source autonomous system.

From Figure 2 we can see that, at their peak, queries for these domain names to Verisign’s .com and .net authoritative name servers from the most active recursive resolvers – those of Google and Cloudflare – increased around 7,000x and 2,000x respectively over their average non-outage rates.

.club

On Oct. 7, 2021, three days after Facebook’s outage, the .club and .hsbc TLDs experienced a three-hour outage. In this case, the relevant authoritative servers remained reachable, but responded with SERVFAIL messages. The effect on recursive resolvers was essentially the same: since they did not receive useful data, they repeatedly retried their queries to the parent zone. During the incident, the Verisign-operated A-root and J-root servers observed a 45x increase in queries for .club domain names, from 80 queries per second before the outage to 3,700 queries per second during it.

Figure 3: Rate of DNS queries to A and J root servers during the 10/7/2021 .club outage.

Similar to the previous example, this outage also shows an increasing query rate over its duration. In this case, the trend might be explained by the fact that the records for .club’s delegation in the root zone use two-day TTLs. However, the theoretical analysis is complicated by the fact that authoritative name server records in child zones use longer TTLs (six days), while authoritative name server address records use shorter TTLs (10 minutes). Here we do not observe a significant amount of query traffic from Google sources; instead, the increased query volume is largely attributable to the long tail of recursive resolvers in “All Others.”

Figure 4: Rate of DNS queries for .club and .hsbc, grouped by source autonomous system.

Botnet Traffic

Earlier this year Verisign implemented a botnet sinkhole and analyzed the received traffic. This botnet utilizes more than 1,500 second-level domain names, likely for command and control. We observed queries from approximately 50,000 clients every day. As an experiment, we configured our sinkhole name servers to return SERVFAIL and REFUSED responses for two of the botnet domain names.

When configured to return a valid answer, each domain name’s query rate peaks at about 50 queries per second. However, when configured to return SERVFAIL, the query rate for a single domain name increases to 60,000 per second, as shown in Figure 5. Further, the query rate for the botnet domain name also increases at the TLD and root name servers, even though those services are functioning normally and data relevant to the botnet domain name has not changed – just as with the two outages described above. Figure 6 shows data from the same experiment (although for a different date), colored by the source autonomous system. Here, we can see that approximately half of the increased query traffic is generated by one organization’s recursive resolvers.

Figure 5: Query rates to name servers for one domain name experimentally configured to return SERVFAIL.

Figure 6: Query rates to botnet sinkhole name servers, grouped by autonomous system, when one domain name is experimentally configured to return SERVFAIL.

Common Threads

These two outages and one experiment all demonstrate that recursive name servers can become unnecessarily aggressive when responses to queries are not received due to connectivity issues, timeouts, or misconfigurations.

In each of these three cases, we observe significant query rate increases from recursive resolvers across the internet – with particular contributors, such as Google Public DNS and Cloudflare’s resolver, identified on each occasion.

Often in cases like this we turn to internet standards for guidance. RFC 2308 is a 1998 Standards Track specification that describes negative caching of DNS queries. The RFC covers name errors (e.g., NXDOMAIN), no data, server failures, and timeouts. Unfortunately, it states that negative caching for server failures and timeouts is optional. We have submitted an Internet-Draft that proposes updating RFC 2308 to require negative caching for DNS resolution failures.

We believe it is important for the security, stability and resiliency of the internet’s DNS infrastructure that the implementers of recursive resolvers and public DNS services carefully consider how their systems behave in circumstances where none of a domain name’s authoritative name servers are providing responses, yet the parent zones are providing proper referrals. We find it difficult to rationalize the patterns we are currently observing, such as hundreds of queries per second from individual recursive resolver sources. The global DNS would be better served by more appropriate rate limiting, and by algorithms such as exponential backoff, to address the types of cases highlighted here. Verisign remains committed to leading and contributing to the continued security, stability and resiliency of the DNS for all global internet users.
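As one illustration of the behavior we’re advocating, the sketch below spaces retries of a failing authoritative server exponentially, with jitter, rather than retrying at a constant and aggressive rate. In a real resolver this would be paired with negative caching of the resolution failure, per our proposed update to RFC 2308.

```python
import random
import time

def resolve_with_backoff(resolve, max_attempts=6, base=1.0, cap=60.0):
    """Retry a failing upstream with capped exponential backoff and jitter.
    `resolve` is any callable returning an answer, or None on SERVFAIL/timeout."""
    for attempt in range(max_attempts):
        answer = resolve()
        if answer is not None:
            return answer
        # Exponential backoff with a cap, plus jitter to avoid synchronized retries.
        delay = min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.5)
        time.sleep(delay)          # back off instead of hammering the server
    return None                    # give up; negatively cache this failure
```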

This piece was co-authored by Verisign Fellow Duane Wessels and Verisign Distinguished Engineers Matt Thomas and Yannis Labrou.


Ongoing Community Work to Mitigate Domain Name System Security Threats

By Keith Drazek

For over a decade, the Internet Corporation for Assigned Names and Numbers (ICANN) and its multi-stakeholder community have engaged in an extended dialogue on the topic of DNS abuse, and the need to define, measure and mitigate DNS-related security threats. With increasing global reliance on the internet and DNS for communication, connectivity and commerce, the members of this community have important parts to play in identifying, reporting and mitigating illegal or harmful behavior, within their respective roles and capabilities.

As we consider the path forward on necessary and appropriate steps to improve mitigation of DNS abuse, it’s helpful to reflect briefly on the origins of this issue within ICANN, and to recognize the various and relevant community inputs to our ongoing work.

As a starting point, it’s important to understand ICANN’s central role in preserving the security, stability, resiliency and global interoperability of the internet’s unique identifier system, as well as the limitations established within ICANN’s bylaws. ICANN’s primary mission is to ensure the stable and secure operation of the internet’s unique identifier systems, but as expressly stated in its bylaws, ICANN “shall not regulate (i.e., impose rules and restrictions on) services that use the internet’s unique identifiers or the content that such services carry or provide, outside the express scope of Section 1.1(a).” As such, ICANN’s role is important but limited when considering the full range of possible definitions of “DNS Abuse.” Developing a comprehensive understanding of security threat categories, and of the roles and responsibilities of the various players in the internet infrastructure ecosystem, is therefore required.

In support of this important work, ICANN’s generic top-level domain (gTLD) contracted parties (registries and registrars) continue to engage with ICANN, and with other stakeholders and community interest groups, to address key factors related to effective and appropriate DNS security threat mitigation, including:

  • Determining the roles and responsibilities of the various service providers across the internet ecosystem;
  • Delineating categories of threats: content, infrastructure, illegal vs. harmful, etc.;
  • Understanding the precise operational and technical capabilities of various types of providers across the internet ecosystem;
  • Identifying the relationships, if any, that respective service providers have with the individuals or entities responsible for creating and/or removing the illegal or abusive activity;
  • Defining the role of third-party “trusted notifiers,” including government actors, in identifying and reporting illegal and abusive behavior to the appropriate service provider;
  • Establishing processes to ensure infrastructure providers can trust third-party notifiers to reliably identify and provide evidence of illegal or harmful content;
  • Promoting administrative and operational scalability in trusted notifier engagements;
  • Determining the necessary safeguards around liability, due process and transparency, to ensure domain name registrants have recourse when the DNS is used as a tool to police DNS security threats, particularly when related to content; and
  • Supporting ICANN’s important and appropriate role in coordination and facilitation, particularly as a centralized source of data, tools and resources to help, and hold accountable, those parties responsible for managing and maintaining the internet’s unique identifiers.
Figure 1: The Internet Ecosystem

Definitions of Online Abuse

To better understand the various roles, responsibilities and processes, it’s important to first define illegal and abusive online activity. While perspectives may vary across our wide range of interest groups, the emerging consensus on definitions and terminology is that these activities can be categorized as DNS Security Threats, Infrastructure Abuse, Illegal Content, or Abusive Content, with ICANN’s remit generally limited to the first two categories.

  • DNS Security Threats: defined as being “composed of five broad categories of harmful activity [where] they intersect with the DNS: malware, botnets, phishing, pharming, and spam when [spam] serves as a delivery mechanism for those other forms of DNS Abuse.”
  • Infrastructure Abuse: a broader set of security threats that can impact the DNS itself – including denial-of-service / distributed denial-of-service (DoS / DDoS) attacks, DNS cache poisoning, protocol-level attacks, and exploitation of implementation vulnerabilities.
  • Illegal Content: content that is unlawful and hosted on websites that are accessed via domain names in the global DNS. Examples might include the illegal sale of controlled substances or the distribution of child sexual abuse material (CSAM), and proven intellectual property infringement.
  • Abusive Content: content hosted on websites using the domain name infrastructure that is deemed “harmful,” either under applicable law or norms, which could include scams, fraud, misinformation or intellectual property infringement where illegality has yet to be established by a court of competent jurisdiction.

Behavior within each of these categories constitutes abuse, and it is incumbent on members of the community to actively work to combat and mitigate these behaviors where they have the capability, expertise and responsibility to do so. We recognize the benefit of coordination with other entities, including ICANN within its bylaw-mandated remit, across their respective areas of responsibility.

ICANN Organization’s Efforts on DNS Abuse

The ICANN Organization has been actively involved in advancing work on DNS abuse, including the 2017 initiation of the Domain Abuse Activity Reporting (DAAR) system by the Office of the Chief Technology Officer. DAAR is a system for studying and reporting on domain name registration and security threats across top-level domain (TLD) registries, with an overarching purpose to develop a robust, reliable, and reproducible methodology for analyzing security threat activity, which the ICANN community may use to make informed policy decisions. The first DAAR reports were issued in January 2018 and they are updated monthly. Also in 2017, ICANN published its “Framework for Registry Operators to Address Security Threats,” which provides helpful guidance to registries seeking to improve their own DNS security posture.

The ICANN Organization also plays an important role in enforcing gTLD contract compliance and implementing policies developed by the community via its bottom-up, multi-stakeholder processes. For example, over the last several years, it has conducted registry and registrar audits of the anti-abuse provisions in the relevant agreements.

The ICANN Organization has also been a catalyst for increased community attention and action on DNS abuse, including initiating the DNS Security Facilitation Initiative Technical Study Group, which was formed to investigate mechanisms to strengthen collaboration and communication on security and stability issues related to the DNS. Over the last two years, there have also been multiple ICANN cross-community meeting sessions dedicated to the topic, including the most recent session hosted by the ICANN Board during its Annual General Meeting in October 2021. Also, in 2021, ICANN formalized its work on DNS abuse into a dedicated program within the ICANN Organization. These enforcement and compliance responsibilities are very important to ensure that all of ICANN’s contracted parties are living up to their obligations, and that any so-called “bad actors” are identified and remediated or de-accredited and removed from serving the gTLD registry or registrar markets.

The ICANN Organization continues to develop new initiatives to help mitigate DNS security threats, including: (1) expanding DAAR to integrate some country code TLDs, and to eventually include registrar-level reporting; (2) work on COVID domain names; (3) contributions to the development of a Domain Generating Algorithms Framework and facilitating waivers to allow registries and registrars to act on imminent security threats, including botnets at scale; and (4) plans for the ICANN Board to establish a DNS abuse caucus.

ICANN Community Inputs on DNS Abuse

As early as 2009, the ICANN community began to identify the need for additional safeguards to help address DNS abuse and security threats, and those community inputs increased over time and have reached a crescendo over the last two years. In the early stages of this community dialogue, the ICANN Governmental Advisory Committee, via its Public Safety Working Group, identified the need for additional mechanisms to address “criminal activity in the registration of domain names.” In the context of renegotiation of the Registrar Accreditation Agreement between ICANN and accredited registrars, and the development of the New gTLD Base Registry Agreement, the GAC played an important and influential role in highlighting this need, providing formal advice to the ICANN Board, which resulted in new requirements for gTLD registry and registrar operators, and new contractual compliance requirements for ICANN.

Following the launch of the 2012 round of new gTLDs, and the finalization of the 2013 amendments to the RAA, several ICANN bylaw-mandated review teams engaged further on the issue of DNS Abuse. These included the Competition, Consumer Trust and Consumer Choice Review Team (CCT-RT), and the second Security, Stability and Resiliency Review Team (SSR2-RT). Both final reports identified and reinforced the need for additional tools to help measure and combat DNS abuse. Also, during this timeframe, the GAC, along with the At-Large Advisory Committee and the Security and Stability Advisory Committee, issued their own respective communiques and formal advice to the ICANN Board reiterating or reinforcing past statements, and providing support for recommendations in the various Review Team reports. Most recently, the SSAC issued SAC 115 titled “SSAC Report on an Interoperable Approach to Addressing Abuse Handling in the DNS.” These ICANN community group inputs have been instrumental in bringing additional focus and/or clarity to the topic of DNS abuse, and have encouraged ICANN and its gTLD registries and registrars to look for improved mechanisms to address the types of abuse within our respective remits.

During 2020 and 2021, ICANN’s gTLD contracted parties have been constructively engaged with other parts of the ICANN community, and with ICANN Org, to advance improved understanding on the topic of DNS security threats, and to identify new and improved mechanisms to enhance the security, stability and resiliency of the domain name registration and resolution systems. Collectively, the registries and registrars have engaged with nearly all groups represented in the ICANN community, and we have produced important documents related to DNS abuse definitions, registry actions, registrar abuse reporting, domain generating algorithms, and trusted notifiers. These all represent significant steps forward in framing the context of the roles, responsibilities and capabilities of ICANN’s gTLD contracted parties, and, consistent with our Letter of Intent commitments, Verisign has been an important contributor, along with our partners, in these Contracted Party House initiatives.

In addition, the gTLD contracted parties and ICANN Organization continue to engage constructively on a number of fronts, including upcoming work on standardized registry reporting, which will help result in better data on abuse mitigation practices that will help to inform community work, future reviews, and provide better visibility into the DNS security landscape.

Other Groups and Actors Focused on DNS Security

It is important to note that groups outside of ICANN’s immediate multi-stakeholder community have contributed significantly to the topic of DNS abuse mitigation:

Internet & Jurisdiction Policy Network
The Internet & Jurisdiction Policy Network is a multi-stakeholder organization addressing the tension between the cross-border internet and national jurisdictions. Its secretariat facilitates a global policy process engaging over 400 key entities from governments, the world’s largest internet companies, technical operators, civil society groups, academia and international organizations from over 70 countries. The I&JP has been instrumental in developing multi-stakeholder inputs on issues such as trusted notifier, and Verisign has been a long-time contributor to that work since the I&JP’s founding in 2012.

DNS Abuse Institute
The DNS Abuse Institute was formed in 2021 to develop “outcomes-based initiatives that will create recommended practices, foster collaboration and develop industry-shared solutions to combat the five areas of DNS Abuse: malware, botnets, phishing, pharming, and related spam.” The Institute was created by Public Interest Registry, the registry operator for the .org TLD.

Global Cyber Alliance
The Global Cyber Alliance is a nonprofit organization dedicated to making the internet a safer place by reducing cyber risk. The GCA builds programs, tools and partnerships to sustain a trustworthy internet to enable social and economic progress for all.

ECO “topDNS” DNS Abuse Initiative
Eco is the largest association of the internet industry in Europe. Eco is a long-standing advocate of an “Internet with Responsibility” and of self-regulatory approaches, such as the DNS Abuse Framework. The eco “topDNS” initiative will help bring together stakeholders with an interest in combating and mitigating DNS security threats, and Verisign is a supporter of this new effort.

Other Community Groups
Verisign contributes to the anti-abuse, technical and policy communities: We continuously engage with ICANN and an array of other industry partners to help ensure the continued safe and secure operation of the DNS. For example, Verisign is actively engaged in groups such as the Anti-Phishing Working Group, the Messaging, Malware and Mobile Anti-Abuse Working Group, FIRST and the Internet Engineering Task Force.

What Verisign is Doing Today

As a leader in the domain name industry and DNS ecosystem, Verisign supports and has contributed to the cross-community efforts enumerated above. In addition, Verisign also engages directly by:

  • Monitoring for abuse: Protecting against abuse starts with knowing what is happening in our systems and services, in a timely manner, and being capable of detecting anomalous or abusive behavior, and then reacting to address it appropriately. Verisign works closely with a range of actors, including trusted notifiers, to help ensure our abuse mitigation actions are informed by sources with necessary subject matter expertise and procedural rigor.
  • Blocking and redirecting abusive domain names: Blocking certain domain names that have been identified by Verisign and/or trusted third parties as security threats, including botnets that leverage well-understood and characterized domain generation algorithms, helps us to protect our infrastructure and to neutralize or otherwise minimize potential security and stability threats more broadly by remediating abuse enabled via domain names in our TLDs. For example, earlier this year, Verisign observed a botnet family that was responsible for such a disproportionate amount of total global DNS queries that we were compelled to act to remediate the botnet. This was referenced in Verisign’s Q1 2021 Domain Name Industry Brief Volume 18, Issue 2.
  • Avoiding disposable domain name registrations: While heavily discounted domain name pricing strategies may promote short-term sales, they may also attract a spectrum of registrants who might be engaged in abuse. Some security threats, including phishing and botnets, exploit the ability to register large numbers of ‘disposable’ domain names rapidly and cheaply. Accordingly, Verisign avoids marketing programs that would permit our TLDs to be characterized as belonging to this class of ‘disposable’ domains, which have been shown to attract miscreants and enable abusive behavior.
  • Maintaining a cooperative and responsive partnership with law enforcement and government agencies, and engagement with courts of relevant jurisdiction: To ensure the security, stability and resiliency of the DNS and the internet at large, we have developed and maintained constructive relationships with United States and international law enforcement and government agencies to assist in addressing imminent and ongoing substantial security threats to operational applications and critical internet infrastructure, as well as illegal activity associated with domain names.
  • Ensuring adherence of contractual obligations: Our contractual frameworks, including our registry policies and .com Registry-Registrar Agreements, help provide an effective legal framework that discourages abusive domain name registrations. We believe that fair and consistent enforcement of our policies helps to promote good hygiene within the registrar channel.
  • Entering into a binding Letter of Intent with ICANN that commits both parties to cooperate in taking a leadership role in combating security threats. This includes working with the ICANN community to determine the appropriate process for, and development and implementation of, best practices related to combating security threats; to educate the wider ICANN community about security threats; and support activities that preserve and enhance the security, stability and resiliency of the DNS. Verisign also made a substantial financial commitment in direct support of these important efforts.

Trusted Notifiers

An important concept and approach for mitigating illegal and abusive activity online is the ability to engage with and rely upon third-party “trusted notifiers” to identify and report such incidents at the appropriate level in the DNS ecosystem. Verisign has supported and been engaged in the good work of the Internet & Jurisdiction Policy Network since its inception, and we’re encouraged by its recent progress on trusted notifier framing. As mentioned earlier, there are some key questions to be addressed as we consider the viability of engaging trusted notifiers, or of building trusted notifier entities, to help mitigate illegal and abusive online activity.

Verisign’s recent experience with the U.S. government (NTIA and FDA) in combating illegal online opioid sales has been very helpful in illuminating a possible approach for third-party trusted notifier engagement. As noted, we have also benefited from direct engagement with the Internet Watch Foundation and law enforcement in combating CSAM. These recent examples of third-party engagement have underscored the value of a well-formed and executed notification regime, supported by clear expectations, due diligence and due process.

Discussions around trusted notifiers and an appropriate framework for engagement are under way, and Verisign recently engaged with other registries and registrars to lead the development of such a framework for further discussion within the ICANN community. We have significant expertise and experience as an infrastructure provider within our areas of technical, legal and contractual responsibility, and we are aggressive in protecting our operations from bad actors. But in matters related to illegal or abusive content, we need and value contributions from third parties to appropriately identify such behavior when supported by necessary evidence and due diligence. Precisely how such third-party notifications can be formalized and supported at scale is an open question, but one that requires further exploration and work. Verisign is committed to continuing to contribute to these ongoing discussions as we work to mitigate illegal and abusive threats to the security, stability and resiliency of the internet.

Conclusion

Over the last several years, DNS abuse and DNS-related security threat mitigation have been very important topics of discussion in and around the ICANN community. In cooperation with ICANN, contracted parties and other groups within the ICANN community, the DNS ecosystem, including Verisign, has been constructively engaged in developing a common understanding and practical work to advance these efforts, with the goal of meaningfully reducing the level and impact of malicious activity in the DNS. In addition to its contractual compliance functions, ICANN’s contributions have been important in helping to advance this work, and ICANN continues to serve a critical coordination and facilitation function that brings the community together on this important topic. The ICANN community’s recent focus on DNS abuse has been helpful, significant progress has been made, and more work is needed to ensure continued progress in mitigating DNS security threats. As we look ahead to 2022, we are committed to collaborating constructively with ICANN and the ICANN community to deliver on these important goals.


Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets

By Verisign

Note: This article originally appeared in Verisign’s Q1 2021 Domain Name Industry Brief.

This article expands on observations of a botnet traffic group at various levels of the Domain Name System (DNS) hierarchy, presented at DNS-OARC 35.

Addressing DNS abuse and maintaining a healthy DNS ecosystem are important components of Verisign’s commitment to being a responsible steward of the internet. We continuously engage with the Internet Corporation for Assigned Names and Numbers (ICANN) and other industry partners to help ensure the secure, stable and resilient operation of the DNS.

Based on recent telemetry data from Verisign’s authoritative top-level domain (TLD) name servers, Verisign observed a widespread botnet responsible for a disproportionate amount of total global DNS queries – and, in coordination with several registrars, registries and ICANN, acted expeditiously to remediate it.

Just prior to Verisign taking action to remediate the botnet, upwards of 27.5 billion queries per day were being sent to Verisign’s authoritative TLD name servers, accounting for roughly 10% of Verisign’s total DNS traffic. That amount of query volume in most DNS environments would be considered a sustained distributed denial-of-service (DDoS) attack.

These queries were associated with a particular piece of malware that emerged in 2018, spreading throughout the internet to create a global botnet infrastructure. Botnets provide a substrate for malicious actors to theoretically perform all manner of malicious activity – executing DDoS attacks, exfiltrating data, sending spam, conducting phishing campaigns or even installing ransomware. This is the result of the malware’s ability to download and execute any other type of payload the malicious actor desires.

Malware authors often apply various forms of evasion techniques to protect their botnets from being detected and remediated. A Domain Generation Algorithm (DGA) is an example of such an evasion technique.

DGAs are seen in various families of malware that periodically generate a number of domain names, which can be used as rendezvous points for botnet command-and-control servers. By using a DGA to build the list of domain names, the malicious actor makes it more difficult for security practitioners to identify what domain names will be used and when. Only by exhaustively reverse-engineering a piece of malware can the definitive set of domain names be ascertained.
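For illustration only, here is a toy DGA in the general style described above; it does not correspond to any real malware family. Both the botnet and a defender who has reverse-engineered the shared seed can compute the same day’s rendezvous candidates.

```python
import hashlib
from datetime import date

def toy_dga(seed: str, day: date, count: int = 10, tlds=("com", "net", "cc")):
    """Derive a day's candidate rendezvous domain names from a shared seed."""
    names = []
    for i in range(count):
        # Hash the seed, the date, and a counter to get a pseudorandom label.
        digest = hashlib.sha256(f"{seed}|{day.isoformat()}|{i}".encode()).hexdigest()
        names.append(f"{digest[:12]}.{tlds[i % len(tlds)]}")  # 12-char SLD label
    return names

print(toy_dga("example-seed", date(2021, 1, 7)))
```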

The choices miscreants make when tailoring malware DGAs directly influence the DGAs’ ability to evade detection. For instance, electing to use more TLDs and a large number of domain names in a given time period makes the malware’s operation more difficult to disrupt; however, this approach also increases the amount of network noise, making it easier for security and network teams to identify anomalous traffic patterns. Likewise, a DGA that uses a limited number of TLDs and domain names will generate significantly less network noise but is more fragile and susceptible to remediation.

Botnets that implement DGAs or otherwise utilize domain names clearly represent an “abuse of the DNS,” as opposed to other types of abuse that are executed “via the DNS,” such as phishing. This is an important distinction the DNS community should consider as it continues to refine the scope of DNS abuse and how remediation of the various abuses can be effectuated.

The remediation of domain names used by botnets as rendezvous points poses numerous operational challenges. The set of domain names must be identified and investigated to determine their current registration status. Risk assessments must be performed on registered domain names to determine whether additional actions should be taken, such as sending registrar notifications, issuing requests to transfer domain names, adding Extensible Provisioning Protocol (EPP) hold statuses, altering delegation records, etc. There are also timing and coordination elements that must be balanced with external entities, such as ICANN, law enforcement, Computer Emergency Readiness Teams (CERTs) and contracted parties, including registrars and registries. Other technical measures also need to be considered, designed and deployed to achieve the desired remediation goal.

After coordinating with ICANN, and several registrars and registries, Verisign registered the remaining available botnet domain names and began a three-phase plan to sinkhole those domain names. Ultimately, this remediation effort would reduce the traffic sent to Verisign authoritative name servers and effectively eliminate the botnet’s ability to use command-and-control domain names within Verisign-operated TLDs.

Figure 1 below shows the amount of botnet traffic Verisign authoritative name servers received prior to intervention, and throughout the process of registering, delegating and sinkholing the botnet domain names.

Figure 1: The botnet’s DNS query volume at Verisign authoritative name servers.

Phase one, executed on Dec. 21, 2020, configured 100 .cc domain names to resolve to Verisign-operated sinkhole servers. Subsequently, traffic at Verisign authoritative name servers quickly decreased. The second group, containing 500 .com and .net domain names, was sinkholed on Jan. 7, 2021. Again, traffic volume at Verisign authoritative name servers quickly decreased. The final group of 879 .com and .net domain names was sinkholed on Jan. 13, 2021. By the end of phase three, the cumulative DNS traffic reduction surpassed 25 billion queries per day. Verisign reserved approximately 10 percent of the botnet domain names to remain on serverHold as a placebo/control group, to better understand sinkholing effects as they relate to query volume at the child and parent zones. Verisign believes that sinkholing the remaining domain names would reduce authoritative name server traffic by an additional one billion queries.

This botnet highlights the remarkable Pareto-like distribution of DNS query traffic, in which a few thousand domain names, within namespaces containing more than 165 million domain names, demand a vastly disproportionate amount of DNS resources.

What causes the amplification of DNS traffic volume for non-existent domain names to occur at the upper levels of the DNS hierarchy? Verisign is conducting a variety of measurements on the sinkholed botnet domain names to better understand the caching behavior of the resolver population. We are observing some interesting traffic changes at the TLD and root name servers when time to live (TTL) and response codes are altered at the sinkhole servers. Stay tuned.

In addition to remediating this botnet in late 2020 and into early 2021, Verisign extended its four-year endeavor to combat the Avalanche botnet family. Since 2016, the Avalanche botnet has been significantly impacted by actions taken by Verisign and an international consortium of law enforcement, academic and private organizations. However, many of the underlying Avalanche-compromised machines have still not been remediated, and the threat from Avalanche could increase again if additional actions are not taken. To prevent this from happening, Verisign, in coordination with ICANN and other industry partners, is using a variety of tools to ensure Avalanche command-and-control domain names cannot be used in Verisign-operated TLDs.

Botnets are a persistent issue. And as long as they exist as a threat to the security, stability and resiliency of the DNS, cross-industry coordination and collaboration will continue to lie at the core of combating them.

This piece was co-authored by Matt Thomas and Duane Wessels, distinguished engineers at Verisign.


Information Protection for the Domain Name System: Encryption and Minimization

By Burt Kaliski

This is the final in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I’ll turn my attention to another application that may at first appear to be the most natural, though as it turns out may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transfer Protocol Secure (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.

From a cryptographer’s perspective, the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk/benefit tradeoff. In the simpler approach, to assure a relying party that a domain name doesn’t exist, a name server would return a response stating “<name> doesn’t exist,” signed with its private key.

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This better solution balanced the cryptographic and operational considerations. The same theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all queries received by Verisign’s .com and .net TLD servers were in a minimized form as of August 2020.
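To make this concrete, here is a minimal sketch, in Python and purely illustrative, of the successive query names a qname-minimizing resolver might send; real implementations, per RFC 7816, also choose query types and handle cases such as empty non-terminals.

    # Illustrative sketch of qname minimization: reveal one additional
    # label per level instead of the full name at every level.
    def minimized_qnames(full_name: str):
        labels = full_name.rstrip(".").split(".")
        for i in range(1, len(labels) + 1):
            yield ".".join(labels[-i:]) + "."

    print(list(minimized_qnames("www.example.com")))
    # ['com.', 'example.com.', 'www.example.com.']
    # The root server sees only "com.", the TLD server "example.com.",
    # and only the SLD server sees the full name.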

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
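As a rough illustration of the aggressive caching idea, the sketch below assumes the NSEC ranges have already been DNSSEC-validated and simplifies canonical DNS name ordering to plain string comparison.

    # Sketch of aggressive DNSSEC caching (RFC 8198): once a validated
    # NSEC range proof is cached, the resolver can answer NXDOMAIN
    # locally for any name falling inside the range.
    cached_nsec_ranges = [("e164.arpa.", "home.arpa.")]  # (owner, next) pairs

    def known_nonexistent(qname: str) -> bool:
        return any(lo < qname < hi for lo, hi in cached_nsec_ranges)

    print(known_nonexistent("example.arpa."))  # True: no upstream query needed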

All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented aggregate interests of the resolver’s many clients[1]. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below[2], and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.

Conclusion

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

[1] This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.

[2] In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this post about “root and TLD” and “SLD level and below” should read “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this discussion.


The post Information Protection for the Domain Name System: Encryption and Minimization appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

By Burt Kaliski

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate candidate in NIST’s post-quantum project.

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby “chained” to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC, while still basing security only on hash functions, in the interest of stable post-quantum protections.

One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record’s signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.
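Here is a minimal sketch of this synthesized-ZSK idea, with record sets as byte strings and SHA-256 as the hash; a real design would of course need canonical record encodings, domain separation and far larger trees.

    # Minimal sketch of the synthesized-ZSK idea: hash record sets into a
    # Merkle tree, publish the root as the "public key," and use each
    # leaf's authentication path as its "signature." Illustrative only.
    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_tree(record_sets):
        levels = [[H(r) for r in record_sets]]
        while len(levels[-1]) > 1:
            lvl = levels[-1] + ([levels[-1][-1]] if len(levels[-1]) % 2 else [])
            levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
        return levels  # levels[-1][0] is the root, i.e., the synthesized ZSK

    def auth_path(levels, index):
        path = []
        for lvl in levels[:-1]:
            lvl = lvl + ([lvl[-1]] if len(lvl) % 2 else [])
            path.append(lvl[index ^ 1])  # sibling hash at this level
            index //= 2
        return path

    def verify(record_set, index, path, root):
        node = H(record_set)
        for sibling in path:
            node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
            index //= 2
        return node == root

    records = [b"record set %d" % i for i in range(8)]
    levels = build_tree(records)
    zsk = levels[-1][0]                      # the synthesized ZSK public key
    path = auth_path(levels, 5)
    assert verify(records[5], 5, path, zsk)  # 3 sibling hashes authenticate leaf 5

For a tree of depth 20 with a 256-bit hash, the same logic yields the 5,120-bit authentication paths mentioned above.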

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.

The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.

In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, the tree will be available sooner; with more changes, the tree will be larger, so the per-record overhead of the signatures at the KSK level will be lower.

Conclusion

My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

Research into concepts such as hash-based signatures and synthesized zone signing keys indicates that these techniques have the potential to keep the Domain Name System (DNS) secure for the long term if added into the Domain Name System Security Extensions (DNSSEC).

The post Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon

By Burt Kaliski

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.

This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.

(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)

Post-Quantum Algorithms

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”

The project is now in its third round, with seven finalists, including three digital signature algorithms, and eight alternates.

NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.

DNSSEC Operational Considerations

The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative name servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)

Modeling Assumptions

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign’s root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> are the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include A, Name Server (NS), and DNSKEY.)

Root name server:

  • Query scenario: DNSKEY record set for root zone
    Typical response: DNSKEY record set including root KSK RSA-2048 public key and root ZSK RSA-2048 public key; root KSK RSA-2048 signature on the DNSKEY record set
  • Query scenario: A or NS record set for <TLD> — when <TLD> exists
    Typical response: NS referral to <TLD> name server; DS record set for <TLD> zone; root ZSK RSA-2048 signature on the DS record set
  • Query scenario: A or NS record set for <TLD> — when <TLD> doesn’t exist
    Typical response: up to two NSEC records for non-existence of <TLD>; root ZSK RSA-2048 signatures on the NSEC records

.com / .net name servers:

  • Query scenario: DNSKEY record set for <TLD> zone
    Typical response: DNSKEY record set including <TLD> KSK RSA-2048 public key and <TLD> ZSK RSA-1280 public key; <TLD> KSK RSA-2048 signature on the DNSKEY record set
  • Query scenario: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> exists
    Typical response: NS referral to <SLD>.<TLD> name server; DS record set for <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC); <TLD> ZSK RSA-1280 signature on the DS record set (if present)
  • Query scenario: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> doesn’t exist
    Typical response: up to three NSEC3 records for non-existence of <SLD>.<TLD>; <TLD> ZSK RSA-1280 signatures on the NSEC3 records
Table 1. Combination of queries that may be sent to Verisign’s root and TLD servers by a typical resolver that implements both DNSSEC validation and qname minimization, and content of associated responses.


For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
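As a starting point for such an estimate, the sketch below swaps illustrative RSA signature sizes for approximate post-quantum ones; the overhead constant, record counts and algorithm sizes are assumptions for illustration, not measurements.

    # Back-of-the-envelope response-size model, swapping RSA signature
    # sizes for approximate post-quantum ones. All sizes in bytes; the
    # overhead constant and record counts are illustrative assumptions.
    SIG_BYTES = {
        "RSA-1280": 160, "RSA-2048": 256,
        "Falcon-512": 666, "Dilithium2": 2420,  # approximate round-3 figures
    }

    def nxdomain_estimate(alg: str, nsec3_records: int = 3, overhead: int = 200) -> int:
        # e.g., a .com/.net NXDOMAIN response carries up to three NSEC3
        # records, each with its own signature (see Table 1)
        return overhead + nsec3_records * SIG_BYTES[alg]

    print(nxdomain_estimate("RSA-1280"))    # ~680 bytes: fits a 1232-byte UDP payload
    print(nxdomain_estimate("Dilithium2"))  # ~7460 bytes: would fragment or need TCP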

Conclusion

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

Post-quantum algorithms are among the newest developments in cryptography. When standardized, they could eventually be added into the Domain Name System Security Extensions (DNSSEC) to help keep the DNS secure for the long term.

The post Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon appeared first on Verisign Blog.

Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries

By Burt Kaliski

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query produces a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.

NSEC5

NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there’s also a corresponding public key.) No one can compute the outputs without the private key, hence the “random.”
  2. A VRF has two outputs: a token and a proof. (I’ve adopted the term “token” for alignment with the research that I describe next. NSEC5 itself simply uses “hash.”) Anyone can check that the token is correct given the proof and the public key, hence the “verifiable.”

So, it’s not only hard for an adversary to reverse the VRF – which is also a property the hash function has – but it’s also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.
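As an illustration of these two properties, here is a simplified VRF in the spirit of NSEC5’s RSA-based construction, though not the exact specified scheme; it assumes the third-party Python “cryptography” package.

    # Simplified VRF sketch: the proof is a deterministic RSA signature
    # over the name, and the token is the hash of the proof. Anyone with
    # the public key can verify; only the private key can compute.
    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    def vrf_prove(name: bytes):
        # PKCS#1 v1.5 signing is deterministic, so each name maps to one token
        proof = private_key.sign(name, padding.PKCS1v15(), hashes.SHA256())
        token = hashlib.sha256(proof).hexdigest()
        return token, proof

    def vrf_verify(name: bytes, token: str, proof: bytes) -> bool:
        try:
            public_key.verify(proof, name, padding.PKCS1v15(), hashes.SHA256())
        except Exception:
            return False  # proof is not a valid signature over this name
        return hashlib.sha256(proof).hexdigest() == token

    token, proof = vrf_prove(b"example.name")
    assert vrf_verify(b"example.name", token, proof)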

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of a NSEC5 proof of non-existence based on a verifiable random function.

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.

The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline — not to generate signatures on new range statements, or on new positive responses.

NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client’s information from disclosure.

In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)

Now, consider a domain name that doesn’t exist, and suppose that its token is 2718….

In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.

The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.

How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.
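Putting the pieces together, here is a toy model of the resolver’s side of the exchange, using the illustrative token values from the figures; the real token-to-record mapping would be established through the oblivious VRF protocol rather than a provisioned table.

    # Toy model of tokenized queries at the resolver. The token values
    # come from the figures above; the IP address is a documentation
    # placeholder, and the table stands in for the real protocol.
    token_to_record = {"3141": "192.0.2.1"}  # token -> DNS record (IP)

    def resolve_token(token: str):
        # the resolver sees only tokens; on a miss it learns nothing
        # about the underlying domain name
        return token_to_record.get(token)

    print(resolve_token("3141"))  # the record for the tokenized name
    print(resolve_token("2718"))  # None: "doesn't exist," name undisclosed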

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.

It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists, without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology must be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.

Conclusion

The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries appeared first on Verisign Blog.

Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

By Burt Kaliski

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered the case of “negative” responses – when the domain name doesn’t exist. In this case, as I’ll explain, responding with a signed “does not exist” is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.

Initial Attempts

Consider a domain name like example.arpa that doesn’t exist.

If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).

So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.

However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.

This could work if the .arpa DNS server knows the ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.

NSEC

The first approach, called NSEC, involved no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.

The NSEC resource record is documented in one of the original DNSSEC specifications, RFC 4034, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of a NSEC proof of non-existence (as of the writing of this post).

NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement that “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…” the response implies that “example.name” doesn’t exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of a NSEC3 proof of non-existence based on a hash function (as of the writing of this post).

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
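For illustration, the sketch below shows what such an offline attack involves: hashing candidate names under the zone’s NSEC3 parameters (RFC 5155) and comparing against the revealed endpoints. The salt and iteration count here are placeholders; real values come from the zone’s NSEC3PARAM record.

    # NSEC3 hashing per RFC 5155: iterated, salted SHA-1 over the DNS
    # wire format, encoded in base32 with the "extended hex" alphabet.
    import base64
    import hashlib

    def nsec3_hash(name: str, salt_hex: str, iterations: int) -> str:
        # DNS wire format: length-prefixed lowercase labels plus root byte
        wire = b"".join(
            bytes([len(label)]) + label.encode()
            for label in name.lower().rstrip(".").split(".")
        ) + b"\x00"
        salt = bytes.fromhex(salt_hex)
        digest = hashlib.sha1(wire + salt).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest + salt).digest()
        return base64.b32encode(digest).decode().translate(str.maketrans(
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
            "0123456789ABCDEFGHIJKLMNOPQRSTUV"))

    # an attacker compares hashes of guessed names against the endpoints
    for candidate in ["example.name", "foo.name", "bar.name"]:
        print(candidate, nsec3_hash(candidate, "ab", 5))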

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 appeared first on Verisign Blog.

The Domain Name System: A Cryptographer’s Perspective

By Burt Kaliski

This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the earliest protocols in the internet, the DNS emerged in an era in which today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer’s perspective, as I described in my talk at last year’s International Cryptographic Module Conference (ICMC20), there’s so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with the usual occurrence that happens millions of times a second around the world: a client asks a DNS resolver a query like “What is example.com’s Internet Protocol (IP) address?” The resolver in this case answers: “example.com’s IP address is 93.184.216.34”. (This is the correct answer.)

If the resolver doesn’t already know the answer to the request, then the process to find the answer goes something like this:

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS’s 13 root servers, such as the A and J root servers operated by Verisign: “Where is the name server for the .com top-level domain (TLD)?”
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, “Where is the name server for the example.com second-level domain (SLD)?”
  • The TLD server then refers the resolver to the example.com server.
  • Finally, the resolver asks the SLD server, “What is example.com’s IP address?” and receives an answer: “93.184.216.34”.
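For readers who want to see this referral walk in practice, here is a rough sketch using the third-party Python “dnspython” package; the server address is a.root-servers.net, and the rest of the fragment is illustrative.

    # Sketch of one step of iterative resolution with dnspython.
    import dns.message
    import dns.query
    import dns.rdatatype

    def ask(server_ip: str, qname: str) -> dns.message.Message:
        query = dns.message.make_query(qname, dns.rdatatype.A)
        return dns.query.udp(query, server_ip, timeout=3)

    reply = ask("198.41.0.4", "example.com.")  # ask a root server
    # reply.authority carries the NS referral toward the .com TLD servers;
    # repeating the query there yields a referral to example.com's name
    # server, which finally answers with the A record.
    for rrset in reply.authority:
        print(rrset)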

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same “delegation” model from root to TLD to SLD as I’ve described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, examining the signatures at each level of the DNS hierarchy (or technically, at each “zone” in the delegation process). These digital signatures are generated using public key cryptography, a well-understood process based on key pairs, one public and one private.

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without the other key being changed.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver returns the “Digital Signature Validated” indicator to the client that initiated the query.
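As a hedged illustration of that validation step, the following sketch uses the third-party “dnspython” package to fetch a DNSKEY record set and its signature and check the signature; a full validator would also walk the chain up to the trust anchor, as described next.

    # Validate the signature on a DNSKEY record set with dnspython.
    import dns.dnssec
    import dns.flags
    import dns.name
    import dns.rdataclass
    import dns.rdatatype
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 1232)  # ask for DNSSEC records

    name = dns.name.from_text("example.com.")
    response = resolver.resolve(name, dns.rdatatype.DNSKEY).response
    dnskey_set = response.find_rrset(response.answer, name,
                                     dns.rdataclass.IN, dns.rdatatype.DNSKEY)
    rrsig_set = response.find_rrset(response.answer, name,
                                    dns.rdataclass.IN, dns.rdatatype.RRSIG,
                                    dns.rdatatype.DNSKEY)
    # raises an exception if no signature is valid under the given keys
    dns.dnssec.validate(dnskey_set, rrsig_set, {name: dnskey_set})
    print("Digital signature validated")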

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a “trust chain” from a “trust anchor” to the DNS record of interest, as shown in the figure above. The chain includes “chain links” at each level of the DNS hierarchy. Here’s how the “chain links” work:

The root KSK public key is the “trust anchor.” This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records – the ultimate response to the query – is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
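One of these chain links can be illustrated in a few lines: computing the hash of a child zone’s KSK locally and checking it against the DS record set published in the parent zone. Again this assumes the “dnspython” package and is a sketch, not a validator.

    # Check one "chain link": does the hash of the child's KSK match a DS
    # record served from the parent zone?
    import dns.dnssec
    import dns.name
    import dns.rdatatype
    import dns.resolver

    name = dns.name.from_text("example.com.")
    dnskeys = dns.resolver.resolve(name, dns.rdatatype.DNSKEY)
    ds_records = list(dns.resolver.resolve(name, dns.rdatatype.DS))

    for key in dnskeys:
        if key.flags & 0x0001:  # SEP bit set: treat this key as a KSK
            computed = dns.dnssec.make_ds(name, key, "SHA256")
            print("chain link holds:", computed in ds_records)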

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I’ve omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for example.com discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned “trust anchor,” was one of the most challenging and successful projects undertaken by the DNS community over the past couple of years. This initiative – the so-called “root KSK rollover” – was challenging because there was no easy way to determine whether resolvers actually had been updated to use the latest root KSK — remember that cryptography and security were added on rather than built into the DNS. Many resolvers needed to be updated, each independently managed.

The research paper “Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover” details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I’ve focused here on how a resolver validates correctness when the response to a query has a “positive” answer — i.e., when the DNS record exists. Checking correctness when the answer doesn’t exist gets even more interesting from a cryptographer’s perspective. I’ll cover this topic in my next post.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post The Domain Name System: A Cryptographer’s Perspective appeared first on Verisign Blog.

A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere

By Burt Kaliski

Over the past several years, questions about how to protect information exchanged in the Domain Name System (DNS) have come to the forefront.

One of these questions was posed first to DNS resolver operators in the middle of the last decade, and is now being brought to authoritative name server operators: “to encrypt or not to encrypt?” It’s a question that Verisign has been considering for some time as part of our commitment to security, stability and resiliency of our DNS operations and the surrounding DNS ecosystem.

Because authoritative name servers operate at different levels of the DNS hierarchy, the answer is not a simple “yes” or “no.” As I will discuss in the sections that follow, different information protection techniques fit the different levels, based on a balance between cryptographic and operational considerations.

How to Protect, Not Whether to Encrypt

Rather than asking whether to deploy encryption for a particular exchange, we believe it is more important to ask how best to address the information protection objectives for that exchange — whether by encryption, alternative techniques or some combination thereof.

Information protection must balance three objectives:

  • Confidentiality: protecting information from disclosure;
  • Integrity: protecting information from modification; and
  • Availability: protecting information from disruption.

The importance of balancing these objectives is well-illustrated in the case of encryption. Encryption can improve confidentiality and integrity by making it harder for an adversary to view or change data, but encryption can also impair availability, by making it easier for an adversary to cause other participants to expend unnecessary resources. This is especially the case in DNS encryption protocols where the resource burden for setting up an encrypted session rests primarily on the server, not the client.

The Internet-Draft “Authoritative DNS-Over-TLS Operational Considerations,” co-authored by Verisign, expands on this point:

Initial deployments of [Authoritative DNS over TLS] may offer an immediate expansion of the attack surface (additional port, transport protocol, and computationally expensive crypto operations for an attacker to exploit) while, in some cases, providing limited protection to end users.

Complexity is also a concern because DNS encryption requires changes on both sides of an exchange. The long-installed base of DNS implementations, as Bert Hubert has observed in parable form, can only sustain so many more changes before one becomes the “straw that breaks the back” of the DNS camel.

The flowchart in Figure 1 describes a two-stage process for factoring in operational risk when determining how to mitigate the risk of disclosure of sensitive information in a DNS exchange. The process involves two questions.

Figure 1: Two-stage process for factoring in operational risk when determining how to address information protection objectives for a DNS exchange.

This process can readily be applied to develop guidance for each exchange in the DNS resolution ecosystem.
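Written out as code, the two questions might look like the following sketch, which simply paraphrases the flowchart’s logic.

    # The two-stage decision process of Figure 1 as a function; the
    # wording is paraphrased from this post, and the function is purely
    # illustrative.
    def protection_guidance(alternative_mitigates_disclosure: bool,
                            encryption_benefit_justifies_risk: bool) -> str:
        # Stage 1: prefer techniques with lower operational risk
        if alternative_mitigates_disclosure:
            guidance = "apply the lower-risk alternative (e.g., minimization)"
            if not encryption_benefit_justifies_risk:
                return guidance + "; don't require encryption"
            return guidance + "; add encryption where justified"
        # Stage 2: no adequate alternative, so weigh encryption directly
        if encryption_benefit_justifies_risk:
            return "deploy encryption despite its operational risk"
        return "accept the residual disclosure risk"

    # Root/TLD exchange per this post: minimization mitigates the risk,
    # and encryption isn't justified
    print(protection_guidance(True, False))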

Applying Guidance

Three main exchanges in the DNS resolution ecosystem are considered here, as shown in Figure 2: resolver-to-authoritative at the root and TLD level; resolver-to-authoritative below these levels; and client-to-resolver.

Figure 2 Three DNS resolution exchanges: (1) resolver-to-authoritative at the root and TLD levels; (2) resolver-to-authoritative at the SLD level (and below); (3) client-to-resolver. Different information protection guidance applies for each exchange. The exchanges are shown with qname minimization implemented at the root and TLD levels.

1. Resolver-to-Root and Resolver-to-TLD

The resolver-to-authoritative exchange at the root level enables DNS resolution for all underlying domain names; the exchange at the TLD level does the same for all names under a TLD. These exchanges provide global navigation for all names, benefiting all resolvers and therefore all clients, and making the availability objective paramount.

As a resolver generally services many clients, information exchanged at these levels represents aggregate interests in domain names, not the direct interests of specific clients. The sensitivity of this aggregated information is therefore relatively low to start with, as it does not contain client-specific details beyond the queried names and record types. However, the full domain name of interest to a client has conventionally been sent to servers at the root and TLD levels, even though this is more information than they need to know to refer the resolver to authoritative name servers at lower levels of the DNS hierarchy. The additional detail in the full domain name may be considered sensitive in some cases. Therefore, this data exchange merits consideration for protection.

The decision flow (as generally described in Figure 1) for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Yes. Techniques such as qname minimization, NXDOMAIN cut processing and aggressive DNSSEC caching — which we collectively refer to as minimization techniques — can reduce the information exchanged to just the resolver’s aggregate interests in TLDs and SLDs, removing the further detail of the full domain names and meeting the principle of minimum disclosure.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? No. The information exchanged is just the resolver’s aggregate interests in TLDs and SLDs after minimization techniques are applied, and any further protection offered by encryption wouldn’t justify the operational risk of a protocol change affecting both sides of the exchange. While some resolvers and name servers may be open to the operational risk, it shouldn’t be required of others.

Interestingly, while minimization itself is sufficient to address the disclosure risk at the root and TLD levels, encryption alone isn’t. Encryption protects against disclosure to outside observers but not against disclosure to (or by) the name server itself. Even though the sensitivity of the information is relatively low for reasons noted above, without minimization, it’s still more than the name server needs to know.
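To illustrate the first of those techniques, here is a minimal sketch of the label-trimming at the heart of qname minimization (standardized in RFC 9156). The sketch is mine and is not production resolver code; the idea is that at each delegation point, the resolver exposes only one more label than the zone the queried server is authoritative for.

```python
# Minimal sketch of qname minimization: send each server only one label
# beyond its zone cut, not the full domain name. Illustrative only.
def minimized_qname(full_qname: str, zone: str) -> str:
    labels = full_qname.rstrip(".").split(".")
    zone_labels = [label for label in zone.rstrip(".").split(".") if label]
    keep = len(zone_labels) + 1      # one label beyond the zone cut
    keep = min(keep, len(labels))    # never more than the full name
    return ".".join(labels[-keep:]) + "."

name = "private.project.example.com."
print(minimized_qname(name, "."))             # root server sees: com.
print(minimized_qname(name, "com."))          # TLD server sees:  example.com.
print(minimized_qname(name, "example.com."))  # SLD server sees:  project.example.com.
```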

Summary: Resolvers should apply minimization techniques at the root and TLD levels. Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges.

Note that at this time, Verisign has no plans to implement DNS encryption at the root or TLD servers that the company operates. Given the availability of qname minimization, which we are encouraging resolver operators to implement, and other minimization techniques, we do not currently see DNS encryption at these levels as offering an appropriate risk/benefit tradeoff.

2. Resolver-to-SLD and Below

The resolver-to-authoritative exchanges at the SLD level and below enable DNS resolution within specific namespaces. These exchanges provide local optimization, benefiting all resolvers and all clients interacting with the included namespaces.

The information exchanged at these levels also represents the resolver’s aggregate interests, but in some cases, it may also include client-related information such as the client’s subnet. The full domain names and the client-related information, if any, are the most sensitive parts.
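As an illustration of such client-related information, the sketch below shows how a resolver might attach a client's subnet to an upstream query using the dnspython library's EDNS Client Subnet support (RFC 7871). The subnet shown is a placeholder, and the example is mine rather than anything from the post.

```python
# Hypothetical example: a resolver attaching an EDNS Client Subnet (ECS)
# option to an upstream query. The subnet is a placeholder.
import dns.edns
import dns.message

# Reveal only the client's /24 network, not its full address.
ecs = dns.edns.ECSOption("192.0.2.0", srclen=24)
query = dns.message.make_query("www.example.com.", "A",
                               use_edns=0, options=[ecs])
# When an exchange carries client-specific data like this, encrypting
# the resolver-to-authoritative exchange may be justified.
```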

The decision flow for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Partially. Minimization techniques may apply to some exchanges at the SLD level and below if the full domain names have more than three labels. However, they don’t help as much with the lowest-level authoritative name server involved, because that name server needs the full domain name.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? In some cases. If the exchange includes client-specific information, then the risk may be justified. If full domain names include labels beyond the TLD and SLD that are considered more sensitive, this might be another justification. Otherwise, the benefit of encryption generally wouldn't justify the operational risk, and for this reason, as in the previous case, encryption shouldn't be required except in the specific cases noted.

Summary: Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information. Otherwise, they should not be required to implement DNS encryption.

3. Client-to-Resolver

The client-to-resolver exchange enables navigation to all domain names for all clients of the resolver.

The information exchanged here represents the interests of each specific client. The sensitivity of this information is therefore relatively high, making confidentiality vital.

The decision process in this case traverses the first and second steps:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Not really. Minimization techniques don't apply here because the resolver needs the full domain name and the client-specific information included in the request, namely the client's IP address. Other alternatives, such as oblivious DNS, have been proposed, but are more complex and arguably increase operational risk. Note that some clients protect their DNS exchanges with security and confidentiality solutions at other layers of the protocol stack, such as a virtual private network (VPN).
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? Yes.

Summary: Clients and resolvers should implement DNS encryption on this exchange, unless the exchange is otherwise adequately protected, for instance as part of the network connection provided by an enterprise or internet service provider.
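For illustration, a minimal encrypted client-to-resolver exchange might look like the sketch below, which sends a query over DNS over HTTPS (RFC 8484) via the dnspython library (assuming its HTTPS dependencies are installed). The resolver URL is a placeholder, and the example is mine.

```python
# Minimal sketch of an encrypted client-to-resolver query using DNS over
# HTTPS. The resolver URL is a placeholder.
import dns.message
import dns.query

query = dns.message.make_query("www.example.com.", "A")
# The resolver still sees the full qname and the client's IP address,
# but the encrypted transport hides them from on-path observers.
response = dns.query.https(query, "https://doh.example/dns-query")
print(response.answer)
```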

Summary of Guidance

The following table summarizes the guidance for how to protect the various exchanges in the DNS resolution ecosystem.

In short, the guidance is “minimize at root and TLD, encrypt when needed elsewhere.”

DNS Exchange: Resolver-to-Root and TLD
Purpose: Global navigational availability
Guidance:
  • Resolvers should apply minimization techniques
  • Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges

DNS Exchange: Resolver-to-SLD and Below
Purpose: Local performance optimization
Guidance:
  • Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information
  • Otherwise, they should not be required to implement DNS encryption

DNS Exchange: Client-to-Resolver
Purpose: General DNS resolution
Guidance:
  • Clients and resolvers should implement DNS encryption on this exchange, unless adequate protection is otherwise provided, e.g., as part of a network connection

Conclusion

If the guidance suggested here were followed, we could expect to see more deployment of minimization techniques on resolver-to-authoritative exchanges at the root and TLD levels; more deployment of DNS encryption, when needed, at the SLD levels and lower; and more deployment of DNS encryption on client-to-resolver exchanges.

In all these deployments, the DNS will serve the same purpose as it already does in today’s unencrypted exchanges: enabling general-purpose navigation to information and resources on the internet.

DNS encryption also brings two new capabilities that make it possible for the DNS to serve two new purposes. Both are based on concepts developed in Verisign’s research program.

  1. The client can authenticate itself to a resolver or name server that supports DNS encryption, and the resolver or name server can limit its DNS responses based on client identity. This capability enables a new set of security controls for restricting access to network resources and information. We call this approach authenticated resolution.
  2. The client can confidentially send client-related information to the resolver or name server, and the resolver or name server can optimize its DNS responses based on client characteristics. This capability enables a new set of performance features for speeding access to those same resources and information. We call this approach adaptive resolution.

You can read more about these two applications in my recent blog post on this topic, Authenticated Resolution and Adaptive Resolution: Security and Navigational Enhancements to the Domain Name System.


