
Chromium’s Impact on Root DNS Traffic

By Duane Wessels

This article originally appeared Aug. 21, 2020 on the APNIC blog.

Introduction

Chromium is an open-source software project that forms the foundation for Google’s Chrome web browser, as well as a number of other browser products, including Microsoft Edge, Opera, Amazon Silk, and Brave. Since Chrome’s introduction in 2008, Chromium-based browsers have steadily risen in popularity and today comprise approximately 70% of the market share.1

Chromium has, since its early days, included a feature known as the omnibox, where users may enter a web site name, a URL, or search terms. But the omnibox presents an interface challenge: a user might enter a word like “marketing” that could be either an (intranet) web site name or a search term. Which should the browser choose to display? Chromium treats it as a search term, but also displays an infobar that says something like “did you mean http://marketing/?” if a background DNS lookup for the name results in an IP address.

At this point, a new issue arises. Some networks (e.g., ISPs) utilize products or services designed to intercept and capture traffic from mistyped domain names. This is sometimes known as “NXDomain hijacking.” Users on such networks might be shown the “did you mean” infobar on every single-term search. To work around this, Chromium needs to know if it can trust the network to provide non-intercepted DNS responses.

Chromium Probe Design

Inside the Chromium source code there is a file named intranet_redirect_detector.cc. The functions in this file attempt to load three URLs whose hostnames are randomly generated single-label domain names, as shown in Figure 1 below.

Figure 1: Chromium source code that implements random URL fetches.

This code results in three URL fetches, such as http://rociwefoie/, http://uawfkfrefre/, and http://awoimveroi/, and these in turn result in three DNS lookups for the random host names. As can be deduced from the source code, these random names are 7-15 characters in length (line 151) and consist of only the letters a-z (line 153). In versions of the code prior to February 2014, the random names were always 10 characters in length.
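
As a rough illustration, the probe-name logic described above can be approximated in a few lines of Python. This is a hedged sketch, not Chromium’s actual C++ implementation; the function name and structure are ours.

```python
import random
import string

def random_probe_hostname(rng: random.Random) -> str:
    """Approximate a Chromium probe label: 7-15 lowercase a-z characters
    (always 10 characters in versions prior to February 2014)."""
    length = rng.randint(7, 15)  # randint is inclusive on both ends
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

rng = random.Random()
# Each detection run fetches three such URLs, e.g. http://rociwefoie/
probe_urls = ["http://%s/" % random_probe_hostname(rng) for _ in range(3)]
print(probe_urls)
```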

The intranet redirect detector functions are executed each time the browser starts up, each time the system/device’s IP address changes, and each time the system/device’s DNS configuration changes. If any two of these fetches resolve to the same address, that address is stored as the browser’s redirect origin.

Identifying Chromium Queries

Even a cursory glance at root name server traffic reveals queries for names that look like Chromium’s probes. For example, here are 20 sequential queries received at an a.root-servers.net instance:

Figure: 20 sequential queries received at an a.root-servers.net instance.

In this brief snippet of data, we can see six queries (yellow highlight) for random, single-label names, and another four (green highlight) with random first labels followed by an apparent domain search suffix. These match the pattern from the Chromium source code, being 7-15 characters in length and consisting of only the letters a-z.

To characterize the amount of Chromium probe traffic in larger data sets (i.e., covering a 24-hour period), we tabulate queries based on the following attributes (a simplified classifier is sketched after the list):

  • Response code (NXDomain or NoError)
  • Popularity of the leftmost label
  • Length of the leftmost label
  • Characters used in the leftmost label
  • Number of labels in the full query name
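
Below is a simplified Python sketch of how a single query might be matched against these attributes. The popularity threshold and label pattern come from the analysis described here; the function itself and its inputs are hypothetical.

```python
import re

# Hypothetical classifier mirroring the attributes above; the threshold
# and pattern come from the article, the rest is illustrative.
CHROMIUM_FIRST_LABEL = re.compile(r"^[a-z]{7,15}$", re.IGNORECASE)

def looks_like_chromium_probe(qname: str, count_24h: int) -> bool:
    """Seen fewer than four times in 24 hours, with a leftmost label of
    7-15 characters drawn only from a-z."""
    if count_24h >= 4:  # popular names are excluded
        return False
    first_label = qname.rstrip(".").split(".")[0]
    return bool(CHROMIUM_FIRST_LABEL.match(first_label))

print(looks_like_chromium_probe("uawfkfrefre.", 1))          # True: bare probe
print(looks_like_chromium_probe("uawfkfrefre.example.", 2))  # True: search suffix appended
print(looks_like_chromium_probe("www.example.", 1))          # False: label too short
```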
Figure 2: Sankey graph showing classification of queries matching Chromium probe patterns.

Figure 2 shows a classification of data from a.root-servers.net on May 13, 2020. Here we can see that 51% of all queried names were observed fewer than four times in the 24-hour period. Of those, nearly all were for non-existent TLDs, although a very small number come from existing TLDs (labeled “YXD” on the left). This small sliver represents either false positives or Chromium probe queries that have had a domain search suffix appended by stub resolvers or end-user applications.

Of the 51% observed fewer than four times, all but 2.86% have a first label between 7 and 15 characters in length (inclusive). Furthermore, most of those match the pattern consisting only of the letters a-z (case insensitive), leaving 45.80% of total traffic on this day that appears to come from Chromium probes.

From there we break down the queries by number of labels and length of the first label. Note that label lengths, on the far right of the graph, have a very even distribution, except for 7 and 10 characters. Labels with 10 characters are more popular because older versions of Chromium generated only 10-character names. We believe that 7 is less popular due to the increased probability of collisions in only 7 characters, which can increase the query count to above our threshold of three.
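
A back-of-the-envelope Poisson model illustrates why. The per-length query volume below is an assumed round number chosen only for illustration; what matters is the relative difference across label lengths.

```python
import math

# Assumed per-length daily query volume at one root identity; a round
# number for illustration only, not a measured value.
QUERIES_PER_LENGTH = 2e9

for length in (7, 10, 15):
    namespace = 26 ** length
    lam = QUERIES_PER_LENGTH / namespace  # expected queries per name (Poisson)
    # P[a given name is queried 4 or more times]
    p_ge4 = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(4))
    print(f"length {length:2d}: 26^{length} ~ {namespace:.1e} names, "
          f"P(seen >= 4 times) ~ {p_ge4:.1e}")
```

With only 26^7 (about 8 billion) possible 7-character names, repeats that cross the count threshold are vastly more likely than at length 10 or 15.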

Longitudinal Analysis

Next, we turn our attention to an analysis of how the total root traffic percentage of Chromium-like queries has changed over time. We use two data sets in this analysis: data from DNS-OARC’s “Day In The Life” (DITL) collections, and Verisign’s data for a.root-servers.net and j.root-servers.net.

Figure 3: Long-term trend analysis of Chromium-like queries to root name servers.

Figure 3 shows the results of the long-term analysis. We were able to analyze the annual DITL data from 2006-2014, and from 2017-2018, labeled “DITL Full” in the figure. The 2015-2016 data was unavailable on the DNS-OARC systems. The 2019 dataset could not be analyzed in full due to its size, so we settled for a sampled analysis instead, labeled “DITL Sampled” in Figure 3. The 2020 data was not ready for analysis by the time our research was done.

In every DITL dataset, we analyzed each root server identity (“letter”) separately. This produces a range of values for each year. The solid line shows the average of all the identities, while the shaded area shows the range of values.

To fill in some of the DITL gaps we used Verisign’s own data for a.root-servers.net and j.root-servers.net. Here we selected a 24-hour period for each month. Again, the solid line shows the average and the shaded area represents the range.

The figure also includes a line labeled “Chrome market share” (note: Chrome alone, not all Chromium-based browsers) and a marker indicating when the feature was first added to the source code. There are some false-positive Chromium-like queries in the DITL data prior to the feature’s introduction, comprising about 1% of total traffic. But in the 10+ years since the feature was added, half of DNS root server traffic is now very likely due to Chromium’s probes. That equates to about 60 billion queries to the root server system on a typical day.

Concluding Thoughts

The root server system is, out of necessity, designed to handle very large amounts of traffic. As we have shown here, under normal operating conditions, half of the traffic originates with a single library function, on a single browser platform, whose sole purpose is to detect DNS interception. Such interception is certainly the exception rather than the norm. In almost any other scenario, this traffic would be indistinguishable from a distributed denial of service (DDoS) attack.

Could Chromium achieve its goal while only sending one or two queries instead of three? Are other approaches feasible? For example, Firefox’s captive portal test uses delegated namespace probe queries, directing them away from the root servers towards the browser’s own infrastructure. While technical solutions such as Aggressive NSEC Caching (RFC 8198), Qname Minimization (RFC 7816), and NXDomain Cut (RFC 8020) could also significantly reduce probe queries to the root server system, these solutions require action by recursive resolver operators, who have limited incentive to deploy and support these technologies.

This piece was co-authored by Matt Thomas, Distinguished Engineer in Verisign’s CSO Applied Research division.


1. https://www.w3counter.com/trends


Will Altanovo’s Maneuvering Continue to Delay .web?

By Kirk Salzmann

The launch of the .web top-level domain is once again at risk of being delayed by baseless procedural maneuvering.

On May 2, the Internet Corporation for Assigned Names and Numbers (ICANN) Board of Directors posted a decision on the .web matter from its April 30 meeting, which found “that NDC (Nu Dotco LLC) did not violate the Guidebook or the Auction Rules” and directed ICANN “to continue processing NDC’s .web application,” clearing the way for the delegation of .web. ICANN later posted a preliminary report from this meeting showing that the Board vote on the .web decision was without objection.

Less than 24 hours later, however, Altanovo (formerly Afilias) – a losing bidder whose repeatedly rejected claims already have delayed the delegation of .web for more than six years – dusted off its playbook from 2018 by filing yet another ICANN Cooperative Engagement Process (CEP), beginning the cycle of another independent review of the Board’s decision, which last time cost millions of dollars and resulted in years of delay.

Under ICANN rules, a CEP is intended to be a non-binding process designed to efficiently resolve or narrow disputes before the initiation of an Independent Review Process (IRP). ICANN places further actions on hold while a CEP is pending. It’s an important and worthwhile aspect of the multistakeholder process…when used in good faith.

But that does not appear to be what is happening here. Altanovo and its backers initiated this repeat CEP despite the fact that it lost a fair, ICANN-sponsored auction; lost, in every important respect, the IRP; lost its application for reconsideration of the IRP (which it was sanctioned for filing, and which was determined to be frivolous by the IRP panel); and has now lost before the ICANN Board.

The Board’s decision expressly found that these disputes “have delayed the delegation of .web for more than six years” and already cost each of the parties, including ICANN, “millions of dollars in legal fees.”

Further delay appears to be the only goal of this second CEP – and any follow-on IRP – because no one could conclude in good faith that an IRP panel would find that the thorough process and decision on .web established in the Board’s resolutions and preliminary report violated ICANN’s bylaws. At the end of the day, all that will be accomplished by this second CEP and a second IRP is continued delay, and delay for delay’s sake amounts to an abuse of process that threatens to undermine the multistakeholder processes and the rights of NDC and Verisign.

ICANN will, no doubt, follow its processes for resolving the CEP and any further procedural maneuvers attempted by Altanovo. But, given Altanovo’s track record of losses, delays, and frivolous maneuvering since the 2016 .web auction, a point has been reached when equity demands that this abuse of process not be allowed to thwart NDC’s right, as determined by the Board, to move ahead on its .web application.


ICANN’s Accountability and Transparency – a Retrospective on the IANA Transition

By Keith Drazek

As we passed the five-year mark since the Internet Assigned Numbers Authority (IANA) transition, my co-authors and I paused to look back on this pivotal moment: to take stock of what we’ve learned, and to re-examine some of the key events leading up to the transition and how careful planning ensured a successful transfer of IANA responsibilities from the United States Government to the Internet Corporation for Assigned Names and Numbers. I’ve excerpted the main themes from our work, which can be found in full on the Internet Governance Project blog.

In March 2014, the National Telecommunications and Information Administration, a division of the U.S. Department of Commerce, announced its intent to “transition key Internet domain name functions to the global multi-stakeholder community” and asked ICANN to “convene global stakeholders to develop a proposal to transition the current role played by NTIA in the coordination of the Internet’s domain name system.” This transition, as announced by NTIA, was a natural progression of ICANN’s multi-stakeholder evolution, and an outcome that was envisioned by its founders.

While there was general support for a transition to the global multi-stakeholder community, many in the ICANN community raised concerns about ICANN’s accountability, transparency and organizational readiness to “stand alone” without NTIA’s legacy supervision. In response, the ICANN community began a phase of intense engagement to ensure a successful transition with all necessary accountability and transparency structures and mechanisms in place.

As a result of this meticulous planning, we believe the IANA functions have been well-served by the transition and the new accountability structures designed and developed by the ICANN community to ensure the security, stability and resiliency of the internet’s unique identifiers.

But what does the future hold? While ICANN’s multi-stakeholder processes and accountability structures are functioning, even in the face of a global pandemic that interrupted our ability to gather and engage in person, they will require ongoing care to ensure they deliver on the original vision of private-sector-led management of the DNS.


IRP Panel Sanctions Afilias, Clears the Way for ICANN to Decide .web Disputes

By Kirk Salzmann

The .web Independent Review Process (IRP) Panel issued a Final Decision six months ago, in May 2021. Immediately thereafter, the claimant, Afilias Domains No. 3 Limited (now a shell entity known as AltaNovo Domains Limited), filed an application seeking reconsideration of the Final Decision under Rule 33 of the arbitration rules. Rule 33 allows for the clarification of an ambiguous ruling and allows the Panel the opportunity to supplement its decision if it inadvertently failed to consider a claim or defense, but specifically does not permit wholesale reconsideration of a final decision. The problem for Afilias’ application, as we said at the time, was that it sought exactly that.

The Panel ruled on Afilias’ application on Dec. 21, 2021. In this latest ruling, the Panel not only rejected Afilias’ application in its entirety, but went further and sanctioned Afilias for having filed it in the first place. Quoting from the ruling:

In the opinion of the Panel, under the guise of seeking an additional decision, the Application is seeking reconsideration of core elements of the Final Decision. Likewise, under the guise of seeking interpretation, the Application is requesting additional declarations and advisory opinions on a number of questions, some of which had not been discussed in the proceedings leading to the Final Decision.

In such circumstances, the Panel cannot escape the conclusion that the Application is “frivolous” in the sense of it “having no sound basis (as in fact or law).” This finding suffices to entitle [ICANN] to the cost shifting decision it is seeking…the Panel hereby unanimously…Grants [ICANN’s] request that the Panel shift liability for the legal fees incurred by [ICANN] in connection with the Application, fixes at US $236,884.39 the amount of the legal fees to be reimbursed to [ICANN] by [Afilias]…and orders [Afilias] to pay this amount to [ICANN] within thirty (30) days….

In light of the Panel’s finding that Afilias’ Rule 33 application was so improper and frivolous as to be sanctionable, a serious question arises about the motives in filing it. Reading the history of the .web proceedings, one possible motivation becomes clearer. The community will recall that, five years ago, Donuts (through its wholly owned subsidiary Ruby Glen) failed in its bid to enjoin the .web auction when a federal court rejected false allegations that Nu Dot Co (NDC) had failed to disclose an ownership change. After the auction was conducted, Afilias picked up the litigation baton from Donuts. Afilias’ IRP complaint demanded that the arbitration Panel nullify the auction results and award .web to Afilias itself, thereby bypassing ICANN completely. In the May 2021 Final Decision, the IRP Panel gave an unsurprising but firm “no” to Afilias’ request to supplant ICANN’s role, and instead directed ICANN’s Board to review the complaints about the conduct of the .web contention set members and then make a determination on delegation.

A result of this five-year battle has been to prevent ICANN from passing judgment on the .web situation. These proceedings have unsuccessfully sought to have courts and arbitrators stand in the shoes of ICANN, rather than letting ICANN discharge its mandated duty to determine what, if anything, should be done in response to the allegations regarding the pre-auction conduct of the contention set. This conduct includes Afilias’ own wrongdoing in violating the pre-auction communications blackout imposed in the Auction Rules. That misconduct is set forth in a July 23, 2021 letter by NDC to ICANN, since published by ICANN, containing written proof of Afilias’ violation of the auction rules. In its Dec. 21 ruling, the Panel made it unmistakably clear that it is ICANN – not a judge or a panel of arbitrators – who must first review all allegations of misconduct by the contention set, including the powerful evidence indicating that it is Afilias’ .web application, not NDC’s, that should be disqualified.

If Afilias’ motivation has been to avoid ICANN’s scrutiny of its own pre-auction misconduct, especially after exiting the registry business when it appears that its only significant asset is the .web application itself, then what we should expect to see next is for Afilias/AltaNovo to manufacture another delaying attack on the Final Decision. Perhaps this is why its litigation counsel has already written ICANN threatening to continue litigation “in all available fora whether within or outside of the United States of America.…”

It is long past time to put an end to this five-year campaign, which has interfered with ICANN’s duty to decide on the delegation of .web, harming the interests of the broader internet community. The new ruling obliges ICANN to take a decisive step in that direction.


Industry Insights: RDAP Becomes Internet Standard

By Scott Hollenbeck

This article originally appeared in The Domain Name Industry Brief (Volume 18, Issue 3)

Earlier this year, the Internet Engineering Task Force’s (IETF’s) Internet Engineering Steering Group (IESG) announced that several Proposed Standards related to the Registration Data Access Protocol (RDAP), including three that I co-authored, were being promoted to the prestigious designation of Internet Standard. Initially accepted as Proposed Standards six years ago, RFC 7480, RFC 7481, RFC 9082 and RFC 9083 now comprise the new Internet Standard 95. RDAP allows users to access domain registration data and could one day replace its predecessor, the WHOIS protocol. RDAP is designed to address some widely recognized deficiencies in the WHOIS protocol and can help improve the registration data chain of custody.

In the discussion that follows, I’ll look back at the evolution of registration data directory services from WHOIS to RDAP, and examine how RDAP can improve upon the more traditional, WHOIS-based registry models.

Registration Data Directory Services Evolution, Part 1: The WHOIS Protocol

In 1998, Network Solutions was responsible for providing both consumer-facing registrar and back-end registry functions for the legacy .com, .net and .org generic top-level domains (gTLDs). Network Solutions collected information from domain name registrants, used that information to process domain name registration requests, and published both collected data and data derived from processing registration requests (such as expiration dates and status values) in a public-facing directory service known as WHOIS.

From Network Solutions’ perspective as the registry, the chain of custody for domain name registration data involved only two parties: the registrant (or their agent) and Network Solutions. With the introduction of a Shared Registration System (SRS) in 1999, multiple registrars began to compete for domain name registration business by using the registry services operated by Network Solutions. The introduction of additional registrars and the separation of registry and registrar functions added parties to the chain of custody of domain name registration data. Information flowed from the registrant, to the registrar, and then to the registry, typically crossing multiple networks and jurisdictions, as depicted in Figure 1.

Figure 1. Flow of information in early data registration process.

Registration Data Directory Services Evolution, Part 2: The RDAP Protocol

Over time, new gTLDs and new registries came into existence, new WHOIS services (with different output formats) were launched, and countries adopted new laws and regulations focused on protecting the personal information associated with domain name registration data. As time progressed, it became clear that WHOIS lacked several needed features, such as:

  • Standardized command structures
  • Output and error structures
  • Support for internationalization and localization
  • User identification
  • Authentication and access control

The IETF made multiple attempts to add features to WHOIS to address some of these issues, but none was widely adopted. A possible replacement protocol, the Internet Registry Information Service (IRIS), was standardized in 2005, but it too failed to see wide deployment. Something else was needed, and the IETF went back to work to produce what became known as RDAP.

RDAP was specified in a series of five IETF Proposed Standard RFC documents, including the following, all of which were published in March 2015:

  • RFC 7480, HTTP Usage in the Registration Data Access Protocol (RDAP)
  • RFC 7481, Security Services for the Registration Data Access Protocol (RDAP)
  • RFC 7482, Registration Data Access Protocol (RDAP) Query Format
  • RFC 7483, JSON Responses for the Registration Data Access Protocol (RDAP)
  • RFC 7484, Finding the Authoritative Registration Data (RDAP) Service

Only when RDAP was standardized did we start to see broad deployment of a possible WHOIS successor by domain name registries, domain name registrars and address registries.

The broad deployment of RDAP led to RFCs 7480 and 7481 becoming Internet Standard RFCs (part of Internet Standard 95) without modification in March 2021. As operators of registration data directory services implemented and deployed RDAP, they found places in the other specifications where minor corrections and clarifications were needed without changing the protocol itself. RFC 7482 was updated to become Internet Standard RFC 9082, which was published in June 2021. RFC 7483 was updated to become Internet Standard RFC 9083, which was also published in June 2021. All were added to Standard 95. As of the writing of this article, RFC 7484 is in the process of being reviewed and updated for elevation to Internet Standard status.

RDAP Advantages

Operators of registration data directory services who implemented RDAP can take advantage of key features not available in the WHOIS protocol. I’ve highlighted some of these important features in the list below.

  • Standard, well-understood, and widely available HTTP transport: Relatively easy to implement, deploy and operate using common web service tools, infrastructure and applications.
  • Securable via HTTPS: Helps provide confidentiality for RDAP queries and responses, reducing the amount of information that is disclosed to monitors.
  • Structured output in JavaScript Object Notation (JSON): JSON is well-understood and tool friendly, which makes it easier for clients to parse and format responses from all servers without the need for software that’s customized for different service providers.
  • Easily extensible: Designed to support the addition of new features without breaking existing implementations, making it easier to address future functional needs with less risk of implementation incompatibility.
  • Internationalized output, with full support for Unicode character sets: Allows implementations to provide human-readable inputs and outputs that are represented in a language appropriate to the local operating environment.
  • Referral capability, leveraging HTTP constructs: Provides information to software clients that allows the client to retrieve additional information from other RDAP servers. This can be used to hide complexity from human users.
  • Support of standardized authentication: RDAP can take full advantage of all of the client identification, authentication and authorization methods that are available to web services. This means that RDAP can be used to provide the basic framework for differentiated access to registration data based on attributes associated with the user and the user’s query.

Verisign and RDAP

Verisign’s RDAP service, which was originally launched as an experimental implementation several years before gaining widespread adoption, allows users to look up records in the registry database for all registered .com, .net, .name, .cc and .tv domain names. It also supports Internationalized Domain Names (IDNs).
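
For illustration, an RDAP domain lookup amounts to an HTTPS GET that returns JSON. The sketch below uses the RDAP base URL for .com as published in the IANA RDAP bootstrap registry; treat the exact endpoint and response fields as assumptions to verify against the RDAP technical implementation guide mentioned below.

```python
import json
import urllib.request

# Base URL for .com from the IANA RDAP bootstrap registry (an assumption
# to verify; see https://data.iana.org/rdap/dns.json).
RDAP_BASE = "https://rdap.verisign.com/com/v1"

def rdap_domain(name: str) -> dict:
    """Fetch a domain's RDAP record (RFC 9082 query format,
    RFC 9083 JSON response)."""
    with urllib.request.urlopen(f"{RDAP_BASE}/domain/{name}") as resp:
        return json.load(resp)

record = rdap_domain("verisign.com")
print(record.get("ldhName"))
print([event.get("eventAction") for event in record.get("events", [])])
```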

We at Verisign were pleased not only to see the IETF recognize the importance of RDAP by elevating it to an Internet Standard, but also that the protocol became a requirement for ICANN-accredited registrars and registries as of August 2019. Widespread implementation of the RDAP protocol makes registration data more secure, stable and resilient, and we are hopeful that the community will evolve the prescribed implementation of RDAP such that the full power of this rich protocol will be deployed.

You can learn more in the RDAP Help section of the Verisign website, and access helpful documents such as the RDAP technical implementation guide and the RDAP response profile.


Afilias’ Rule Violations Continue to Delay .WEB

By Kirk Salzmann

As I noted on May 26, the final decision issued on May 20 in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN) rejected Afilias’ petition to nullify the results of the public auction for .WEB, and further rejected Afilias’ demand that it be awarded .WEB (at a price substantially lower than the winning bid). Instead, as we urged, the IRP Panel determined that the ICANN Board should move forward with reviewing the objections made about .WEB, and make a decision on delegation thereafter.

Afilias and its counsel both issued press releases claiming victory in an attempt to put a positive spin on the decision. In contrast to this public position, Afilias then quickly filed a 68-page application asking the Panel to reverse its decision. This application is, however, not permitted by the arbitration rules – which expressly prohibit such requests for “do overs.”

In addition to Afilias’ facially improper application, there is an even more serious instance of rule-breaking now described in a July 23 letter from Nu Dot Co (NDC) to ICANN. This letter sets out in considerable detail how Afilias engaged in prohibited conduct during the blackout period immediately before the .WEB auction in 2016, in violation of the auction rules. The letter shows how this rule violation is more than just a technicality; it was part of a broader scheme to rig the auction. The attachments to the letter shed light on how, during the blackout period, Afilias offered NDC money to stop ICANN’s public auction in favor of a private process – which would in turn deny the broader internet community the benefit of the proceeds of a public auction.

Afilias’ latest application to reverse the Panel’s decision, like its pre-auction misconduct five years ago, has only led to unnecessary delay in the delegation of .WEB. It is long past time for this multi-year campaign to come to an end. The Panel’s unanimous ruling makes clear that it strongly agrees.


IRP Panel Dismisses Afilias’ Claims to Reverse .WEB Auction and Award .WEB to Afilias

By Kirk Salzmann

On Thursday, May 20, a final decision was issued in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN), rejecting Afilias’ petition to nullify the results of the July 27, 2016 public auction for the .WEB new generic top-level domain (gTLD) and to award .WEB to Afilias at a substantially lower, non-competitive price. Nu Dotco, LLC (NDC) submitted the highest bid at the auction and was declared the winner, over Afilias’ lower, losing bid. Despite Afilias’ repeated objections to participation by NDC or Verisign in the IRP, the Panel ordered that NDC and Verisign could each participate in the IRP in a limited role as amicus curiae.

Consistent with NDC, Verisign and ICANN’s position in the IRP, the Order dismisses “the Claimant’s [Afilias’] request that Respondent [ICANN] be ordered by the Panel to disqualify NDC’s bid for .WEB, proceed with contracting the Registry Agreement for .WEB with the Claimant in accordance with the New gTLD Program Rules, and specify the bid price to be paid by the Claimant.” Contrary to Afilias’ position, all objections to the auction are referred to ICANN for determination. This includes Afilias’ objections as well as objections by NDC that Afilias violated the auction rules by engaging in secret discussions during the Blackout Period under the Program Rules.

The Order Dismisses All of Afilias’ Claims of Violations by NDC or Verisign

Afilias’ claims for relief were based on its allegation that NDC violated the New gTLD Program Rules by entering into an agreement with Verisign, under which Verisign provided funds for NDC’s participation in the .WEB auction in exchange for NDC’s commitment, if it prevailed at the auction and entered into a registry agreement with ICANN, to assign its .WEB registry agreement to Verisign upon ICANN’s consent to the assignment. As the Panel determined, the relief requested by Afilias far exceeded the scope of proper IRP relief provided for in ICANN’s Bylaws, which limit an IRP to a determination whether or not ICANN has exceeded its mission or otherwise failed to comply with its Articles of Incorporation and Bylaws.

Issued two and a half years after Afilias initiated its IRP, the Panel’s decision unequivocally rejects Afilias’ attempt to misuse the IRP to rule on claims of NDC or Verisign misconduct or obtain the .WEB gTLD for itself despite its losing bid. The Panel held that it is for ICANN, which has the requisite knowledge, expertise, and experience, to determine whether NDC’s contract with Verisign violated ICANN’s Program Rules. The Panel further determined that it would be improper for the Panel to dictate what should be the consequences of an alleged violation of the rules contained in the gTLD Applicant Guidebook, if any took place. The Panel therefore denied Afilias’ requests for a binding declaration that ICANN must disqualify NDC’s bid for violating the Guidebook rules and award .WEB to Afilias.

Despite pursuing its claims in the IRP for over two years — at a cost of millions of dollars — Afilias failed to offer any evidence to support its allegations that NDC improperly failed to update its application and/or assigned its application to Verisign. Instead, the evidence was to the contrary. Indeed, Afilias failed to offer testimony from a single Afilias witness during the hearing on the merits, including witnesses with direct knowledge of relevant events and industry practices. It is apparent that Afilias failed to call as witnesses any of its own officers or employees because, testifying under penalty of perjury, they would have been forced to contradict the false allegations advanced by Afilias during the IRP. By contrast, ICANN, NDC and Verisign each supported their respective positions appropriately by calling witnesses to testify and be subject to cross-examination by the three-arbitrator panel and Afilias, under oath, with respect to the facts refuting Afilias’ claims.

Afilias also argued in the IRP that ICANN is a competition regulator and that ICANN’s commitment, contained in its Bylaws, to promote competition required ICANN to disqualify NDC’s bid for the .WEB gTLD because NDC’s contract with Verisign may lead to Verisign’s operation of .WEB. The Panel rejected Afilias’ claim, agreeing with ICANN and Verisign that “ICANN does not have the power, authority, or expertise to act as a competition regulator by challenging or policing anticompetitive transactions or conduct.” The Panel found ICANN’s evidence “compelling” that it fulfills its mission to promote competition through the expansion of the domain name space and facilitation of innovative approaches to the delivery of domain name registry services, not by acting as an antitrust regulator. The Panel quoted Afilias’ own statements to this effect, which were made outside of the IRP proceedings when Afilias had different interests.

Although the Panel rejected Afilias’ Guidebook and competition claims, it did find that the manner in which ICANN addressed complaints about the .WEB matter did not meet all of the commitments in its Bylaws. But even so, Afilias was awarded only a small portion of the legal fees it hoped to recover from ICANN.

Moving Forward with .WEB

It is now up to ICANN to move forward expeditiously to determine, consistent with its Bylaws, the validity of any objections under the New gTLD Program Rules in connection with the .WEB auction, including NDC and Verisign’s position that Afilias should be disqualified from making any further objections to NDC’s application.

As Verisign and NDC pointed out in 2016, the evidence during the IRP establishes that collusive conduct by Afilias in connection with the auction violated the Guidebook. The Guidebook and Auction Rules both prohibit applicants within a contention set from discussing “bidding strategies” or “settlement” during a designated Blackout Period in advance of an auction. Violation of the Blackout Period is a “serious violation” of ICANN’s rules and may result in forfeiture of an applicant’s application. The evidence adduced in the IRP proves that Afilias committed such violations and should be disqualified. On July 22, just four days before the public ICANN auction for .WEB, Afilias contacted NDC, following Afilias’ discussions with other applicants, to try to negotiate a private auction if ICANN would delay the public auction. Afilias knew the Blackout Period was in effect, but nonetheless violated it in an attempt to persuade NDC to participate in a private auction. Under the settlement Afilias proposed, Afilias would make millions of dollars even if it lost the auction, rather than auction proceeds being used for the internet community through the investment of such proceeds by ICANN as determined by the community.

All of the issues raised during the IRP were the subject of extensive briefing, evidentiary submissions and live testimony during the hearing on the merits, providing ICANN with a substantial record on which to render a determination with respect to .WEB and proceed forward with delegation of the new gTLD. Verisign stands ready to assist ICANN in any way we can to quickly resolve this matter so that domain names within the .WEB gTLD can finally be made available to businesses and consumers.

As a final observation: Afilias no longer operates a registry business, and has neither the platform, organization, nor necessary consents from ICANN to support one. Inconsistent with its claims in the IRP, Afilias transferred its entire registry business to Donuts during the pendency of the IRP. Although long in the works, the sale was not disclosed by Afilias either before or during the IRP hearings; nor, remarkably, did Afilias produce any company witness for examination who might have disclosed the sale to the panel of arbitrators or others. Based on a necessary public disclosure of the Donuts sale after the hearings and before entry of the Panel’s Order, the Panel included in its final Order a determination that it is for ICANN to decide whether Afilias’ sale is itself a basis for denial of Afilias’ claims with respect to .WEB.


Verisign Support for AAPI Communities and COVID Relief in India

By Verisign

At Verisign we have a commitment to making a positive and lasting impact on the global internet community, and on the communities in which we live and work.

This commitment guided our initial efforts to help these communities respond to and recover from the effects of the COVID-19 pandemic, over a year ago. And at the end of 2020, our sense of partnership with our local communities helped shape our efforts to alleviate COVID-related food insecurity in the areas where we have our most substantial footprint. This same sense of community is reflected in our partnership with Virginia Ready, which aims to help individuals in our home State of Virginia access training and certification to pivot to new careers in the technology sector.

We also believe that our team is one of our most important assets. We particularly value the diverse origins of our people; we have colleagues from all over the world who, in turn, are closely connected to their own communities both in the United States and elsewhere. A significant proportion of our staff are of either Asian American and Pacific Islander (AAPI) or South Asian origin, and today we are pleased to announce two charitable contributions, via our Verisign Cares program, directly related to these two communities.

First, Verisign is pleased to associate ourselves with the Stand with Asian Americans initiative, launched by AAPI business leaders in response to recent and upsetting episodes of aggression toward their community. Verisign supports this initiative and the pledge for which it stands, and has made a substantial contribution to the initiative’s partner, the Asian Pacific Fund, to help uplift the AAPI community.

Second, and after consultation with our staff, we have directed significant charitable contributions to organizations helping to fight the worsening wave of COVID-19 in India. Through Direct Relief we will be helping to provide oxygen and other medical equipment to hospitals, while through GiveIndia we will be supporting families in India impacted by COVID-19.

The ‘extended Verisign family’ of our employees, and their families and their communities, means a tremendous amount to us – it is only thanks to our talented and dedicated people that we are able to continue to fulfill our mission of enabling the world to connect online with reliability and confidence, anytime, anywhere.


Information Protection for the Domain Name System: Encryption and Minimization

By Burt Kaliski

This is the final in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I’ll turn attention to another application that may appear at first to be the most natural, though as it turns out, may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transfer Protocol Secure (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.
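
To give a sense of how lightweight an encrypted lookup can be on the client side, here is a sketch of a DoH-style query using Google Public DNS’s JSON API. Note that this convenience API differs from RFC 8484 DoH, which exchanges binary DNS messages with the application/dns-message media type; the endpoint and fields here are assumptions for illustration.

```python
import json
import urllib.request

def doh_query(name: str, rtype: str = "A") -> dict:
    """Resolve a name over HTTPS using Google Public DNS's JSON API
    (endpoint and parameters are assumptions for illustration)."""
    url = f"https://dns.google/resolve?name={name}&type={rtype}"
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

answer = doh_query("example.com")
print([rr.get("data") for rr in answer.get("Answer", [])])
```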

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.

From a cryptographer’s perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk / benefit tradeoff. In the simpler approach, to provide a relying party assurance that a domain name doesn’t exist, a name server would return a response, signed with its private key, “<name> doesn’t exist.”

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This better solution balanced cryptographic and operational considerations. The theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all .com and .net queries received by Verisign’s .com TLD servers were in a minimized form as of August 2020.
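
A toy example makes the disclosure difference concrete. In textbook resolution, every server in the referral chain sees the full name; with qname minimization, each sees only what it needs. The sketch below is illustrative and glosses over details such as query types and zone cuts that don’t align with label boundaries.

```python
# Toy comparison: what each name server sees for www.example.com,
# textbook resolution vs. qname minimization (RFC 7816).

QNAME = "www.example.com."
SERVERS = ["root server", ".com (TLD) server", "example.com (SLD) server"]

labels = QNAME.rstrip(".").split(".")
print(f"textbook: every server sees {QNAME}")
for i, server in enumerate(SERVERS, start=1):
    # minimized: send only the rightmost i labels
    sent = ".".join(labels[-i:]) + "."
    print(f"minimized: {server} sees {sent}")
```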

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
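
Here is a deliberately simplified sketch of the range check at the heart of aggressive DNSSEC caching. Real resolvers compare names in DNSSEC canonical order (RFC 4034) and only use NSEC records that have been DNSSEC-validated; the plain string comparison below is a stand-in to show the idea.

```python
# Simplified aggressive-caching check: a validated NSEC record states
# that no names exist between its owner name and the "next" name, so a
# resolver can synthesize NXDOMAIN for names in that range without a
# new query. Canonical DNS name ordering is approximated by plain
# string comparison here for brevity.

def covered_by_nsec(qname: str, owner: str, next_name: str) -> bool:
    """True if qname falls strictly between the NSEC owner and next names."""
    return owner < qname < next_name

cached_nsec = ("apple.example.", "cherry.example.")  # hypothetical cached proof
for q in ("banana.example.", "zebra.example."):
    if covered_by_nsec(q, *cached_nsec):
        print(q, "=> synthesize NXDOMAIN from cache, no query sent")
    else:
        print(q, "=> not covered, ask the authoritative server")
```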

All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented aggregate interests of the resolver’s many clients1. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below2, and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.

Conclusion

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

1. This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.

2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this note about “root and TLD” and “SLD level and below” should be “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this note.


Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

By Burt Kaliski

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.
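
To make the concept concrete, here is a minimal sketch of a Lamport one-time signature, the 1979 construction that Merkle’s scheme builds upon. Security rests entirely on the hash function, but note the cost: this toy signature is 256 hash preimages of 32 bytes each, about 65,000 bits, which previews the size challenge discussed below.

```python
import hashlib
import secrets

# Minimal Lamport one-time signature; illustrative only. Deployed
# hash-based schemes are XMSS (RFC 8391), LMS (RFC 8554) and SPHINCS+.

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Two 32-byte secrets per message-digest bit; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one of the two secrets for each bit -- hence "one-time".
    return [sk[i][bit] for i, bit in enumerate(bits_of(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits_of(msg)))

sk, pk = keygen()
sig = sign(sk, b"example. IN A 192.0.2.1")
print(verify(pk, b"example. IN A 192.0.2.1", sig))  # True
print(verify(pk, b"tampered record", sig))          # False
```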

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate in NIST’s post-quantum project.

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby “chained” to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC, while still basing security only on hash functions, in the interest of stable post-quantum protections.

One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.
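To illustrate the mechanics (a sketch only, with none of the canonical encodings a real scheme would specify), the following Python builds a tree over a set of records, extracts the authentication path for one leaf, and retraces the path back to the root. With 2^20 leaves, the same path would be the 5,120 bits computed above.

```python
# Minimal Merkle tree sketch: build a root, produce an authentication
# path for one leaf, and verify it. Assumes a power-of-two leaf count.
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels  # levels[-1][0] is the tree's root

def auth_path(levels, index):
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # hash of the adjacent branch
        index //= 2
    return path

def verify(leaf, index, path, root):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

records = [f"record-{i}".encode() for i in range(1024)]  # 2**10 leaves
levels = build_levels(records)
root = levels[-1][0]
path = auth_path(levels, 42)
assert verify(records[42], 42, path, root)
print(f"path: {len(path)} hashes = {len(path) * 256} bits")  # 10 * 256 = 2,560
```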

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record’s signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.

The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.

In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, the new tree can be published sooner; with more changes, the tree is larger, so the per-record overhead of the signatures at the KSK level is lower.
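As a back-of-the-envelope illustration of this tradeoff, the sketch below uses an assumed hash-based signature size; the actual figure depends on the scheme and parameters chosen.

```python
# Illustrative tradeoff: per-record authentication path size versus the
# per-record share of one hash-based signature at the KSK level.
import math

HASH_BYTES = 32          # 256-bit hash
KSK_SIG_BYTES = 8_000    # assumed hash-based signature size; scheme-dependent

for num_records in (1_000, 100_000, 1_000_000):
    depth = math.ceil(math.log2(num_records))
    path_bits = depth * HASH_BYTES * 8           # per-record "signature" size
    amortized = KSK_SIG_BYTES / num_records      # per-record share of KSK sig
    print(f"{num_records:>9} records: path {path_bits} bits, "
          f"amortized KSK overhead {amortized:.2f} bytes")
```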

Conclusion

My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Read the complete six-part blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon

By Burt Kaliski

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.

This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.

(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)

Post-Quantum Algorithms

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”

The project is now in its third round, with seven finalists (three of them digital signature algorithms) and eight alternates.

NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.

DNSSEC Operational Considerations

The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative name servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)

Modeling Assumptions

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign’s root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> denotes the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include Address (A), Name Server (NS), and DNSKEY.)

Root name server:

  • Query: DNSKEY record set for the root zone
    Response: DNSKEY record set including the root KSK RSA-2048 public key and the root ZSK RSA-2048 public key; root KSK RSA-2048 signature on the DNSKEY record set

  • Query: A or NS record set for <TLD>, when <TLD> exists
    Response: NS referral to the <TLD> name server; DS record set for the <TLD> zone; root ZSK RSA-2048 signature on the DS record set

  • Query: A or NS record set for <TLD>, when <TLD> doesn’t exist
    Response: up to two NSEC records for the non-existence of <TLD>; root ZSK RSA-2048 signatures on the NSEC records

.com / .net name servers:

  • Query: DNSKEY record set for the <TLD> zone
    Response: DNSKEY record set including the <TLD> KSK RSA-2048 public key and the <TLD> ZSK RSA-1280 public key; <TLD> KSK RSA-2048 signature on the DNSKEY record set

  • Query: A or NS record set for <SLD>.<TLD>, when <SLD>.<TLD> exists
    Response: NS referral to the <SLD>.<TLD> name server; DS record set for the <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC); <TLD> ZSK RSA-1280 signature on the DS record set (if present)

  • Query: A or NS record set for <SLD>.<TLD>, when <SLD>.<TLD> doesn’t exist
    Response: up to three NSEC3 records for the non-existence of <SLD>.<TLD>; <TLD> ZSK RSA-1280 signatures on the NSEC3 records

Table 1. Combination of queries that may be sent to Verisign’s root and TLD servers by a typical resolver that implements both DNSSEC validation and qname minimization, and the content of associated responses.


For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
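As one hypothetical starting point for such a model, the Python sketch below substitutes published Falcon-512 and Dilithium2 parameter sizes for the RSA sizes in Table 1. The response compositions and the BASE overhead constant are simplifying assumptions rather than measured values, and RSA public keys are approximated by their modulus size.

```python
# Rough response-size model: swap RSA key/signature sizes for
# post-quantum counterparts. All sizes in bytes.
SIZES = {
    "RSA-2048":   {"pk": 256,  "sig": 256},
    "RSA-1280":   {"pk": 160,  "sig": 160},
    "Falcon-512": {"pk": 897,  "sig": 666},
    "Dilithium2": {"pk": 1312, "sig": 2420},
}
BASE = 100  # assumed fixed overhead (header, names, unsigned records)

def dnskey_response(alg):
    # DNSKEY response: two public keys plus one KSK signature (more keys
    # may be present during a rollover, as noted above).
    return BASE + 2 * SIZES[alg]["pk"] + SIZES[alg]["sig"]

def negative_response(alg, nsec_sets=2):
    # Non-existence response: one ZSK signature per NSEC/NSEC3 record
    # set; the record data itself is ignored in this rough model.
    return BASE + nsec_sets * SIZES[alg]["sig"]

for alg in SIZES:
    print(f"{alg:>11}: DNSKEY ~{dnskey_response(alg):>5} B, "
          f"negative ~{negative_response(alg):>5} B")
```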

Conclusion

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Read the complete six-part blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries

By Burt Kaliski

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query returns a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.

NSEC5

NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there’s also a corresponding public key.) No one can compute the outputs without the private key, hence the “random.”
  2. A VRF has two outputs: a token and a proof. (I’ve adopted the term “token” for alignment with the research that I describe next. NSEC5 itself simply uses “hash.”) Anyone can check that the token is correct given the proof and the public key, hence the “verifiable.”

So, it’s not only hard for an adversary to reverse the VRF – which is also a property the hash function has – but it’s also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of an NSEC5 proof of non-existence based on a verifiable random function.
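The range-selection logic itself is straightforward, as the Python sketch below shows. A plain hash stands in for the VRF here, which is emphatically not a substitute: a real VRF requires the private key to compute the token and produces a publicly checkable proof. The names and token lengths are illustrative only.

```python
# Sketch of NSEC5-style range selection. A bare hash stands in for the
# VRF, so this only illustrates the range logic, not the cryptography.
import bisect
import hashlib

def token(name):
    return hashlib.sha256(name.lower().encode()).hexdigest()[:8]

existing = ["alpha.example", "bravo.example", "china.example"]
endpoints = sorted(token(n) for n in existing)

def covering_range(qname):
    t = token(qname)
    i = bisect.bisect_right(endpoints, t)
    lo = endpoints[i - 1] if i > 0 else endpoints[-1]     # wrap around
    hi = endpoints[i] if i < len(endpoints) else endpoints[0]
    return t, (lo, hi)   # server returns the pre-signed range plus the proof

t, rng = covering_range("doesnotexist.example")
print(f"token {t} falls in signed range {rng}")
```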

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.

The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline – not to generate signatures on new range statements, or on new positive responses.

NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client’s information from disclosure.

In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)

Now, consider a domain name that doesn’t exist and suppose that its token is 2718… .

In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.

The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.

How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.
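A toy version of the flow might look like the following Python. A bare hash stands in for the tokenization service, which the deployed design would replace with the oblivious VRF protocol described above; the interface and names here are hypothetical.

```python
# Toy tokenized-query flow. The tokenize() stand-in is NOT oblivious:
# a real deployment would use an oblivious VRF so the service never
# sees the name being tokenized.
import hashlib

def tokenize(name):          # stand-in for the tokenization service
    return hashlib.sha256(name.lower().encode()).hexdigest()[:10]

# Resolver-side table keyed by token, not by domain name.
token_to_record = {tokenize("www.example"): "93.184.216.34"}

def resolve(tok):
    return token_to_record.get(tok)   # resolver never sees the name

print(resolve(tokenize("www.example")))      # 93.184.216.34
print(resolve(tokenize("private.example")))  # None: non-existence, name hidden
```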

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.

It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists, without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology will be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.

Conclusion

The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.

Read the complete six-part blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

By Burt Kaliski

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered the handling of “negative” responses – when the domain name doesn’t exist. In this case, as I’ll explain, responding with a signed “does not exist” is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.

Initial Attempts

Consider a domain name like example.arpa that doesn’t exist.

If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).

So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.

However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.

This could work if the .arpa DNS server knows the ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.

NSEC

The first approach, called NSEC, involved no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.

The NSEC resource record is documented in one of the original DNSSEC specifications, RFC 4034, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of an NSEC proof of non-existence (as of the writing of this post).
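In code, choosing the pre-signed range statement is essentially a binary search over the ordered names. The Python sketch below uses simple lexicographic ordering rather than full DNSSEC canonical ordering, together with a handful of real .arpa delegations, to reproduce the response above.

```python
# Sketch of how a name server selects a pre-signed NSEC range.
# Simplification: plain lexicographic order instead of the canonical
# DNS ordering defined for DNSSEC.
import bisect

existing = sorted(["as112.arpa", "e164.arpa", "home.arpa", "in-addr.arpa",
                   "ip6.arpa", "iris.arpa", "uri.arpa", "urn.arpa"])

def nsec_range(qname):
    i = bisect.bisect_right(existing, qname.lower())
    lo = existing[i - 1] if i > 0 else existing[-1]   # wrap around the zone
    hi = existing[i] if i < len(existing) else existing[0]
    return lo, hi   # the server returns the NSEC record signed in advance

print(nsec_range("example.arpa"))   # ('e164.arpa', 'home.arpa')
```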

NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement that “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…”, the response implies that example.name doesn’t exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of an NSEC3 proof of non-existence based on a hash function (as of the writing of this post).

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
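For readers who want to experiment, the Python below computes the NSEC3 hash as specified in RFC 5155: SHA-1 over the lowercase wire-format owner name, re-hashed with the salt appended for each additional iteration, then base32hex-encoded. The empty salt and zero iterations are illustrative defaults, not .name’s actual parameters, and the loop at the end is exactly the offline dictionary attack described above.

```python
# NSEC3 hash per RFC 5155, plus a toy offline dictionary attack.
import base64
import hashlib

B32_TO_B32HEX = bytes.maketrans(
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
    b"0123456789ABCDEFGHIJKLMNOPQRSTUV")

def wire_name(name):
    # Owner name in lowercase DNS wire format: length-prefixed labels,
    # terminated by a zero byte.
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def nsec3_hash(name, salt=b"", iterations=0):
    digest = hashlib.sha1(wire_name(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).translate(B32_TO_B32HEX).decode()

# An offline dictionary attack is just this loop over candidate names:
for guess in ("example.name", "foo.name", "bar.name"):
    print(guess, nsec3_hash(guess))
```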

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.

Read the complete six-part blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


The Domain Name System: A Cryptographer’s Perspective

By Burt Kaliski

This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the earliest protocols in the internet, the DNS emerged in an era in which today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer’s perspective, as I described in my talk at last year’s International Cryptographic Module Conference (ICMC20), there’s so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with the usual occurrence that happens millions of times a second around the world: a client asks a DNS resolver a query like “What is example.com’s Internet Protocol (IP) address?” The resolver in this case answers: “example.com’s IP address is 93.184.216.34”. (This is the correct answer.)

If the resolver doesn’t already know the answer to the request, then the process to find the answer goes something like this:

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS’s 13 root servers, such as the A and J root servers operated by Verisign: “Where is the name server for the .com top-level domain (TLD)?”
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, “Where is the name server for the example.com second-level domain (SLD)?”
  • The TLD server then refers the resolver to the example.com server.
  • Finally, the resolver asks the SLD server, “What is example.com’s IP address?” and receives an answer: “93.184.216.34”.
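For readers who want to observe the first of these referrals directly, here is a minimal sketch using the dnspython library (assumed installed). Error handling, TCP fallback, caching, and validation are all omitted; only the root hop is shown.

```python
# Minimal sketch of an iterative query to a root server using dnspython.
import dns.message
import dns.query
import dns.rdatatype

def ask(server_ip, qname):
    # One iterative query with the "DNSSEC OK" bit set.
    q = dns.message.make_query(qname, dns.rdatatype.A, want_dnssec=True)
    return dns.query.udp(q, server_ip, timeout=3)

# Ask a root server (a.root-servers.net); the answer is a referral to
# the .com TLD servers, carried in the authority section.
resp = ask("198.41.0.4", "example.com.")
for rrset in resp.authority:
    print(rrset.to_text())
# A real resolver would continue by querying an address learned from the
# referral's additional section, then the example.com server itself.
```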

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same “delegation” model from root to TLD to SLD as I’ve described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, examining the signatures at each level of the DNS hierarchy (or technically, at each “zone” in the delegation process). These digital signatures are generated using public-key cryptography, a well-understood process based on key pairs, one public and one private.

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without the other key being changed.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver returns the response to the client with an indicator that the data has been authenticated (the “AD,” or Authenticated Data, flag).

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a “trust chain” from a “trust anchor” to the DNS record of interest, as shown in the figure above. The chain includes “chain links” at each level of the DNS hierarchy. Here’s how the “chain links” work:

The root KSK public key is the “trust anchor.” This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records – the ultimate response to the query – is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
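In code, each chain link amounts to two checks, sketched below with dnspython (assumed installed): the DS record published in the parent zone must match a hash of the child’s KSK, and the child’s DNSKEY record set must carry a valid signature. The function and argument names are illustrative, not any standard API.

```python
# Sketch of the per-zone "chain link" checks using dnspython.
import dns.dnssec
import dns.name

def check_link(child_zone, dnskey_rrset, rrsig_rrset, ds_from_parent):
    name = dns.name.from_text(child_zone)
    # 1. Hash each DNSKEY and look for a match with the parent's DS record.
    hashes = {dns.dnssec.make_ds(name, key, "SHA256") for key in dnskey_rrset}
    assert ds_from_parent in hashes, "DS does not match any child KSK"
    # 2. Verify the DNSKEY record set's own signature (raises on failure).
    dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {name: dnskey_rrset})
```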

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I’ve omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for example.com discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned “trust anchor,” was one of the more challenging and successful projects undertaken by the DNS community over the past couple of years. This initiative – the so-called “root KSK rollover” – was difficult because there was no easy way to determine whether resolvers had actually been updated to use the latest root KSK; remember that cryptography and security were added onto the DNS rather than built in. Many resolvers needed to be updated, each one independently managed.

The research paper “Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover” details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I’ve focused here on how a resolver validates correctness when the response to a query has a “positive” answer — i.e., when the DNS record exists. Checking correctness when the answer doesn’t exist gets even more interesting from a cryptographer’s perspective. I’ll cover this topic in my next post.

Read the complete six-part blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization


Chromium’s Reduction of Root DNS Traffic

By Verisign

As we begin a new year, it is important to look back and reflect on our accomplishments and how we can continue to improve. One significant positive the DNS community can take from 2020 is the receptiveness and responsiveness of the Chromium team in addressing the large volume of DNS queries being sent to the root server system.

In a previous blog post, we quantified that upwards of 45.80% of total DNS traffic to the root servers was, at the time, the result of Chromium intranet redirection detection tests. Since then, the Chromium team has redesigned its code to disable the redirection test on Android systems and introduced a multi-state DNS interception policy that supports disabling the redirection test for desktop browsers. This functionality was released in mid-November 2020 for Android systems in Chromium 87 and, quickly thereafter, the root servers experienced a rapid decline in DNS queries.

The figure below highlights the significant decline in query volume to the root server system immediately after the Chromium 87 release. Prior to the software release, the root server system saw peaks of ~143 billion queries per day. Traffic volumes have since decreased to ~84 billion queries per day, a reduction of more than 41% in total query volume.

Note: Some data from root operators was not available at the time of this publication.

This type of broad root system measurement is facilitated by ICANN’s Root Server System Advisory Committee standards document RSSAC002, which establishes a set of baseline metrics for the root server system. These root server metrics are readily available to the public for analysis, monitoring, and research. They represent another milestone for the DNS community, one worth continuing to support and refine going forward.
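As a sketch of how one might work with these reports, the Python below sums daily query counts from an RSSAC002-style “traffic-volume” YAML document. The field names follow our reading of the RSSAC002 specification and should be treated as assumptions to verify against the current version of the document.

```python
# Summing daily query volume from an RSSAC002-style YAML report.
# Field names are assumptions; check the RSSAC002 spec for the exact
# keys used by each root operator.
import yaml  # PyYAML, assumed installed

def daily_queries(yaml_text):
    doc = yaml.safe_load(yaml_text)
    return sum(v for k, v in doc.items()
               if isinstance(v, int) and "queries-received" in k)

sample = """
metric: traffic-volume
service: a.root-servers.net
start-period: 2020-11-17T00:00:00Z
dns-udp-queries-received: 9000000000
dns-tcp-queries-received: 120000000
"""
print(daily_queries(sample))  # 9120000000
```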

As rightly noted in ICANN’s Root Name Service Strategy and Implementation publication, the root server system currently “faces growing volumes of traffic” not only from legitimate users but also from misconfigurations, misuse, and malicious actors, and “the costs incurred by the operators of the root server system continue to climb to mitigate these attacks using the traditional approach.”

As we reflect on how Chromium’s large impact on root server traffic was identified and then resolved, we as a DNS community should consider how outreach and engagement can be incorporated into the traditional approach to addressing DNS security, stability, and resiliency. All too often, technologists solve problems by introducing additional layers of technology abstraction while disregarding simpler solutions, such as outreach and engagement.

We believe our efforts show how such outreach and community engagement can have a significant impact, both for the parties directly involved and for the broader community. Chromium’s actions will directly aid root operators by easing the operational costs of mitigating attacks at the root. Reducing the root server system’s load by 41%, with potential further reductions depending on future Chromium deployment decisions, frees computing and network resources that would otherwise be consumed absorbing this traffic.

In pursuit of maintaining a responsible and common-sense root hygiene regimen, Verisign will continue to analyze root telemetry data and engage with entities such as Chromium to highlight operational concerns, just as Verisign has done in the past to address name collisions problems. We’ll be sharing more information on this shortly.

This piece was co-authored by Matt Thomas and Duane Wessels, Distinguished Engineers at Verisign.


Meeting the Evolving Challenges of COVID-19

By Verisign

The COVID-19 pandemic, when it struck earlier this year, ushered in an immediate period of adjustment for all of us. And just as the challenges posed by COVID-19 in 2020 have been truly unprecedented, Verisign’s mission – enabling the world to connect online with reliability and confidence, anytime, anywhere – has never been more relevant. We are grateful for the continued dedication of our workforce, which enables us to provide the building blocks people need for remote working and learning, and simply for keeping in contact with each other.

At Verisign we took early action to adopt a COVID-19 work posture to protect our people, their families, and our operations. This involved the majority of our employees working from home, and implementing new cleaning and health safety protocols to protect those employees and contractors for whom on-site presence was essential to maintain key functions.

Our steps to address the pandemic did not stop there. On March 25 we announced a series of measures to help the communities where we live and work, and the broader DNS community in which we operate. This included, under our Verisign Cares program, making contributions to organizations supporting key workers, first responders and medical personnel, and doubling the company’s matching program for employee giving so that employee donations to support the COVID-19 response could have a greater impact.

Today, while vaccines may offer signs of long-term hope, the pandemic has plunged many families into economic hardship and has had a dramatic effect on food insecurity in the U.S., with an estimated 50 million people affected. With this hardship in mind, we have this week made contributions totaling $275,000 to food banks in the areas where we have our most substantial footprint: the Washington DC-Maryland-Virginia region; Delaware; and the canton of Fribourg, in Switzerland. This will help local families put food on their tables during what will be a difficult winter for many.

The pandemic has also had a disproportionate, and potentially permanent, impact on certain sectors of the economy. So today Verisign is embarking on a partnership with Virginia Ready, which helps people affected by COVID-19 access training and certification for in-demand jobs in sectors such as technology. We are making an initial contribution of $250,000 to Virginia Ready, and will look to establish further partnerships of this kind across the country in 2021.

As people around the world gather online to address the global challenges posed by COVID-19, we want to share some of the steps we have taken so far to support the communities we serve, while keeping our critical internet infrastructure running smoothly.

