
Feds Seize LockBit Ransomware Websites, Offer Decryption Tools, Troll Affiliates

By BrianKrebs

U.S. and U.K. authorities have seized the darknet websites run by LockBit, a prolific and destructive ransomware group that has claimed more than 2,000 victims worldwide and extorted over $120 million in payments. Instead of listing data stolen from ransomware victims who didn’t pay, LockBit’s victim shaming website now offers free recovery tools, as well as news about arrests and criminal charges involving LockBit affiliates.

Investigators used the existing design on LockBit’s victim shaming website to feature press releases and free decryption tools.

Dubbed “Operation Cronos,” the law enforcement action involved the seizure of nearly three-dozen servers; the arrest of two alleged LockBit members; the unsealing of two indictments; the release of a free LockBit decryption tool; and the freezing of more than 200 cryptocurrency accounts thought to be tied to the gang’s activities.

LockBit members have executed attacks against thousands of victims in the United States and around the world, according to the U.S. Department of Justice (DOJ). First surfacing in September 2019, the gang is estimated to have made hundreds of millions of U.S. dollars in ransom demands, and extorted over $120 million in ransom payments.

LockBit operated as a ransomware-as-a-service group, wherein the ransomware gang takes care of everything from the bulletproof hosting and domains to the development and maintenance of the malware. Meanwhile, affiliates are solely responsible for finding new victims, and can reap 60 to 80 percent of any ransom amount ultimately paid to the group.

A statement on Operation Cronos from the European police agency Europol said the months-long infiltration resulted in the compromise of LockBit’s primary platform and other critical infrastructure, including the takedown of 34 servers in the Netherlands, Germany, Finland, France, Switzerland, Australia, the United States and the United Kingdom. Europol said two suspected LockBit actors were arrested in Poland and Ukraine, but no further information has been released about those detained.

The DOJ today unsealed indictments against two Russian men alleged to be active members of LockBit. The government says Russian national Artur Sungatov used LockBit ransomware against victims in manufacturing, logistics, insurance and other companies throughout the United States.

Ivan Gennadievich Kondratyev, a.k.a. “Bassterlord,” allegedly deployed LockBit against targets in the United States, Singapore, Taiwan, and Lebanon. Kondratyev is also charged (PDF) with three criminal counts arising from his alleged use of the Sodinokibi (aka “REvil“) ransomware variant to encrypt data, exfiltrate victim information, and extort a ransom payment from a corporate victim based in Alameda County, California.

With the indictments of Sungatov and Kondratyev, a total of five LockBit affiliates now have been officially charged. In May 2023, U.S. authorities unsealed indictments against two alleged LockBit affiliates, Mikhail “Wazawaka” Matveev and Mikhail Vasiliev.

Vasiliev, 35, of Bradford, Ontario, Canada, is in custody in Canada awaiting extradition to the United States (the complaint against Vasiliev is at this PDF). Matveev remains at large, presumably still in Russia. In January 2022, KrebsOnSecurity published Who is the Network Access Broker ‘Wazawaka,’ which followed clues from Wazawaka’s many pseudonyms and contact details on the Russian-language cybercrime forums back to a 31-year-old Mikhail Matveev from Abaza, RU.

An FBI wanted poster for Matveev.

In June 2023, Russian national Ruslan Magomedovich Astamirov was charged in New Jersey for his participation in the LockBit conspiracy, including the deployment of LockBit against victims in Florida, Japan, France, and Kenya. Astamirov is currently in custody in the United States awaiting trial.

LockBit was known to have recruited affiliates that worked with multiple ransomware groups simultaneously, and it’s unclear what impact this takedown may have on competing ransomware affiliate operations. The security firm ProDaft said on Twitter/X that the infiltration of LockBit by investigators provided “in-depth visibility into each affiliate’s structures, including ties with other notorious groups such as FIN7, Wizard Spider, and EvilCorp.”

In a lengthy thread about the LockBit takedown on the Russian-language cybercrime forum XSS, one of the gang’s leaders said the FBI and the U.K.’s National Crime Agency (NCA) had infiltrated its servers using a known vulnerability in PHP, a scripting language that is widely used in Web development.

Several denizens of XSS wondered aloud why the PHP flaw was not flagged by LockBit’s vaunted “Bug Bounty” program, which promised a financial reward to affiliates who could find and quietly report any security vulnerabilities threatening to undermine LockBit’s online infrastructure.

This prompted several XSS members to start posting memes taunting the group about the security failure.

“Does it mean that the FBI provided a pentesting service to the affiliate program?,” one denizen quipped. “Or did they decide to take part in the bug bounty program? :):)”

Federal investigators also appear to be trolling LockBit members with their seizure notices. LockBit’s data leak site previously featured a countdown timer for each victim organization listed, indicating the time remaining for the victim to pay a ransom demand before their stolen files would be published online. Now, the top entry on the shaming site is a countdown timer until the public doxing of “LockBitSupp,” the unofficial spokesperson or figurehead for the LockBit gang.

“Who is LockbitSupp?” the teaser reads. “The $10m question.”

In January 2024, LockBitSupp told XSS forum members he was disappointed the FBI hadn’t offered a reward for his doxing and/or arrest, and that in response he was placing a bounty on his own head — offering $10 million to anyone who could discover his real name.

“My god, who needs me?,” LockBitSupp wrote on Jan. 22, 2024. “There is not even a reward out for me on the FBI website. By the way, I want to use this chance to increase the reward amount for a person who can tell me my full name from USD 1 million to USD 10 million. The person who will find out my name, tell it to me and explain how they were able to find it out will get USD 10 million. Please take note that when looking for criminals, the FBI uses unclear wording offering a reward of UP TO USD 10 million; this means that the FBI can pay you USD 100, because technically, it’s an amount UP TO 10 million. On the other hand, I am willing to pay USD 10 million, no more and no less.”

Mark Stockley, cybersecurity evangelist at the security firm Malwarebytes, said the NCA is obviously trolling the LockBit group and LockBitSupp.

“I don’t think this is an accident—this is how ransomware groups talk to each other,” Stockley said. “This is law enforcement taking the time to enjoy its moment, and humiliate LockBit in its own vernacular, presumably so it loses face.”

In a press conference today, the FBI said Operation Cronos included investigative assistance from the Gendarmerie-C3N in France; the State Criminal Police Office (LKA) and Federal Criminal Police Office in Germany; Fedpol and Zurich Cantonal Police in Switzerland; the National Police Agency in Japan; the Australian Federal Police; the Swedish Police Authority; the National Bureau of Investigation in Finland; the Royal Canadian Mounted Police; and the National Police in the Netherlands.

The Justice Department said victims targeted by LockBit should contact the FBI at https://lockbitvictims.ic3.gov/ to determine whether affected systems can be successfully decrypted. In addition, the Japanese Police, supported by Europol, have released a recovery tool designed to recover files encrypted by the LockBit 3.0 Black Ransomware.

Combined Security Practices Changing the Game for Risk Management

By The Hacker News
A significant challenge within cyber security at present is that there are a lot of risk management platforms available in the market, but only some deal with cyber risks in a very good way. The majority will shout alerts at the customer as and when they become apparent and cause great stress in the process. The issue being that by using a reactive, rather than proactive approach, many risks

High-Severity Flaws Uncovered in Bosch Thermostats and Smart Nutrunners

By Newsroom
Multiple security vulnerabilities have been disclosed in Bosch BCC100 thermostats and Rexroth NXA015S-36V-B smart nutrunners that, if successfully exploited, could allow attackers to execute arbitrary code on affected systems. Romanian cybersecurity firm Bitdefender, which discovered the flaw in Bosch BCC100 thermostats last August, said the issue could be weaponized by an attacker to

Scaling Security Operations with Automation

By The Hacker News
In an increasingly complex and fast-paced digital landscape, organizations strive to protect themselves from various security threats. However, limited resources often hinder security teams when combatting these threats, making it difficult to keep up with the growing number of security incidents and alerts. Implementing automation throughout security operations helps security teams alleviate

6 Steps to Accelerate Cybersecurity Incident Response

By The Hacker News
Modern security tools continue to improve in their ability to defend organizations’ networks and endpoints against cybercriminals. But the bad actors still occasionally find a way in. Security teams must be able to stop threats and restore normal operations as quickly as possible. That’s why it’s essential that these teams not only have the right tools but also understand how to effectively

Russian Hackers Sandworm Cause Power Outage in Ukraine Amidst Missile Strikes

By Newsroom
The notorious Russian hackers known as Sandworm targeted an electrical substation in Ukraine last year, causing a brief power outage in October 2022. The findings come from Google's Mandiant, which described the hack as a "multi-event cyber attack" leveraging a novel technique for impacting industrial control systems (ICS). "The actor first used OT-level living-off-the-land (LotL) techniques to

Elon Musk Mocked Ukraine, and Russian Trolls Went Wild

By Matt Burgess
Inauthentic accounts on X flocked to its owner’s post about Ukrainian president Volodymyr Zelensky, hailing “Comrade Musk” and boosting pro-Russia propaganda.

16 New CODESYS SDK Flaws Expose OT Environments to Remote Attacks

By THN
A set of 16 high-severity security flaws have been disclosed in the CODESYS V3 software development kit (SDK) that could result in remote code execution and denial-of-service under specific conditions, posing risks to operational technology (OT) environments. The flaws, tracked from CVE-2022-47378 through CVE-2022-47393 and dubbed CoDe16, carry a CVSS score of 8.8 with the exception of CVE-2022-

Continuous Security Validation with Penetration Testing as a Service (PTaaS)

By THN
Validate security continuously across your full stack with Pen Testing as a Service. In today's modern security operations center (SOC), it's a battle between the defenders and the cybercriminals. Both are using tools and expertise – however, the cybercriminals have the element of surprise on their side, and a host of tactics, techniques, and procedures (TTPs) that have evolved. These external

5 Things CISOs Need to Know About Securing OT Environments

By The Hacker News
For too long the cybersecurity world focused exclusively on information technology (IT), leaving operational technology (OT) to fend for itself. Traditionally, few industrial enterprises had dedicated cybersecurity leaders. Any security decisions that arose fell to the plant and factory managers, who are highly skilled technical experts in other areas but often lack cybersecurity training or

Researchers Expose New Severe Flaws in Wago and Schneider Electric OT Products

By Ravie Lakshmanan
Three security vulnerabilities have been disclosed in operational technology (OT) products from Wago and Schneider Electric. The flaws, per Forescout, are part of a broader set of shortcomings collectively called OT:ICEFALL, which now comprises a total of 61 issues spanning 13 different vendors. "OT:ICEFALL demonstrates the need for tighter scrutiny of, and improvements to, processes related to

5 Reasons Why IT Security Tools Don't Work For OT

By The Hacker News
Attacks on critical infrastructure and other OT systems are on the rise as digital transformation and OT/IT convergence continue to accelerate. Water treatment facilities, energy providers, factories, and chemical plants — the infrastructure that undergirds our daily lives could all be at risk. Disrupting or manipulating OT systems stands to pose real physical harm to citizens, environments, and

Promising Jobs at the U.S. Postal Service, ‘US Job Services’ Leaks Customer Data

By BrianKrebs

A sprawling online company based in Georgia that has made tens of millions of dollars purporting to sell access to jobs at the United States Postal Service (USPS) has exposed its internal IT operations and database of nearly 900,000 customers. The leaked records indicate the network’s chief technology officer in Pakistan has been hacked for the past year, and that the entire operation was created by the principals of a Tennessee-based telemarketing firm that has promoted USPS employment websites since 2016.

The website FederalJobsCenter promises to get you a job at the USPS in 30 days or your money back.

KrebsOnSecurity was recently contacted by a security researcher who said he found a huge tranche of full credit card records exposed online, and that at first glance the domain names involved appeared to be affiliated with the USPS.

Further investigation revealed a long-running international operation that has been emailing and text messaging people for years to sign up at a slew of websites that all promise they can help visitors secure employment at the USPS.

Sites like FederalJobsCenter[.]com also show up prominently in Google search results for USPS employment, and steer applicants toward making credit card “registration deposits” to ensure that one’s application for employment is reviewed. These sites also sell training, supposedly to help ace an interview with USPS human resources.

FederalJobsCenter’s website is full of content that makes it appear the site is affiliated with the USPS, although its “terms and conditions” state that it is not. Rather, the terms state that FederalJobsCenter is affiliated with an entity called US Job Services, which says it is based in Lawrenceville, Ga.

“US Job Services provides guidance, coaching, and live assistance to postal job candidates to help them perform better in each of the steps,” the website explains.

The site says applicants need to make a credit card deposit to register, and that this amount is refundable if the applicant is not offered a USPS job within 30 days after the interview process.

But a review of the public feedback on US Job Services and dozens of similar names connected to this entity over the years shows a pattern of activity: Applicants pay between $39.99 and $100 for USPS job coaching services, and receive little if anything in return. Some reported being charged the same amount monthly.

The U.S. Federal Trade Commission (FTC) has sued several times over the years to disrupt various schemes offering to help people get jobs at the Postal Service. Way back in 1998, the FTC and the USPS took action against several organizations that were selling test or interview preparation services for potential USPS employees.

“Companies promising jobs with the U.S. Postal Service are breaking federal law,” the joint USPS-FTC statement said.

In that 1998 case, the defendants behind the scheme were taking out classified ads in newspapers. Ditto for a case the FTC brought in 2005. By 2008, the USPS job exam preppers had shifted to advertising their schemes mostly online. And in 2013, the FTC won a nearly $5 million judgment against a Kentucky company purporting to offer such services.

Tim McKinlay authored a report last year at Affiliateunguru.com on whether the US Job Services website job-postal[.]com was legitimate or a scam. He concluded it was a scam based on several factors, including that the website listed multiple other names (suggesting it had recently switched names), and that he got nothing from the transaction with the job site.

“They openly admit they’re not affiliated with the US Postal Service, but claim to be experts in the field, and that, just by following the steps on their site, you easily pass the postal exams and get a job in no time,” McKinlay wrote. “But it’s really just a smoke and mirrors game. The site’s true purpose is to collect $46.95 from as many people as possible. And considering how popular this job is, they’re probably making a killing.”

US JOB SERVICES

KrebsOnSecurity was alerted to the data exposure by Patrick Barry, chief information officer at Charlotte, NC based Rebyc Security. Barry said he found that not only was US Job Services leaking its customer payment records in real-time and going back to 2016, but its website also leaked a log file from 2019 containing the site administrator’s contact information and credentials to the site’s back-end database.

Barry shared screenshots of that back-end database, which show the email address for the administrator of US Job Services is tab.webcoder@gmail.com. According to cyber intelligence platform Constella Intelligence, that email address is tied to the LinkedIn profile for a developer in Karachi, Pakistan named Muhammed Tabish Mirza.

A search on tab.webcoder@gmail.com at DomainTools.com reveals that email address was used to register several USPS-themed domains, including postal2017[.]com, postaljobscenter[.]com and usps-jobs[.]com.

Mr. Mirza declined to respond to questions, but the exposed database information was removed from the Internet almost immediately after KrebsOnSecurity shared the offending links.

A “Campaigns” tab on that web panel listed several advertising initiatives tied to US Job Services websites, with names like “walmart drip campaign,” “hiring activity due to virus,” “opt-in job alert SMS,” and “postal job opening.”

Another page on the US Job Services panel included a script for upselling people who call in response to email and text message solicitations, with an add-on program that normally sells for $1,200 but is being “practically given away” for a limited time, for just $49.

An upselling tutorial for call center employees.

“There’s something else we have you can take advantage of that can help you make more money,” the script volunteers. “It’s an easy to use 12-month career development plan and program to follow that will result in you getting any job you want, not just at the post office….anywhere…and then getting promoted rapidly.”

It’s bad enough that US Job Services was leaking customer data: Constella Intelligence says the email address tied to Mr. Mirza shows up in more than a year’s worth of “bot logs” created by a malware infection from the Redline infostealer.

Constella reports that for roughly a year between 2021 and 2022, a Microsoft Windows device regularly used by Mr. Mirza and his colleagues was actively uploading all of the device’s usernames, passwords and authentication cookies to cybercriminals based in Russia.

NEXT LEVEL SUPPORT

The web-based backend for US Job Services lists more than 160 people under its “Users & Teams” tab. This page indicates that access to the consumer and payment data collected by US Job Services is currently granted to several other coders who work with Mr. Mirza in Pakistan, and to multiple executives, contractors and employees working for a call center in Murfreesboro, Tennessee.

The call center — which operates as Nextlevelsupportcenters[.]com and thenextlevelsupport[.]com — curiously has several key associates with a history of registering USPS jobs-related domain names.

The US Job Services website has more than 160 users, including most of the employees at Next Level Support.

The website for NextLevelSupport says it was founded in 2017 by a Gary Plott, whose LinkedIn profile describes him as a seasoned telecommunications industry expert. The leaked backend database for US Job Services says Plott is a current administrator on the system, along with several other Nextlevel founders listed on the company’s site.

Reached via telephone, Plott initially said his company was merely a “white label” call center that multiple clients use to interact with customers, and that the content their call center is responsible for selling on behalf of US Job Services was not produced by NextLevelSupport.

“A few years ago, we started providing support for this postal product,” Plott said. “We didn’t develop the content but agreed we would support it.”

Interestingly, DomainTools says the Gmail address used by Plott in the US Jobs system was also used to register multiple USPS job-related domains, including postaljobssite[.]com, postalwebsite[.]com, usps-nlf[.]com, usps-nla[.]com.

Asked to reconcile this with his previous statement, Plott said he never did anything with those sites but acknowledged that his company did decide to focus on the US Postal jobs market from the very beginning.

Plott said his company never refuses to issue a money-back request from a customer, because doing so would result in costly chargebacks for NextLevel (and presumably for the many credit card merchant accounts apparently set up by Mr. Mirza).

“We’ve never been deceptive,” Plott said, noting that customers of the US Job Services product receive a digital download with tips on how to handle a USPS interview, as well as unlimited free telephone support if they need it.

“We’ve never told anyone we were the US Postal Service,” Plott continued. “We make sure people fully understand that they are not required to buy this product, but we think we can help you and we have testimonials from people we have helped. But ultimately you as the customer make that decision.”

An email address in the US Job Services teams page for another user — Stephanie Dayton — was used to register the domains postalhiringreview[.]com, and postalhiringreviewboard[.]org back in 2014. Reached for comment, Ms. Dayton said she has provided assistance to Next Level Support Centers with their training and advertising, but never in the capacity as an employee.

Perhaps the most central NextLevel associate who had access to US Job Services was Russell Ramage, a telemarketer from Warner Robins, Georgia. Ramage is listed in South Carolina incorporation records as the owner of a now-defunct call center service called Smart Logistics, a company whose name appears in the website registration records for several early and long-running US Job Services sites.

According to the state of Georgia, Russell Ramage was the registered agent of several USPS job-themed companies.

The leaked records show the email address used by Ramage also registered multiple USPS jobs-related domains, including postalhiringcenter[.]com, postalhiringreviews[.]com, postaljobs-email[.]com, and postaljobssupport1[.]com.

A review of business incorporation records in Georgia indicates Ramage was the registered agent for at least three USPS-related companies over the years, including Postal Career Placement LLC, Postal Job Services Inc., and Postal Operations Inc. All three companies were founded in 2015 and are now dissolved.

An obituary dated February 2023 says Russell Ramage recently passed away at the age of 41. No cause of death was stated, but the obituary goes on to say that Russ “Rusty” Ramage was “preceded in death by his mother, Anita Lord Ramage, pets, Raine and Nola and close friends, Nicole Reeves and Ryan Rawls.”

In 2014, then 33-year-old Ryan “Jootgater” Rawls of Alpharetta, Georgia pleaded guilty to conspiring to distribute controlled substances. Rawls also grew up in Warner Robins, and was one of eight suspects charged with operating a secret darknet narcotics ring called the Farmer’s Market, which federal prosecutors said trafficked in millions of dollars worth of controlled substances.

Reuters reported that an eighth suspect in that case had died by the time of Rawls’ 2014 guilty plea, although prosecutors declined to offer further details about that. According to his obituary, Ryan Christopher Rawls died at the age of 38 on Jan. 28, 2019.

In a comment on Ramage’s memorial wall, Stephanie Dayton said she began working with Ramage in 2006.

“Our friendship far surpassed a working one, we had a very close bond and became like brother and sister,” Dayton wrote. “I loved Russ deeply and he was like family. He was truly one of the best human beings I have ever known. He was kind and sweet and truly cared about others. Never met anyone like him. He will be truly missed. RIP brother.”

The FTC and USPS note that while applicants for many entry-level postal jobs are required to take a free postal exam, the tests are usually offered only every few years in any particular district, and there are no job placement guarantees based on score.

“If applicants pass the test by scoring at least 70 out of 100, they are placed on a register, ranked by their score,” the FTC explained. “When a position becomes open, the local post office looks to the applicable register for that geographic location and calls the top three applicants. The score is only one of many criteria taken into account for employment. The exams test general aptitude, something that cannot necessarily be increased by studying.”

The FTC says anyone interested in a job at the USPS should inquire at their local postal office, where applicants generally receive a free packet of information about required exams. More information about job opportunities at the postal service is available at the USPS’s careers website.

Michael Martel, spokesperson for the United States Postal Inspection Service, said in a written statement that the USPS has no affiliation with the websites or companies named in this story.

“To learn more about employment with USPS, visit USPS.com/careers,” Martel wrote. “If you are the victim of a crime online report it to the FBI’s Internet Crime Complaint Center (IC3) at www.ic3.gov. To report fraud committed through or toward the USPS, its employees, or customers, report it to the United States Postal Inspection Service (USPIS) at www.uspis.gov/report.”

According to the leaked back-end server for US Job Services, here is a list of the current sites selling this product:

usjobshelpcenter[.]com
usjobhelpcenter[.]com
job-postal[.]com
localpostalhiring[.]com
uspostalrecruitment[.]com
postalworkerjob[.]com
next-level-now[.]com
postalhiringcenters[.]com
postofficehiring[.]com
postaljobsplacement[.]com
postal-placement[.]com
postofficejobopenings[.]com
postalexamprep[.]com
postaljobssite[.]com
postalwebsite[.]com
postalcareerscenters[.]com
postal-hiring[.]com
postal-careers[.]com
postal-guide[.]com
postal-hiring-guide[.]com
postal-openings[.]com
postal-placement[.]com
postofficeplacements[.]com
postalplacementservices[.]com
postaljobs20[.]com
postal-jobs-placement[.]com
postaljobopenings[.]com
postalemployment[.]com
postaljobcenters[.]com
postalmilitarycareers[.]com
epostaljobs[.]com
postal-job-center[.]com
postalcareercenter[.]com
postalhiringcenters[.]com
postal-job-center[.]com
postalcareercenter[.]com
postalexamprep[.]com
postalplacementcenters[.]com
postalplacementservice[.]com
postalemploymentservices[.]com
uspostalhiring[.]com

Beyond Traditional Security: NDR's Pivotal Role in Safeguarding OT Networks

By The Hacker News
Why is Visibility into OT Environments Crucial? The significance of Operational Technology (OT) for businesses is undeniable as the OT sector flourishes alongside the already thriving IT sector. OT includes industrial control systems, manufacturing equipment, and devices that oversee and manage industrial environments and critical infrastructures. In recent years, adversaries have recognized the

UK Sets Up Fake Booter Sites To Muddy DDoS Market

By BrianKrebs

The United Kingdom’s National Crime Agency (NCA) has been busy setting up phony DDoS-for-hire websites that seek to collect information on users, remind them that launching DDoS attacks is illegal, and generally increase the level of paranoia for people looking to hire such services.

The warning displayed to users on one of the NCA’s fake booter sites. Image: NCA.

The NCA says all of its fake so-called “booter” or “stresser” sites — which have so far been accessed by several thousand people — have been created to look like they offer the tools and services that enable cyber criminals to execute these attacks.

“However, after users register, rather than being given access to cyber crime tools, their data is collated by investigators,” reads an NCA advisory on the program. “Users based in the UK will be contacted by the National Crime Agency or police and warned about engaging in cyber crime. Information relating to those based overseas is being passed to international law enforcement.”

The NCA declined to say how many phony booter sites it had set up, or for how long they have been running. The NCA says hiring or launching attacks designed to knock websites or users offline is punishable in the UK under the Computer Misuse Act 1990.

“Going forward, people who wish to use these services can’t be sure who is actually behind them, so why take the risk?” the NCA announcement continues.

The NCA campaign comes closely on the heels of an international law enforcement takedown involving four-dozen websites that made powerful DDoS attacks a point-and-click operation.

In mid-December 2022, the U.S. Department of Justice (DOJ) announced “Operation Power Off,” which seized four-dozen booter business domains responsible for more than 30 million DDoS attacks, and charged six U.S. men with computer crimes related to their alleged ownership of popular DDoS-for-hire services. In connection with that operation, the NCA also arrested an 18-year-old man suspected of running one of the sites.

According to U.S. federal prosecutors, the use of booter and stresser services to conduct attacks is punishable under both wire fraud laws and the Computer Fraud and Abuse Act (18 U.S.C. § 1030), and may result in arrest and prosecution, the seizure of computers or other electronics, as well as prison sentences and a penalty or fine.

The United Kingdom, which has been battling its fair share of domestic booter bosses, started running online ads in 2020 aimed at young people who search the Web for booter services.

As part of last year’s mass booter site takedown, the FBI and the Netherlands Police joined the NCA in announcing they are running targeted placement ads to steer those searching for booter services toward a website detailing the potential legal risks of hiring an online attack.

Black Hat Europe 2022 NOC: The SOC Inside the NOC

By Jessica Bair

Our core mission in the NOC is network resilience. We also provide integrated security, visibility and automation, a SOC inside the NOC.

In part one, we covered:

  • Designing the Black Hat Network, by Evan Basta
  • AP Placement Planning, by Sandro Fasser
  • Wi-Fi Air Marshal, by Jérémy Couture, Head of SOC, Paris 2024 Olympic Games
  • Meraki Dashboards, by Rossi Rosario Burgos
  • Meraki Systems Manager, by Paul Fidler
  • A Better Way to Design Training SSIDs/VLANs, by Paul Fidler

In part two, we are going deep with security:

  • Integrating Security
  • First Time at Black Hat, by Jérémy Couture, Head of SOC, Paris 2024 Olympic Games
  • Trojan on an Attendee Laptop, by Ryan MacLennan
  • Automated Account Provisioning, by Adi Sankar
  • Integrating Meraki Scanning Data with Umbrella Security Events, by Christian Clasen
  • Domain Name Service Statistics, by Adi Sankar

Integrating Security

As the needs of Black Hat evolved, so did the Cisco Secure Technologies in the NOC:

The SecureX dashboard made it easy to see the status of each of the connected Cisco Secure technologies.

Since joining the Black Hat NOC in 2016, my goal remains integration and automation. As a NOC team made up of many technologies and companies, we are pleased that this Black Hat NOC was the most integrated to date, providing an overall SOC cybersecurity architecture solution.

We have ideas for even more integrations for Black Hat Asia and Black Hat USA 2023. Thank you, Piotr Jarzynka, for designing the integration diagram.

Below are the SecureX threat response integrations for Black Hat Europe, empowering analysts to investigate Indicators of Compromise very quickly, with one search.

The original Black Hat NOC integration for Cisco was NetWitness sending suspicious files to Threat Grid (now Secure Malware Analytics). We expanded that in 2022 with Palo Alto Networks Cortex XSOAR and used it in London for the investigation of a malicious payload attack.

NetWitness observed a targeted attack against the Black Hat network. The attack was intended to compromise the network.

NetWitness extracted the payload and sent it to Secure Malware Analytics for detonation.

Reviewing the analysis report, we were able to quickly determine it was the MyDoom worm, which would have been very damaging.

The attack was blocked at the perimeter and the analysts were able to track and enrich the incident in XSOAR.

First Time at Black Hat, by Jérémy Couture, Head of SOC, Paris 2024 Olympic Games

My first time at Black Hat turned out to be an incredible journey!

Thanks to the cybersecurity partnership between Paris 2024 and Cisco, I was able to join the Cisco crew and operate in the NOC/SOC as a Threat Hunter on the most dangerous network in the world for this European edition of Black Hat.

On my first day, I helped deploy the network by installing the wireless Meraki APs at the venue and learning how they were configured and how they could help analysts identify and locate any client connected to the network that misbehaved during the event, the idea being to protect attendees if an attack were to spread across the network.

Following this “physical” deployment, I was able to access the whole Cisco Secure environment, including Meraki, Secure Malware Analytics, Umbrella, SecureX and the other Black Hat NOC partners’ software tools.

SecureX was definitely the product I most wanted to step up on. With such fantastic professionals around me, we were able to dig into the product, identifying potential use cases to deploy in the orchestration module and integrations we expect to use for Paris 2024.

Time was flying, and so were the attendees to the conference; a network without users is fun but can be quite boring, as nothing happens. Having so many cybersecurity professionals in the same place testing different malware, attacks and so on led us to very interesting investigations. It is a paradox of Black Hat: we do not want to block malicious content, as it could be part of exercises or training classes, quite a different mindset from what we, as security defenders, are used to! Using the different components, we were able to find observables/IOCs that we investigated through SecureX. Because SecureX is connected to all the other components, it helped us enrich the observables (IPs, URLs, domains…), understand the criticality of what we identified (such as malware payloads), and even led us to poke the folks in the training classes to let them know that something really wrong was happening on their devices.

Being part of the Black Hat NOC was an incredible experience. I was able to meet fantastic professionals, fully committed to making the event a success for all attendees and exhibitors. It also helped me better understand how products that we use or will use within Paris 2024 could be leveraged for our needs, and which indicators could be added to our various dashboards, helping us to identify, instantaneously, that something is happening.

Trojan on an Attendee Laptop, by Ryan MacLennan

During the last day of Black Hat Europe, our NOC partner NetWitness saw some files being downloaded on the network. The integration again automatically carved out the files and submitted them to the Cisco Secure Malware Analytics (SMA) platform. One of those files came back as a trojan after SMA detonated it in a sandbox environment. The specific hash is the SHA-256 below:

938635a0ceed453dc8ff60eab20c5d168a882bdd41792e5c5056cc960ebef575

The screenshot below shows some of the behaviors that influenced the decision:

Seeing these behaviors caused SMA to give it the highest judgement score available to a detonated file:

After this judgement was made, we connected with the Palo Alto Networks team, and they found the IP address associated with the file download.

Once we had this information, we went to the Meraki dashboard and did a search for the IP address. The search returned only one client that had been associated with the address for the entire Black Hat conference.
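
For reference, the same lookup can also be scripted against the Meraki Dashboard API rather than done in the dashboard UI. The snippet below is only a minimal sketch under stated assumptions: the API key, network ID and target IP are placeholders, and the "ip" filter on the clients endpoint should be verified against the Dashboard API version in use.

# Minimal sketch: find which client used a given IP during the conference,
# via the Meraki Dashboard API. MERAKI_API_KEY, NETWORK_ID and TARGET_IP
# are placeholders; verify the "ip" filter against your API version.
import requests

MERAKI_API_KEY = "REPLACE_ME"          # hypothetical API key
NETWORK_ID = "N_1234567890"            # hypothetical conference wireless network ID
TARGET_IP = "10.0.0.42"                # IP address tied to the suspicious download

resp = requests.get(
    f"https://api.meraki.com/api/v1/networks/{NETWORK_ID}/clients",
    headers={"X-Cisco-Meraki-API-Key": MERAKI_API_KEY},
    params={"ip": TARGET_IP, "timespan": 60 * 60 * 24 * 4},  # look back over the event
    timeout=30,
)
resp.raise_for_status()

for client in resp.json():
    # Each record includes the client MAC, description and last-seen SSID/AP,
    # which is enough to start locating the attendee on the Meraki map.
    print(client.get("mac"), client.get("description"),
          client.get("ssid"), client.get("recentDeviceName"))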

Knowing that only one client had been associated with the address made finding the attendee easier. We then needed to know where they were, and Meraki had this figured out: after opening the client’s profile, we saw which SSID and access point (AP) they were connected to using the Meraki location map.

We then found the attendee and let them know to have their IT inspect their laptop to make sure it is clean.

Apart from the technical challenges of running a temporary network for N thousand people, the Black Hat event reminded us that success doesn’t happen without teamwork; that leadership isn’t just about keeping the project on track, but also about looking after the team; and that small details in planning, build-up and tear-down can be just as important as having all the right tools and fantastically skilled individuals using them during the event itself.

Automated Account Provisioning, by Adi Sankar

In the Cisco Secure technology stack, within the Black Hat NOC, we use SecureX Single Sign-on. This reduces the confusion of managing multiple accounts and passwords. It also streamlines the integrations between the Cisco products and our fellow NOC partners. We have an open ecosystem approach to integrations and access in the NOC, so we will provision Cisco Secure accounts for any staff member of the NOC. Logging into each individual console and creating an account is time consuming and can often lead to confusion on which tools to provision and which permission levels are needed.

To automate this process, I developed two workflows: one to create non-admin users for NOC partners and one to create administrator accounts in all the tools for Cisco staff. The workflows create accounts in SecureX, Secure Malware Analytics (Threat Grid), Umbrella DNS and Meraki dashboard, all using SecureX Single Sign-On.

Here is what the workflow looks like for creating non-admin users.

The workflow requires three inputs: first name, last name, and email. Click Run.

The sequence of API calls is as follows:

  • Generate a SecureX token to access the SecureX API including the “admin/invite:write, invite:write” scopes.
  • Invite the User to SecureX using the invite API (https://visibility.amp.cisco.com/iroh/invite/index.html#/). In the body of this POST the role is set to “user”. In the Administrator workflow this would be set to “admin” allowing full access to SecureX.
  • If the invite fails due to a duplicate invite, print an error message in Webex teams.
  • Invite the user to the Meraki dashboard using the “admins” API (https://api.meraki.com/api/v1/organizations/{organizationId}/admins). In the body of this call, the organization access is set to none, and access to two networks (Wireless network and Systems Manager) is set to “read-only” to ensure the user cannot make any changes that affect the network. In the Administrator version, org access is still set to none, but “full” permissions are provided to the two networks, something we do not want all users to have.
  • Generate a token to the new Umbrella API using https://api.umbrella.com/auth/v2/token with the following scopes (read admin users, write admin users, read admin roles). This single endpoint for generating a token based on scopes has made using the Umbrella API significantly easier.
  • Then invite the user to Umbrella using the “admins” API at (https://api.umbrella.com/admin/v2/users) and in the body of this POST the “role ID” is set to 2 to ensure read-only permissions are provisioned for Umbrella.
  • Create a user in Secure Malware Analytics using the API at (https://panacea.threatgrid.com/api/v3/organizations/<ORG_ID>/users). The body of this request simply creates a Malware Analytics login using the user’s last name and appending “_blackhat”.
  • The last call sends a password reset email for the Malware Analytics user (https://panacea.threatgrid.com/api/v3/users/<LOGIN>/password-email). They can set their password via the email, log in to the Malware Analytics console and then link their SecureX sign-on account, which means they will no longer need to use their Malware Analytics credentials.

Once the workflow has completed successfully, the user will receive four emails to create a SecureX Sign-On account and accept the invitations to the various products. These workflows really improved our responsiveness to account provisioning requests and make it much easier to collaborate with other NOC partners.
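
To make two of the calls above concrete, here is a minimal sketch of inviting a read-only Meraki dashboard admin and obtaining an Umbrella API token. The endpoint paths come from the list above; the organization ID, network IDs and credentials are placeholders, and error handling is reduced to the essentials, so treat it as an illustration rather than the actual workflow.

# Minimal sketch of two steps from the provisioning workflow described above:
# (1) invite a read-only admin to the Meraki dashboard, and (2) obtain an
# Umbrella API token via the auth/v2/token endpoint. All IDs, keys and
# client credentials below are placeholders.
import requests

MERAKI_API_KEY = "REPLACE_ME"
ORG_ID = "123456"                      # hypothetical organization ID
WIRELESS_NET = "N_111"                 # hypothetical wireless network ID
SM_NET = "N_222"                       # hypothetical Systems Manager network ID

def invite_meraki_readonly(first: str, last: str, email: str) -> dict:
    # Non-admin Meraki user: no org access, read-only on the two networks.
    body = {
        "email": email,
        "name": f"{first} {last}",
        "orgAccess": "none",
        "networks": [
            {"id": WIRELESS_NET, "access": "read-only"},
            {"id": SM_NET, "access": "read-only"},
        ],
    }
    r = requests.post(
        f"https://api.meraki.com/api/v1/organizations/{ORG_ID}/admins",
        headers={"X-Cisco-Meraki-API-Key": MERAKI_API_KEY},
        json=body,
        timeout=30,
    )
    r.raise_for_status()
    return r.json()

def get_umbrella_token(client_id: str, client_secret: str) -> str:
    # OAuth token from the Umbrella auth endpoint referenced above.
    r = requests.post(
        "https://api.umbrella.com/auth/v2/token",
        auth=(client_id, client_secret),
        data={"grant_type": "client_credentials"},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["access_token"]

if __name__ == "__main__":
    invite_meraki_readonly("Jane", "Doe", "jane.doe@example.com")
    token = get_umbrella_token("UMBRELLA_KEY", "UMBRELLA_SECRET")
    print("Umbrella token acquired:", bool(token))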

Integrating Meraki Scanning Data with Umbrella Security Events, by Christian Clasen

Over the previous Black Hat events, we have been utilizing Meraki scanning data to get location data for individual clients as they roamed the conference. In the initial blog post (Black Hat Asia 2022), we created a Docker container to accept the data from the Meraki Scanning API and save it for future analysis. At Black Hat USA 2022, we wrote about how to use Python Folium to turn the flat text files into chronological heatmaps that illustrated the density of clients throughout the conference.

This time around, we’ve stepped it up again by integrating Umbrella DNS Security events and adding the ability to track clients across the heatmap using their local IP address.

To improve the portability of our data and the efficiency of our code, we began by moving from flat JSON files to a proper database. We chose SQLite this time around, though going forward we will likely use Mongo.

Both can be queried directly into Python Pandas dataframes, which gives us the performance we are looking for. We have a dedicated Docker container (Meraki-Receiver) that validates the incoming data stream from the Meraki dashboard and inserts the values into the database.
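
A stripped-down receiver of this kind might look like the sketch below. It is only illustrative: it assumes Flask, a shared validator string and secret configured in the Meraki dashboard, and observation field names that should be checked against the Scanning API version actually in use.

# Illustrative sketch of a scanning-data receiver: answer the Meraki dashboard's
# validation GET, then accept POSTed observations and store them in SQLite.
# VALIDATOR/SECRET are placeholders, and the observation field names are an
# assumption based on the Scanning API payload; verify against your version.
import sqlite3
from flask import Flask, request, abort

VALIDATOR = "copied-from-meraki-dashboard"   # hypothetical validator string
SECRET = "shared-secret"                      # hypothetical shared secret

app = Flask(__name__)
# Single shared connection kept simple for the sketch.
db = sqlite3.connect("scanning.db", check_same_thread=False)
db.execute(
    "CREATE TABLE IF NOT EXISTS observations "
    "(client_mac TEXT, ip TEXT, lat REAL, lng REAL, seen_epoch INTEGER)"
)

@app.get("/scanning")
def validate():
    # Meraki fetches this URL once and expects the validator string back.
    return VALIDATOR

@app.post("/scanning")
def receive():
    payload = request.get_json(force=True)
    if payload.get("secret") != SECRET:
        abort(403)
    rows = []
    for obs in payload.get("data", {}).get("observations", []):
        loc = obs.get("location") or {}
        rows.append((obs.get("clientMac"), obs.get("ipv4"),
                     loc.get("lat"), loc.get("lng"), obs.get("seenEpoch")))
    with db:
        db.executemany("INSERT INTO observations VALUES (?, ?, ?, ?, ?)", rows)
    return "", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)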

The database is stored on a Docker volume that can be mounted by our second container, the Meraki-Mapper. Though this container’s primary purpose is building the heatmaps, it also performs the task of retrieving and correlating Umbrella DNS security events. That is, any DNS query from the Black Hat network that matches one of several predefined security categories. Umbrella’s APIs were recently improved to add OAuth and simplify the URI scheme for each endpoint. After retrieving a token, we can get all security events in the time frame of the current heatmap with one call.
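
Roughly, that retrieval step could be sketched as follows. The token call matches the auth endpoint used elsewhere in this post, but the reporting URL below is deliberately a placeholder; substitute the activity endpoint and filter parameters from the Umbrella Reporting API documentation.

# Sketch of pulling Umbrella DNS security events for the current heatmap window.
# REPORTS_ACTIVITY_URL and the filter parameters are assumptions/placeholders;
# consult the Umbrella Reporting API docs for the exact path and query options.
import time
import requests

def get_token(client_id: str, client_secret: str) -> str:
    r = requests.post("https://api.umbrella.com/auth/v2/token",
                      auth=(client_id, client_secret),
                      data={"grant_type": "client_credentials"}, timeout=30)
    r.raise_for_status()
    return r.json()["access_token"]

REPORTS_ACTIVITY_URL = "https://api.umbrella.com/REPLACE_WITH_REPORTING_ACTIVITY_PATH"

def security_events(token: str, window_minutes: int = 5) -> list:
    # DNS activity in security categories for the last heatmap window.
    now_ms = int(time.time() * 1000)
    params = {
        "from": now_ms - window_minutes * 60 * 1000,
        "to": now_ms,
        "limit": 500,   # add category/verdict filters per the Umbrella docs
    }
    r = requests.get(REPORTS_ACTIVITY_URL, params=params,
                     headers={"Authorization": f"Bearer {token}"}, timeout=30)
    r.raise_for_status()
    return r.json().get("data", [])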

What we want to do with these events is create Folium Markers. These are static “pins” that sit on the map to indicate where the DNS query originated. Clicking on a marker pops up more information about the query and the client who sent it.

Thanks to the Umbrella Virtual Appliances in the Black Hat network, we have the internal IP address of the client who sent the DNS query. We also have the internal IP address in the Meraki scanning data, along with the latitude and longitude. After converting the database query into a Pandas dataframe, our logic takes the IP address from the DNS query and finds all instances in the database of location data for that IP within a 5-minute window (the resolution of our heatmap).

What we end up with is a list of dictionaries representing the markers we want to add to the map. Using Bootstrap, we can format the popup for each event to make it look a bit more polished. Folium’s Popup plugin allows for an iFrame for each marker popup.
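
The correlation and marker logic can be sketched roughly as shown below, assuming the SQLite schema from the illustrative receiver above and a Folium map object already built for the current 5-minute window; the event dictionary keys are assumptions chosen for the example.

# Rough sketch: correlate a DNS security event with scanning locations and
# drop a Folium marker at the client's last known position. Table/column
# names follow the receiver sketch above; the event dict shape is assumed.
import sqlite3
import pandas as pd
import folium

def locations_for_ip(db_path: str, ip: str, event_epoch: int, window_s: int = 300) -> pd.DataFrame:
    # All scanning observations for this IP within the 5-minute heatmap window.
    conn = sqlite3.connect(db_path)
    df = pd.read_sql_query(
        "SELECT lat, lng, seen_epoch FROM observations "
        "WHERE ip = ? AND seen_epoch BETWEEN ? AND ?",
        conn, params=(ip, event_epoch - window_s, event_epoch + window_s),
    )
    conn.close()
    return df

def add_event_marker(fmap: folium.Map, event: dict, df: pd.DataFrame) -> None:
    # Pin the most recent known location of the client that issued the query.
    if df.empty:
        return
    row = df.sort_values("seen_epoch").iloc[-1]
    html = (f"<b>{event['domain']}</b><br>"
            f"Category: {event['category']}<br>Client: {event['internal_ip']}")
    popup = folium.Popup(folium.IFrame(html, width=220, height=90))
    folium.Marker(
        location=[row["lat"], row["lng"]],
        popup=popup,
        icon=folium.Icon(color="red", icon="pushpin"),
    ).add_to(fmap)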

The result is a moving heatmap covering an entire day on a given conference floor, complete with markers indicating security events (the red pushpin icon).

Clicking on the pushpin shows the details of the query, allowing us in the NOC to see the exact location of the client when they sent it.

To further improve this service during the next conference, we plan to implement a web page where NOC staff can submit an IP address and immediately get map tracking that client through the conference floor. This should give us an even more efficient way to find and notify folks who are either behaving maliciously or appear to be infected.

Domain Name Service Statistics, by Adi Sankar

For years we have been tracking the DNS stats at the Black Hat conferences. The post-pandemic 2022 numbers look like we never skipped a beat after the dip in DNS queries in 2021, seen in the bar graph below. This year’s attendance saw well over 11 million total DNS queries.

The Activity volume view from Umbrella gives a top-level glance of activity by category, which we can drill into for deeper threat hunting. On trend with previous Black Hat Europe events, the top Security categories were Dynamic DNS and Newly Seen Domains. However, it’s worth noting a proportionally larger increase in the cryptomining and phishing categories, from 9 to 17 and 28 to 73, respectively, compared to last year.

This year, Black Hat saw over 4,100 apps connect to the network, nearly double what was seen last year, though still not topping the more than 6,100 apps seen at Black Hat USA earlier this year.

Should the need arise, we can block any application, such as Mail.ru above.

Black Hat Europe 2022 was the best planned and executed NOC in my experience, with the most integrations and visibility. This allowed us the time to deal with problems, which will always arise.

We are very proud of the collaboration of the team and the NOC partners.

Black Hat Asia will be in May 2023, at the Marina Bay Sands, Singapore…hope to see you there!

Acknowledgments

Thank you to the Cisco NOC team:

  • Cisco Secure: Ian Redden, Christian Clasen, Aditya Sankar, Ryan MacLennan, Guillaume Buisson, Jerome Schneider, Robert Taylor, Piotr Jarzynka, Tim Wadhwa-Brown and Matthieu Sprunck
  • Threat Hunter / Paris 2024 Olympics SOC: Jérémy Couture
  • Meraki Network: Evan Basta, Sandro Fasser, Rossi Rosario Burgos, Otis Ioannou, Asmae Boutkhil, Jeffry Handal and Aleksandar Dimitrov Vladimirov
  • Meraki Systems Manager: Paul Fidler

Also, to our NOC partners NetWitness (especially David Glover, Iain Davidson, Alessandro Contini and Alessandro Zatti), Palo Alto Networks (especially James Holland, Matt Ford, Matt Smith and Mathew Chase), Gigamon, IronNet, and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, Steve Fink, James Pope, Jess Stafford and Steve Oldenbourg).

About Black Hat

For 25 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

Black Hat Europe 2022 NOC: When planning meets execution

By Jessica Bair

In this blog about the design, deployment and automation of the Black Hat network, we have the following sections:

  • Designing the Black Hat Network, by Evan Basta
  • AP Placement Planning, by Sandro Fasser
  • Wi-Fi Air Marshal, by Jérémy Couture, Head of SOC, Paris 2024 Olympic Games
  • Meraki Dashboards, by Rossi Rosario Burgos
  • Meraki Systems Manager, by Paul Fidler
  • A Better Way to Design Training SSIDs/VLANs, by Paul Fidler

Cisco is honored to be a Premium Partner of the Black Hat NOC, and is the Official Network Platform, Mobile Device Management, Malware Analysis and DNS (Domain Name Service) Provider of Black Hat.

2022 was Cisco’s sixth year as a NOC partner for Black Hat Europe. However, it was our first time building the network for Black Hat Europe. We used our experiences from Black Hat Asia 2022 and Black Hat USA 2022 to refine the planning for network topology design and equipment. Below are our fellow NOC partners providing hardware to build and secure the network for our joint customer: Black Hat.

Designing the Black Hat Network, by Evan Basta

We are grateful to share that Black Hat Europe 2022 was the smoothest experience we’ve had in our years at Black Hat. This is thanks to the 15 Cisco Meraki and Cisco Secure engineers on site (plus virtually supporting engineers) who built, operated and secured the network, and to great NOC leadership and collaborative partners.

Planning, configuring, deploying (in two days), maintaining the resilience of, and recovering (in four hours) an enterprise-class network took a lot of coordination. We appreciate the Black Hat NOC leadership, Informa and the NOC partners, who met with us each week to discuss the best design, staffing, gear selection and deployment to meet the unique needs of the conference. Check out the “Meraki Unboxed” podcast – Episode 94: Learnings from the Black Hat Europe 2022 Cybersecurity Event.

We must allow real malware on the Black Hat network for training, demonstrations, and briefing sessions, while protecting attendees from attacks by their fellow attendees within the network and preventing bad actors from using the network to attack the Internet. It is a critical balance to ensure everyone has a safe experience, while still being able to learn from real-world malware, vulnerabilities, and malicious websites.

In addition to the weekly meetings with Black Hat and the other partners, the Cisco Meraki engineering team of Sandro Fasser, Rossi Rosario Burgos, Otis Ioannou, Asmae Boutkhil, Jeffry Handal and I met every Friday for two months. We also discussed the challenges in a Webex space with other engineers who worked on past Black Hat events.

The mission:

Division of labor is essential to reduce mistakes and stay laser-focused on security scope. Otis took the lead working on network topology design with partners. Asmae handled the port assignments for the switches. Rossi ensured every AP and switch was tracked, and the MAC addresses were provided to Palo Alto Networks for DHCP assignments. Otis and Rossi spent two days in the server room with the NOC partners, ensuring every switch was operating and configured correctly. Rossi also deployed and configured a remote Registration switch for Black Hat.

AP Placement Planning, by Sandro Fasser

In the weeks before deployment, our virtual Meraki team member, Aleksandar Dimitrov Vladimirov, and I focused on planning and creating a virtual Wi-Fi site survey. Multiple requirements and restrictions had to be taken into consideration. The report was based on the ExCel Centre floor plans, the space allocation requirements from Black Hat and the number of APs we had available to us. Although challenging to create, with some uncertainties and often-changing requirements due to the number of stakeholders involved, the survey’s AP placement for best coverage ended up being pivotal at the event.

Below is the Signal Strength plan for the Expo Hall Floor on the 5 GHz band. The original plan to go with a dual-band deployment was adjusted onsite, and the 2.4 GHz band was disabled to enhance performance and throughput. This was a decision made during the network setup, in coordination with the NOC Leadership and based on experience from past conferences.

Upon arrival at the ExCel Centre, we conducted a walkthrough of the space that most of us had only seen as a floor plan and on some photos. Thanks to good planning, we could start deploying the 100+ APs immediately, with only a small number of changes to optimize the deployment on-site. As the APs had been pre-staged and added to the Meraki dashboard, including their location on the floor maps, the main work was placing and cabling them physically. During operation, the floor plans in the Meraki Dashboard were a visual help to easily spot a problem and navigate the team on the ground to the right spot, if something had to be adjusted.

As the sponsors and attendees filled each space, in the Meraki dashboard, we were able to see in real-time the number of clients connected to each AP, currently and over the time of the conference. This enabled quick reaction if challenges were identified, or APs could be redeployed to other zones. Below is the ExCel Centre Capital Hall and London Suites, Level 0. We could switch between the four levels with a single click on the Floor Plans, and drill into any AP, as needed.

The Location heatmaps also provided essential visibility into conference traffic, both on the network and footfalls of attendees. Physical security is also an important aspect of cybersecurity; we need to know how devices move in space, know where valuable assets are located and monitor their safety.

Below is the Business Hall at lunchtime, on the opening day of the conference. You can see no live APs in the bottom right corner of the Location heatmap. This is an example of adapting the plan to reality onsite. In past Black Hat Europe conferences, the Lobby in that area was the main entrance. Construction in 2022 closed this entrance. So, those APs were reallocated to the Level 1 Lobby, where attendees would naturally flow from Registration.

The floor plans and heatmaps also helped with the Training, Briefings and Keynote network resilience. Capacity was easy to add temporarily, and we were able to remove it and relocate it after a space emptied.

Meraki API Integration for automatic device blocking

During our time in the NOC, we had the chance to work with other vendors’ engineers, and some of the use cases that came up led to interesting collaborations. One specific use case was automatically blocking wireless clients that show malicious or bad behavior, after they have been identified by one of the SOC analysts on the different security platforms. In addition, we wanted to show them a friendly warning page that guides them to the SOC for a friendly conversation.

The solution was a script that can be triggered through the interfaces of the other security products and that attaches a group policy, including a quarantine VLAN and a splash page, through the Meraki Dashboard via the Meraki APIs. This integration was just one of the many bits of collaboration we worked on.
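
A trimmed-down sketch of the quarantine step might look like the following. It assumes a pre-built group policy (quarantine VLAN plus splash page) already exists in the dashboard; the API key, network ID, group policy ID and client identifier are placeholders, not the actual script used in the NOC.

# Minimal sketch: attach a pre-existing "quarantine" group policy to a client
# via the Meraki Dashboard API, as triggered from another security product.
# API key, network ID, policy ID and client identifier are placeholders.
import requests

MERAKI_API_KEY = "REPLACE_ME"
NETWORK_ID = "N_1234567890"
QUARANTINE_POLICY_ID = "101"   # group policy with quarantine VLAN + splash page

def quarantine_client(client_id_or_mac: str) -> dict:
    # Assign the quarantine group policy to the offending wireless client.
    r = requests.put(
        f"https://api.meraki.com/api/v1/networks/{NETWORK_ID}/clients/{client_id_or_mac}/policy",
        headers={"X-Cisco-Meraki-API-Key": MERAKI_API_KEY},
        json={"devicePolicy": "Group policy", "groupPolicyId": QUARANTINE_POLICY_ID},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    # Typically invoked by a SOC analyst from another tool's interface,
    # passing the MAC or Meraki client ID identified as malicious.
    quarantine_client("aa:bb:cc:dd:ee:ff")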

Wi-Fi Air Marshal, by Jérémy Couture, Head of SOC, Paris 2024 Olympic Games

During the first day of training, in the Meraki dashboard Air Marshal, I observed packet flood attacks, against which we were able to adapt and remain resilient.

I also observed an AP spoofing and broadcast de-authentication attack. I was able to quickly identify the location of the attack, which was in the Lobby outside the Business Hall. Should the attacks have continued, physical security had the information needed to intervene. We also had the ability to track the MAC address throughout the venue, as discussed in Christian Clasen’s section in part two.

From our experiences at Black Hat USA 2022, we had encrypted frames enabled, blunting the attack.

Meraki Dashboards, by Rossi Rosario Burgos

The Meraki dashboards made it very easy to monitor the health of the network APs and Switches, with the ability to aggregate data, and quickly pivot into any switch, AP or clients.

Through the phases of the conference, from two days of pre-conference setup, to focused and intense training the first two days, and transition to the briefings and Business Hall, we were able to visualize the network traffic.

In addition, we could see the number of attendees who passed through the covered area of the conference, with or without connecting to the network. Christian Clasen takes this available data to a new level in Part 2 of the blog.

As the person with core responsibility for the switch configuration and uptime, I found the Meraki dashboard made it very simple to quickly change the network topology according to the needs of the Black Hat customer.

Meraki Systems Manager, by Paul Fidler

If you refer back to Black Hat USA 2022, you'll have seen that we had over 1,000 iOS devices to deploy, and several difficulties with them. For context, the company that leases the devices to Black Hat doesn't use a Mobile Device Management (MDM) platform for any of their other shows…Black Hat is the only one that does. So, instead of using a mass deployment technology, like Apple's Automated Device Enrollment, the iOS devices are "prepared" using Apple Configurator. This includes uploading a Wi-Fi profile to the devices as part of that process. In Las Vegas, this Wi-Fi profile wasn't set to auto-join the Wi-Fi, resulting in the need to manually change this on 1,000 devices. Furthermore, 200 devices weren't reset or prepared, so we had those to reimage as well.

Black Hat Europe 2022 was different. We took the lessons from the US show and coordinated with the contractor to prepare the devices. Now, if you've ever used Apple Configurator, you know there are several steps needed to prepare a device. However, all of these actions can be combined into a Blueprint:

Instead of several steps to prepare a device, there is now just one: applying the Blueprint!

For Black Hat Europe, this included:

  • Wi-Fi profile
  • Enrollment, including supervision
  • Whether to allow USB pairing
  • Setup Assistant pane skipping

There are lots of other things that can be achieved as well, but this reduces the time taken to enroll and set up a device to around 30 seconds. Since devices can be set up in parallel (you're only limited by the number of USB cables / ports you have), this really streamlines the enrollment and setup process.

Now, for the future, whilst you can’t Export these blueprints, they are transportable. If you open Terminal on a Mac and type:
cd /Users/<YOUR USER NAME>/Library/Group Containers/K36BKF7T3D.group.com.apple.configurator/Library/Application Support/com.apple.configurator/Blueprints

You'll see a file / package called something.blueprint. This can be zipped up and emailed to someone else so they can use the exact same Blueprint! You may need to reboot your computer for the Blueprint to appear in Apple Configurator.

Device Naming / Lock Screen Messages

As mentioned, the registration / lead capture / session scanning devices are provided by the contractor. Obviously, these are all catalogued and have a unique device code / QR code on the back of them. However, during setup, any device name provisioned on the device gets lost.

So, there are three things we do to know which device is which, without having to resort to the unwieldy serial number.

  • The first thing we do is use the Meraki API to rename Systems Manager devices. The script we created has some other functionality too, such as error handling, but it is possible to do this without a script. You can find it here. This ensures that the device has a name: iOS devices default to being called iPhone or iPad in Systems Manager when they first enroll, so, already, this is incredibly helpful.
  • The second thing we do is use a simple Restrictions profile for iOS, which keeps the physical device's name in sync with the name in the dashboard.
  • Lastly, we then use a Lock Screen payload to format the message on the device when it’s locked:

In the footnote, you’ll see Device Name and Device Serial in blue. This denotes that the values are actually dynamic and change per device. They include:

  • Organization name
  • Network name
  • Device name
  • Device serial
  • Device model
  • Device OS version
  • Device notes
  • Owner name
  • Owner email
  • Owner username
  • SM device ID

On the Lock Screen, it's now possible to see the device's name and serial number without having to flip the device over (a problem for the registration devices, which are locked in a secure case) or open the Settings app.

We also had integration with SecureX device insights, to see the security status of each iOS device.

This gave us the ability to quickly check on device health from the SecureX dashboard.

 

Data Security

This goes without saying, but the iOS devices (Registration, Lead Capture and Session Scanning) do have access to personal information. To ensure the security of the data, devices are wiped at the end of the conference. It is incredibly satisfying to hit the Erase Devices button in Meraki Systems Manager and watch the 100+ devices reset!

A Better Way to Design Training SSIDs/VLANs, by Paul Fidler

Deploying a network like Black Hat's takes a lot of work and repetitive configuration. Much of this has been covered in previous blogs. To make things easier for this event, instead of the 60 training SSIDs we had at Black Hat USA 2022, the Meraki team discussed with Black Hat NOC leadership the benefits of moving to identity pre-shared keys (iPSKs), and they accepted the plan.

For context, instead of having a single pre-shared key for an SSID, iPSK functionality allows you to have 1,000+. Each of these iPSKs can be assigned its own group policy / VLAN. So, we created a script:

  • That consumed networkID, SSID, Training name, iPSK and VLAN from a CSV
  • Created a group policy for that VLAN with the name of the training
  • Created an iPSK for the given SSID that referred to the training name

This only involves five API calls:

  • For a given network name, get the network ID
  • Get Group Policies
  • If the group policy exists, use that, else create a group policy, retaining the group policy ID
  • Get the SSIDs (to get the ID of the SSID)
  • Create an iPSK for the given SSID ID

The bulk of the script is error handling (the SSID or network doesn't exist, for example) and logic! A simplified sketch of the flow is below.
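
The condensed version here is illustrative rather than the production script: it starts from network IDs instead of resolving them by name, and the CSV columns, policy payload and identifiers are placeholders. The endpoint paths follow the public Meraki Dashboard API v1.

import csv
import os
import requests

BASE = "https://api.meraki.com/api/v1"
HEADERS = {
    "X-Cisco-Meraki-API-Key": os.environ["MERAKI_API_KEY"],
    "Content-Type": "application/json",
}

def get_or_create_group_policy(network_id: str, name: str, vlan: int) -> str:
    """Return the ID of an existing group policy, or create one tagged to the VLAN."""
    url = f"{BASE}/networks/{network_id}/groupPolicies"
    for policy in requests.get(url, headers=HEADERS, timeout=30).json():
        if policy["name"] == name:
            return policy["groupPolicyId"]
    body = {"name": name, "vlanTagging": {"settings": "custom", "vlanId": str(vlan)}}
    resp = requests.post(url, headers=HEADERS, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()["groupPolicyId"]

def ssid_number(network_id: str, ssid_name: str) -> int:
    """Look up the SSID slot number for a given SSID name."""
    url = f"{BASE}/networks/{network_id}/wireless/ssids"
    ssids = requests.get(url, headers=HEADERS, timeout=30).json()
    return next(s["number"] for s in ssids if s["name"] == ssid_name)

def create_ipsk(network_id: str, number: int, name: str, psk: str, policy_id: str) -> None:
    """Create an identity PSK tied to the classroom's group policy."""
    url = f"{BASE}/networks/{network_id}/wireless/ssids/{number}/identityPsks"
    body = {"name": name, "passphrase": psk, "groupPolicyId": policy_id}
    requests.post(url, headers=HEADERS, json=body, timeout=30).raise_for_status()

if __name__ == "__main__":
    # trainings.csv columns (placeholders): networkId,ssid,training,ipsk,vlan
    with open("trainings.csv") as f:
        for row in csv.DictReader(f):
            policy_id = get_or_create_group_policy(row["networkId"], row["training"], int(row["vlan"]))
            number = ssid_number(row["networkId"], row["ssid"])
            create_ipsk(row["networkId"], number, row["training"], row["ipsk"], policy_id)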

The result was one SSID for all of training, BHTraining, with each classroom having its own password. This reduced the number of training SSIDs on the air from over a dozen to one and helped clear the airwaves.

Check out part two – Black Hat Europe 2022 NOC: The SOC Inside the NOC 

Acknowledgments

Thank you to the Cisco NOC team:

  • Meraki Network: Evan Basta, Sandro Fasser, Rossi Rosario Burgos, Otis Ioannou, Asmae Boutkhil, Jeffry Handal and Aleksandar Dimitrov Vladimirov
  • Meraki Systems Manager: Paul Fidler
  • Cisco Secure: Ian Redden, Christian Clasen, Aditya Sankar, Ryan MacLennan, Guillaume Buisson, Jerome Schneider, Robert Taylor, Piotr Jarzynka, Tim Wadhwa-Brown and Matthieu Sprunck
  • Threat Hunter / Paris 2024 Olympics SOC: Jérémy Couture

Also, to our NOC partners NetWitness (especially David Glover, Iain Davidson, Alessandro Contini and Alessandro Zatti), Palo Alto Networks (especially James Holland, Matt Ford, Matt Smith and Mathew Chase), Gigamon, IronNet, and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, Steve Fink, James Pope, Jess Stafford and Steve Oldenbourg).

About Black Hat

For 25 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.



3 New Vulnerabilities Affect OT Products from German Companies Festo and CODESYS

By Ravie Lakshmanan
Researchers have disclosed details of three new security vulnerabilities affecting operational technology (OT) products from CODESYS and Festo that could lead to source code tampering and denial-of-service (DoS). The vulnerabilities, reported by Forescout Vedere Labs, are the latest in a long list of flaws collectively tracked under the name OT:ICEFALL. "These issues exemplify either an

Your OT Is No Longer Isolated: Act Fast to Protect It

By The Hacker News
Not too long ago, there was a clear separation between the operational technology (OT) that drives the physical functions of a company – on the factory floor, for example – and the information technology (IT) that manages a company's data to enable management and planning.  As IT assets became increasingly connected to the outside world via the internet, OT remained isolated from IT – and the

RSA Conference® 2022 Security Operations Center Findings Report

By Jessica Bair

NetWitness and Cisco released the third annual Findings Report from the RSA Conference® 2022 Security Operations Center (SOC).

The RSA Conference® SOC analyzes traffic on the Moscone Center wireless network, which is open during the week of the Conference.

The role of the SOC at RSA Conference is an educational exhibit sponsored by NetWitness and Cisco. It has elements of a SOC like you would create to protect an organization. The RSAC SOC coordinated with the Moscone Center Network Operation Center for a SPAN of the network traffic from the Moscone Center wireless network. In the SOC, NetWitness had real time visibility of the traffic traversing the wireless network. Cisco provided automated malware analysis, threat intelligence, DNS visibility and Intrusion Detection; brought together with SecureX.

The goal of the RSAC SOC is to use technology to educate conference attendees about what happens on a typical wireless network. The education comes in the form of daily SOC tours and an RSA Conference® session. You can watch the replay of the ‘EXPOSURE: The 3rd Annual RSAC SOC Report’ session here.

The findings report addresses several security topics, including:

  • Encrypted vs. Unencrypted network traffic
  • Cleartext Usernames and Passwords
  • Voice over IP
  • Threat Hunting
  • Malware Analysis, through the NetWitness® integration
  • Malicious Behavior
  • Domain Name Server (DNS)
  • Automate, Automate
  • Intrusion Detection
  • Firepower Encrypted Visibility Engine (EVE)
  • Firepower and NetWitness® Integration

We look forward to seeing you in 2023!

Download the RSA Conference® 2022 Security Operations Center Findings Report here.

Acknowledgements: Our appreciation to those who made the RSAC SOC possible.

NetWitness Staff

Percy Tucker

Steve Fink

Bart Stump

Dave Glover

Cisco Staff

Jessica Bair Oppenheimer – Cisco SOC Manager

Ian Redden – Team Lead & Integrations

Aditya Sankar / Ben Greenbaum – SecureX & Malware Analytics

Alejo Calaoagan / Christian Clasen – Cisco Umbrella

Dinkar Sharma / Seyed Khadem-Djahaghi – Cisco Secure Firewall

Matt Vander Horst – SecureX Orchestration

Doug Hurd – Partnerships

Hardware Support

Eric Kostlan

Navin Sinha

Zohreh Khezri

Eric Goodwin

Gabe Gilligan and the amazing staff at XPO Digital!



Secure Your Hybrid Workforce Using These SOC Best Practices

By Pat Correia

Hybrid Workforce is here to stay

Just a few years ago, when the topic of supporting offsite workers arose, some of the key conversation topics were related to purchase, logistics, deployment, maintenance and similar issues. Back then, the discussions treated offsite workers more as "special cases," versus today's environment, where supporting workers offsite (now known as the hybrid workforce) has become a critical mainstream topic.

Figure 1: Security challenges in supporting the hybrid workforce

Now, with the bulk of many organizations' workers off-premises, the topic of security and the ability of a security vendor to help support an organization's hybrid workers has risen to the top of the selection criteria. In a soon-to-be-released Cisco endpoint survey, it's not surprising that the ability of a security vendor to make supporting the hybrid workforce easier and more efficient was the key motivating factor when organizations chose security solutions.

Figure 2: Results from recent Cisco Survey

Best Practices complement your security tools

Today, when prospects and existing customers look at Cisco's ability to support hybrid workers with our advanced security solution set and open platform, it's quite clear that we can deliver on that promise. Good tools make the job easier and more efficient, but the reality is that running a SOC, or any security group large or small, still takes a lot of work. Most organizations not only rely on advanced security tools but also use a set of best practices to provide clarity of roles and efficiency of operation; the more prepared among them have tested these best practices to prove to themselves that they are ready for what's next.

Give this a listen!

Knowing that not all organizations have this degree of security maturity and preparedness, we gathered a couple of subject matter experts together to discuss 5 areas of time-tested best practices that, besides the advanced tools offered by Cisco and others, can help your SOC (or small security team) yield actionable insights and guide you faster, and with more confidence, toward the outcomes you want.

In this webinar you will hear practical advice from Cisco technical marketing and a representative from our award-winning Talos threat intelligence group, the same group that has created and maintains breach defense in partnership with Fortune 500 Security Operations Centers (SOCs) around the globe.

Figure 3: Webinar Speakers

You can expect to hear our 5 Best Practices recommendations on the following topics:

  1. Establishing Consistency – know your roles and responsibilities without hesitation.
  2. Incident Response Plan – document it, share it and test it with your stakeholders.
  3. Threat Hunting – find out what you don’t know and minimize the threat.
  4. Retro Learning – learn from the past and be better prepared.
  5. Unifying stakeholders – don’t go it alone.

Access this On-Demand Webinar now!

Check out our webinar to find out how you can become more security resilient and be better prepared for what’s next.



4 Key Takeaways from "XDR is the Perfect Solution for SMEs" webinar

By The Hacker News
Cyberattacks on large organizations dominate news headlines. So, you may be surprised to learn that small and medium enterprises (SMEs) are actually more frequent targets of cyberattacks. Many SMEs understand this risk firsthand.  In a recent survey, 58% of CISOs of SMEs said that their risk of attack was higher compared to enterprises. Yet, they don't have the same resources as enterprises –

Hands-on Review: Stellar Cyber Security Operations Platform for MSSPs

By The Hacker News
As threat complexity increases and the boundaries of an organization have all but disappeared, security teams are more challenged than ever to deliver consistent security outcomes. One company aiming to help security teams meet this challenge is Stellar Cyber.  Stellar Cyber claims to address the needs of MSSPs by providing capabilities typically found in NG-SIEM, NDR, and SOAR products in their

Black Hat USA 2022: Creating Hacker Summer Camp

By Jessica Bair

In part one of this issue of our Black Hat USA NOC (Network Operations Center) blog, you will find:

  • Adapt and Overcome
  • Building the Hacker Summer Camp network, by Evan Basta
  • The Cisco Stack’s Potential in Action, by Paul Fidler
  • Port Security, by Ryan MacLennan, Ian Redden and Paul Fidler
  • Mapping Meraki Location Data with Python, by Christian Clasen

Adapt and Overcome, by Jessica Bair Oppenheimer

In technology, we plan as best we can, execute tactically with the resources and knowledge we have at the time, focus on the strategic mission, adjust as the circumstances require, collaborate, and improve; with transparency and humility. In short, we adapt and we overcome. This is the only way a community can have trust and grow, together. Every deployment comes with its challenges, and Black Hat USA 2022 was no exception. Looking at the three Ps (people, process, platform), flexibility, communication, and an awesome Cisco platform allowed us to build and roll with the changes and challenges in the network. I am proud of the Cisco Meraki and Secure team members and our NOC partners.

The Buck Stops Here. Full stop. I heard a comment that the Wi-Fi service in the Expo Hall was "the worst I've ever experienced at a conference." There were a lot of complaints about the Black Hat USA 2022 Wi-Fi network in the Expo Hall on 10 August. I also heard a lot of compliments about the network. The Wi-Fi and wired network was generally very good for most of the conference; but before my awesome colleagues share the many successes of designing, building, securing, managing, automating and tearing down one of the most hostile networks on Earth, I want to address where and how we adapted, and what we did to fix the issues that arose as we built an evolving, enterprise-class network in a week.

First, a little history of how Cisco came to be the Official Network Provider of Black Hat USA 2022, after we were already successfully serving as the Official Mobile Device Management, Malware Analysis and Domain Name Service Provider. An Official Provider, as a Premium Partner, is not a sponsorship, and no company can buy its way into the NOC for any amount of money. From the beginning of Black Hat 25 years ago, volunteers built the network for the conference rather than using the hotel network. This continues today, with the staff of Black Hat hand-selecting trusted partners to build and secure the network.

After stepping up to help Black Hat with the network at Black Hat Asia, we had only two and a half months until Black Hat USA, in Las Vegas, 6-11 August 2022. Cisco was invited to build and secure the network for the much larger Black Hat USA flagship conference, affectionately known as 'Hacker Summer Camp', as the Official Network Equipment Provider. There were few other options, given the short timeframe to plan, the supply chain difficulties in procuring the networking gear, and the need to assemble a team of network engineers to join the Cisco Secure engineers and threat hunters. All the work, effort and loaned equipment were a gift from Cisco Meraki and Cisco Secure to the community.

We were proud to collaborate with NOC partners Gigamon, IronNet, Lumen, NetWitness and Palo Alto Networks; and work with Neil ‘Grifter’ Wyler, Bart Stump, Steve Fink and James Pope of Black Hat. We built strong bonds of familial ties over the years of challenges and joint successes. I encourage you to watch the replay of the Black Hat session An Inside Look at Defending the Black Hat Network with Bart and Grifter.

In June 2022, adjacent to Cisco Live Americas, the NOC partners met with Black Hat to plan the network. Cisco Meraki had already donated 45 access points (APs), seven MS switches, and two Meraki MX security and SD-WAN appliances to Black Hat for regional conferences.

I looked at the equipment list from 2019, which was documented in the Bart and Grifter presentation, and estimated we needed to source an additional 150 Cisco Meraki MR APs (with brackets and tripods) and 70+ Cisco Meraki MS switches to build the Black Hat USA network in just a few weeks. I wanted to be prepared for any changes or new requirements on-site. We turned to JW McIntire, who leads the network operations for Cisco Live and Cisco Impact. JW was enthusiastically supportive, helping identify the equipment within the Cisco Global Events inventory and approving its use. A full thanks to those who made this possible is in the Acknowledgements below.

Over the week-long conference, we used all but three of the switches and all the APs.

We worked off the draft floor plans from 13 June 2022, for the training rooms, briefing rooms, support rooms, keynote rooms, conference public areas, registration, and of course the Expo Hall: over two million square feet of venue. We received updated plans for the training rooms, Expo Hall and support needs 12 days before we arrived on site. There were about 60 training rooms planned, each requiring its own SSID and Virtual Local Area Network (VLAN), without host isolation. The requirement was 'the most access possible', to allow real-world malware and attacks without attacking other classrooms, attendees, sponsors or the rest of the world. Many of the training rooms changed again nine days before the start of the network build; as the number of confirmed students rose or fell, we adjusted the AP assignments.

For switching allocation, we could not plan until we arrived onsite, to assess the conference needs and the placement of the cables in the walls of the conference center. The Black Hat USA network requires that every switch be replaced, so we always have full control of the network. Every network drop to place an AP and put the other end of a cable into the new switches in the closets costs Black Hat a lot of money. It also requires the time of ‘Doc’ – the lead network engineer at the Mandalay Bay, to whom we are all deeply grateful.

The most important mission of the NOC is Access, then Security, Visibility, Automation, etc. People pay thousands of dollars to attend the trainings and the briefings; and sponsors pay tens of thousands for their booth space. They need Access to have a successful conference experience.

With that background, let's discuss the Wi-Fi in the Expo Hall. Cisco has a service to help customers do a methodical predictive survey of their space for the best allocation of their resources. We had 74 of the modern MR57 APs for the conference and prioritized their assignment in the Expo Hall and Registration. Specifications for the MR57 include a 6 GHz 4×4:4, 5 GHz 4×4:4 and 2.4 GHz 4×4:4 radio, offering a combined tri-radio aggregate frame rate of 8.35 Gbps, with up to 4,804 Mbps in the 6 GHz band, 2,402 Mbps in the 5 GHz band and 1,147 Mbps / 574 Mbps in the 2.4 GHz band, based on a 40 MHz / 20 MHz configuration. Technologies like transmit beamforming and enhanced receive sensitivity allow the MR57 to support a higher client density than typical enterprise-class access points, resulting in better performance for more clients, from each AP.

We donated top-of-the-line gear for use at Black Hat USA. So, what went wrong on the first day in the Expo Hall? The survey came back with the following map, suggesting 34 MR57s in the locations below. Many assumptions were made in pre-planning, since we did not know the shapes, sizes and materials of the booths that would be present inside the allocated spaces. We added an AP in the Arsenal Lab on the far-left side, after discussing the needs with Black Hat NOC leadership.

In the Entrance area (Bayside Foyer) of the Expo Hall (bottom of the map), you can see that coverage drops. There were four MR57s placed in the Bayside Foyer for iPad Registration and attendee Wi-Fi, so they could access their emails and obtain their QR code for scanning and badge printing.

I believed that would be sufficient, and we allocated other APs to the rest of the conference areas. We had positive reports on coverage in most other areas of the conference. When issues were reported, we quickly deployed Cisco Meraki engineers or NOC technical associates to confirm them, and were able to change radio strength, broadcast bands, SSIDs, etc. to fine-tune the network. All of this while managing a large number of new or changing network requirements, as the show expanded due to its success and was fully hybrid, with increased streaming of the sponsored sessions, briefings and keynotes, and remote Registration areas in hotels.

As the attendees queued up en masse outside the Expo Hall on the morning of 10 August, the number of attendee devices connecting to the four MR57s in the foyer grew into the thousands. This degraded the performance of the Registration network. We adjusted by dedicating the APs closest to the registration iPads to Registration only. This fixed the Registration lag but reduced the performance of the network for the attendees as they waited to rush into the Expo Hall. From the site survey map, it is clear that additional APs were now needed in the Entrance for a connected mesh network as you entered the Expo Hall from the Bayside Foyer. Here lies Lesson 1: expected people flow should be taken into account in the RF design process.

Another challenge on the morning of the Expo Hall opening was that five of the MR57s inside were not yet connected to the Internet when the hall opened at 10am. The APs were installed three days earlier, then placed up on tripods the afternoon prior. However, the volume of newly requested network additions to support the expanded hybrid element required the deployment of extra cables and switches. This cascaded down and delayed the conference center team from finalizing the Expo Hall line drops until the afternoon. Lesson 2: Layer 1 is still king; without it, no Wi-Fi or power.

A major concern for the sponsors in their booths was that, as the Expo Hall filled with excited attendees, the connectivity of the 900+ iOS devices used for lead management dropped. Part of this congestion was thousands of 2.4 GHz devices connected to the Expo Hall network. We monitored this and pushed as many as possible to 5 GHz, to relieve pressure on those airwaves. Lesson 3: With Wi-Fi 6E now available in certain countries, clean spectrum awaits, but our devices need to come along as well.

We also made adjustments in Cisco Meraki Systems Manager Mobile Device Management to allow the scanning iPhones to connect securely to the Mandalay Bay conference network, while still protecting attendees' personal information with Cisco SecureX, Security Connector and Umbrella DNS, to ensure access as we expanded the network capacity in the Expo Hall. Lesson 4: Extreme security by default where you can control the endpoint. Do not compromise when dealing with PPI.

Using the Cisco Meraki dashboard access point location heat map and the health status of the network, we identified three places in the front of the Expo Hall to deploy additional drops with the Mandalay Bay network team. Since adding network drops takes some time (and costs Black Hat extra money), we took immediate steps to deploy more MS120 switches and eight additional APs at hot spots inside the Expo Hall with the densest client traffic, at no expense to Black Hat. Lesson 5: Footfall is not only about sales analytics. It also plays a role in RF planning, allowing for data-driven design decisions.

Above is the heat map of the conference Expo Hall at noon on 11 August. You can see the extra APs at the Entrance of the Expo Hall, connected by the three drops set up by the Mandalay Bay to the Cisco Meraki switches in the closets. Also, you can see the clusters of APs connected to the extra MS120 switches. At the same time, our lead Meraki engineer, Evan Basta, did a speed test from the center left of the Expo Hall.

As I am sharing lessons learned, I want to provide visibility into another situation we encountered. On the afternoon of 9 August, the last day of training, a Black Hat attendee walked the hallways outside several training rooms and deliberately attacked the network, preventing students and instructors from connecting to their classes. The training rooms have host isolation removed, and we designed the network to provide as much safe access as possible. The attacker took advantage of this openness, spoofed the SSIDs of the many training rooms and launched malicious attacks against the network.

We must allow real malware on the network for training, demonstrations and briefing sessions; while protecting the attendees from attack within the network from their fellow attendees and prevent bad actors from using the network to attack the Internet. It is a critical balance to ensure everyone has a safe experience, while still being able to learn from real world malware, vulnerabilities and malicious websites.

The attack vector was identified by a joint investigation of the NOC teams, initiated by the Cisco Meraki Air Marshal review. Note the exact same MAC addresses of the spoofed SSIDs and malicious broadcasts. A network protection measure was suggested by the Cisco Meraki engineering team to the NOC leadership. Permission was granted to test on one classroom, to confirm it stopped the attack, while not also disrupting the training. Lesson 6: The network-as-a-sensor will help mitigate issues but will not fix the human element.

Once confirmed, the measure was implemented network wide to return resiliency and access. The NOC team continued the investigation on the spoofed MAC addresses, using syslogs, firewall logs, etc. and identified the likely app and device used. An automated security alerting workflow was put in place to quickly identify if the attacker resumed/returned, so physical security could also intervene to revoke the badge and eject the attacker from the conference for violation of the Black Hat code of conduct.

I am grateful to the 20+ Cisco engineers, plus Talos Threat Hunters, deployed to the Mandalay Bay Convention Center from the United States, Canada, Qatar and the United Kingdom, who made the Cisco contributions to the Black Hat USA 2022 NOC possible. I hope you will read on to learn more lessons about the network, and then the part two blog about Cisco Secure in the NOC.

Building the Hacker Summer Camp Network, by Evan Basta

It was the challenge of my career to take on the role of the lead network engineer for Black Hat USA. The lead engineer whom I replaced was unable to travel from Singapore, notifying us just two weeks before we were scheduled to deploy to Las Vegas.

We prepared as much as possible before arrival, using the floor plans and the inventory of equipment that was ordered and on its way from the warehouse. We met with the Black Hat NOC leadership, partners and Mandalay Bay network engineers weekly on conference calls, adjusted what we could and then went to Black Hat, ready for a rapidly changing environment.

Our team was able to remain flexible and meet all the Black Hat requests that came in, thanks to the ability of the Cisco Meraki dashboard to manage the APs and switches from the cloud. Often, we were configuring the AP or switch as it was being transported to the location of the new network segment, laptop in hand.

For the construction of the Black Hat network, let's start with availability. Registration and training rooms had priority for connectivity. iPads and iPhones needed secure connectivity to scan QR codes of registering attendees. Badge printers needed hardline access to the registration system. Training rooms all needed their own separate wireless networks, as a safe sandbox for network defense and attack. Thousands of attendees arrived, ready to download and upload terabytes of data through the main conference wireless network. All the keynotes, briefings and sponsored sessions needed to be recorded and streamed. Below are all the APs stacked up for assignment, including those assigned to the Expo Hall in the foreground.

All this connectivity was provided by Cisco Meraki access points and switches along with integrations into SecureX, Umbrella, and other Cisco platforms. We fielded a literal army of engineers to stand up the network in six days.

Let’s talk security and visibility. For a few days, the Black Hat network is one of the most hostile in the world. Attendees learn new exploits, download new tools, and are encouraged to test them out. Being able to drill down on attendee connection details and traffic was instrumental in ensuring attendees followed the Black Hat code of conduct.

On the wireless front, we made extensive use of our Radio Profiles to reduce interference by tuning power and channel settings. We enabled band steering to get more clients on the 5 GHz bands versus 2.4 GHz and watched the Location Heatmap like a hawk, looking for hotspots and dead areas. Handling the barrage of wireless change requests – enabling or disabling an SSID, moving VLANs (Virtual Local Area Networks), enabling tunneling for host isolation on the general conference Wi-Fi, mitigating attacks – was a snap with the Cisco Meraki Dashboard.

Floor Plan and Location Heatmap

On the first day of NOC setup, the Cisco team worked with the Mandalay Bay networking engineers to deploy core switches and map out the switches for the closets, according to the number of cables coming in from the training and briefing rooms. The floor plans, in PDF, were uploaded into the Meraki Dashboard and, with a little fine-tuning, aligned perfectly with the Google Map.

Cisco Meraki APs were then placed physically in the venue meeting and training rooms. Having the APs named, as mentioned above, made this an easy task. This enabled accurate heatmap capability.

The Location Heatmap provided the capability to drill into the four levels of the conference, including the Expo Hall, lower level (North Conference Center), 2nd Floor and 3rd Floor. Below is the view of the entire conference.

Network Visibility

We were able to monitor the number of connected clients, network usage, the people passing by the network and location analytics, throughout the conference days. We provided visibility access to the Black Hat NOC management and the technology partners, along with full API (Application Programming Interface) access, so they could integrate with the network platform.

Alerts

Cisco Meraki alerts provide notification when something happens in the Dashboard; the default behavior is an email. Emails get lost in the noise, so at Black Hat Asia 2022 we made a webhook in Cisco SecureX orchestration to consume Cisco Meraki alerts and send them to Slack (the messaging platform within the Black Hat NOC), using the native webhook template in the Cisco Meraki Dashboard.

The alert kicked off if an AP or a switch lost connectivity. At Black Hat USA, we modified this to text alerts, as these were a priority. In the following example, we knew that the audio-visual team had unplugged a switch to move it, and we were able to deploy technical associates from the NOC to ensure it was reconnected properly. A simplified sketch of this kind of alert relay is below.
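
In the NOC this relay was built in SecureX orchestration; purely as an illustration of the idea, the minimal Python receiver below consumes Meraki webhook alerts and forwards connectivity alerts to a Slack incoming webhook. The URL, port and alert field names are assumptions for the example.

import requests
from flask import Flask, request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

app = Flask(__name__)

@app.route("/meraki/alerts", methods=["POST"])
def meraki_alert():
    alert = request.get_json(force=True)
    # Meraki webhook payloads include fields such as alertType and deviceName;
    # treat the exact keys used here as illustrative.
    alert_type = alert.get("alertType", "unknown alert")
    device = alert.get("deviceName", "unknown device")
    if "connectivity" in alert_type.lower() or "down" in alert_type.lower():
        text = f":rotating_light: {alert_type}: {device}"
        requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
    return "", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)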

The Cisco Stack’s Potential in Action, by Paul Fidler

As we planned for Black Hat USA, the number of iOS devices to manage and protect rose from 300+ to over 900, and finally over 1,000.

The first of these was the use of the Cisco Meraki API. We were able to import the list of MAC addresses of the Cisco Meraki APs, to ensure that the APs were named appropriately and tagged, using a single source-of-truth document shared with the NOC management and partners, with the ability to update en masse at any time. Over three quarters of the AP configuration could be completed before arriving on site.

Meraki Systems Manager – Initial device enrollment and provisioning

We’ll start with the positive: When it comes to creating the design to manage X number of devices, it doesn’t matter if it’s 10 devices, or 10,000… And this was certainly true for Black Hat. The requirements were straightforward:

  • Have several apps installed on devices, which each had a particular role
  • Have a passcode policy on some devices
  • Use home screen layout to help the conferences associates know which app to use
  • Use Name synchronization, so that the name of the device (on a label on the back) was also in the SM dashboard and under Settings > General > About
  • Use restrictions to prevent modification of accounts, Wi-Fi and prevention of screenshots (to protect the personal information of attendees)
  • Prevent the devices from having their management profile removed
  • Ensure that the devices could connect to the initial WPA based network, but then also to the 802.1x based network (using certificates)

All this configuration was done ahead of time in the Meraki Dashboard, almost a month before the conference.

Now the negatives: Of all the events that the company who supplies the devices attends, Black Hat is the only one where devices are managed. Mass deployment techniques like Apple's Automated Device Enrollment, therefore, are not used. The company pre-stages the devices using Apple Configurator, which allows for both Supervision and Enrollment.

It became more difficult: Whilst the pre-staged devices were fine (other than having to handle all 1,000+ devices to set Wi-Fi to auto-join and to open the Meraki Systems Manager app [to give us Jailbreak and Location visibility]), an extra 100 devices were supplied that were not enrolled. As these devices had been enrolled elsewhere for prior Black Hat conferences, a team of around 10 people pitched in to restore each device, add the Wi-Fi profile and then enroll it.

Fortunately, Apple Configurator can create Blueprints:

A Blueprint is essentially a list of actions, in a particular order, that Apple Configurator can run through autonomously.

But why did it need a team of ten? There were several limitations:

  • Number of USB ports on a computer
  • Number of USB-A to USB-C converters (the devices were supplied with USB-A cables)
  • Downloading of the restore image (although Airdrop was used to distribute the image quickly)
  • Speed of the devices to do the restore (the actual Wi-Fi and enrollment steps take less than 10 seconds)

However, the task was completed in around three hours, given the limitations! If there’s one lesson to learn from this: Use Apple’s Automated Device Enrollment. 

Command vs Profile

One of the slight nuances of Apple Mobile Device Management is the difference between a 'command' and a 'profile'. Within the Meraki Systems Manager dashboard, we don't highlight the difference between the two, but it's important to know. A 'profile' is something that remains on the device: if there's a state change on the device, or the user attempts something, the profile is still there. However, a 'command' is exactly that: it's sent once, and if something changes in the future, the command won't have any effect.

So, why is this highlighted here? Well, in some instances, some apps weren't pushed successfully: you'd see them on the device, but with a cloud icon next to them. The only way to resolve this was to remove the app and then push it again. But we were also using a home screen layout, which put various apps on various pages, and re-pushing an app would result in it appearing on the wrong page. To ensure a consistent user experience, we would then push the home screen layout profile to the devices again so it took effect.

Meraki BSSID Geolocation

We've mentioned this before in past Black Hat events, but, given the scale of the Mandalay Bay, it's important to circle back to it. GPS is notoriously unreliable in conference centers like this, but it was still important to know where devices were. Because we'd ensured the correct placement of the access points on the floor plan, and because Systems Manager was in the same organisation, the devices reported their location accurately! If one were to 'walk', we could wipe it remotely to protect attendees' personal details.

Protection of PPI (Protected Private Information)

When the conference Registration closed on the last day and the Business Hall sponsors all returned their iPhones, we were able to remotely wipe all the devices, removing all attendee data, prior to returning them to the device contractor.

APIs

As mentioned elsewhere in this blog, this was a conference of APIs. The sheer scale of the conference demanded the use of APIs. Various API projects included:

  • Getting any ports down events with the getNetworkEvents API call
  • Getting the port status of switches with a given tag with getDeviceSwitchPorts
  • Turning off all the Training SSIDs in one go with getNetworkWirelessSsids and updateNetworkWirelessSsids
  • From a CSV, claiming devices into various networks with tags being applied with claimNetworkDevices and updateDevice (to name it)
  • Creation of networks from CSV with createOrganizationNetwork
  • Creation of SSIDs from CSV with updateNetworkWirelessSsids: This was to accommodate the 70+ SSIDs just for training! This also included the Tag for the SSIDs
  • Adding the Attendee SSID to every training network with updateNetworkWirelessSsids: This was due to us having several networks to accommodate the sheer number of SSIDs
  • Amending the Training SSIDs with the correct PSK using updateNetworkWirelessSsids

From a Systems Manager perspective, there were:

  • The renaming of devices from CSV: Each of the devices had a unique code on the back which was NOT the serial number. Given that it's possible to change the name on the device itself with Systems Manager, this meant that the code could be seen on the lock screen too. It also made identification of devices in the Systems Manager dashboard quick and easy. The last thing you want is 1,000 iPhones all called "iPhone"! A minimal sketch of the renaming flow is below.
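
The sketch is illustrative only: the Systems Manager fields endpoint, the field names returned by the device list, the network ID and the CSV columns are assumptions based on the public Meraki Dashboard API, not the script we actually ran.

import csv
import os
import requests

BASE = "https://api.meraki.com/api/v1"
HEADERS = {
    "X-Cisco-Meraki-API-Key": os.environ["MERAKI_API_KEY"],
    "Content-Type": "application/json",
}
NETWORK_ID = "N_1234567890"  # placeholder

def sm_devices_by_serial(network_id: str) -> dict:
    """Map serial number -> Systems Manager device record (field names assumed)."""
    url = f"{BASE}/networks/{network_id}/sm/devices"
    return {d["serialNumber"]: d for d in requests.get(url, headers=HEADERS, timeout=30).json()}

def rename_from_csv(csv_path: str) -> None:
    devices = sm_devices_by_serial(NETWORK_ID)
    with open(csv_path) as f:  # columns: serial,label (the code printed on the back)
        for row in csv.DictReader(f):
            device = devices.get(row["serial"])
            if device is None:
                print(f"Serial not found, skipping: {row['serial']}")
                continue
            url = f"{BASE}/networks/{NETWORK_ID}/sm/devices/{device['id']}/fields"
            body = {"deviceFields": {"name": row["label"]}}
            requests.put(url, headers=HEADERS, json=body, timeout=30).raise_for_status()

if __name__ == "__main__":
    rename_from_csv("device_labels.csv")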

Port Security, by Ryan MacLennan, Ian Redden and Paul Fidler

During the Cisco Meraki deployment, we had a requirement to shut down ports as they went inactive, to prevent malicious actors from removing an official device and plugging in their own. This ability is not directly built into the Cisco Meraki dashboard, so we built a workflow for the Black Hat customer using the Cisco Meraki API. To achieve this, we created a small Python script that was hosted as an AWS (Amazon Web Services) Lambda function and listened for webhooks from the Cisco Meraki Dashboard when a port went down. Initially this did solve our issue, but it was not fast enough, taking about five minutes from the time the port went down or a cable was unplugged. This proof of concept laid the groundwork to make the system better. We migrated from using a webhook in the Cisco Meraki Dashboard to using syslogs. We also moved the script from Lambda to a local server. Now, a Python script scans for syslogs from the switches and, when it sees a port-down log, immediately calls the locally hosted Python script, which calls the Cisco Meraki API and disables the port.

This challenge had many setbacks and iterations while it was being built. Before we settled on listening for syslogs, we tried using SNMP polling. After figuring out the information we needed, we found that SNMP polling would not work, because SNMP would not report the port being down if the switch to another device was fast enough. This led us to believe we might not be able to do what we needed in a timely manner. After some deliberation with fellow NOC members, we started working on a script to listen for the port-down syslogs. This became the best solution and provided immediate results: the ports would be disabled within milliseconds of going down. The diagram below shows an example of what happens: if the Workshop Trainer's device is unplugged and a threat actor tries to plug into their port, a syslog is sent from the Cisco Meraki switch to our internal server hosting the Python listener. Once the Python script gets the request, it sends an API call to the Cisco Meraki API gateway, and the Cisco Meraki cloud then tells the switch to disable the port that went down. A stripped-down sketch of the listener follows.
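
The sketch below is illustrative: the syslog message pattern, the switch-to-serial mapping and the network details are assumptions, while the port update endpoint follows the public Meraki Dashboard API v1.

import os
import re
import socketserver
import requests

BASE = "https://api.meraki.com/api/v1"
HEADERS = {
    "X-Cisco-Meraki-API-Key": os.environ["MERAKI_API_KEY"],
    "Content-Type": "application/json",
}

# Map each switch's syslog source IP to its Meraki serial (placeholder values).
SWITCH_SERIALS = {"10.0.0.10": "Q2XX-XXXX-XXXX"}

# Illustrative pattern for a "port ... down" syslog message from an MS switch.
PORT_DOWN = re.compile(r"port\s+(\d+).*down", re.IGNORECASE)

def disable_port(serial: str, port_id: str) -> None:
    """Disable a single switch port via the Meraki Dashboard API."""
    url = f"{BASE}/devices/{serial}/switch/ports/{port_id}"
    requests.put(url, headers=HEADERS, json={"enabled": False}, timeout=30).raise_for_status()

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        message = self.request[0].decode(errors="ignore")
        serial = SWITCH_SERIALS.get(self.client_address[0])
        match = PORT_DOWN.search(message)
        if serial and match:
            print(f"Port {match.group(1)} went down on {serial}; disabling")
            disable_port(serial, match.group(1))

if __name__ == "__main__":
    # Listen for UDP syslog (port 514 needs privileges; a high port also works).
    with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
        server.serve_forever()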

However, what was apparent was that the script was working TOO well! As discussed several times already in this blog, the needs of the conference were very dynamic, changing on a minute-by-minute basis. This was certainly true in Registration and with the audio-visual teams. We discovered quickly that legitimate devices were being unplugged and plugged in to various ports, even if just temporarily. Of course, the script was so quick that it disabled ports before the users in Registration knew what was happening. This resulted in NOC staff having to re-enable ports. So, more development was done. The task? For a given network tag, show the status of all the ports of all the switches. Given the number of switches at the conference, tags were used to reduce the amount of data being brought back, so it was easier to read and manage.

Mapping Meraki Location Data with Python, by Christian Clasen

In the blog post we published after Black Hat Asia 2022, we provided details on how to collect Bluetooth and Wi-Fi scanning data from a Meraki organization, for long-term storage and analysis. This augmented the location data provided by the Meraki dashboard, which is limited to 24-hours. Of course, the Meraki dashboard does more than just provide location data based on Wi-Fi and Bluetooth scanning from the access points. It also provides a neat heatmap generated from this data. We decided to take our long-term data project a step further and see if we could generate our own heatmap based on the data collected from the Meraki Scanning API.

The Folium Python library “builds on the data wrangling strengths of the Python ecosystem and the mapping strengths of the leaflet.js library” to provide all kinds of useful mapping functions. We can take location data (longitude and latitude) and plot them on lots of built-in map tiles from the likes of OpenStreetMap, MapBox, Stamen, and more. Among the available Folium plugins is a class called “HeatMapWithTime.” We can use this to plot our Meraki location data and have the resulting map animate the client’s movements.

Step 1: Collect the data

During the previous conference, we used a Docker container with a couple of Flask endpoints, connected via ngrok, to collect the large amount of data coming from Meraki. We re-used the same application stack this time around, but moved it out from behind ngrok into our own DMZ with a public domain and TLS (Transport Layer Security) certificate, to avoid any bandwidth limitations. We ended up with over 40 GB of JSON data for the conference week to give to Black Hat! A minimal receiver along these lines is sketched below.
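
This sketch assumes the Scanning API's validation flow (the dashboard first sends a GET expecting the validator string, then POSTs JSON batches that include the shared secret); the validator, secret and output path are placeholders.

import json
from flask import Flask, request

VALIDATOR = "replace-with-dashboard-validator"   # shown in the Meraki dashboard
SECRET = "replace-with-shared-secret"
OUTPUT_FILE = "observations.jsonl"

app = Flask(__name__)

@app.route("/scanning", methods=["GET"])
def validate():
    # Meraki verifies the receiver by expecting the validator string in the body.
    return VALIDATOR

@app.route("/scanning", methods=["POST"])
def receive():
    payload = request.get_json(force=True)
    if payload.get("secret") != SECRET:
        return "invalid secret", 403
    # Append each batch as one JSON line for later parsing and mapping.
    with open(OUTPUT_FILE, "a") as f:
        f.write(json.dumps(payload) + "\n")
    return "", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)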

Step 2: Format the data

Folium's HeatMapWithTime plugin requires a "list of lists of points of time." What we wanted to do was generate an ordered dictionary in Python that is indexed by the timestamp. The data we received from the Meraki API was grouped by the "apFloor" labels provided by the admin when the access points are placed. Within each "apFloor" is a list of "observations" that contain information about individual clients spotted by the AP scanners during the scanning interval.

Here’s what the data looked like straight from the Meraki API, with some dummy values:

The "observations" list is what we wanted to parse. It contains lots of useful information, but what we wanted was the MAC address, latitude and longitude, and timestamp:

We used Python to iterate through the observations and eliminate the data we did not use. After a lot of data wrangling, de-duplicating MAC addresses, and bucketizing the observations into 15-minute increments (sketched below), the resulting data structure looks like this:
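
The bucketizing step might look roughly like this; the observation field names (clientMac, latestRecord, locations) vary by Scanning API version and are assumptions here.

import json
from collections import OrderedDict
from datetime import datetime, timezone

BUCKET_SECONDS = 15 * 60  # 15-minute increments

def bucket_timestamp(ts: str) -> str:
    """Round an ISO 8601 timestamp down to its 15-minute bucket."""
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00")).astimezone(timezone.utc)
    epoch = int(dt.timestamp()) // BUCKET_SECONDS * BUCKET_SECONDS
    return datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat()

def build_buckets(jsonl_path: str) -> OrderedDict:
    """Return {bucket_time: [[lat, lng], ...]}, de-duplicating MACs per bucket."""
    seen = set()      # (bucket, mac) pairs already recorded
    buckets = {}
    with open(jsonl_path) as f:
        for line in f:
            payload = json.loads(line)
            for obs in payload.get("data", {}).get("observations", []):
                locations = obs.get("locations") or [{}]
                loc = locations[0]
                if "lat" not in loc or "lng" not in loc:
                    continue
                bucket = bucket_timestamp(obs["latestRecord"]["time"])
                key = (bucket, obs["clientMac"])
                if key in seen:
                    continue
                seen.add(key)
                buckets.setdefault(bucket, []).append([loc["lat"], loc["lng"]])
    return OrderedDict(sorted(buckets.items()))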

Now that the data is in a usable format, we can feed it into Folium and see what kind of map we get back!

Step 3: Creating the map

Folium is designed to project points onto a map tile. Map tiles can show satellite images, streets, or terrain, and are projected onto a globe. In our case, however, we want to use the blueprint of the conference center. Folium allows an image overlay to be added, with the bounds of the image set by specifying the coordinates of the top-left and bottom-right corners of the image. Luckily, we can get these from the Meraki dashboard.

This enabled us to overlay the floorplan image on the map. Unfortunately, the map tiles themselves limit the amount of zoom available to the map visualization. Lucky for us, we did not care about the map tile now that we had the floorplan image. We passed "None" as the map tile source, finally received our data visualization, and saved the map as an HTML file for Black Hat leadership. A condensed sketch of the map generation is below.
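
The sketch assumes the bucketed data structure from the previous step; the floorplan image path, bounds and center coordinates are placeholders, while the Folium calls (ImageOverlay, HeatMapWithTime, tiles=None) are the standard library API.

import folium
from folium.plugins import HeatMapWithTime

def build_heatmap(buckets, out_html="heatmap.html"):
    # Corner coordinates of the floorplan image: [[north, west], [south, east]] (placeholders).
    bounds = [[36.0935, -115.1790], [36.0895, -115.1740]]
    center = [(bounds[0][0] + bounds[1][0]) / 2, (bounds[0][1] + bounds[1][1]) / 2]

    # tiles=None drops the base map so only the floorplan shows,
    # which also sidesteps the tile provider's zoom limits.
    m = folium.Map(location=center, zoom_start=18, max_zoom=23, tiles=None)

    folium.raster_layers.ImageOverlay(
        image="expo_hall_floorplan.png",  # placeholder path to the exported floor plan
        bounds=bounds,
        opacity=0.8,
    ).add_to(m)

    # HeatMapWithTime takes one frame of [lat, lng] points per time bucket, plus an index.
    timestamps = list(buckets.keys())
    frames = [buckets[t] for t in timestamps]
    HeatMapWithTime(frames, index=timestamps, auto_play=True, radius=15).add_to(m)

    m.save(out_html)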

We opened the HTML file, and we had an auto-playing heatmap that lets us zoom in as far as we want:

Detail at 1:30pm PT, on 10 August 2022 below.

To improve this going forward, the logical next step would be to insert the data into a database for the Black Hat conference organizers, for quick retrieval and map generation. We can then start looking at advanced use cases in the NOC, such as tracking an individual MAC address that may be producing suspicious traffic, by cross-referencing data from other sources (Umbrella, NetWitness, etc.).

——————————————————————————————————

Network Recovery, by Jessica Bair Oppenheimer

Once the final session ended, the Expo Hall closed and the streaming switched off, dozens of conference associates, technical associates, Mandalay Bay engineers and Cisco staff spread out through two million square feet and numerous switching closets to recover the equipment for inventory and packing. It took less than four hours to tear down a network that had been built and evolved over the prior 11 days. Matt Vander Horst made a custom app to scan in each item, separating equipment donated to Black Hat from that which needed to be returned to the warehouse for the next global Cisco event.

Adapt and overcome! Check out part two of this blog, Black Hat USA 2022 Continued: Innovation in the NOC.

Until then, thanks again to our Cisco Meraki engineers, pictured below with a MR57 access point.

Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team.

Meraki Systems Manager: Paul Fidler (team leader), Paul Hasstedt and Kevin Carter

Meraki Network Engineering: Evan Basta (team leader), Gregory Michel, Richard Fung and CJ Ramsey

Network Design and Wireless Site Survey: Jeffry Handal, Humphrey Cheung, JW McIntire and Romulo Ferreira

Network Build/Tear Down: Dinkar Sharma, Ryan MacLennan, Ron Taylor and Leo Cruz

Critical support in sourcing and delivering the Meraki APs and switches: Lauren Frederick, Eric Goodwin, Isaac Flemate, Scott Pope and Morgan Mann

SecureX threat response, orchestration, device insights, custom integrations, and Malware Analytics: Ian Redden, Aditya Sankar, Ben Greenbaum, Matt Vander Horst and Robert Taylor

Umbrella DNS: Christian Clasen and Alejo Calaoagan

Talos Incident Response Threat Hunters: Jerzy ‘Yuri’ Kramarz and Michael Kelley

Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially Jason Reverri), Lumen, Gigamon, IronNet, and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, Steve Fink, James Pope, Jess Stafford and Steve Oldenbourg).

Read Part 2:

Black Hat USA 2022 Continued: Innovation in the NOC

About Black Hat

For 25 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.



Black Hat USA 2022 Continued: Innovation in the NOC

By Jessica Bair

In part one of our Black Hat USA 2022 NOC blog, we discussed building the network with Meraki:

  • Adapt and Overcome
  • Building the Hacker Summer Camp network, by Evan Basta
  • The Cisco Stack’s Potential in Action, by Paul Fidler
  • Port Security, by Ryan MacLennan, Ian Redden and Paul Fidler
  • Mapping Meraki Location Data with Python, by Christian Clasen

In this part two, we will discuss:

  • Bringing it all together with SecureX
  • Creating Custom Meraki Dashboard Tiles for SecureX, by Matt Vander Horst
  • Talos Threat Hunting, by Jerzy ‘Yuri’ Kramarz and Michael Kelley
  • Unmistaken Identity, by Ben Greenbaum
  • 25+ Years of Black Hat (and some DNS stats), by Alejo Calaoagan

Cisco is a Premium Partner of the Black Hat NOC, and is the Official Wired & Wireless Network Equipment, Mobile Device Management, DNS (Domain Name Service) and Malware Analysis Provider of Black Hat.

Watch the video: Building and Securing the Black Hat USA Network

Black Hat USA is my favorite part of my professional life each year. We had an incredible staff of 20 Cisco engineers to build and secure the network. Also, for the first time, we had two Talos Threat Hunters from the Talos Incident Response (TIR) team, providing unique perspectives and skills to the attacks on the network. I really appreciated the close collaboration with the Palo Alto Networks and NetWitness team members. We created new integrations and the NOC continued to serve as an incubator for innovation.

We must allow real malware on the network for training, demonstrations, and briefing sessions; while protecting the attendees from attack within the network by their fellow attendees and preventing bad actors from using the network to attack the Internet. It is a critical balance to ensure everyone has a safe experience, while still being able to learn from real-world malware, vulnerabilities, and malicious websites. So, context is what really matters when investigating a potential attack, and bringing so many technologies together in SecureX really accelerated investigation and response (when needed).

All the Black Hat network traffic was supported by Meraki switches and wireless access points, using the latest Meraki gear donated by Cisco. Our Meraki team was able to block people from the Black Hat network, when an investigation showed they did something in violation of the attendee Code of Conduct, upon review and approval by the Black Hat NOC leadership.

Cisco Secure provided all the domain name service (DNS) requests on the Black Hat network through Umbrella, whenever attendees wanted to connect to a website. If a specific DNS attack threatened the conference, we supported Black Hat in blocking it to protect the network. However, by default, we allowed and monitored DNS requests to malware, command and control, phishing, crypto mining, and other dangerous domains, which would be blocked in a production environment. We maintained that balance of allowing cybersecurity training and demos to occur, while being ready to block when needed.

In addition to the Meraki networking gear, Cisco Secure also shipped an Umbrella DNS virtual appliance to Black Hat USA, for internal network visibility with redundancy. The Intel NUC containing the virtual appliance also contained the bridge to the NetWitness on-premises SIEM, custom developed by Ian Redden.

We also deployed the following cloud-based security software:

We analyzed files that were downloaded on the network, checking them for malicious behavior. When malware is downloaded, we confirm it is for a training, briefing or demonstration, and not the start of an attack on attendees.

During an investigation, we used SecureX to visualize the threat intelligence and related artifacts, correlating data. In the example below, an attacker was attempting remote code execution on the Registration Servers, alerted by the Palo Alto team, investigated by the NOC threat hunters, and blocked by order of the NOC leadership upon the results of the investigation.

Cisco Secure Threat Intelligence (correlated through SecureX)

Donated Partner Threat Intelligence (correlated through SecureX)

Open-Source Threat Intelligence (correlated through SecureX)

Continued Integrations from past Black Hat events

  • NetWitness SIEM integration with SecureX
  • NetWitness PCAP file carving and submission to Cisco Secure Malware Analytics (formerly Threat Grid) for analysis
  • Meraki syslogs into NetWitness SIEM and Palo Alto Firewall
  • Umbrella DNS into NetWitness SIEM and Palo Alto Firewall 

New Integrations Created at Black Hat USA 2022

  • Secure Malware Analytics integration with Palo Alto Cortex XSOAR, extracting files from the network stream via the firewall

The NOC partners, especially NetWitness and Palo Alto Networks, were highly collaborative, and we left Vegas with more ideas for future integration development.

Creating Custom Meraki Dashboard Tiles for SecureX, by Matt Vander Horst

One of the biggest benefits of Cisco SecureX is its open architecture. Anyone can build integrations for SecureX if they can develop an API with the right endpoints that speak the right language. In the case of SecureX, the language is the Cisco Threat Intelligence Model (CTIM). As mentioned above, Cisco Meraki powered Black Hat USA 2022 by providing wired and wireless networking for the entire conference. This meant a lot of equipment and users to keep track of. To avoid having to switch between two different dashboards in the NOC, we decided to build a SecureX integration that would provide Meraki dashboard tiles directly into our single pane of glass: SecureX.

Building an integration for SecureX is simple: decide what functionality you want your integration to offer, build an internet-accessible API that offers those functions, and then add the integration to SecureX. At Black Hat, our Meraki integration supported two capabilities: health and dashboard. Here’s a summary of those capabilities and the API endpoints they expect:

  • Health – Enables SecureX to make sure the module is reachable and working properly. API endpoint: /health
  • Dashboard – Provides a list of available dashboard tiles and, after a tile is added to a dashboard, the tile data itself. API endpoints: /tiles and /tile-data

With our capabilities decided, we moved on to building the API for SecureX to talk to. SecureX doesn’t care how you build this API, as long as it has the expected endpoints and speaks the right language. You could build a SecureX-compatible API directly into your product, as a serverless Amazon Web Services (AWS) Lambda, as a Python web app with Django, and so on. To enable rapid development at Black Hat, we chose to build our integration API on an existing Ubuntu server in AWS running Apache and PHP.
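For readers who want a feel for the shape of such an API, here is a minimal sketch of the three endpoints in Python with Flask. This is not the code we ran at the show (which, as noted above, was PHP on Apache), and the JSON bodies are simplified placeholders rather than the exact SecureX module schema:

```python
# Minimal sketch of a SecureX-compatible tile API (illustrative only).
# The response shapes are simplified placeholders, not the exact SecureX
# module schema.
from flask import Flask, jsonify, request

app = Flask(__name__)

TILES = [
    # Placeholder entry; the tiles we actually offered are described below.
    {"id": "example_tile", "title": "Example Tile"},
]

@app.route("/health", methods=["GET", "POST"])
def health():
    # SecureX calls this to confirm the module is reachable and working.
    return jsonify({"data": {"status": "ok"}})

@app.route("/tiles", methods=["GET", "POST"])
def tiles():
    # Advertise the dashboard tiles this integration provides.
    return jsonify({"data": TILES})

@app.route("/tile-data", methods=["GET", "POST"])
def tile_data():
    # Once a tile is added to a dashboard, SecureX asks for its data here.
    requested = (request.get_json(silent=True) or {}).get("tile_id")
    return jsonify({"data": {"tile_id": requested, "rows": []}})

if __name__ == "__main__":
    app.run(port=8080)
```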

After building the API framework on our AWS server, we had to decide which dashboard tiles to offer. Here’s what we ended up supporting:

  • Top Applications – Shows the top 10 applications by flow count
  • Client Statistics – Shows a summary of clients
  • Top SSIDs by Usage in GB – Shows the top 10 SSIDs by data usage in GB
  • Access Point Status – Shows a summary of access points

Finally, once the API was up and running, we could add the integration to SecureX. To do this, you need to create a module definition and then push it to SecureX using its IROH-INT API. After the module is created, it appears in the Available Integration Modules section of SecureX and can be added. Here’s what our module looked like after being added to the Black Hat SecureX instance:

After adding our new tiles to the SecureX dashboard, SecureX would ask our API for data. The API we built would fetch the data from Meraki’s APIs, format the data from Meraki for SecureX, and then return the formatted data. Here’s the result:

These dashboard tiles gave us useful insights into what was going on in the Meraki network environment alongside our existing dashboard tiles for other products such as Cisco Secure Endpoint, Cisco Umbrella, Cisco Secure Malware Analytics, and so on.
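As a rough illustration of that fetch-and-format step, the sketch below pulls clients from the Meraki Dashboard API and shapes a simple summary for a hypothetical “Client Statistics” tile. The environment variables, the tile JSON layout, and the omission of pagination and error handling are all simplifications for the example:

```python
# Sketch of the back end behind a client-summary tile: query the Meraki
# Dashboard API for recent clients, then reshape the result for SecureX.
# Network ID/API key handling and the tile JSON layout are assumptions;
# pagination and retries are omitted for brevity.
import os
import requests

MERAKI_BASE = "https://api.meraki.com/api/v1"
API_KEY = os.environ["MERAKI_API_KEY"]        # assumed environment variable
NETWORK_ID = os.environ["MERAKI_NETWORK_ID"]  # assumed environment variable


def fetch_clients(timespan_seconds: int = 3600) -> list:
    """Return clients seen on the network over the last hour."""
    resp = requests.get(
        f"{MERAKI_BASE}/networks/{NETWORK_ID}/clients",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"timespan": timespan_seconds, "perPage": 1000},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def client_statistics_tile() -> dict:
    """Shape a simple summary the way our /tile-data endpoint could return it."""
    clients = fetch_clients()
    online = sum(1 for c in clients if c.get("status") == "Online")
    return {
        "tile_id": "client_statistics",
        "rows": [
            {"label": "Clients seen (last hour)", "value": len(clients)},
            {"label": "Currently online", "value": online},
        ],
    }


if __name__ == "__main__":
    print(client_statistics_tile())
```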

If you want to learn more about building integrations with SecureX, check out these resources:

Talos Threat Hunting, by Jerzy ‘Yuri’ Kramarz and Michael Kelly

Black Hat USA 2022 was our first fully supported event, with an onsite threat hunting team deployed from Talos Incident Response (TIR). Our colleagues and friends from various business units, connected by SecureX integrations, granted us access to all the underlying consoles and API endpoints to support threat hunting efforts enhanced by Talos intelligence.

The threat hunting team focused on answering three key hypothesis-driven questions, and matched that with data modelling across all of the different technology stacks deployed in the Black Hat NOC:

  • Are there any attendees attempting to breach each other’s systems in or outside of a classroom environment?
  • Are there any attendees attempting to subvert any NOC Systems?
  • Are there any attendees whose devices are compromised, and could we warn them about it?

To answer these hypotheses, our analysis started with understanding how the network architecture was laid out and what kind of data access was granted to the NOC. We quickly realized that our critical partners were key to extending visibility beyond Cisco-deployed technologies. Many thanks go to our friends from NetWitness and Palo Alto Networks for sharing full access to their technologies, ensuring that hunting did not stop at just Cisco kit and that contextual intelligence could be gathered across different security products.

The daily threat hunt started with gathering data from the Meraki API to identify IP- and DNS-level requests leaving the devices connected to wireless access points across the entire conference. Although Meraki does not directly filter the traffic, we wanted to find signs of malicious activity such as DNS exfiltration attempts or connections to known malicious domains that were not part of the class teaching. Given the level of access, we were then able to investigate the network traffic captures associated with suspicious connections and check for suspected command and control (C2) points (there were a few, from different threat actors!) or attempts to connect back to malicious DNS or fast-flux domains, which indicated that some of the attendee devices were indeed compromised with malware.

That said, this is to be expected, given the hostility of the network we were researching and the fact that classroom environments have users who can bring their own devices for hands-on labs. SecureX allowed us to quickly plot this internally to find specific hosts which were connecting to and talking with malicious endpoints, while also showing a number of additional data points useful for the investigation and hunting. Below is one such investigation, using SecureX threat response.

While looking at internal traffic, we also found and plotted quite a few different port scans running across the internal network. While we did not stop these, it was interesting to see the different attempts by students to find ports and devices across networks. Good thing that network isolation was in place to prevent that! We blurred out the IP and MAC addresses in the image below.

Here is another example we found of some really nice port scan clusters running across both internal and external networks. This time it was a case of multiple hosts scanning each other and looking to discover ports locally and across many Internet-based systems. All of that was part of a class, but we had to verify it, as it looked quite suspicious from the outset. Again, the picture is blurred for anonymity.

In a few instances, we also identified remarkably interesting clear-text LDAP traffic leaving the environment, giving a clear indicator of which organization a specific device belonged to, simply because of the domain name requested in cleartext. It was quite interesting to see that in 2022 we still have a lot of devices talking clear-text protocols such as POP3, LDAP, HTTP or FTP, which are easy to subvert via man-in-the-middle attacks and can easily disclose the content of important messages such as email or server credentials. Below is an example of plain-text email attachments, visible in NetWitness and Cisco Secure Malware Analytics.

In terms of external attacks, Log4j exploitation attempts were pretty much a daily occurrence against the infrastructure and applications used for attendee registration, along with other typical web-based attacks such as SQL injection or path traversal. Overall, we saw a good number of port scans, floods, probes and all kinds of web application exploitation attempts showing up daily, at various peak hours. Fortunately, all of them were identified for context (is this part of a training class or demonstration?) and contained (if appropriate) before causing any harm to external systems. Because we could intercept boundary traffic and investigate specific PCAP dumps, we used all these attacks to identify various command-and-control servers, which we also hunted for internally to ensure that no internal system was compromised.

The final piece of the puzzle we looked to address while threat hunting during Black Hat USA 2022 was automation to discover interesting investigation avenues. We both investigated the possibility of threat hunting with Jupyter notebooks to find outliers that warrant a closer look. We created a set of scripts which would gather data from API endpoints and build data frames that could be modeled for further analysis. This allowed us to quickly filter out the systems and connections which were not that interesting, and then focus on the specific hosts we should be checking across different technology stacks such as NetWitness and Palo Alto. A sketch of that workflow is shown below.
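Here is a minimal notebook-style sketch of that workflow. The input file and its columns are hypothetical; the point is the data-frame pattern we used to filter the uninteresting bulk and surface hosts worth a closer look:

```python
# Notebook-style sketch of the outlier hunt described above. The export
# format (one row per observed connection) is an assumption; only the
# data-frame workflow is the point here.
import pandas as pd

# Hypothetical export with columns: src_ip, dst_ip, dst_domain, bytes
conns = pd.read_json("connections.json")

per_host = (
    conns.groupby("src_ip")
    .agg(
        distinct_domains=("dst_domain", "nunique"),
        distinct_dst_ips=("dst_ip", "nunique"),
        total_bytes=("bytes", "sum"),
    )
    .reset_index()
)

# Crude first pass: flag hosts whose destination fan-out is more than three
# standard deviations above the mean, then pivot to NetWitness / Palo Alto
# for a detailed look at those hosts.
threshold = (
    per_host["distinct_dst_ips"].mean() + 3 * per_host["distinct_dst_ips"].std()
)
suspects = per_host[per_host["distinct_dst_ips"] > threshold]

print(suspects.sort_values("distinct_dst_ips", ascending=False).head(20))
```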

Unmistaken Identity, by Ben Greenbaum

An unusual aspect of the Black Hat NOC and its associated security operations is that this is an intentionally hostile network. People come to learn new tricks and to conduct what would, in any other circumstance, rightfully be viewed as malicious, unwanted behavior. So, determining whether something is “acceptable” or “unacceptable” malicious behavior is an added step. Additionally, this is a heavily BYOD environment, and while we do not want attendees attacking each other or our infrastructure, there is a certain amount of suspicious or indicative behavior we may need to overlook in order to focus on higher-priority alerts.

In short, there are, broadly speaking, three levels of security events at Black Hat:

  • Allowed – classroom or demonstration activities; i.e., a large part of the purpose of Black Hat
  • Tolerated – C&C communications from BYOD systems, other evidence of infections that are not evidence of direct attacks, and attendee cleartext communications that should be encrypted but are not relevant to the operation of the conference
  • Forbidden – direct attacks on attendees, instructors, or infrastructure; overt criminal activity; or other violations of the Code of Conduct

When Umbrella alerted us (via a SecureX orchestration Webex workflow) to DNS requests for a domain involved in “Illegal Activity,” it was reminiscent of an event at a previous conference, where an attendee was caught using the conference network to download forged vaccination documents.

Using the Cisco Secure Malware Analytics platform’s phishing investigation tools, I loaded and explored the subject domain and found it to be a tool that generates and provides pseudo-randomized fake identities, customizable in various ways to match desired demographics. Certainly something that could be used for nefarious purposes, but not illegal in and of itself. Physical security and access control are, however, also important at Black Hat, and if this activity was part of an effort to undermine them, then this was still a concern.

This is, however, also the kind of thing that gets taught at Black Hat…

Using the internal host IP reported by Umbrella, Meraki’s connection records, and the Meraki access point map, we were able to narrow the activity down to a specific classroom. Looking up what was being taught in that room, we confirmed that the activity was related to the course’s subject matter.

Network owners and administrators, especially businesses, typically don’t want their network to be used for crimes. However, here at Black Hat what some would consider “crimes” is just “the curriculum”. This adds a layer of complexity to securing and protecting not just Black Hat, but also Black Hat attendees. In security operations, not every investigation leads to a smoking gun. At Black Hat, even when it does, you may find that the smoking gun was fired in a safe manner at an approved target range. Having the right tools on hand can help you make these determinations quickly and free you up to investigate the next potential threat.

25 Years of Black Hat – Musings from the show (and some DNS stats), by Alejo Calaoagan

Back in Singapore, I wrote about cloud apps and the potential threat landscape surrounding them. My original plan at Black Hat USA was to dig deeper into this vector to see what interesting tidbits I could find on our attendee network. However, given that this was the 25th anniversary of Black Hat (and my 14th in total between Vegas, Singapore, and London), I decided to pivot to talk about the show itself.

I think it’s safe to say that, after two difficult pandemic years, Black Hat is back. Maybe it’s the fact that almost everyone has caught COVID by now (or that a lot of people just stopped caring). I caught it myself at RSA back in June, the first of consecutive summer super-spreader events (Cisco Live Vegas was the following week). Both of those shows were in the 15-18K attendee range, well below their pre-pandemic numbers. Black Hat USA 2022 was estimated at 27,000 attendees.

If I remember correctly, 2019 was in the 25-30K range. Last year in Vegas, there were ~3,000 people at the event, tops. 2021 in London was even lower; it felt like there were fewer than 1,000 attendees. Things certainly picked up in Singapore (2-3K attendees), though that event doesn’t typically see attendee numbers as high as the other locations. All in all, while the pandemic certainly isn’t over, Las Vegas gave glimpses of what things were like before the “Rona” took over our lives.

The show floor was certainly back to the norm, with swag flying off the countertops and lines for Nike sneaker and Lego giveaways wrapping around different booths.  The smiles on people’s faces as they pitched, sold, hustled, and educated the masses reminded me how much I missed this level of engagement.  RSA gave me this feeling as well, before COVID sidelined me midway through the show anyway.

Not everything was quite the same. The Black Hat party scene certainly is not what it used to be. There was no Rapid 7 rager this year or last, or a happy hour event thrown by a security company you’ve never heard of at every bar you walk by on the strip. There were still some good networking events here and there, and there were some awesomely random Vanilla Ice, Sugar Ray, and Smashmouth shows. For those of you familiar with Jeremiah Grossman’s annual Black Hat BJJ throwdown, that’s still, thankfully, a thing. Hopefully, in the coming years, some of that old awesomeness returns….

Enough reminiscing, here are our DNS numbers from the show:

From a sheer traffic perspective, this was the busiest Black Hat ever, with over 50 million DNS requests made…

Digging into these numbers, Umbrella observed over 1.3 million security events, including various types of malware across the attendee network. Our threat hunting team was busy all week!

We’ve also seen an increase in app usage at Black Hat:

  • 2019: ~3,600
  • 2021: ~2,600
  • 2022: ~6,300

In a real-world production environment, Umbrella can block unapproved or high-risk apps via DNS.

The increases in DNS traffic volume and cloud app usage obviously mirror Black Hat’s return to the center stage of security conferences, following two years of pandemic uncertainty. I’m hopeful that things will continue to trend in a positive direction leading up to London, and hopefully we’ll see you all there.

——

Hats off to the entire NOC team. Check out Black Hat Europe in London, 5-8 December 2022!

Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team.

SecureX threat response, orchestration, device insights, custom integrations and Malware Analytics: Ian Redden, Aditya Sankar, Ben Greenbaum, Matt Vander Horst and Robert Taylor

Umbrella DNS: Christian Clasen and Alejo Calaoagan

Talos Incident Response Threat Hunters: Jerzy ‘Yuri’ Kramarz and Michael Kelley

Meraki Systems Manager: Paul Fidler (team leader), Paul Hasstedt and Kevin Carter

Meraki Network Engineering: Evan Basta (team leader), Gregory Michel, Richard Fung and CJ Ramsey

Network Design and Wireless Site Survey: Jeffry Handal, Humphrey Cheung, JW McIntire and Romulo Ferreira

Network Build/Tear Down: Dinkar Sharma, Ryan Maclennan, Ron Taylor and Leo Cruz

Critical support in sourcing and delivering the Meraki APs and switches: Lauren Frederick, Eric Goodwin, Isaac Flemate, Scott Pope and Morgan Mann

Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially Jason Reverri), Lumen, Gigamon, IronNet, and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, Steve Fink, James Pope, Jess Stafford and Steve Oldenbourg).

Read Part 1:

Black Hat USA 2022: Creating Hacker Summer Camp

About Black Hat

For 25 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.



New Evil PLC Attack Weaponizes PLCs to Breach OT and Enterprise Networks

By Ravie Lakshmanan
Cybersecurity researchers have elaborated a novel attack technique that weaponizes programmable logic controllers (PLCs) to gain an initial foothold in engineering workstations and subsequently invade the operational technology (OT) networks. Dubbed "Evil PLC" attack by industrial security firm Claroty, the issue impacts engineering workstation software from Rockwell Automation, Schneider

Researchers Disclose 56 Vulnerabilities Impacting OT Devices from 10 Vendors

By Ravie Lakshmanan
Nearly five dozen security vulnerabilities have been disclosed in devices from 10 operational technology (OT) vendors due to what researchers call "insecure-by-design practices." Collectively dubbed OT:ICEFALL by Forescout, the 56 issues span as many as 26 device models from Bently Nevada, Emerson, Honeywell, JTEKT, Motorola, Omron, Phoenix Contact, Siemens, and Yokogawa. "Exploiting these

Black Hat Asia 2022 Continued: Cisco Secure Integrations

By Jessica Bair

In part one of our Black Hat Asia 2022 NOC blog, we discussed building the network with Meraki: 

  • From attendee to press to volunteer – coming back to Black Hat as NOC volunteer by Humphrey Cheung 
  • Meraki MR, MS, MX and Systems Manager by Paul Fidler 
  • Meraki Scanning API Receiver by Christian Clasen 

In this part two, we will discuss:  

  • SecureX: Bringing Threat Intelligence Together by Ian Redden 
  • Device type spoofing event by Jonny Noble 
  • Self Service with SecureX Orchestration and Slack by Matt Vander Horst 
  • Using SecureX sign-on to streamline access to the Cisco Stack at Black Hat by Adi Sankar 
  • Future Threat Vectors to Consider – Cloud App Discovery by Alejo Calaoagan 
  • Malware Threat Intelligence made easy and available, with Cisco Secure Malware Analytics and SecureX by Ben Greenbaum 

SecureX: Bringing Threat Intelligence Together by Ian Redden 

In addition to the Meraki networking gear, Cisco Secure also shipped two Umbrella DNS virtual appliances to Black Hat Asia, for internal network visibility with redundancy, in addition to providing: 

Cisco Secure Threat Intelligence (correlated through SecureX)

Donated Partner Threat Intelligence (correlated through SecureX)

Open-Source Threat Intelligence (correlated through SecureX)

Continued Integrations from past Black Hat events

  • NetWitness PCAP file carving and submission to Cisco Secure Malware Analytics (formerly Threat Grid) for analysis

New Integrations Created at Black Hat Asia 2022

  • SecureX threat response and NetWitness SIEM: Sightings in investigations
  • SecureX orchestration workflows for Slack that enabled:
    • Administrators to block a device by MAC address for violating the conference Code of Conduct
    • NOC members to query Meraki for information about network devices and their clients
    • NOC members to update the VLAN on a Meraki switchport
    • NOC members to query Palo Alto Panorama for client information
    • Notification if an AP went down
  • NetWitness SIEM integration with Meraki syslogs
  • Palo Alto Panorama integration with Meraki syslogs
  • Palo Alto Cortex XSOAR integration with Meraki and Umbrella

Device type spoofing event by Jonny Noble

Overview

During the conference, a NOC Partner informed us that they received an alert from May 10 concerning an endpoint client that accessed two domains that they saw as malicious:

  • legendarytable[.]com
  • drakefollow[.]com

Client details from Partner:

  • Private IP: 10.XXX.XXX.XXX
  • Client name: LAPTOP-8MLGDXXXX
  • MAC: f4:XX:XX:XX:XX:XX
  • User agent for detected incidents: Mozilla/5.0 (iPhone; CPU iPhone OS 11_1_2 like Mac OS X) AppleWebKit/602.2.8 (KHTML, like Gecko) Version/11.0 Mobile/14B55c Safari/602.1

Based on the user agent, the partner derived that the device type was an Apple iPhone.

SecureX analysis

  • legendarytable[.]com → Judgement of Suspicious by alphaMountain.ai
  • drakefollow[.]com → Judgement of Malicious by alphaMountain.ai

Umbrella Investigate analysis

Umbrella Investigate positions both domains as low risk, both registered recently in Poland, and both hosted on the same IP:

Despite the low-risk score, the nameservers have high counts of malicious associated domains:

Targeting users in the USA, UK, and Nigeria:

Meraki analysis

Based on the time of the incident, we could trace the device’s location (based on its IP address). This was thanks to the effort we invested in mapping out the exact location of all the Meraki APs we deployed across the convention center, with an overlay of the event map covering the area of the event:

  • Access Point: APXX
  • Room: Orchid Ballroom XXX
  • Training course at time in location: “Web Hacking Black Belt Edition”

Further analysis and conclusions

The device name (LAPTOP-8MLGXXXXXX) and MAC address seen (f4:XX:XX:XX:XX:XX) both matched across the partner and Meraki, so there was no question that we were analyzing the same device.

Based on the user agent captured by the partner, the device type was an Apple iPhone. However, Meraki was reporting the device and its OS as “Intel, Android”.

A quick lookup of the MAC address confirmed that the OUI (organizationally unique identifier) f42679 belongs to Intel Malaysia, making it unlikely that this was an Apple iPhone.
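For readers unfamiliar with the check, the idea is simply that the first three octets of a MAC address identify the manufacturer. Here is a tiny sketch, with the mapping reduced to the one entry relevant here (a real lookup would consult the full IEEE registry), and a clearly hypothetical MAC used for the demo:

```python
# Minimal OUI check: the first three octets of a MAC identify the vendor.
# Only the entry relevant to this investigation is included.
KNOWN_OUIS = {
    "F4:26:79": "Intel Malaysia",
}

def vendor_for_mac(mac: str) -> str:
    oui = mac.upper().replace("-", ":")[:8]
    return KNOWN_OUIS.get(oui, "unknown vendor")

print(vendor_for_mac("f4:26:79:aa:bb:cc"))  # -> Intel Malaysia
```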

The description for the training “Web Hacking Black Belt Edition” can be seen here:

https://www.blackhat.com/asia-22/training/schedule/#web-hacking-black-belt-edition–day-25388

It is highly likely that the training content included the use of tools and techniques for spoofing the user agent or device type.

There is also a high probability that the two domains observed were used as part of the training activity, rather than this being part of a live attack.

It is clear that integrating the various Cisco technologies used in the investigation of this incident (Meraki wireless infrastructure, SecureX, Umbrella, Investigate), together with the close partnership and collaboration of our NOC partners, positioned us where we needed to be and gave us the tools to swiftly collect the data, join the dots, draw conclusions, and bring the incident to a successful close.

Self Service with SecureX Orchestration and Slack by Matt Vander Horst

Overview

Since Meraki was a new platform for much of the NOC’s staff, we wanted to make information easier to gather and enable a certain amount of self-service. Since the Black Hat NOC uses Slack for messaging, we decided to create a Slack bot that NOC staff could use to interact with the Meraki infrastructure as well as Palo Alto Panorama using the SecureX Orchestration remote appliance. When users communicate with the bot, webhooks are sent to Cisco SecureX Orchestration to do the work on the back end and send the results back to the user.

Design

Here’s how this integration works:

  1. When a Slack user triggers a “slash command” (a message starting with ‘/’) or another type of interaction, a webhook is sent to SecureX Orchestration. Webhooks trigger orchestration workflows, which can do any number of things. In this case, we have two different workflows: one to handle slash commands and another for interactive elements such as forms (more on the workflows later).
  2. Once the workflow is triggered, it makes the necessary API calls to Meraki or Palo Alto Panorama depending on the command issued.
  3. After the workflow is finished, the results are passed back to Slack using either an API request (for slash commands) or a webhook (for interactive elements); a sketch of the slash-command reply is shown after this list.
  4. The user is presented with the results of their inquiry or the action they requested.
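To illustrate the slash-command half of step 3, the sketch below posts a workflow’s result back to the response_url that Slack includes with every slash-command payload. At the show this step ran inside a SecureX orchestration workflow; the Python here only shows the Slack mechanics, and the command and message text are hypothetical:

```python
# Illustration of replying to a Slack slash command. Slack includes a
# response_url in the slash-command payload that accepts a simple JSON
# message for a limited time after the command is issued.
import requests

def reply_to_slash_command(response_url: str, text: str) -> None:
    """Post a workflow result back to the user who ran the command."""
    resp = requests.post(
        response_url,
        json={
            "response_type": "ephemeral",  # visible only to the requester
            "text": text,
        },
        timeout=10,
    )
    resp.raise_for_status()

# Hypothetical usage after a Meraki lookup completes:
# reply_to_slash_command(payload["response_url"],
#                        "Client 10.x.x.x last seen 2 minutes ago on AP-42")
```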

Workflow #1: Handle Slash Commands

Slash commands are a special type of message built into Slack that allow users to interact with a bot. When a Slack user executes a slash command, the command and its arguments are sent to SecureX Orchestration where a workflow handles the command. The table below shows a summary of the slash commands our bot supported for Black Hat Asia 2022:

Here’s a sample of a portion of the SecureX Orchestration workflow that powers the above commands:

And here’s a sample of firewall logs as returned from the “/pan_traffic_history” command:

Workflow #2: Handle Interactivity

A more advanced form of user interaction comes in the form of Slack blocks. Instead of including a command’s arguments in the command itself, you can execute the command and Slack will present you with a form to complete, like this one for the “/update_vlan” command:

These forms are much more user friendly and allow information to be pre-populated for the user. In the example above, the user can simply select the switch to configure from a drop-down list instead of having to enter its name or serial number. When the user submits one of these forms, a webhook is sent to SecureX Orchestration to execute a workflow. The workflow takes the requested action and sends back a confirmation to the user:

Conclusion

While these two workflows only scratched the surface of what can be done with SecureX Orchestration webhooks and Slack, we now have a foundation that can be easily expanded upon going forward. We can add additional commands, new forms of interactivity, and continue to enable NOC staff to get the information they need and take necessary action. The goal of orchestration is to make life simpler, whether it is by automating our interactions with technology or making those interactions easier for the user. 

Future Threat Vectors to Consider – Cloud App Discovery by Alejo Calaoagan

Since 2017 (starting with Black Hat USA in Las Vegas), Cisco Umbrella has provided DNS security to the Black Hat attendee network, adding layers of traffic visibility previously not seen. Our efforts have largely been successful, identifying thousands of threats over the years and mitigating them via Umbrella’s blocking capabilities when necessary. This was taken a step further at Black Hat London 2021, where we introduced our virtual appliances to provide source IP attribution for the devices making requests.

Here at Black Hat Asia 2022, we’ve been noodling on additional ways to provide advanced protection for future shows, and it starts with Umbrella’s Cloud Application Discovery feature, which identified 2,286 unique applications accessed by users on the attendee network across the four-day conference. Looking at a snapshot from a single day of the show, Umbrella captured 572,282 DNS requests from all cloud apps, with over 42,000 posing either high or very high risk.

Digging deeper into the data, we see not only the types of apps being accessed…

…but also see the apps themselves…

…and we can flag apps that look suspicious.

We also include risk breakdowns by category…

…and drill downs on each.

While this data alone won’t provide enough information to take action, including it in our analysis, something we have been doing, may provide a window into new threat vectors that might previously have gone unseen. For example, if we identify a compromised device infected with malware, or a device attempting to access restricted resources on the network, we can dig deeper into the types of cloud apps those devices are using and correlate that data with suspicious request activity, potentially uncovering tools we should be blocking in the future.

I can’t say for certain how much this extra data set will help us uncover new threats, but, with Black Hat USA just around the corner, we’ll find out soon.

Using SecureX sign-on to streamline access to the Cisco Stack at Black Hat by Adi Sankar

From five years ago to now, Cisco has tremendously expanded our presence at Black Hat to include a multitude of products. Of course, sign-on was simple when it was just one product (Secure Malware Analytics) and one user to log in. When it came time to add a new technology to the stack, it was added separately as a standalone product with its own method of logging in. As the number of products increased, so did the number of Cisco staff at the conference to support them. Sharing usernames and passwords became tedious, not to mention insecure, especially with 15 Cisco staff, plus partners, accessing the platforms.

The Cisco Secure stack at Black Hat includes SecureX, Umbrella, Malware Analytics, Secure Endpoint (iOS Clarity), and Meraki. All of these technologies support SAML SSO natively with SecureX sign-on. This means that each of our Cisco staff members can have an individual SecureX sign-on account to log into the various consoles. This results in better role-based access control, better audit logging and an overall better login experience. With SecureX sign-on, we can log into all the products while only typing a password one time and approving one Cisco Duo Multi-Factor Authentication (MFA) push.

How does this magic work behind the scenes? It’s actually rather simple to configure SSO for each of the Cisco technologies, since they all support SecureX sign-on natively. First and foremost, you must set up a new SecureX org by creating a SecureX sign-on account, creating a new organization, and integrating at least one Cisco technology. In this case, I created a new SecureX organization for Black Hat and added the Secure Endpoint module, Umbrella module, Meraki Systems Manager module and the Secure Malware Analytics module. Then, from Administration → Users in SecureX, I sent an invite to the Cisco staffers attending the conference, which contained a link to create their account and join the Black Hat SecureX organization. Next, let’s take a look at the individual product configurations.

Meraki:

In the Meraki organization settings, enable SecureX sign-on. Then, under Organization → Administrators, add a new user and specify SecureX sign-on as the authentication method. Meraki even lets you limit users to particular networks and set permission levels for those networks. Accepting the email invitation is easy, since the user should already be logged into their SecureX sign-on account. Now, logging into Meraki only requires an email address and no password or additional Duo push.

Umbrella:

Under Admin → Authentication, configure SecureX sign-on, which requires a test login to ensure you can still log in before using SSO for authentication to Umbrella. There is no need to configure MFA in Umbrella, since SecureX sign-on comes with built-in Duo MFA. Existing users, and any new users added in Umbrella under Admin → Accounts, will now use SecureX sign-on to log into Umbrella. Logging into Umbrella is now a seamless launch from the SecureX dashboard or from the SecureX ribbon in any of the other consoles.

Secure Malware Analytics:

A Secure Malware Analytics organization admin can create new users in their Threat Grid tenant. This username is unique to Malware Analytics, but it can be connected to a SecureX sign-on account to take advantage of the seamless login flow. From the email invitation, the user creates a password for their Malware Analytics user and accepts the EULA. Then, in the top right under My Malware Analytics Account, the user has the option to connect their SecureX sign-on account, which is a one-click process if already signed in with SecureX sign-on. Now, when a user navigates to the Malware Analytics login page, simply clicking “Login with SecureX Sign-On” grants them access to the console.

Secure Endpoint:

The Secure Endpoint deployment at Black Hat is limited to iOS Clarity through Meraki Systems Manager for the conference iOS devices. Most of the asset information we need about the iPhones/iPads is brought in through the SecureX Device Insights inventory. However, for initial configuration and to view device trajectory, it is necessary to log into Secure Endpoint. A new Secure Endpoint account can be created under Accounts → Users, and an invite is sent to the corresponding email address. Accepting the invite is a smooth process, since the user is already signed in with SecureX sign-on. Privileges for the user in the Endpoint console can be granted from within the user account.

Conclusion:

To sum it all up, SecureX sign-on is the standard for the Cisco stack moving forward. With a new SecureX organization instantiated using SecureX sign-on, any new users of the Cisco stack at Black Hat will use SecureX sign-on as well. It has made our user management much more secure as we have expanded our presence at Black Hat, and it provides a unified login mechanism for all the products, modernizing our login experience at the conference.

Malware Threat Intelligence made easy and available, with Cisco Secure Malware Analytics and SecureX by Ben Greenbaum

I’d gotten used to people’s reactions upon seeing SecureX in use for the first time. A few times at Black Hat, a small audience gathered just to watch us effortlessly correlate data from multiple threat intelligence repositories and several security sensor networks, in just a few clicks in a single interface, for rapid sequencing of events and an intuitive understanding of their situations, causes, and consequences. You’ve already read about a few of these instances above. Here is just one example of SecureX automatically putting together a chronological history of observed network events detected by products from two vendors (Cisco Umbrella and NetWitness). The participation of NetWitness in this and all of our other investigations was made possible by our open architecture, available APIs and API specifications, and the creation of the NetWitness module described above.

In addition to the traffic and online activities of hundreds of user devices on the network, we were responsible for monitoring a handful of Black Hat-owned devices as well. SecureX Device Insights made it easy to access information about these assets, either en masse or as needed during an ongoing investigation. iOS Clarity for Secure Endpoint and Meraki Systems Manager both contributed to this useful tool, which adds business intelligence and asset context to SecureX’s native event and threat intelligence, for more complete and more actionable security intelligence overall.

SecureX is made possible by dozens of integrations, each bringing their own unique information and capabilities. This time though, for me, the star of the SecureX show was our malware analysis engine, Cisco Secure Malware Analytics (CSMA). Shortly before Black Hat Asia, the CSMA team released a new version of their SecureX module. SecureX can now query CSMA’s database of malware behavior and activity, including all relevant indicators and observables, as an automated part of the regular process of any investigation performed in SecureX Threat Response.

This capability is most useful in two scenarios:

  1. Determining if suspicious domains, IPs and files reported by any other technology had been observed in the analysis of any of the millions of publicly submitted file samples, or our own.
  2. Rapidly gathering additional context about files submitted to the analysis engine by the integrated products in the Black Hat NOC.

The first was a significant time saver in several investigations. In the example below, we received an alert about connections to a suspicious domain. In that scenario, our first course of action is to investigate the domain and any other observables reported with it (typically the internal and public IPs included in the alert). Thanks to the new CSMA module, we immediately discovered that the domain had a history of being contacted by a variety of malware samples from multiple families. That information, corroborated by automatically gathered reputation data about each of those files from multiple sources, including SecureX partner Recorded Future (who donated a full threat intelligence license to the Black Hat NOC), gave us an immediate next direction to investigate as we hunted for evidence of those files in network traffic, or of any traffic to other C&C resources known to be used by those families. Going from the first alert to a robust, data-driven set of related signals to look for took only minutes.

The other scenario, investigating files submitted for analysis, came up less frequently but when it did, the CSMA/SecureX integration was equally impressive. We could rapidly, nearly immediately, look for evidence of any of our analyzed samples in the environment across all other deployed SecureX-compatible technologies. That evidence was no longer limited to searching for the hash itself, but included any of the network resources or dropped payloads associated with the sample as well, easily identifying local targets who had not perhaps seen the exact variant submitted, but who had nonetheless been in contact with that sample’s Command and Control infrastructure or other related artifacts.

And of course, thanks to the presence of the ribbon in the CSMA UI, we could be even more efficient and do this with multiple samples at once.

SecureX greatly increased the efficiency of our small volunteer team, and certainly made it possible for us to investigate more alerts and events, and hunt for more threats, all more thoroughly, than we would have been able to without it. SecureX truly took this team to the next level, by augmenting and operationalizing the tools and the staff that we had at our disposal.

We look forward to seeing you at Black Hat USA in Las Vegas, 6-11 August 2022!

Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team: Aditya Sankar, Aldous Yeung, Alejo Calaoagan, Ben Greenbaum, Christian Clasen, Felix H Y Lam, George Dorsey, Humphrey Cheung, Ian Redden, Jeffrey Chua, Jeffry Handal, Jonny Noble, Matt Vander Horst, Paul Fidler and Steven Fan.

Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially James Holland), Gigamon, IronNet (especially Bill Swearington), and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, James Pope, Steve Fink and Steve Oldenbourg).

About Black Hat

For more than 20 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.

Black Hat Asia 2022: Building the Network

By Jessica Bair

In part one of this issue of our Black Hat Asia NOC blog, you will find: 

  • From attendee to press to volunteer – coming back to Black Hat as NOC volunteer by Humphrey Cheung 
  • Meraki MR, MS, MX and Systems Manager by Paul Fidler 
  • Meraki Scanning API Receiver by Christian Clasen 

Cisco Meraki was asked by Black Hat Events to be the Official Wired and Wireless Network Equipment, for Black Hat Asia 2022, in Singapore, 10-13 May 2022; in addition to providing the Mobile Device Management (since Black Hat USA 2021), Malware Analysis (since Black Hat USA 2016), & DNS (since Black Hat USA 2017) for the Network Operations Center. We were proud to collaborate with NOC partners Gigamon, IronNet, MyRepublic, NetWitness and Palo Alto Networks. 

To accomplish this undertaking in a few weeks’ time, after the conference got a green light under the new COVID protocols, Cisco Meraki and Cisco Secure leadership gave their full support to send the necessary hardware, software licenses and staff to Singapore. Thirteen Cisco engineers deployed to the Marina Bay Sands Convention Center from Singapore, Australia, the United States and the United Kingdom, with two additional remote Cisco engineers in the United States.

From attendee to press to volunteer – coming back to Black Hat as NOC volunteer by Humphrey Cheung

Loops in the networking world are usually considered a bad thing. Spanning tree loops and routing loops happen in an instant and can ruin your whole day, but over the 2nd week in May, I made a different kind of loop. Twenty years ago, I first attended the Black Hat and Defcon conventions – yay Caesars Palace and Alexis Park – a wide-eyed tech newbie who barely knew what WEP hacking, Driftnet image stealing and session hijacking meant. The community was amazing and the friendships and knowledge I gained, springboarded my IT career.

In 2005, I was lucky enough to become a Senior Editor at Tom’s Hardware Guide and attended Black Hat as accredited press from 2005 to 2008. From writing about the latest hardware zero-days to learning how to steal cookies from the master himself, Robert Graham, I can say, without any doubt, Black Hat and Defcon were my favorite events of the year.

Since 2016, I have been a Technical Solutions Architect at Cisco Meraki and have worked on insanely large Meraki installations – some with twenty thousand branches and more than a hundred thousand access points – so setting up the Black Hat network should be a piece of cake, right? Heck no, this is unlike any network you’ve experienced!

As an attendee and press, I took the Black Hat network for granted. To take a phrase that we often hear about Cisco Meraki equipment, “it just works”. Back then, while I did see access points and switches around the show, I never really dived into how everything was set up.

A serious challenge was to secure the needed hardware and ship it in time for the conference, given the global supply chain issues. Special recognition to Jeffry Handal for locating the hardware and obtaining the approvals to donate to Black Hat Events. For Black Hat Asia, Cisco Meraki shipped:

Let’s start with availability. iPads and iPhones are scanning QR codes to register attendees. Badge printers need access to the registration system. Training rooms all have their separate wireless networks – after all, Black Hat attendees get a baptism by fire on network defense and attack. To top it all off, hundreds of attendees gulped down terabytes of data through the main conference wireless network.

All this connectivity was provided by Cisco Meraki access points, switches and security appliances, along with integrations into SecureX, Umbrella and other products. We fielded a literal army of engineers to stand up the network in less than two days… just in time for the training sessions on May 10 to 13 and throughout the Black Hat Briefings and Business Hall on May 12 and 13.

Let’s talk security and visibility. For a few days, the Black Hat network is probably one of the most hostile in the world. Attendees learn new exploits, download new tools and are encouraged to test them out. Being able to drill down on attendee connection details and traffic was instrumental in ensuring attendees didn’t get too crazy.

On the wireless front, we made extensive use of our Radio Profiles to reduce interference by tuning power and channel settings. We enabled band steering to get more clients on the 5GHz bands versus 2.4GHz, and watched the Location Heatmap like a hawk, looking for hotspots and dead areas. Handling the barrage of wireless change requests – enabling or disabling an SSID, moving VLANs (Virtual Local Area Networks), enabling tunneling or NAT mode – was a snap with the Meraki Dashboard.

Shutting Down a Network Scanner

While the Cisco Meraki Dashboard is extremely powerful, we happily supported exporting logs to and integrating with major event collectors, such as the NetWitness SIEM and even the Palo Alto firewall. On Thursday morning, the NOC team found a potentially malicious MacBook Pro performing vulnerability scans against the Black Hat management network. It is a balance, as we must allow trainings and demos to connect to malicious websites, and to download and execute malware. However, there is a Code of Conduct that all attendees are expected to follow, which is posted at Registration with a QR code.

The Cisco Meraki network was exporting syslog and other information to the Palo Alto firewall, and after correlating the data between the Palo Alto Dashboard and Cisco Meraki client details page, we tracked down the laptop to the Business Hall.

We briefed the NOC management, who confirmed the scanning was a violation of the Code of Conduct, and the device was blocked in the Meraki Dashboard, with the instruction to come to the NOC.

The device name and location made it very easy to determine which conference attendee it belonged to.

A delegation from the NOC went to the Business Hall, politely waited for the demo to finish at the booth and had a thoughtful conversation with the person about scanning the network. 😊

Coming back to Black Hat as a NOC volunteer was an amazing experience.  While it made for long days with little sleep, I really can’t think of a better way to give back to the conference that helped jumpstart my professional career.

Meraki MR, MS, MX and Systems Manager by Paul Fidler

With the invitation extended to Cisco Meraki to provide network access, both from a wired and wireless perspective, there was an opportunity to show the value of the Meraki platform’s integration capabilities across access points (APs), switches, security appliances and mobile device management.

The first among these was the use of the Meraki API. We imported the list of MAC addresses of the Meraki MRs to ensure that the APs were named appropriately and tagged, using a single source-of-truth document shared with the NOC management and partners, with the ability to update en masse at any time.
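A rough sketch of that bulk naming and tagging is below. It assumes the source-of-truth document is exported to a CSV with columns serial, name, and tags, and uses the Meraki Dashboard API’s device update call; treat it as an illustration rather than the exact script we ran:

```python
# Sketch: push AP names and tags from a source-of-truth CSV to the Meraki
# Dashboard API (PUT /devices/{serial}). CSV columns are assumed to be
# serial, name, tags (space separated).
import csv
import os
import requests

MERAKI_BASE = "https://api.meraki.com/api/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['MERAKI_API_KEY']}"}

def apply_source_of_truth(csv_path: str) -> None:
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            resp = requests.put(
                f"{MERAKI_BASE}/devices/{row['serial']}",
                headers=HEADERS,
                json={"name": row["name"], "tags": row["tags"].split()},
                timeout=30,
            )
            resp.raise_for_status()
            print(f"Updated {row['serial']} -> {row['name']}")

# apply_source_of_truth("ap_source_of_truth.csv")
```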

Floor Plan and Location Heatmap

On the first day of NOC setup, the Cisco team walked around the venue to discuss AP placements with the staff of the Marina Bay Sands. Whilst we had a simple PowerPoint showing approximate AP placements for the conference, it was noted that the venue team had an incredibly detailed floor plan of the venue. This was acquired in PDF and uploaded into the Meraki Dashboard; with a little fine tuning, it aligned perfectly with the Google Map.

Meraki APs were then placed physically in the venue meeting and training rooms, and very roughly on the floor plan. One of the team members then used a printout of the floor plan to accurately mark the placement of the APs. Having the APs named, as mentioned above, made this an easy task (walking around the venue notwithstanding!). This enabled an accurate heatmap capability.

The Location Heatmap was a new capability for the Black Hat NOC, and the client data visualized in the NOC continued to be of great interest to the Black Hat management team, showing, for example, which trainings, briefings and sponsor booths drew the most interest.

SSID Availability

The ability to use SSID Availability was incredibly useful. It allowed ALL of the access points to be placed within a single Meraki network. Not only that, it let us scope SSIDs to where they were needed: the separate wireless networks for the training events happening during the week, as well as TWO dedicated SSIDs for the Registration and lead-tracking iOS devices (more on which later), one for initial provisioning (which was later turned off) and one for certificate-based authentication, for a very secure connection.

Network Visibility

We were able to monitor the number of connected clients, network usage, the people passing by the network, and location analytics throughout the conference days. We provided visibility access to the Black Hat NOC management and the technology partners (along with full API access), so they could integrate with the network platform.

Alerts

Meraki alerts are exactly that: the ability to be alerted to something that happens in the Dashboard. The default behavior is to be emailed when something happens. Obviously, emails get lost in the noise, so a webhook was created in SecureX orchestration to consume Meraki alerts and send them to Slack (the messaging platform within the Black Hat NOC), using the native template in the Meraki Dashboard. The first alert to be created was for an AP going down: we were to be alerted after an AP had been down for five minutes, which is the smallest window available. A sketch of this alert path is shown below.
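For illustration, here is a minimal sketch of that alert path in standalone Python. At the conference this was a SecureX orchestration workflow rather than a script, and the Meraki payload fields, shared-secret check, and Slack webhook URL are assumptions:

```python
# Sketch of a webhook receiver that turns a Meraki alert into a Slack
# message. Payload field names and the shared-secret check are illustrative;
# the production path used SecureX orchestration instead.
import os
import requests
from flask import Flask, request, abort

app = Flask(__name__)
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]        # assumed
MERAKI_SHARED_SECRET = os.environ["MERAKI_SHARED_SECRET"]  # assumed

@app.route("/meraki-alert", methods=["POST"])
def meraki_alert():
    alert = request.get_json(force=True)
    if alert.get("sharedSecret") != MERAKI_SHARED_SECRET:
        abort(403)
    summary = (
        f"Meraki alert: {alert.get('alertType', 'unknown type')} "
        f"on {alert.get('deviceName', 'unknown device')} "
        f"({alert.get('networkName', 'unknown network')})"
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": summary}, timeout=10)
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```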

The bot was ready; however, the APs stayed up the entire time! 

Meraki Systems Manager

Applying the lessons learned at Black Hat Europe 2021, for the initial configuration of the conference iOS devices, we set up the Registration iPads and lead retrieval iPhones with Umbrella, Secure Endpoint and WiFi config. Devices were, as in London, initially configured using Apple Configurator, to both supervise and enroll the devices into a new Meraki Systems Manager instance in the Dashboard.

However, Black Hat Asia 2022 offered us a unique opportunity to show off some of the more integrated functionality.

System Apps were hidden and various restrictions (disallow joining of unknown networks, disallow tethering to computers, etc.) were applied, as well as a standard WPA2 SSID for the devices that the device vendor had set up (we gave them the name of the SSID and Password).

We also stood up a new SSID and turned on Sentry, which allows you to provision managed devices with not only the SSID information, but also a dynamically generated certificate. The certificate authority and RADIUS server needed for this 802.1X authentication are included in the Meraki Dashboard automatically! When a device attempts to authenticate to the network, if it doesn’t have the certificate, it doesn’t get access. This SSID, using SSID availability, was only available on the access points in the Registration area.

Using the Sentry allowed us to easily identify devices in the client list.

One of the alerts generated via syslog by Meraki, and then viewable and correlated in the NetWitness SIEM, was a ‘De Auth’ event that came from an access point. Whilst we had the IP address of the device, making it easy to find, the fact that the event was a de-auth, meaning 802.1X, narrowed the devices down to JUST the iPads and iPhones used for registration (as all other access points were using WPA2). This was further enhanced by seeing the certificate name used in the de-auth:

Along with the certificate name was the name of the AP: R**

Device Location

One of the inherent problems with iOS device location is when devices are used indoors, as GPS signals just aren’t strong enough to penetrate modern buildings. However, because the accurate locations of the Meraki access points were placed on the floor plan in the Dashboard, and because the Meraki Systems Manager iOS devices were in the same Dashboard organization as the access points, we got a much more accurate map of devices compared to Black Hat Europe 2021 in London.

When conference Registration closed on the last day and the Business Hall sponsors had all returned their iPhones, we were able to remotely wipe all of the devices, removing all attendee data, prior to returning them to the device contractor.

Meraki Scanning API Receiver by Christian Clasen

Leveraging the ubiquity of both WiFi and Bluetooth radios in mobile devices and laptops, Cisco Meraki’s wireless access points can detect and provide location analytics to report on user foot traffic behavior. This can be useful in retail scenarios where customers desire location and movement data to better understand the trends of engagement in their physical stores.

Meraki can aggregate real-time data on detected WiFi and Bluetooth devices and triangulate their location rather precisely when the floor plan and AP placement have been diligently designed and documented. At the Black Hat Asia conference, we made sure to map the AP locations carefully to ensure the highest accuracy possible.

This scanning data is available for clients whether they are associated with the access points or not. At the conference, we were able to get very detailed heatmaps and time-lapse animations representing the movement of attendees throughout the day. This data is valuable to conference organizers in determining the popularity of certain talks, and the attendance at things like keynote presentations and foot traffic at booths.

This was great for monitoring during the event, but the Dashboard would only provide 24-hours of scanning data, limiting what we could do when it came to long-term data analysis. Fortunately for us, Meraki offers an API service we can use to capture this treasure trove offline for further analysis. We only needed to build a receiver for it.

The Receiver Stack

The Scanning API requires that the customer stand up infrastructure to store the data, and then register with the Meraki cloud using a verification code and secret. It is composed of two endpoints:

  1. Validator

Returns the validator string in the response body

[GET] https://yourserver/

This endpoint is called by Meraki to validate the receiving server. It expects to receive a string that matches the validator defined in the Meraki Dashboard for the respective network.

  2. Receiver

Accepts an observation payload from the Meraki cloud

[POST] https://yourserver/

This endpoint is responsible for receiving the observation data provided by Meraki. The URL path should match that of the [GET] request, used for validation.

The payload of each POST will consist of an array of JSON objects containing the observations, aggregated at a per-network level. The JSON will describe either WiFi or BLE device observations, as indicated in the type parameter.

What we needed was a simple technology stack that would contain (at minimum) a publicly accessible web server capable of TLS. In the end, the simplest implementation was a web server written using Python Flask, in a Docker container, deployed in AWS, connected through ngrok.

In fewer than 50 lines of Python, we could accept the inbound connection from Meraki and reply with the chosen verification code. We would then listen for incoming POST data and dump it into a local data store for future analysis. Since this was to be a temporary solution (for the duration of the four-day conference), the thought of registering a public domain and configuring TLS certificates wasn’t particularly appealing. An excellent solution for these types of API integrations is ngrok (https://ngrok.com/), and a handy Python wrapper was available for simple integration into the script (https://pyngrok.readthedocs.io/en/latest/index.html).
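To give a sense of how little code this takes, here is a compact sketch of such a receiver: Flask answers Meraki’s GET with the validator string and appends each POSTed batch of observations to a local file. The validator value and file path are placeholders, and the pyngrok lines mirror how we exposed the service publicly:

```python
# Compact sketch of a Scanning API receiver: return the validator on GET,
# append each POSTed observation batch to a local file on POST. Validator
# and storage path are placeholders.
import json
from flask import Flask, request

VALIDATOR = "replace-with-validator-from-meraki-dashboard"
DATA_FILE = "scanning_observations.jsonl"

app = Flask(__name__)

@app.route("/", methods=["GET"])
def validate():
    # Meraki calls this once to confirm we control the receiver URL.
    return VALIDATOR

@app.route("/", methods=["POST"])
def receive():
    # Each POST carries a batch of WiFi/BLE observations; store it raw for
    # offline analysis after the conference.
    payload = request.get_json(force=True)
    with open(DATA_FILE, "a") as handle:
        handle.write(json.dumps(payload) + "\n")
    return "", 200

if __name__ == "__main__":
    # Expose the local port through an ngrok tunnel so Meraki can reach it
    # over HTTPS without registering our own domain or certificate.
    from pyngrok import ngrok  # assumes pyngrok is installed
    tunnel = ngrok.connect(5000, "http")
    print("Receiver URL for the Meraki Dashboard:", tunnel.public_url)
    app.run(port=5000)
```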

We wanted to easily re-use this stack next time around, so it only made sense to containerize it in Docker. This way, the whole thing could be stood up at the next conference, with one simple command. The image we ended up with would mount a local volume, so that the ingested data would remain persistent across container restarts.

Ngrok allowed us to create a secure tunnel from the container to a publicly resolvable domain in the cloud, with a trusted TLS certificate generated for us. Adding that URL to the Meraki Dashboard was all we needed to do to start ingesting the massive treasure trove of location data from the APs – nearly 1GB of JSON over 24 hours.
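As a rough example of the offline analysis this made possible, the snippet below buckets detected clients by hour from the JSON lines written by the receiver above. The "data", "observations", and "seenTime" field names are assumptions about the payload shape rather than the documented schema, so adjust them to whatever your receiver actually stores.

import json
from collections import Counter

hourly = Counter()
with open("observations.jsonl") as fh:
    for line in fh:
        payload = json.loads(line)
        # Assumed shape: each payload carries a list of per-client observations.
        for obs in payload.get("data", {}).get("observations", []):
            seen = obs.get("seenTime", "")   # e.g. "2022-05-12T09:15:00Z"
            if seen:
                hourly[seen[:13]] += 1       # bucket by YYYY-MM-DDTHH

for hour, count in sorted(hourly.items()):
    print(hour, count)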

This “quick and dirty” solution illustrated the importance of interoperability and openness in the technology space when enabling security operations to gather and analyze the data they require to monitor and secure events like Black Hat, and their enterprise networks as well. It served us well during the conference and will certainly be used again going forward.

Check out part two of the blog, Black Hat Asia 2022 Continued: Cisco Secure Integrations, where we will discuss integrating NOC operations and making your Cisco Secure deployment more effective:

  • SecureX: Bringing Threat Intelligence Together by Ian Redden
  • Device type spoofing event by Jonny Noble
  • Self Service with SecureX Orchestration and Slack by Matt Vander Horst
  • Using SecureX sign-on to streamline access to the Cisco Stack at Black Hat by Adi Sankar
  • Future Threat Vectors to Consider – Cloud App Discovery by Alejo Calaoagan
  • Malware Threat Intelligence made easy and available, with Cisco Secure Malware Analytics and SecureX by Ben Greenbaum

Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team: Aditya Sankar, Aldous Yeung, Alejo Calaoagan, Ben Greenbaum, Christian Clasen, Felix H Y Lam, George Dorsey, Humphrey Cheung, Ian Redden, Jeffrey Chua, Jeffry Handal, Jonny Noble, Matt Vander Horst, Paul Fidler and Steven Fan.

Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially James Holland), Gigamon, IronNet (especially Bill Swearington), and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, James Pope, Steve Fink and Steve Oldenbourg).

About Black Hat

For more than 20 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.



The Art of Ruthless Prioritization and Why it Matters for SecOps

By Randy Kersey

The security operations center (SecOps) team sits on the front lines of a cybersecurity battlefield. The SecOps team works around the clock with precious and limited resources to monitor enterprise systems, identify and investigate cybersecurity threats, and defend against security breaches.

One of the important goals of SecOps is a faster and more effective collaboration among all personnel involved with security. The team seeks to streamline the security triage process to resolve security incidents efficiently and effectively. For this process to be optimized, we believe that ruthless prioritization is critical at all levels of alert response and triage. This ruthless prioritization requires both the processes and the supporting technical platforms to be predictive, accurate, timely, understandable for all involved, and ideally automated. This can be a tall order.

Alert Volumes Have the SecOps Team Under Siege

Most SecOps teams are bombarded with an increasing barrage of alerts each year. A recent IBM report found that complexity is negatively impacting incident response capabilities: those surveyed estimated their organizations were using more than 45 different security tools on average, and that responding to each incident required coordination across around 19 of them.

Depending on enterprise size and industry, these tools may generate many thousands of alerts over periods ranging from hours to days, and many of them may be redundant or of no value. One vendor surveyed IT professionals at the RSA Conference in 2018; the results show that 27% of IT professionals receive more than 1 million security alerts daily[1].

The cost and effort of reviewing all of these alerts are prohibitive for most organizations, so many alerts are effectively deprioritized and immediately ignored. Some surveyed respondents admit to ignoring specific categories of alerts, and some turn off the alerts associated with the security controls that generate much of the alert traffic. However, the one alert you ignore may be the one that results in a major data breach.

Tier 1 SecOps analysts have to manage this barrage of alerts. They are surrounded by consoles and monitors tracking many activities within enterprise networks. There is so much data that incident responders can process only a fraction of it. Alerts pour in every minute and ratchet up the activity level, and the attendant stress, throughout the day.

A Tier 1 SecOps analyst processes up to several hundred alerts a day that require quick review and triage. As each alert is logged, the analyst usually works through a checklist to prioritize it and decide whether escalation is required. This can vary substantially depending on the automation and tools that support their efforts.

Once an alert is determined to be potentially malicious and to require follow-up, it is escalated to a Tier 2 SOC analyst. Tier 2 SOC analysts are primarily security investigators. Perhaps only 1% of alerts, or fewer, are escalated for deep investigation; once again, the numbers can vary substantially depending on the organization and industry.
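To put rough numbers on this funnel, the arithmetic is simple. The figures below are hypothetical, chosen only to illustrate the ratios described above, not taken from any survey.

# Hypothetical triage funnel; illustrative numbers only.
daily_alerts = 10_000      # alerts generated across the toolset in one day
tier1_capacity = 300       # alerts a Tier 1 analyst can realistically triage
escalation_rate = 0.01     # roughly 1% escalated to Tier 2, per the text

escalated = round(tier1_capacity * escalation_rate)   # ~3 deep investigations
untriaged = daily_alerts - tier1_capacity             # effectively deprioritized

print(f"triaged={tier1_capacity} escalated={escalated} untriaged={untriaged}")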

Security investigators will use a multitude of data, threat intelligence, log files, DNS activity, and much more to identify the exact nature of the potential breach and determine the best response playbook to use. In the case of a severe threat, this response and subsequent remediation must be completed in the shortest possible time, ideally measured in minutes rather than hours.

In the most dangerous scenario, where a threat actor has executed what is determined to be a zero-day attack, the SOC team works with IT, operations, and the business units to protect, isolate, and even take critical servers offline to protect the enterprise. Zero-day attacks put the SOC on a war footing; if the team’s playbooks are executed properly and rapidly, they can help mitigate further damage from otherwise unknown attack techniques. These situations require the skill and expertise of advanced security analysts to assess and mitigate complex, ongoing cyberattacks.

Given the barrage of alerts, it is essential to adopt a strategy that best fits your team’s capabilities to a priority-driven process. This allows you to optimize your response to alerts, best manage the resources on the SecOps team, and reduce the risk of a dangerous breach event.

There are several strategic views that SOC leadership can take on how best to approach prioritization. These include data-driven strategies using tools like DLP; threat-driven strategies to bolster defenses and shorten reaction time to threat vectors active in your industry and geography; and asset-driven strategies, where certain assets merit enhanced protection and priority-driven escalation for alerts. Most organizations find that an integrated mix of these strategies addresses their overall needs.

A Data-Driven Approach to Prioritization

The first approach to prioritization, consistent with the tenets of zero trust, is a data-driven approach. Customer data and intellectual property are often among an organization’s most protected crown jewels. One way to bring this into focus within SecOps is to implement Data Loss Prevention (DLP). Per Gartner, DLP may be defined as technologies that perform both content inspection and contextual analysis of data sent via messaging applications such as email and instant messaging, in motion over the network, in use on a managed endpoint device, and at rest in on-premises file servers or in cloud applications and cloud storage. These solutions execute responses based on policies and rules defined to address the risk of inadvertent or accidental leaks or exposure of sensitive data outside authorized channels.

Enterprise DLP solutions are comprehensive and packaged in agent software for desktops and servers, physical and virtual appliances for monitoring networks and email traffic, or soft appliances for data discovery. Integrated DLP works with secure web gateways (SWGs), secure email gateways (SEGs), email encryption products, enterprise content management (ECM) platforms, data classification tools, data discovery tools, and cloud access security brokers (CASBs).

A Threat-Driven Approach to Prioritization

A threat-driven approach shifts the focus of defense and triage priority from the data itself to external threat actors and the techniques they are most likely to use. Threat intelligence can give the SOC the data it needs to anticipate threat actors and the Tactics, Techniques, and Procedures (TTPs) they might use. Further, threat intelligence provides a path to recognizing the Indicators of Compromise (IOCs) that can uniquely identify a type of cyberattack and the threat actor behind it. The goal, of course, is to identify and prevent the most likely attacks before they occur, or stop them rapidly upon detection.

The consolation prize is also a good one. If you cannot prevent an attack, you must be able to identify an unfolding threat. You must identify the attack, break the attacker’s kill chain, and then stop the attack. Threat intelligence can also help you assess your environment, understand the vulnerabilities that would support the execution of a particular kill chain, and then let you move rapidly to mitigate these threats.

In August 2020, researchers from Dutch and German universities[2] presented a survey at the 29th USENIX Security Symposium showing that there is less overlap between threat intelligence sources, both open (free) and paid, than most of us would expect. The moral of the story is that large organizations likely need a wide set of threat intelligence data from multiple sources to gain an advantage over threat actors and the attack vectors they are likely to use, and these sources must be integrated into a common dashboard where SecOps threat investigators can rapidly leverage them.
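That overlap is easy to measure for your own feeds. The sketch below compares two sources using set intersection; the indicator values are made up and simply stand in for the hashes, domains, and IP addresses a real feed would contain.

# Minimal sketch: how much do two threat intelligence feeds actually overlap?
feed_a = {"198.51.100.7", "evil.example.com", "44d88612fea8a8f36de82e1278abb02f"}
feed_b = {"198.51.100.7", "malware.example.net", "d41d8cd98f00b204e9800998ecf8427e"}

overlap = feed_a & feed_b
jaccard = len(overlap) / len(feed_a | feed_b)

print(f"shared indicators: {len(overlap)}, Jaccard similarity: {jaccard:.0%}")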

An Asset-Driven Approach to Prioritization

Of course, certain assets are more valuable than others. This can be a function of the data they uniquely hold, the access their owners have to network, application, and information resources, or the criticality of the asset’s function. For example, the chief financial officer’s laptop can be assumed to hold highly sensitive data, just as a medical monitor used during surgery or a command controller for manufacturing production performs a critical function. Such assets may deserve higher priority in terms of protection.

Optimize Your Prioritization Strategy with MVISION XDR

MVISION XDR provides capabilities leveraging all of these prioritization strategies: data-driven, threat-driven, and asset-driven. On top of this, MVISION XDR offers a predictive assessment based on global threats likely to target your organization, paired with a local assessment of how your environment can counter the threat. This “before the attack” actionable assessment is powered by the distinct MVISION Insights, empowering the SOC to be more proactive and less reactive. Here is a preview of MVISION Insights’ top ten threat campaigns. Here are some key prioritization examples delivered in MVISION XDR:

Key MVISION XDR Prioritization Examples

 

Each example below lists the priority strategy, the capability, and the benefit it delivers:

  • Data-driven. Capability: alerts based on data sensitivity. Benefit: focus on critical-impact activity.
  • Threat-driven. Capability: automatic correlation of threat techniques to derive likely next steps. Benefit: greater confidence in the alert, with fewer false positives.
  • Threat-driven. Capability: views of trends and threat actors targeting your organization. Benefit: the universe of threats and actors is reduced to those that matter.
  • Asset-driven. Capability: tagging of critical assets for automated prioritization. Benefit: threats to critical assets are addressed faster.

Prioritization Delivers Improved Business Value for the SecOps Team

MVISION XDR can help you implement and optimize your prioritization strategy. Your SecOps team will have the improved triage time they need with prioritized threats, predictive assessment, and proactive response, and the data awareness to make better and faster decisions. To learn more, please review our Evolve with XDR webpage or reach out to our sales team directly.

 

[1] https://www.imperva.com/blog/27-percent-of-it-professionals-receive-more-than-1-million-security-alerts-daily/

[2] https://www.usenix.org/system/files/sec20_slides_bouwman.pdf

The post The Art of Ruthless Prioritization and Why it Matters for SecOps appeared first on McAfee Blog.

The Industry Applauds MVISION XDR – Turning Raves into Benefits

By Kathy Trahan

Do you usually read what critics say before deciding to see a movie or read a book? We believe these McAfee MVISION XDR reviews were worth the wait. But rather than simply share a few top-tier analyst blurbs with you, we’d like to walk through what these insights mean to our growing set of customers and how their security operations will evolve with greater efficiency.

Extended Detection and Response products, better known as XDR, not only extend the capabilities of EDR platforms but, according to Gartner[1], “XDR products may be able to reduce the complexity of security configuration and incident response to provide a better security outcome than isolated best-of-breed components.”

Rave 1: Be more proactive vs reactive

Our Enterprise Security Manager (ESM)/SecOps team briefed a top-tier analyst firm on ESM product execution and the MVISION XDR platform in particular. His reaction to our use cases? “These are great and it is useful to have examples that cut across different events, which is illustrative more so than anything. The response to the cuts across various tools, and the proactive configuration aspect with the security score type analysis, is also pretty rare in this market.”

The takeaway: Preventing an incident is much better than cleaning up after the fact. MVISION XDR powered by MVISION Insights offers a unified security posture score from endpoint to cloud, delivering a more robust and comprehensive assessment across your environment. It allows you to drill down on specifics to enhance your security.

“The vendor has stolen a march on some of its competitors, at least in the short term, with this offering. A lot of vendors are aiming to get to an offering comprising threat intel + prioritization + recommendations + automation, but few if any have actually reached that point today.” – Omdia

Rave 2: Open to easily unite security

A top-tier analyst firm mentioned that many EDR vendors today call themselves “Open XDR” vendors, but they do not offer a fully effective XDR product. The analyst sees XDR as a significant opportunity for McAfee to expand the breadth of our product portfolio.

The takeaway: A fully effective XDR product unites security controls to detect and assess comprehensively and prevent erratic movement of advanced threats. A robust product portfolio with an integrated service offering from a platform vendor with a proven track record of integrating security (McAfee) is critical to achieve this.

Rave 3: Data-aware to prioritize organizational impact

As noted by a top-tier analyst firm, only McAfee and one other vendor offer data awareness in their XDR offerings. This XDR capability alerts the analyst when a threat’s impact targets sensitive data.

Rave 4: Automatic analysis across the vectors accelerates investigations and response

The takeaway: Many SOCs have siloed tools that hinder their ability to detect and respond quickly and appropriately. SOCs must prioritize threat intelligence to rapidly make critical decisions.

Rave 5: Improving the SOC

A top-tier analyst firm believes XDR capabilities address three primary problem segments: 1) workspace, 2) network, and 3) cloud workloads. Hardening guidance is good for customers, so vulnerability exposure and threat scoring are good priorities for MVISION Insights.

The takeaway: McAfee MVISION XDR provides automation that eliminates many manual tasks but more importantly, it empowers SOC analysts to prioritize the threats that matter and stay ahead of adversaries.

Rave 6: Efficiently cloud-delivered

A top-tier analyst firm likes our product direction. “Where you’re going with XDR, and with the cloud console — that’s the way to go. It feels like we have crossed the Rubicon of cloud-delivered.”

The takeaway: By going cloud-native, MVISION XDR enables more efficient, better, and faster decisions with automated investigations driven by correlation analysis across multiple vectors. We can provide unified visibility and control of threats across endpoints, networks and the cloud.

To discover why McAfee MVISION XDR earns rave industry reviews, see our resources on XDR to evolve your security operations to be more efficient and effective.

Resource: [1] Gartner Innovation Insight for Extended Detection and Response, Peter Firstbrook, Craig Lawson , 8 April 2021

 

 

 

The post The Industry Applauds MVISION XDR – Turning Raves into Benefits appeared first on McAfee Blogs.

How to Proactively Increase Your Protection Against Ransomware with Threat Intelligence

By Nicolas Stricher

As ransomware continues to spread and target organizations around the world, it is critical to leverage threat intelligence data, and not just any threat intelligence but actionable intelligence from MVISION Insights. Fortunately, there are several steps you can take to proactively increase your endpoint security and help minimize damage from the next Darkside, WannaCry, Ryuk, or REvil.

Which Ransomware campaigns and threat profiles are most likely going to hit you?

MVISION Insights provides near-real-time statistics on the prevalence of ransomware campaign and threat profile detections by country, by sector, and in your environment.

Above you can see that although 5ss5c is the most detected ransomware worldwide, in France Darkside and Ryuk have been the most detected campaigns in the last 10 days. You can also sort top campaigns by industry sector.

How to proactively increase your level of protection against these ransomware campaigns?

As you can see above, MVISION Insights measures your overall Endpoint Security score and provides recommendations on which McAfee Endpoint Security features should be enabled for maximum protection.

Then, out of the box, MVISION Insights assesses the minimum version of McAfee Endpoint Security AMcore content necessary to protect against each campaign. As you can see above, two devices have insufficient coverage against the “CISA-FBI Cybersecurity Advisory on the Darkside Ransomware”. You can then use McAfee ePO to update these two devices.

Below, MVISION Insights provides a link to a KB article for the “Darkside Ransomware profile” with detailed suggestions on which McAfee Endpoint Security rules to enable in your McAfee ePO policies. First, the minimum set of rules to better protect against this ransomware campaign. Second, the aggressive set to fully block the campaign. The second one can create false positives and should only be used in major crisis situations.

How to proactively check if you have been breached?

MVISION Insights can show you whether you have unresolved detections for specific campaigns. Below you can see an unresolved detection linked to the “Operation Iron Ore” threat campaign.

MVISION Insights provides IOCs (Indicators of Compromise) which your SOC can use with MVISION EDR to look for the presence of these malicious indicators.

If your SOC has experienced threat hunters, MVISION Insights also provides information on the MITRE Tactics, Techniques, and Tools linked to a given threat campaign or threat profile. This data is also available via the MVISION APIs to integrate with your other SOC tools; in fact, several integrations with other vendors in the McAfee SIA partnership are already available today.
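As a rough sketch of what such an integration could look like, the snippet below pulls campaign IOCs over a REST call and prints them for a hunting tool to consume. The base URL, authentication header, and response shape are hypothetical placeholders, not the documented MVISION API, so treat it as an outline rather than working integration code.

import requests

API_BASE = "https://insights.example.com/api"   # placeholder base URL
TOKEN = "replace-with-api-token"                # placeholder credential

def fetch_iocs(campaign_id):
    # Assumes the service returns a JSON list of indicator strings.
    resp = requests.get(
        f"{API_BASE}/campaigns/{campaign_id}/iocs",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for ioc in fetch_iocs("operation-iron-ore"):
        print(ioc)   # feed these into EDR or SIEM hunting queries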

Finally, the ultimate benefit of MVISION Insights is that you can use it to show your management whether your organization is correctly protected against the latest ransomware attacks.

In summary, you can easily leverage MVISION Insights to proactively increase your protection against ransomware by:

    • Identifying which ransomware campaigns are most likely to hit you
    • Adapting your McAfee Endpoint Security protection against these campaigns using McAfee’s recommendations
    • Proactively checking whether you might have been breached
    • Showing your management your protection status against these threats

 

 

 

The post How to Proactively Increase Your Protection Against Ransomware with Threat Intelligence appeared first on McAfee Blogs.

Testing to Ensure Your Security Posture Never Slouches

By Naveen Palavalli

How well can you predict, prevent, and respond to ever-changing cyberthreats? How do you know that your security efforts measure up? The stakes are high if this is difficult to answer and track. Imagine having one place with a comprehensive, real-time security posture view that tells you exactly where the looming cyber risks are and what their impact would be. Let’s consider a recent and relevant cyber threat.

Take, for example, the May 7th DarkSide ransomware attack that shut down Colonial Pipeline’s distribution network. That well-publicized attack spurred considerable interest in cybersecurity assessments. Ransomware doesn’t just cost money—or embarrassment—it can derail careers. As news spread, we fielded numerous calls from executives wondering: Are my systems protected against DarkSide?

Until recently, discovering the answer to such questions has required exercises such as white hat penetration testing or the completion of lengthy or sometimes generic security posture questionnaires. And we know how that goes — your results may vary from the “norm,” sometimes quite a bit.

To empower you to ask and confidently answer the “am I protected” questions, we developed MVISION Insights Unified Posture Scoring to provide real-time assessments of your environment from device to cloud and threat campaigns targeting your industry.

With the score, you’ll know at a glance: Have you done enough to stave off the most likely risks? In general, the better controls you set for your endpoints, networks and clouds, the lower your risk of breaches and data loss—and the better your security posture score. A CISO from a large enterprise recently stated that the “most significant thing for a CISO to solve is to become confident in the security score.”

Risk and Posture

Assessing risk is about determining the likelihood of an event. A risk score considers where you’re vulnerable and, based on those weaknesses, how likely it is that a bad actor will exploit them. That scoring approach helps security teams determine whether to apply a specific tool or countermeasure.

However, a posture score goes a step further: it considers not only your current environment’s risk but also whether you’ve been able to withstand attacks. Where have you applied protections to suppress an attack? It enables you to ask: what’s the state of your defensive posture?

Security posture scoring may answer other critical questions such as:

  • What are the assets and what is their criticality (discover and classify)?
  • What are the threats (events perpetrated by threat actors in the context of the critical assets and vulnerabilities)?
  • What is the likelihood of breach (target by industry, region, other historical perspective)?
  • How vulnerable is my environment (weaknesses in the infrastructure)?
  • Can my controls counter & protect my cyber assets (mitigating controls against the vulnerabilities)?
  • What is the impact of a breach (business assessment based on CIA: confidentiality, integrity & availability)?

Knowing these answers also makes security posture scoring useful for compliance risk assessment, producing a benchmark that enables your organization to compare its industry performance and also choose which concerns to prioritize. The score can also serve as an indicator of whether your organization would be approved for cyber insurance or even how much it may have to pay.

Some organizations use security posture scoring to help prepare for security audits. But it can also be used in lieu of third-party assessments—applying recommended assessments instead of expensive penetration testing.

Scoring Points at Work

No doubt, the pandemic and working from home have exacerbated security posture challenges. According to Enterprise Strategy Group (ESG), a “growing attack surface” from cloud computing and new digital devices are complicating security posture management. So is managing “inexperienced remote workers,” who may be preyed upon by various forms of malware. This can lead not only to management headaches, says ESG, but also to “vulnerabilities and potential system compromises.”

About one year ago we released the initial version of MVISION Insights posture scoring —focused on endpoint assessments. A security score was assigned based on your preparedness to thwart looming threats and the configuration of your McAfee endpoint security products. It enabled predictive assessments based on security posture aligned to campaign-specific threat intelligence.

Customers are tired of piecing together siloed security and demand a unified security approach, reflected in our MVISION XDR powered by MVISION Insights. We expanded the scoring capability to also assess cloud defenses, including your countermeasures and controls. Derived from MVISION Cloud Security Advisor, the cloud security posture score is a weighted average of visibility and control for IaaS, SaaS, and shadow IT, with an option to easily pivot to MVISION Cloud Security Advisor. The unified security posture score is a weighted average of the endpoint and cloud security posture scores, delivering a more robust and comprehensive assessment with the ability to drill down on specifics to enhance your security from device to cloud. Many endpoint wanna-be XDR vendors cannot provide this critical aggregated security assessment across vectors.
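Because the unified score is described as a weighted average of the endpoint and cloud scores, the underlying arithmetic is straightforward. The 50/50 weighting below is an illustrative assumption, not McAfee's actual formula.

# Illustrative only: the weights are assumptions, not the product's formula.
def unified_posture_score(endpoint_score, cloud_score, endpoint_weight=0.5):
    """Weighted average of the endpoint and cloud posture scores (0-100 scale)."""
    return endpoint_weight * endpoint_score + (1 - endpoint_weight) * cloud_score

# Example: strong endpoint posture, weaker cloud posture.
print(unified_posture_score(endpoint_score=85, cloud_score=60))  # 72.5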

Becoming more robust is what all of us must do. When organizations face the jeopardy of “Ransomware-as-a-Service” payments that may scale up to $2 million, understanding how best to manage your security posture is no longer simply a nice-to-have; it has become an operational imperative.

Click here to learn more about Security Posture Scoring from a few practitioners in our LinkedIn Live session.

The post Testing to Ensure Your Security Posture Never Slouches appeared first on McAfee Blogs.

Finding Success at Each Stage of Your Threat Intelligence Journey

By Nicolas Stricher

Every week it seems there’s another enormous breach in the media spotlight. The attackers may be state-sponsored groups with extensive resources launching novel forms of ransomware. Where does your organization stand on its readiness and engagement versus this type of advanced persistent threat? More importantly, where does it want to go?

We believe that the way your organization uses threat intelligence is a significant difference maker in the success of your cybersecurity program. Organizations take the journey toward cyber defense excellence at their own speed, and some prioritize other investments ahead of threat intelligence, which may impede their progress. Actionable insights aren’t solely about speed (though fast-emerging threats require prompt intervention); they’re also about quality and thoroughness. And that’s table stakes for advancing in your threat intelligence journey.

What is a Threat Intelligence program?

A Threat Intelligence program typically spans five organizational needs:

  • Plan — prepare by identifying the threats that might affect you
  • Collect — gather threat data from multiple feeds or reporting services
  • Process — ingest the data and organize it in a repository
  • Analyze — determine exposure and correlate intelligence with countermeasure capability
  • Disseminate — share the results and adjust your security defenses accordingly

When you disseminate a threat insight, it triggers different responses from various members of your security team. An endpoint administrator will want to automatically invoke countermeasures and security controls to block a threat immediately. A SOC analyst may take actions such as looking for signs of a breach and recommending ways to stiffen your defense posture.

Better threat intelligence provides you with more contextual information — that’s the key. How will this information help your company, in your particular industry, in your region of the world?

The Threat Intelligence journey comes in stages. Where is your program now?

Stage 1: Improving and adapting your protection

Within this stage most companies want to prevent the latest threats at their endpoint, network and cloud controls. They mostly depend on their security vendors to research and keep products up to date with the latest threat intelligence. However, in this stage companies also receive intelligence from other sources, including government, commercial and their own cyber defense investigations, and can use the extra intelligence to further update controls.

Stage 2: Improving the SOC and responding faster

At this stage, organizations advance beyond vendor-provided intelligence and adapt their protection by adding indicators from third-party threat feeds or from other organizational SOC processes such as malware analysis.

Within this stage, companies want to do more than prevent known threats with their tools. They want to understand the adversaries who might target them, improve detection and respond faster by prioritizing investigations.

Stage 3: Improving the Threat Intelligence program

Organizations with this goal know that their industry faces targeted threats every day and they have already invested significantly in their threat intelligence capability. At this stage they most likely have a team utilizing commercial and open-source tools as well as threat data feeds. They’re looking for specialized analysis services and access to raw data.

These organizations can proactively assess their exposure and determine how to reduce the attack surface. They apply threat intelligence to empower their threat hunting, either on a proactive or reactive basis.

Enter new actionable insights, next steps

Until recently it was difficult for security managers to know not just whether their organization has been exposed to a particular threat but whether they have a good level of protection against specific campaigns.

McAfee MVISION Insights is helpful at each stage of your threat intelligence journey because it proactively assesses your organization’s exposure to global threats, integrates with your telemetry, and prescribes how to reduce the attack surface before an attack occurs. For stage one, organizations can proactively assess their exposure and determine how to reduce the attack surface. For stages two and three, organizations can apply threat intelligence to empower their threat hunting and analysis, either on a proactive or reactive basis.

 

MVISION Insights Dashboard

One way we help is by integrating data from both McAfee Threat Intelligence feeds such as our Global Threat Intelligence and Advanced Threat Defense, and also third-party services via MVISION APIs. While McAfee Global Threat Intelligence is one of the world’s largest sources of this information, with more than 1 billion global threat sensors in 120+ countries, and 54 billion queries each day, the key thing to know is that we have 500 plus McAfee researchers providing this form of threat intelligence as a service.  The idea is to help you elevate your threat intelligence at each step of your organization’s journey.

 

Check out the latest threats from a Preview of MVISION Insights.

 

 

 

The post Finding Success at Each Stage of Your Threat Intelligence Journey appeared first on McAfee Blogs.

Alert Actionability In Plain English From a Practitioner

By Jesse Netz

In response to the latest MITRE Engenuity ATT&CK® Evaluation (Round 3), McAfee noted five capabilities that are must-haves for Sec Ops and that were displayed in the evaluation. This blog will speak to the alert actionability capability, which is essential: the ability to react in the fastest possible way, as early as possible in the attack chain, while correlating, aggregating, and summarizing all subsequent activity and reducing alert fatigue, so that Sec Ops can uphold efficient actionability.

 As a Sec Ops practitioner and former analyst, I can remember the days of painstakingly sifting through countless alerts to determine if any of them could be classified as an incident. It was up to me to decide if the alert were a false positive, false alarm, or something the business should take more seriously… was it something we should wake someone up in the middle of the night over? 

It’s been years since I sat on the front line, triaging the results of millions of dollars in investments installed on hundreds of thousands of systems worldwide. Thank goodness, times have changed. But the concept of “Alert Actionability” is still a very real aspect of SOC tooling, and it seeks to address three primary factors: trustworthiness, detail, and reaction capabilities.

Trustworthiness 

When I say “trustworthiness,” I’m referring to a quality of fidelity that has two equal yet opposing faces of efficacy: false positives and false negatives. Now, it would be very easy for a SOC solution provider to claim that its product offers 100% visibility if it creates an alert for every process activity and artifact recorded. Sure, the coverage is there, but how actionable is a needle in a stack of needles? As a result, the vendor is pressured to fine-tune its alerting, which introduces the risk of false negatives: actual malicious events that go undetected. In the zeal of appealing to usability requirements, the false positive curve decreases, but the false negative volume has no choice but to rise.

Resulting in a graph like this: 

The secret sauce in the vendor’s capabilities lies in its capacity to push the intersection of these as far right as possible: minimize the false positives and maximize true positives while simultaneously attempting to bring false negatives down to zero. The better a vendor’s product can perform against these non-trivial goals, the more likely it is to win your trust as a solution, and the more likely you are to trust the results you see on the dashboard.
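To make that trade-off concrete, the toy sketch below sweeps an alerting threshold over fabricated detection scores and shows false positives falling as false negatives rise; the scores, labels, and thresholds are invented purely for illustration.

# Toy illustration of the alert-threshold trade-off described above.
# 1 = truly malicious, 0 = benign; scores are fabricated confidence values.
events = [(0.95, 1), (0.90, 1), (0.80, 0), (0.70, 1), (0.60, 0),
          (0.40, 0), (0.35, 1), (0.20, 0), (0.10, 0), (0.05, 0)]

for threshold in (0.1, 0.3, 0.5, 0.7, 0.9):
    false_positives = sum(1 for score, label in events if score >= threshold and label == 0)
    false_negatives = sum(1 for score, label in events if score < threshold and label == 1)
    print(f"threshold={threshold:.1f}  FP={false_positives}  FN={false_negatives}")

Raising the threshold quiets the console but hides real incidents; the vendor's job, as described above, is to push that crossover point as far right as possible.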

Endpoint Detection and Response (EDR) tools have a unique property in that they offer both telemetry and alerting. This implies that there are two goals for EDR platforms: to include event-level (telemetry) visibility with automated detection and to provide alerting capabilities for triggering action and triage. With telemetry, the concept of “falsing” is negated because it’s used in a post-facto context. After the alert is constructed, the telemetry can be correlated with the alert logic to provide supporting details. Simply put, for EDR telemetry, the more the better.

Detail 

As an analyst, I remember how much I loved putting together the pieces to tell a story. Extracting key artifacts from several disparate data sources and correlating hypotheses allowed me to present a compelling case for the alert’s disposition. And I knew that I needed as much detail as possible to make my case; this is just as true today. The detail needs to be easily accessible, and it’s even better when the platform provides the detail proactively. In cases where such supporting evidence may not be possible in the alerting, an analyst’s expectation is that the platform makes hunting for those details easy; I’d even venture to say, “a delight.”

Reaction Capabilities 

Many EDR platforms on the market offer reaction capabilities to address the “Response” moniker of the acronym. How flexible those response capabilities are determines the domain of options available to act on an alert. For example, it’s rather evident that once an alert is convicted, the analyst may want to block the process or remove a file from disk. But these reactions imply that the conviction is monolithic, in that the analyst is absolutely sure of her conclusion. What if the conclusion is that we simply need more data? Having a robust reaction library that allows for further investigation, with routines like sending a sample to a running sandbox, interacting with a given endpoint as an administrator, viewing system logs, or checking the history of network connections, empowers the analyst with further investigatory options. But why stop there? Having any fixed set of reactions would be presumptive. Instead, an EDR product with a dynamic library and a flexible, customizable, modular reaction platform is key, as every single SOC I’ve ever worked with has unique Incident Management and Standard Operating Procedures.

What’s Next? 

MITRE Engenuity™ released results for its 3rd round of ATT&CK® Evaluations in April 2021. The industry is certainly fortunate to receive such third-party efficacy testing in the EDR market completely free to consumers. It is incredibly important to add that the ATT&CK Evaluations should be used as a single component of your EDR evaluation program. Efficacy helps determine how fit-for-purpose the product is by answering questions like, “Will it detect a threat when I need it to?” or “Can I find what I need, when I need it?” But practitioners realize there are also pivotal points that need to be addressed around manageability: not alerting on everything is just as important as alerting on the right things, and a plethora of alert response capabilities helps complete the alert investigation and response actions. McAfee’s MVISION EDR embraces all of these key alert actionability factors and will help displace the manual effort in your analytics processes. McAfee’s MVISION EDR (soon to evolve into MVISION Extended Detection & Response (XDR)) provided insight through detail and reduced alert fatigue during the evaluation, providing context and enrichment and resulting in a ratio of 62% analytic detections (non-telemetry detections) out of the 274 total detections.

Check out other McAfee discussion on MITRE (see resources tab.) 

  

 

 

The post Alert Actionability In Plain English From a Practitioner appeared first on McAfee Blogs.

Miles Wide & Feet Deep Visibility of Carbanak+FIN7

By Carlos Diaz

In our last blog about defense capabilities, we outlined the five efficacy objectives of Security Operations that matter most to a Sec Ops team; this blog will focus on visibility.

The MITRE Engenuity ATT&CK® Evaluation (Round 3) focused on the emulation of the Carbanak+FIN7 adversaries, known for their prolific intrusions impacting financial targets, including the banking and hospitality sectors. The evaluation’s testing scope lasted 4 days: 3 days were focused on detection efficacy with all products set to detect/monitor mode only, and the remaining day focused on protection mode set for blocking events. This blog showcases the breadth and depth of our fundamental visibility capabilities across the 3 days of detection efficacy.

It is important to note that while the goal of these evaluations by MITRE Engenuity is not to rank or score products, our analysis of the results found that McAfee’s blue team was able to use MVISION EDR, complemented by McAfee’s portfolio, to obtain significant visibility, achieving:

 

  • Carbanak scenario: 100% visibility across all 10 Major Steps (Attack Phases)
  • FIN7 scenario: 100% visibility across all 10 Major Steps (Attack Phases)


When tracked by sub-steps, the evaluation comprised 174 sub-steps in total, across which McAfee achieved 87% visibility.

Going Miles-Wide

When you seek to defend enterprises, you need to assess your portfolio and ensure it can go the distance by spanning across the endpoint and its diverse context, as well as network visibility stemming from hostile activity executed on the target system. More importantly, your portfolio must closely track the adversary across kill-chain phases (miles-wide) to keep up with their up-tempo. The more phases you track, the better you will be able to orient your defenses in real-time.

Scenario 1 – Carbanak

The Carbanak emulation consisted of an attack with 10 Major Steps (Kill Chain Phases) on day one, and our portfolio provided visibility across every phase. In these 10 phases, MITRE conducted 96 sub-steps to emulate the behaviors aligned to the known TTPs attributed to the Carbanak adversary.

McAfee MITRE Engenuity ATT&CK Evaluation 3 Results­

Scenario 2 – FIN7

The FIN7 emulation consisted of an attack with 10 Major Steps (Kill Chain Phases) on day two, and our portfolio provided visibility across every phase. In these 10 phases, MITRE conducted 78 sub-steps to emulate the behaviors aligned to the known TTPs attributed to the FIN7 adversary.

McAfee MITRE Engenuity ATT&CK Evaluation 3 Results

Going Feet-Deep

Tracking the adversary across all phases of the attack (miles-wide) is significant, but to be really effective at enterprise defense, you also need to stay deep within their operating mode and keep up with their movement within and across your systems through different approaches (feet-deep). At McAfee, we design our visibility sensors across defensible components to anticipate where adversaries will interact with the system, consequently tracing their activities with diverse data sources (context) that enrich our portfolio. This not only lets us track their intentions but also lets us discover impactful outcomes as they execute hostile actions (sub-steps).

Defensible Components and Telemetry acquired during the evaluation.

A product configured differently could obtain information from each defensible component; what is shown here represents the telemetry acquired based on the configuration used during the evaluation (not necessarily evidence that was accepted).

Visibility By McAfee Data Sources / Defensible Components

Scenario 1 – Carbanak

Of the 96 sub-steps emulating Carbanak, our visibility coverage draws on more than 10 unique data sources, including the automated interception of scripted source code used in the attack by our ATD sandbox integration with the DXL fabric.

McAfee MITRE Engenuity ATT&CK Evaluation 3 Results

Scenario 2 – FIN7

Of the 78 sub-steps emulating FIN7, our visibility coverage draws on more than 10 unique data sources, providing higher context in critical phases with system/API call monitoring to preserve the user’s security awareness as the adversary’s advanced behaviors aim for in-memory approaches.

McAfee MITRE Engenuity ATT&CK Evaluation 3 Results

Visibility By McAfee Product

Acquiring data from sensors is fundamental; however, to be effective at security outcomes, your portfolio needs to spread its deep coverage of data sources to provide the balanced security visibility blue teamers need as the attack is tracked through each phase.

This essential capability provides the blue-teamer a balance of contextual awareness from detection technologies (EDR and SIEM), and decisive disruption of impactful behaviors from protection products (ENS, DLP, ATD, NSP) oriented to neutralize the adversary’s actions on objectives.
In every phase of the attack, McAfee protection fused with detection products would successfully neutralize the adversary and afford blue teamers rich contextual visibility for investigations needing context before and after the block would have occurred.

Scenario 1 – Carbanak

McAfee MITRE Engenuity ATT&CK Evaluation 3 Results

This chart clearly shows how ENS (in observe mode) would have prevented a successful attack, blocking the initial breach and protecting customers from further damage. Within the scope of the evaluation, it’s also important to note how the products interacted, providing telemetry at each step.

Scenario 2 – FIN7

McAfee MITRE Engenuity ATT&CK Evaluation 3 Results

In the impactful “steal payment data” kill-chain phase, the DLP product kicks into prevention, complemented by the ATD sandbox intercepting the payload that attempts to steal the information, and by EDR holding the contextual information within the kill chain that the blue teamer needs for offline investigations.

Visibility Efficacy

Here, we covered the essentials of visibility and the power of a strong telemetry foundation: not only individual sensors and defensible components that provide information, but data that, when analyzed and contextualized, enables the next level of actionability required to prioritize cases with enriched detections.

Stay tuned for the next blog in the series, which explains how detections were supported by this telemetry and how we produced 274 detections, many drawing on two or more data sources.

The post Miles Wide & Feet Deep Visibility of Carbanak+FIN7 appeared first on McAfee Blogs.

What the MITRE Engenuity ATT&CK® Evaluations Means to SOC Teams

By Kathy Trahan

SOCwise Weighs In

When the infamous Carbanak cyberattack rattled an East European bank three years ago this month, few would have guessed it would later play a starring role in the MITRE Engenuity™ enterprise evaluations of cybersecurity products from ourselves and 28 other vendors. We recently shared the results of this extensive testing, and in a SOCwise discussion we turned to our SOCwise experts for insights into what this unprecedented exercise may mean for SOC teams assessing both strategic concerns and their tactical effectiveness.

Carbanak is a clever opponent known for innovative attacks on banks. According to MITRE Engenuity™, FIN7 uses similar malware and a strategy of effective espionage and stealth to target the U.S. retail, restaurant, and hospitality sectors, and both groups were highlighted in this emulation. These notorious actors have reportedly stolen more than $1 billion worldwide over the past five years. An annual event, the four-day ATT&CK Evaluation spanned 20 major steps and 174 sub-steps of the MITRE framework.

The first thing to realize about this exercise is that few enterprises could ever hope to match its scope. What do you get when you match up red and blue teams? “I have not been through an exercise like that in an organization with both the red team and blue teams operationally trying to determine what their strengths and weaknesses are,” said Colby Burkett, McAfee XDR architect, a participant in the event, on our recent SOCwise episode. “And that was fantastic.”

A lot of SOC teams conduct vulnerability assessments and penetration testing, but never emulate these types of behaviors, noted Ismael Valenzuela, McAfee’s Sr. Principal Engineer and co-host of SOCwise. And, he adds that many organizations lack the resources and skills to do purple-teaming exercises.

While our SOCwise team raved about the value of conducting broad scale purple-team exercises, they expressed concern that the emphasis on “visibility” is no more valuable than “actionability.” McAfee, which scored 87% on visibility, one of the industry’s best, turned in a remarkable 100% on prevention in the MITRE Engenuity™ evaluations.

Illuminating Visibility

When we think about visibility, we think about how much useful information we can provide to SOC analysts when an attack is underway. There may be a tsunami of attack data entering SOCs, but it’s only actionable when the data that’s presented to analysts is relevant, noted Jesse Netz, Principal Engineer at McAfee.

A well-informed SOC finds a sweet spot on an axis where the number of false positives is low enough and the true positives are high enough “where you can actually do something about it,” added Netz.

He believes that for SOC practitioners, visibility is only part of the conversation. “How actionable is the data you’re getting? How usable is the platform in which that data is being presented to you?”

For example, in the evaluation we saw McAfee’s MVISION EDR preserve actionability and reduce alert fatigue. We excelled in the five capabilities that matter most to SOC teams: time-based security, alert actionability, detection in depth, protection, and visibility.

If you can’t do anything about the information you obtain, your results aren’t really useful in any way. In this regard, prevention also trumps visibility. “It’s great that we can see and gain visibility into what’s happening,” explained Netz. “But it’s even better at the end of the day as a security practitioner to be able to prevent it.”

Expanding the Scope

The SOCwise team overall applauded the progressively sophisticated approach taken by the MITRE Engenuity™ enterprise evaluations of cybersecurity products—now in its third year. However, our panel of experts noted that this round of testing was more about defending endpoints, rather than cloud-based operations, which are fairly central to defending today’s enterprise. They expect that focus may change in the future.

The MITRE Engenuity™ enterprise evaluations provide a lot of useful data, but they should never be the single deciding factor in a cybersecurity product purchase decision. “Use it as a component of your evaluation arsenal,” advises Netz. “It’ll help to provide kind of statistics around visibility capabilities in this latest round, including some detection capabilities as well, but be focused on the details and make sure you’re getting your information from multiple sources.”

For instance, Carbanak and FIN7 attacks may not be relevant to your particular organization, especially if your operations are centered on the cloud.

While no emulation can perfectly replicate the experience of battling real-time, zero-day threats, McAfee’s Valenzuela believes these evaluations deliver tremendous value to both our customers and our threat content engineers.

 


The post What the MITRE Engenuity ATT&CK® Evaluations Means to SOC Teams appeared first on McAfee Blogs.

McAfee Proactive Security Proves Effective in Recent MITRE ATT&CK™

By Naveen Palavalli

McAfee Soars with Superior Protection Results   

Bottom Line: McAfee stopped the MITRE ATT&CK Evaluation Carbanak and FIN7 threats in their tracks within the first 15% of the major steps of the attack chain (on average), delivering on a critical security operations center (SOC) strategy: Stop the attack as early as possible.  

In April 2021, MITRE Engenuity released the results of the Carbanak and FIN7 evaluations, which leveraged Tactics, Techniques, and Procedures (TTPs) from the MITRE ATT&CK framework. McAfee and 28 other vendors tested the capabilities of their cybersecurity solutions across a wide range of attack vectors. These multi-stage simulated attacks leveraged a full range of known TTPs to execute the Carbanak and FIN7 attack campaigns.

The Carbanak attack requires stealth and time. Threat actors count on operating undetected inside your infrastructure long enough to penetrate and own your crown jewel assets and information. They methodically step through complex custom TTPs to achieve their objectives. The sooner an attack can be detected and stopped, the lower the risk of a successful breach, damage to assets, and exfiltration of critical information.  

Shift left: Stopping Threats Before They Can Gain a Foothold 

McAfee displayed superior protection by blocking 100% across all 10 tests. On the other hand, several endpoint security providers failed to detect and block all threats. CrowdStrike, for example, was unable to block 30% of protection tests.  

Additionally, McAfee was able to block the attacks within the first 15% of attack steps on average across all tests. On the other hand, CrowdStrike allowed 50% of the attack chain steps on average to execute before blocking. The earlier in the attack chain that a threat is detected, the more likely it will be shut down before it causes damage.

McAfee combines data and telemetry with comprehensive analytics-based detections that accelerate the pivot to defensive execution. This Time-Based Security metric determines if a blue team will have meaningful, timely, and actionable information. McAfee scores well on this metric by including specific references to MITRE Engenuity’s ATT&CK framework with centralized incident pivots to enriched telemetry, enabling faster detection, investigation, and reaction, and therefore lower exposure. Prioritizing Time-Based Security* (TBS) contributes to McAfee’s ability to block early and mitigate further damage. McAfee significantly outperformed CrowdStrike on the dimension of Time-Based Security.  

How did McAfee achieve this success in the evaluation and against such a sophisticated threat? 

Core to McAfee’s success is the alignment of products and capabilities around the ability to “shift left” in the attack cycle. Shifting left, or engaging as early as possible in the kill chain timeline, allows defenders to detect and stop an attack, minimize risk, and achieve these results at the lowest cost. 

For scenarios where threats are not blocked, McAfee provides extensive and actionable alerting and intelligence to ensure that responses and remediations are timely.  In the case of the MITRE Carbanak+FIN7 testing, McAfee demonstrated clear superiority over CrowdStrike in terms of Alert Actionability*. 

(For more information on Time-based Security and Alert Actionability, please review the following blog: SOC vs MITRE APT29 evaluation – Racing with Cozy Bear | McAfee Blogs)  

Defenders, Now is Your Time to Prevail Against Threat Actors 

Sophisticated adversaries surround us, and MITRE ATT&CK evaluations emulated their techniques and procedures. It’s time to let your teams know that with the right tools from McAfee and Shift Left best practices, intelligent defenders will prevail.  

Sneaky attackers traverse infrastructures and assets opportunistically and unpredictably. The complexity and variability in the attack chains associated with these threat actors make threats challenging to identify. McAfee will continue to evolve extended detection and response capabilities that go beyond the endpoint. The integration of these capabilities with solutions such as McAfee’s MVISION XDR enables the security operations team to benefit from unified visibility and control across the hybrid enterprise: endpoints, network, and the cloud.  

Most important is the integration of the ecosystem to fight and defeat attackers. McAfee MVISION XDR orchestrates both McAfee and non-McAfee security assets to deliver actionable cyber threat management and support both guided and automated investigations. 

As illustrated by the recent MITRE Carbanak+FIN7 protection tests, the industry recognizes the value of proactive capabilities to detect and block early, reducing reactive cyber defense efforts and damage. This dynamic enables your team to stop these sophisticated attacks earlier and more effectively. McAfee empowers your security operations teams to achieve faster and more effective results.  

To find out more about the MITRE ATT&CK Evaluation results, please reach out to sales@mcafee.com 

 

* These critical capabilities are defined by McAfee algorithms designed to maximize value to SOC and XDR needs.  Please see this McAfee MITRE blog for details on these algorithms 

Assessments of performance are McAfee’s and not those of MITRE Engenuity.  

MITRE Engenuity ATT&CK Evaluations are paid for by vendors and are intended to help vendors and end-users better understand a product’s capabilities in relation to MITRE’s publicly accessible ATT&CKⓇ framework. MITRE developed and maintains the ATT&CK knowledge base, which is based on real-world reporting of adversary tactics and techniques. ATT&CK is freely available and is widely used by defenders in industry and government to find gaps in visibility, defensive tools, and processes as they evaluate and select options to improve their network defense. MITRE Engenuity makes the methodology and resulting data publicly available so other organizations may benefit and conduct their own analysis and interpretation. The evaluations do not provide rankings or endorsements.

 

The post McAfee Proactive Security Proves Effective in Recent MITRE ATT&CK™ appeared first on McAfee Blogs.

McAfee Provides Max Cyber Defense Capabilities in MITRE’s Carbanak+FIN7 ATT&CK® Evaluation

By Craig Schmugar

Each year, MITRE Engenuity™ conducts independent evaluations of cybersecurity products to help government and industry make better decisions to combat security threats and improve industry’s threat detection capabilities. These evaluations are based on MITRE ATT&CK®, which is widely recognized as the de facto framework for tracking adversarial tactics and techniques. At McAfee we know that cybercriminals are always evolving their tradecraft, and we are committed to providing blue teams (cyber defenders) the capabilities needed to win the game. To do so, we believe in the importance of putting our security solutions through rigorous testing. To demonstrate our commitment, McAfee has participated in all MITRE Engenuity Enterprise Evaluations to date, including the previous round 1 (APT3 emulation) and round 2 (APT29 emulation). 

Today, MITRE Engenuity released the results of the Carbanak and FIN7 evaluations (round 3) that were conducted over the last few months. McAfee participated in this evaluation, along with 28 other vendors, which tested the capabilities of their cybersecurity solutions, in what has been the most comprehensive ATT&CK Evaluation to date, covering 20 major steps and 174 sub-steps.  

For the first time ever, MITRE Engenuity offered an optional extension to the detection evaluations to examine a vendor’s ability to protect against specific adversary techniques utilized by these groups. This was also the first time that the evaluations went beyond Windows systems and addressed techniques aimed at the Linux devices that are often used on networks as file servers or domain controllers. 

While it’s important to note that the goal of these ATT&CK Evaluations is not to rank or score products, our analysis of the results found that McAfee’s blue team was able to use MVISION EDR, complemented by McAfee’s portfolio, to obtain a significant advantage over the adversary, achieving: 

  • 100% visibility across the 10 major attack steps on Day 1 (Carbanak), and 100% visibility across the 10 major attack steps on Day 2 (FIN7). 
  • 100% analytic detections (any non-telemetry detection) across the 10 major attack steps on Day 1 (Carbanak), and 100% analytic detections across the 10 major attack steps on Day 2 (FIN7). 
  • 87% visibility across the total of 174 sub-steps for the two attack scenarios. 
  • 72% of detections leveraging two or more data sources for additional context and enrichment. 
  • 100% blocking of the 10 major attack steps emulated in the protection test (Carbanak + FIN7), with blocking early in the attack cycle. 

Adversarial Emulation 

While prior emulated groups were more focused on espionage, the ATT&CK Evaluations team chose to emulate Carbanak and FIN7 due to the wide range of industries these groups target for financial gain. Both groups carry a firm reputation of using innovative tradecraft. Efficient espionage and stealth are at the forefront of their strategy, as they often rely heavily on scripting, obfuscation, “hiding in plain sight,” and fully exploiting the users behind the machine while pillaging an environment. They also leverage a unique spectrum of operational utilities, spanning both sophisticated malware as well as legitimate administration tools capable of interacting with various platforms.  

The ATT&CK Evaluation was conducted over a total of 4 days, including the protection testing. On each day a different version of the attack, comprised of 10 steps, was executed. On Day 1, MITRE Engenuity emulated an attack carried out by the Carbanak group against a financial institution, starting with the breach of the HR Manager’s workstation and including elevation of privileges, credential theft, lateral movement to the CFO’s system, collection of sensitive data on both Windows and Linux systems, and the spoofing of money transfers. On Day 2, MITRE Engenuity emulated an attack carried out by the FIN7 group against a hotel, involving the breach of the hotel manager’s system, persistence, credential theft, discovery, lateral movement to an accounting system, and the skimming of customer payment data. 

The McAfee blue team successfully defended against these two advanced adversaries, demonstrating the power of the McAfee portfolio, including MVISION EDR, complemented by MVISION Endpoint Security (ENS), Advanced Threat Detection (ATD), Network Security Platform (NSP), Data Loss Prevention (DLP), and Enterprise Security Manager (ESM). These products were configured following MITRE Engenuity’s standards: 

  • For the detection evaluation all ENS scanners and rules were set to report-only. 
  • For the protection evaluation ENS Attack Behavior Blocking (ABB)/Attack Surface Reduction (ASR) rules were set to block while the “Remotely creating or modifying files or folders” rule was disabled at MITRE’s request. 

During these 4 days of extensive purple teaming, McAfee demonstrated that its portfolio provides solid cyber defense across the top 5 capabilities that matter the most to any security operations team: time-based security, alert actionability, detection in depth, protection, and visibility. 

Time-Based Security 

Time-Based Security (TBS) is one of the most relevant, effective, and simple security models a defender can apply.  It provides a mechanism to determine if a blue teamer would have the necessary, timely, and actionable information to effectively defend against adversarial attacks. 
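
Time-Based Security is commonly summarized by the relation that protection time must exceed detection time plus reaction time (P > D + R). As a rough illustration only, here is a minimal sketch in Python; the numbers are hypothetical and are not drawn from the evaluation.

```python
# A minimal sketch of the Time-Based Security relation (protection time must
# exceed detection time plus reaction time). All values are hypothetical.

def exposure_window(protection_s: float, detection_s: float, reaction_s: float) -> float:
    """Return the exposure window in seconds; a value <= 0 means the defense holds."""
    return (detection_s + reaction_s) - protection_s

# Hypothetical example: a control that resists for 10 minutes, a SOC that
# detects in 4 minutes and reacts in 3 minutes.
print(exposure_window(protection_s=600, detection_s=240, reaction_s=180))  # -180 -> no exposure
```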

Using the results of the ATT&CK Evaluation, we modeled the data following an attack timeline, grouping the techniques executed by the ATT&CK red team for Days 1 (Carbanak) and 2 (FIN7) into each of the steps (attack milestones) they employed. To represent the data for each evaluation day, we list the detection categories used by MITRE Engenuity. As Figures 1 and 2 show, during the evaluation, McAfee provided the maximum level of visibility, detection and context for every major step in the attack. An analyst that used McAfee’s products would have received a correlated and enriched threat alert for each of the steps of these advanced attacks, including references to MITRE Engenuity’s ATT&CK framework and pivoting points to enriched telemetry, enabling faster detection, investigation and reaction, and therefore resulting in reduced exposure. 

Figure 1. Time Based Security for Carbanak (Day 1) 

Figure 2. Time Based Security for FIN7 (Day 2) 
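
For readers who want to build a similar per-step view from the publicly released evaluation data, a hedged sketch of that grouping is shown below; the CSV layout and column names are assumptions for illustration, not MITRE Engenuity’s published schema.

```python
# Hedged sketch: count detection categories per major attack step, assuming the
# evaluation results were flattened into a CSV with hypothetical columns
# "step", "substep" and "detection_category".
import pandas as pd

results = pd.read_csv("carbanak_day1_results.csv")  # hypothetical export

timeline = (
    results.groupby(["step", "detection_category"])
           .size()                 # detections per (step, category) pair
           .unstack(fill_value=0)  # one column per detection category
           .sort_index()
)
print(timeline)
```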

Alert-Actionability  

To be successful as a defender, it is essential to react in the fastest possible way, raising an alarm as early as possible in the attack chain, while correlating, aggregating and summarizing all subsequent activity to preserve actionability. McAfee’s MVISION EDR preserved actionability and reduced alert fatigue during the evaluation by providing context and enrichment, resulting in a ratio of 62% analytic detections (non-telemetry detections) out of the total count of 274 detections. This was possible due to McAfee’s strong correlation and having all telemetry tagged and labeled as close to the source as possible.  

Detection In-Depth 

Effective attack technique detection requires certain vantage points. Additional perspective improves context, correlation, and, subsequently, fidelity. Having diverse data sources for every technique improves both coverage quantity and quality. 

McAfee demonstrated coverage across a dozen different data sources during the evaluation, with 72% of detections utilizing two or more data sources. 

Figure 3. McAfee data source diversity across 274 detections 

Protection 

For the first time in an ATT&CK Evaluation, MITRE Engenuity exercised 10 protection scenarios, a subset of the attack sequences used during the detection assessment. McAfee demonstrated its superior protection efficacy by successfully disrupting all 10 attacks, early in the chain, before any impact occurred. Before the disruption, high-context detections and telemetry were produced to alert the analyst.  

Figure 4. 100% blocking in every protection test  

Visibility 

Many organizations live in an alert-driven world where there is not enough data to support key security operations activities, including investigations or threat hunting. During the Carbanak+FIN7 evaluation, McAfee provided visibility across all major steps of the attack, and 87% visibility of the total count of sub-steps across both days. It is worth noting that the remaining 13% does not necessarily represent blind spots, but rather that the minimum criteria selected by MITRE Engenuity were not met, according to the evaluation rules. For example, more visibility was obtained through the automated detonation of samples in our ATD sandbox, which provides additional data context to security analysts during a real attack. 

Conclusions 

At McAfee, we know how security operations work, and that’s why we designed our detection and response platform with ‘Human-Machine Teaming’ in mind. For this latest round of the MITRE Engenuity ATT&CK Evaluation, our Threat Detection Engineering and Applied Countermeasures (AC3) team has delivered 85% more visibility and over 22% more analytic detections than in the previous APT29 evaluation.  

During this evaluation, we demonstrated that McAfee delivers best-balanced defense across the top 5 capabilities that matter the most to any security operations team: time-based security, alert actionability, detection in depth, protection, and visibility. Our McAfee detection and response platform offered enhanced, meaningful context across the entire attack chain, allowing cyber defenders to disrupt attacks early, before damage occurs. 

Stay tuned for upcoming details on how each of these security capabilities played a key role in the Carbanak+FIN7 evaluation as part of our ATT&CK Evaluation blog series. 

 

MITRE ATT&CK and ATT&CK are registered trademarks of the MITRE Corporation. 

The post McAfee Provides Max Cyber Defense Capabilities in MITRE’s Carbanak+FIN7 ATT&CK® Evaluation appeared first on McAfee Blogs.

SOCwise Series: A Tale of Two SOCs with Chris Crowley

By Ismael Valenzuela

In a recent episode of McAfee’s SOCwise Series, guest security expert Chris Crowley revealed findings of his recent survey of security efforts within SOCs. His questions were designed to gain insight into all things SOC, including how SOCs can accomplish their full potential and how they assess their ability to keep up with security technology. 

Hosts Ismael Valenzuela and Michael Leland tapped into Chris’ security operations expertise as he told “A Tale of Two SOCs.” 

“Chris has a tremendous experience in security operations,” Ismael said. “I always like people who have experience both in the offensive side and the defensive side. Think red, act blue, right? . . . but I think that’s very important for SOCs. Where does ‘A Tale of Two SOCs’ come from?”  

In reference to the Charles Dickens’ classic, Chris explained how survey responses fell into two categories: SOCs that had management support or those that did not. 

“It’s not just this idea of ‘does management support us?’ It’s ‘are we effectively aligned with the organization?’” Chris said. “And I think that is manifest in the perception of management support or not management support, right? So, I think when people working in a SOC have the sense that they’re doing good things for the organization, their perception is that the management is supporting them.” 

In this case, Chris explains “A Tale of Two SOCs” also relates to the compliance SOC versus the real security SOC. 

“A lot of it has to do with what the goals were when management set up funding for the SOC, right? Maybe the compliance SOC versus the SOC that’s focused on the security outcomes, on defending, right? There are some organizations that are funding for basic compliance,” Chris said. “[If the] law says we have to do this, we’re doing that. We’re not really going to invest in your training and your understanding and your comprehension. We’re not going to hire really great analysts. We’re just going to buy the tools that we need to buy. We’re going to buy some people to look at monitors, and that’s kind of the end of it.” 

One of the easiest and most telling methods of assessing where a SOC sees itself in this tale is having conversations with staff. Chris recommends asking staff whether they feel aligned with management and whether they feel empowered. 

“If you feel like you’re being turned into a robot and you pick stuff from here and drop it over there, you’re probably in a place where management doesn’t really support you. Because they’re not using the human being’s capability of synthesis of information and that notion of driving consensus and making things work,” Chris said. “They’re looking more for people who are replaceable to put the bits in the bucket and move through.” 

Chris shared other survey takeaways including how SOCs gauge their value, metrics and tools. 

SOC INDICATORS AND PERCEIVED VALUE 

The survey included hypotheses designed to measure how organizations classify the value of a SOC: 

  • Budget – The majority of respondents did not list budget as a sign of how their organization values them. 
  • Skilled Staff – Many valued the hiring of skilled workers as a sign of support for their SOC. 
  • Automation and Orchestration – The SOC teams that believed their organizations already supported them through the hiring of skilled staff reported that their biggest challenge was implementing automation and orchestration. 

“This showed that as SOC teams met the challenge of skilled staffing, they moved on to their next order of task: Let’s make the computers compute well,” Chris said. 

SOC METRICS 

Ismael asked about the tendency for some SOC management not to report any metrics, and for those that simply report the number of incidents not to be reporting the right metrics. Chris reported that most people said they do provide metrics, but a still-surprising number of people said that they don’t provide metrics at all. 

Here’s the breakdown of how respondents answered, “Do you provide metrics to your management?” 

  • Yes – 69 
  • No – 24 
  • We don’t know – 6 

 That roughly a third of respondents either do not report metrics or don’t know if they report metrics was telling to the survey’s author. 

“In which case [metrics] obviously don’t have a central place of importance for your SOC,” Chris said. 

Regarding the most frequently used metric – number of incidents – Chris speculated that several SOCs he surveyed are attempting to meet a metric goal of zero incidents, even if it means they’re likely not getting a true reading of their cyber security effectiveness.  

“You’re allowed to have zero incidents in the environment. And if you consistently meet that, then you’re consistently doing a great job,” Chris said. “Which is insane to me, right? Because we want to have the right number of incidents. If you actually have a cyber security problem … you should want to know about it, okay?” 

Among the group of respondents who said their most common metric is informational, the desired information from their “zero incidents” metrics doesn’t actually have much bearing on the performance or the value of what the SOC is doing.

“The metrics tend to be focused on what can we easily show as opposed to what truly depicts the value that the SOC has been providing for the org,” Chris said. “And at that point you have something you can show to get more funding and more support, right, over time.” 

Chris suggests better use of metrics can truly depict the value that the SOC is providing the organization and justify the desired support it seeks. 

“One which I like, which is not an easy metric to develop, is actually loss prevention. If I can actually depict quantitatively – which will not be precise, there will be some speculation in that,” Chris said. “But if I can depict quantitatively what the SOC did this month, or quarter, where our efforts actually prevented or intervened in things which were going wrong and we stopped damage, that’s loss prevention, right? That’s what the SOC is there for, right? If I just report we had 13 incidents, there’s not a lot of demonstration of value in that. And so always the metrics tend to be focused on what can we easily show as opposed to what truly depicts the value that the SOC has been providing for the org.” 
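
As a back-of-the-envelope illustration of the quantification Chris describes, the arithmetic is straightforward; every figure below is a placeholder, not survey data.

```python
# Hypothetical loss-prevention estimate; all numbers are illustrative placeholders.
interventions_this_quarter = 4            # incidents where the SOC disrupted the attack
estimated_impact_per_incident = 250_000   # assumed average cost had each attack succeeded (USD)
soc_quarterly_cost = 400_000              # assumed fully loaded SOC cost for the quarter

loss_prevented = interventions_this_quarter * estimated_impact_per_incident
print(f"Estimated loss prevented: ${loss_prevented:,}")
print(f"Return vs. SOC cost: {loss_prevented / soc_quarterly_cost:.1f}x")
```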

SOC TOOLS 

Michael steered the discussion to the value of incident metrics and their relationship with SOC capacity: How many incidents can you handle? Is it a tools issue, a people issue, or a combination of both? Chris’ study also revealed a subset of tools that respondents more frequently leveraged and that added value to the delivery of a higher capacity of incident closure. 

One question on the survey asked, “Do you use it?” 

 “Not whether you like it or not, but do you use it? And do you use it in a way where you have full coverage or partial coverage? Because another thing about technology, and this is kind of a dirty secret in technology applications, is a lot of people buy it but actually never get it deployed fully,” Chris said. 

His survey allowed respondents to reveal their most-used technologies and to grade tools. 

The most commonly used technologies reported in the survey were: 

  1. SIEM 
  2. Malware Protection Systems 
  3. Next-gen Firewall 
  4. VPN 
  5. Log management  

Tools receiving the most A grades: 

  • EDR 
  • VPN 
  • Host-based Malware Protection 
  • SIEM 
  • Network Distributed Denial of Service 

Tools receiving the most F grades: 

  • Full Packet Capture (PCAP) 
  • Network-Based Application Control 
  • Artificial Intelligence 
  • TLS Intercept 

Chris pointed out that the reasoning behind the F grades may be less a case of failing and more a case of not meeting their full potential. 

“Some of these are newer in this space and some of them just feel like they’re failures for people,” Chris said. “Now, whether they’re technology failures or not, this is what people are reporting that they don’t like in terms of the tech.”  

For more findings, read or download Chris Crowley’s 2020 survey here. 

Watch this entire episode of SOCwise below.

 

The post SOCwise Series: A Tale of Two SOCs with Chris Crowley appeared first on McAfee Blogs.

Why MITRE ATT&CK Matters?

By Carlos Diaz

MITRE ATT&CK Enterprise is a “knowledge base of adversarial techniques.” In a Security Operations Center (SOC), this resource serves as a progressive framework for practitioners to make sense of the behaviors (techniques) leading to system intrusions on enterprise networks. It is centered on how SOC practitioners of all levels can craft purposeful defense strategies and assess the efficacy of their security investments against that knowledge base.

To enable practitioners to operationalize these strategies, the knowledge base provides the “why” and the “what” with comprehensive documentation that includes the descriptions and relational mappings of the behaviors observed in the execution of malware, or even when those weapons were used by known adversaries in their targeting of different victims as reported by security vendors. It goes a step further by introducing the “how” in the form of adversary emulation plans, which streamline both the design of threat models and the necessary technical resources to test those models – i.e., emulating the behavior of the adversary.

For scenarios where SOCs may not have the capacity to do this testing themselves, the MITRE Corporation conducts annual evaluations of security vendors and their products against a carefully crafted adversary emulation plan, and it publishes the results for public consumption.  The evaluations can help SOC teams assess both strategy concerns and tactical effectiveness for their defensive needs as they explore market solutions.

This approach is transformative for cybersecurity: it provides an effective way to evolve from the constraints of being solely dependent on IOC-centric or signature-driven defense models to having a behavior-driven capability for SOCs to tailor their strategic objectives into realistic security outcomes measured through defensive efficacy goals. With a behavior-driven paradigm, the emphasis is on the value of visibility surrounding the events of a detection or prevention action taken by a security sensor – this effectively places context as the essential resource a defender must have available to pursue actionable outcomes.

Cool! So what is this “efficacy” thing all about?

I believe that to achieve meaningful security outcomes our products (defenses) must demonstrate how effective they are (efficacy) at enabling or preserving the security mission we are pursuing in our organizations. For example, to view efficacy in a SOC, let’s see it as a foundation of 5 dimensions:

  • Detection – Gives SOC Analysts higher event actionability and alert handling efficiencies with a focus on the most prevalent adversarial behaviors – i.e., let’s tackle the alert-fatigue constraint!
  • Prevention – Gives SOC Leaders/Sponsors confidence to show risk reduction with minimized impact/severity from incidents with credible concerns – e.g., ransomware or destructive threats.
  • Response – Gives SOC Responders a capacity to shorten the time between detection and activating the relevant response actions – i.e., knowing when and how to start containing, mitigating or eradicating.
  • Investigative – Gives SOC Managers a capability to improve the quality and speed of investigations by correlating low-signal clues for Tier 1 staff and streamlining escalation processes to limited but advanced resources.
  • Hunting – Gives SOC Hunters the capacity to rewind the clock as much as possible and expand the discovery across environments for high-value indicators stemming from anomalous security events.

 

So how does “efficacy” relate to my SOC?

Efficacy at the security and technical leadership levels confirms how portfolio investments are expected to yield the defensive posture of our security strategy. For example, compare your investments today to any of the following:

The comparison below lays out each strategy (investment), its portfolio focus, and its efficacy goals, expressed as abilities and caveats:

Balanced Security

Ability to:
  • Focus on prevalent behaviors
  • Confidently prevent attack chains with relevant impact/severity
  • Provide alert actionability
  • Increase flexibility in response plans based on alert type and impact situation

Caveats:

  • Needs efficacy testing program with adversary emulation plans
 

Detection Focus

Ability to:
  • Focus on prevalent behaviors
  • Provide alert actionability
  • Proactively discover indicators with hunting

Caveats:

  • Requires humans
  • Minimal prevention maturity
  • Requires solid incident response expertise
  • Hard to scale to proactive phases due to prevention maturity

Prevention Focus

Ability to:
  • Confidently prevent attack chains with relevant impact/severity
  • Lean incident response plans
  • Provide alert actionability and Lean monitoring plans

Caveats:

  • Hard to implement across the business without disrupting user experience and productivity
  • Typically for regulated or low tolerance network zones like PCI systems
  • Needs high TCO for the management of prevention products

 Response Focus

Ability to:
  • Respond effectively to different scenarios identified by products or reported to the SOC

 Caveats:

  • Always reacting
  • Requires humans
  • Hard to retain work staff
  • Unable to spot prevalent behaviors
  • Underdeveloped detection
  • Underdeveloped prevention

 

MITRE ATT&CK matters because it introduces the practical sense-making SOC professionals need to discern attack chains from routine security events through visibility into the most prevalent behaviors.

Consequently, it allows practitioners to overcome crucial limitations from the reliance on indicator-driven defense models that skew realistic efficacy goals, thereby maximizing the value of a security portfolio investment.

The post Why MITRE ATT&CK Matters? appeared first on McAfee Blogs.

Hacking Proprietary Protocols with Sharks and Pandas

By Ismael Valenzuela

The human race commonly fears what it doesn’t understand. In a time of war, this fear is even greater if one side understands a weapon or technology that the other side does not. A constant war plagues cybersecurity – and perhaps not only cybersecurity, but the world all around us – a battle between good and evil. In cybersecurity, if the “evil” side understands or pays more attention to a technology than the “good” side does, we see a spike in cyber-attacks.

This course of events demands that both offensively and defensively minded “good guys” band together to remove the unknown from as much technology as possible. One of the most common unknown pieces of technology that cybersecurity professionals see on a regular basis is the proprietary protocols running across their networks. By using both the tactics and perspectives of red and blue teams, it is possible to conquer and understand these previously unknown packets. This strategy is exactly what we, Douglas McKee and Ismael Valenzuela, hoped to communicate in our webinar “Thinking Red, Acting Blue: Hacking Proprietary Protocols.”

Proprietary protocols are typically a mystery to many practitioners. Vendors across many industries develop them for very specific purposes and technologies. We see them in everything from the Internet of Things (IoT), to Industrial Control Systems (ICS), to medical devices and more. Since by its nature “proprietary” technology is not shared, there is generally no public Request for Comments (RFC) or public disclosure on how these protocols work. This provides an opportunity for attackers and a challenge for defenders. Attackers are aware these networking protocols are less reviewed and therefore more susceptible to vulnerabilities, while defenders have a hard time understanding what valid or benign traffic looks like. Unfortunately, attackers are generally more financially motivated to spend the time reversing these protocols than defenders, since the rewards can be very substantial.

During the webinar we discussed a two-pronged approach to tackling these unknown protocols, with the goal of a deeper understanding of the data. A red team’s purpose may be to look for vulnerabilities, while a blue team may be more interested in detecting or flagging unusual behavior in this traffic. We discussed how this can be accomplished through visual inspection, using Wireshark to compare the traffic across multiple conversations, and how we complemented this analysis with Python libraries like pandas, NumPy and matplotlib for data exploration and visualization.

For example, consider the packets in the Wireshark captures shown side-by-side in Figure 1. An astute reader may notice that the UDP packets are evenly spaced within the same PCAP, yet spaced differently between PCAPs.

In protocol analysis this can indicate the use of a status or “heartbeat” packet, which may contain some type of data where the interval it is sent is negotiated for each conversation.  We have seen this as a common trait in proprietary protocols.  This can be difficult for a cybersecurity professional to discern with a small amount of data, but could be very helpful for further analysis.  If we import the same data into pandas dataframes and we add matplotlib visualizations to our analysis, the behavior becomes much clearer as seen in Figure 2.
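
To give a concrete flavor of that workflow – this is a minimal sketch, not the notebook published with the webinar, and the exported file names are hypothetical – packet timestamps can be pulled out of a capture with tshark and the inter-arrival deltas plotted with pandas and matplotlib:

```python
# Minimal sketch: compare UDP inter-packet intervals across two captures.
# Assumes timestamps were exported beforehand, e.g.:
#   tshark -r capture1.pcap -Y udp -T fields -e frame.time_epoch > capture1.csv
import pandas as pd
import matplotlib.pyplot as plt

def interarrival_times(csv_path: str) -> pd.Series:
    """Load epoch timestamps and return deltas between consecutive packets."""
    times = pd.read_csv(csv_path, header=None, names=["epoch"])["epoch"]
    return times.diff().dropna()

for name in ["capture1.csv", "capture2.csv"]:  # hypothetical exports
    deltas = interarrival_times(name)
    plt.plot(deltas.reset_index(drop=True), label=name)

plt.xlabel("Packet index")
plt.ylabel("Inter-packet interval (s)")
plt.title("UDP inter-arrival times per conversation")
plt.legend()
plt.show()
```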

By using the reverse engineering perspective of a vulnerability researcher combined with the data analysis insight of a defender, we can strengthen our analysis and more quickly understand the unknown. If this type of deep technical analysis of proprietary protocols interests you, we encourage you to check out the recording of our presentation below. We have made all of our resources on this topic public, including pcaps and Python code in a Jupyter Notebook, which can be found on GitHub and Binder. It is important as an industry that we don’t give in to fear of the unknown or simply ignore these odd-looking packets on our networks, but instead lean in to understand the security challenges proprietary protocols can present and how to protect against them.

The post Hacking Proprietary Protocols with Sharks and Pandas appeared first on McAfee Blogs.

SOC Health Check: Prescribing XDR for Enterprises 

By Scott Howitt

It is near-certain the need for security across the enterprise will never cease – only increase if year-over-year trends are any indication. We constantly see headlines with repetitive buzzwords and phrases calling attention to the complexity of today’s security operations center (SOC) with calls to action to reimagine and modernize the SOC. We’re no different here at McAfee in believing this to be true.  

In order for this to happen, however, we need to update our thinking when it comes to the SOC.  

Today’s SOC truly serves as an organization’s cybersecurity brain. Breaking it down, the brain and the SOC are both the ultimate central nervous system and are extremely complex. While the brain fires neurons, connects synapses, and constantly communicates in order for the body to function, the SOC similarly works as a centralized system where people, processes, and technology must be in sync to function. The unfortunate reality, though, is that SOC analysts and staff do not feel empowered to act in this manner. According to the 2021 SANS Cyber Threat Intelligence Report, respondents cited several reasons for not being able to implement cybersecurity holistically across their organization, including lack of trained staff, time, funding, management buy-in, technical capabilities, and more.  

The technology that has the power to enable this synchronicity and further modernize enterprise security by taking SOC functionality to the next level is already here – Extended Detection and Response (XDR). It has the ability to provide prevention, detection, analysis, and response in a purposefully orchestrated and cooperative way, with its components operating as a whole. Think of it this way: XDR mimics the brain’s seamlessness in operation, with every element working toward the same goal of maintaining sound security posture across an entire organization.  

Put another way, the human brain has approximately 100 trillion synapses, synchronizing and directing to make it possible to walk and chew bubble gum at the very same time with seemingly no effort on the human’s end. However, if one synapse misfires or becomes compromised due to an unknown element – you might end up on the ground.  

Similarly, we’re already seeing many enterprises falter, trip, and fall. According to Ernst & Young, 59% of companies experienced a significant breach in the last twelve months – and only 26% of respondents say the SOC identified that event. These statistics show the case for XDR is clear – and that it is time to learn and reap the benefits of taking a proactive approach.   

Purposeful Analysis vs. Analysis Paralysis 

Organizations are still vulnerable to malicious actors attempting to take advantage of disparate remote workforces – and we’re seeing them get craftier, acting faster and more frequently. This is where XDR offers a pivotal differentiator by providing actionable intelligence and integrated functionality across control vectors, resulting in more proactive investigation cycles.  

When it comes to analysis, data can quickly become overwhelming, introducing an opportunity to miss critical threats or malicious intent with more manual or siloed processes. Meaningful context is crucial and no industry is exempt from needing it. 

This is where McAfee is providing the advantage with MVISION XDR powered by MVISION Insights. The ability to know likely and prioritized threat campaigns based on geographical and industry prevalence – and have them correlated and assessed across your local environment – provides the situational awareness and analysis that can allow SOC teams to act before threats occur. Additionally, as endpoints only promise to increase, MVISION XDR works in conjunction with McAfee’s endpoint protection platform (EPP), increasing effectiveness with added safeguards including antivirus, encryption, data loss prevention technologies and more at the endpoint. 

Think of the impact and damage that can happen without the crucial context MVISION Insights can provide. The consequences can be dire when looking at industries that have faced extreme upheaval.  

For example, in keeping with our theme, we know the importance of essential healthcare workers and cannot be grateful enough for their contributions. But as the industry faces extreme challenges and an increase in both patient load and data, we also need to pay close attention to how this data is being managed, who has privileged access to it, and what threats exist as even this typically in-person industry shifts virtual under changed circumstances. Having meaningful context on potential threats will help this industry avoid added challenges so focus can remain steadfast on creating impact and positive results.  

Greater Efficiency is Essential 

Outside of the tremendous advantage of being less vulnerable to threats and breaches due to proactivity, incredible efficiencies can be gained by freeing cybersecurity staff from those previously manual tasks and management of multiple silos of solutions. The time is definitely now too – according to (ISC)², 65% of organizations already report a shortage of cybersecurity staff. 

Coupled with staff shortages and lack of skilled workers, an IBM report also found that the average time to detect and contain a data breach is 280 days. Going back to the view that the SOC serves as an organization’s cybersecurity brain – 280 days can cause massive amounts of damage if an anomaly in the brain were to occur unnoticed or unaddressed.  

For the SOC, the longer a breach goes undetected, the more information and data becomes vulnerable or leaked – leading not only to a disruption in business, but ultimately financial losses as well.  

The SOC Has a Cure 

XDR is the future of the SOC. We know that simplified, cohesive visualization and control across the entire infrastructure leads the SOC to better situational awareness – the catalyst for faster time to remediation. The improved, holistic viewpoint XDR provides across all vectors from endpoint, network, and cloud helps to eliminate mistakes and isolated endeavors across an organization’s entire IT framework.  

With AI-guided investigation, analysts have an automatic exchange of data and information to move faster from validation to decision when it comes to threats. This is promising as organizations not only tackle a shortage in cybersecurity staff, but skilled workers as well. According to the same (ISC)² survey as above, 36% of those polled cite lack of skilled or experienced staff being a top concern.  

Knowing the power of data and information, we can confidently assume that malicious actors will never stop their quest to infiltrate and extort enterprises. True to the well-known anecdote, this knowledge brings about great responsibility. Enterprises will face challenges as threats increase while talent and staff decrease – all while dealing with vendor sprawl and choice-overload across the market.  


Time to schedule a check-up for your SOC. It may not be as healthy as you think and true to both the medical and security industries, proactivity and prevention can lead to optimized functionality.

Take the Assessment Now

 Want to learn more about McAfee’s investment in XDR and explore its approach? Check out McAfee MVISION XDR.  

The post SOC Health Check: Prescribing XDR for Enterprises  appeared first on McAfee Blogs.

Are You Ready for XDR?

By Kathy Trahan

What is your organization’s readiness for the emerging eXtended Detection and Response (XDR) technology? McAfee just released the first iteration of this technology: MVISION XDR. As XDR capabilities become available, organizations need to think through how to embrace the new security operations technology destined to empower detection and response capabilities. XDR is a journey for people and organizations. 

The cool thing about McAfee’s offering is that the XDR capabilities are built on the McAfee platform of MVISION EDR and MVISION Insights, and are extended to other McAfee products and third-party offerings. This means that, as a McAfee customer, your XDR journey has already begun. 

The core value proposition behind XDR is to empower the SecOps function, which is still heavily burdened with limited staff and resources while the threat landscape roars. This cry is not new. As duly noted in the book Ten Strategies of a World-Class Cybersecurity Operations Center, written quite a few moons ago: “With the right tools, one good analyst can do the job of 100 mediocre ones.” XDR is the right tool. 

SecOps empowerment means impacting and changing people and process in a positive manner, resulting in better security outcomes. Organizations must consider and prepare for this helpful shift. Here are three key considerations organizations need to be aware of and ready for: 

The Wonder of Harmonizing Security Controls and Data Across all Vectors  

A baseline requirement for XDR is to unify and aggregate security controls and data to elevate situational awareness. Now consider what this means to certain siloed functions like endpoint, network and web. Let’s say you are an analyst who typically pulls telemetry from separate control points (endpoint, network, web), moving from each tool with a login to another tool with another login, and so on. Or maybe you only have access to the endpoint tool. To gain insight into the network, you email the network folks with artifacts you are seeing on the endpoint and ask if there is anything similar they have seen on the edge and what they make of it. Often there is a delayed response from the network folks, given their priorities. And you call the web folks for their input on what they are seeing. Enter XDR. What if this information and these insights were automatically given to you on a unified dashboard where situational awareness analysis has already begun? This reduces the manual pivoting of copying and pasting, emailing, and phone calls. It removes the multiple data sets to manage and the cognitive strain of making sense of them. The collection, triaging, and initial investigative analysis are automated and streamlined. This empowers analysts to get to a quicker validation and assessment. The skilled analyst will still use experience and human intuition to respond to the adversary, but the initial triaging, investigation, and analysis have already been done. In addition, XDR fosters the critical collaboration between network operations and security operations, since adversary movement is erratic across the entire infrastructure.  

Actionable Intelligence Fosters Proactive SecOps Efforts (MVISION XDR note-worthy distinction) 

Imagine if your SecOps gained high-priority threat intelligence before the adversary hits and enters your environment. What does it mean to your daily SecOps processes and policy? It removes a significant amount of hunting, triaging and investigation cycles. It simply prioritizes and accelerates the investigation. It answers the questions that matter. Any associated campaign is bubbled up immediately. You are getting over a hundred high alerts, but one is related to a threat campaign that is likely to hit. It removes the guesswork and prioritizes SecOps efforts. It assesses your environment and the likely impact: what is vulnerable. More importantly, it suggests countermeasures you can take. It moves you from swimming in context to action in minutes.   

This brings SecOps to a decision moment faster: do they have the authority to respond? Are they a participant in prevention efforts? Note this topic is Strategy Three in the Ten Strategies of a World-Class Cybersecurity Operations Center, where it is highly encouraged to empower SecOps to make and/or participate in such decisions. Policies for response decisions and actions vary by organization; the takeaway here is that decision moments come faster and more often, with significant research and credible context from MVISION XDR. 

Enjoy the Dance Between Security and IT  

XDR is an open, integrated platform. So, what does it mean to people and process if all the pieces are integrated and security functions coordinate efforts? It depends on the pieces that are connected. For example, if SecOps can automatically place a recommendation to update certain systems on the IT service system, it removes the need to log in to the IT system and place a request, or in some cases call or email IT (eliminating a time-consuming step). There is a heightened need for what-if scenario policies driven by Security Orchestration, Automation and Response (SOAR) solutions. These policies are typically reflected in a manual playbook or a SOAR playbook.  

Let’s consider an example: when an email phishing alert comes in, the SOAR automatically (as required by the policy/play) compares the alert against others to see if there are commonalities worth noting. If so, the alerts sharing common artifacts are assigned to one analyst versus distributing separate alerts to many analysts. This streamlines the investigation and response to be more effective and less time-consuming. There are many more examples, but the point is that when you coordinate security functions, the organization must think through how it wants each function to act under specific circumstances: what is your policy for these circumstances? A toy sketch of this grouping play follows below. 
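
Purely as an illustration of that play – this is toy logic, not a real SOAR product’s playbook API, and the alert fields are made up – the grouping step might look like this:

```python
# Toy sketch of the "group phishing alerts that share artifacts" play described above.
from collections import defaultdict

alerts = [
    {"id": 1, "sender_domain": "evil.example", "subject": "Invoice overdue"},
    {"id": 2, "sender_domain": "evil.example", "subject": "Invoice overdue!!"},
    {"id": 3, "sender_domain": "other.example", "subject": "Password reset"},
]

# Group alerts by a shared artifact (here, the sender domain).
groups = defaultdict(list)
for alert in alerts:
    groups[alert["sender_domain"]].append(alert["id"])

for artifact, ids in groups.items():
    if len(ids) > 1:
        print(f"Assign alerts {ids} (shared artifact: {artifact}) to a single analyst")
    else:
        print(f"Alert {ids[0]} has no shared artifacts; route normally")
```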

These are just a few areas to consider when you embrace XDR, and I hope this initial discussion has started you thinking about them. We have an online SOC audit where you can assess your SOC maturity and plan where you want to go. Join us for a webinar on XDR readiness where experts will examine how to prepare to optimize XDR capabilities. We also have a SOC best practices series, SOCwise, that offers regular advice and tips for your SOC efforts!   

 

 

The post Are You Ready for XDR? appeared first on McAfee Blogs.

XDR – Please Explain?

By Rodman Ramezanian

SIEM, we need to talk! 

Albert Einstein once said, “We cannot solve our problems with the same thinking we used when we created them.” 

Security vendors have spent the last two decades providing more of the same orchestration, detection, and response capabilities, while promising different results. And as the old adage goes, doing the same thing over and over again whilst expecting different results is…? I’ll let you fill in the blank yourself.   

Figure 1: The Impact of XDR in the Modern SOC: Biggest SIEM challenges – ESG Research 2020

SIEM! SOAR! Next-Generation SIEM! The names changed, while the same fundamental challenges remained: they all required heavy lifting and ongoing manual maintenance. As noted by ESG Research, SIEM – a baseline capability within SOC environments – continues to present challenges to organisations by being too costly, exceedingly resource-intensive, or requiring far too much expertise, among various other concerns. A common example of this is how SOC teams still must create manual correlation rules to find the bad connections between logs from different products, applications and networks. Too often, these rules flood analysts with information and false alerts and render the product too noisy to be effective; a sketch of the kind of hand-written join involved follows below. 
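
To make the manual-correlation burden concrete, here is a hedged sketch of the kind of cross-source join an analyst ends up hand-writing; the log layouts and column names (host, timestamp, process, url) are assumptions, not any product’s schema.

```python
# Hedged sketch: hand-written correlation of endpoint process events with proxy
# logs - same host, with a proxy request within 5 minutes of a suspicious process start.
import pandas as pd

endpoint = pd.read_csv("endpoint_events.csv", parse_dates=["timestamp"])  # hypothetical
proxy = pd.read_csv("proxy_logs.csv", parse_dates=["timestamp"])          # hypothetical

merged = pd.merge_asof(
    proxy.sort_values("timestamp"),
    endpoint.sort_values("timestamp"),
    on="timestamp",
    by="host",
    direction="backward",
    tolerance=pd.Timedelta("5min"),
)
suspicious = merged.dropna(subset=["process"])  # keep only rows that actually correlated
print(suspicious[["timestamp", "host", "process", "url"]])
```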

The expanding attack surface, which now spans Web, Cloud, Data, Network and more, has also added a layer of complexity. The security industry cannot rely only on its customers’ analysts to properly configure a security solution with such a wide scope. Implementing only the correct configurations, fine-tuning hundreds of custom log parsers and interpreters, defining very specific correlation rules, developing the necessary remediation workflows, and so much more – it’s all a bit too much. 

Detections now bubble up from many siloed tools, too, including Intrusion Prevention Systems (IPS) for network protection, Endpoint Protection Platforms (EPP) deployed across managed systems, and Cloud Access Security Broker (CASB) solutions for your SaaS applications. Correlating those detections to paint a complete picture is now an even bigger challenge. 

There is also no ‘R’ in SIEM – that is, there is no inherent response built into SIEM. You can almost liken it to a fire alarm that isn’t connected to the sprinklers.  

SIEMs have been the foundation of security operations for decades, and that should be acknowledged. Thankfully, they’re now being used more appropriately, i.e., for logging, aggregation, and archiving. 

Now, Endpoint Detection and Response (EDR) solutions are absolutely on the right track – enabling analysts to sharpen their skills through guided investigations and streamline remediation efforts – but they ultimately suffer from a network blind spot. Similarly, network security solutions don’t offer the necessary telemetry and visibility across your endpoint assets.

Considering the alternatives

Of Gartner’s Top 9 Security and Risk Trends for 2020, “Extended detection and response capabilities emerge to improve accuracy and productivity” ranked as their #1 trend. They noted: “Extended detection and response (XDR) solutions are emerging that automatically collect and correlate data from multiple security products to improve threat detection and provide an incident response capability. The primary goals of an XDR solution are to increase detection accuracy and improve security operations efficiency and productivity.” 

That sounds awfully similar to SIEM, so how is an XDR any different from all the previous security orchestration, detection, and response solutions? 

The answer is: an XDR is a converged platform leveraging a common ontology and unifying language. An effective XDR must bring together numerous heterogeneous signals and return a homogeneous visual and analytical representation. XDR must clearly show the potential security correlations (or, in other words, attack stories) that the SOC should focus on. Such a solution would de-duplicate information on one hand, but would emphasize the truly high-risk attacks while filtering out the mountains of noise. The desired outcome would not require excessive amounts of manual work, allowing SOC analysts to stop serving as an army of translators and focus on the real work: leading investigations and mitigating attacks. This normalized presentation of data would be aware of context and content, technologically advanced, yet simple for analysts to understand and act upon. 
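
One way to picture a common ontology and unifying language is a thin normalization layer that maps heterogeneous alerts into one schema. The sketch below is illustrative only; the source field names are invented, not McAfee’s or any vendor’s data model.

```python
# Illustrative only: normalizing heterogeneous alerts into one common schema.
from dataclasses import dataclass

@dataclass
class NormalizedAlert:
    source: str     # which sensor produced it (endpoint, network, cloud)
    asset: str      # affected host or account
    technique: str  # ATT&CK technique ID, if known
    severity: int   # unified 1-10 scale

def from_edr(raw: dict) -> NormalizedAlert:
    # Hypothetical EDR payload fields.
    return NormalizedAlert("endpoint", raw["hostname"], raw.get("attack_id", "unknown"), raw["sev"])

def from_casb(raw: dict) -> NormalizedAlert:
    # Hypothetical CASB payload fields; risk_score is mapped onto the 1-10 scale.
    return NormalizedAlert("cloud", raw["user"], raw.get("mitre", "unknown"), raw["risk_score"] // 10)

alerts = [
    from_edr({"hostname": "wks-042", "attack_id": "T1059", "sev": 8}),
    from_casb({"user": "j.doe@corp.example", "mitre": "T1078", "risk_score": 70}),
]
for alert in alerts:
    print(alert)
```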

SIEMs are data-driven, meaning they need data definitions, custom parsing rules and pre-baked content packs to retrospectively provide context. In contrast, XDR is hypothesis driven, harnessing the power of Machine Learning and Artificial Intelligence engines to analyse high-fidelity threat data from a multitude of sources across the environment to support specific lines of investigation mapped to the MITRE ATT&CK framework.  

The MITRE ATT&CK framework is effective at highlighting what bad guys do and how they do it. While traditional prevention measures are great at ‘spot it and stop it’ protections, MITRE ATT&CK demonstrates there are many steps taking place in the attack lifecycle that aren’t obvious. These actions don’t trigger sufficient alerting to generate the confidence required to support a reaction.  

XDR isn’t a single product. Rather, it refers to an assembly of multiple security products (and services) that comprise a unified platform. An XDR approach will shift processes and likely merge and encourage tighter coordination between different functions like SOC analysts, hunters, incident responders and IT administrators. 

The ideal XDR solution must provide enhanced detection and response capabilities across endpoints, networks, and cloud infrastructures. It needs to prioritise and predict threats that matter BEFORE the attack and prescribe necessary countermeasures allowing the organisation to proactively harden their environment. 

Figure 2: Where current XDR approaches are failing

McAfees MVISION XDR solution does just that, by empowering the SOC to do more with unified visibility and control across endpoints, network, and cloud. McAfee XDR orchestrates both McAfee and non-McAfee security assets to deliver actionable cyber threat management and support both guided and automated investigations. 

What if you could find out if you’re in the crosshairs of a top threat campaign, by using global telemetry from over 1 billion sensors that automatically track new campaigns according to geography and industry vertical? Wouldn’t that be insightful? 

“Many firms want to be more proactive but do not have the resources or talent to execute. McAfee can help bridge this gap by offering organisations a global outlook across the entire threat landscape with local context to respond appropriately. In this way, McAfee can support a CISO-level strategy that combines risk and threat operations.” 

– Jon Oltsik, ESG Senior Principal Analyst and Fellow
 

But, hang on… Is this all just another ‘platform’ play? 

Take a moment to consider how platform offerings have evolved over the years. Initially designed to compensate for the heterogeneity and volume of internal data sources and external threat intelligence feeds, the core objective has predominantly been to manifest data centrally from across a range of vectors in order to streamline security operations efforts. We then saw the introduction of case management capabilities. 

Over the past decade, the security industry proposed solving many of the challenges presented in SOC contexts through integrations. You would buy products from a few different vendors who promised it would all work together through API integration, and basically give you some form of the pseudo-XDR outcomes we’re exploring here.  

Frankly, there are significant limitations in that approach. There is no data persistence; you basically make requests to the lowest API denominator on a one-to-one basis. The information sharing model was one-way question-and-answer leveraging a scheduled push-pull methodology. The other big issue was the inability to pull information in whatever form – you were limited to the API available between the participating parties, with the result ultimately only as good as the dumbest API.  

And what about the lack of any shared ontology, meaning little to no common objects or attributes? There were no shared components, such as UI/UX, incident management, logging, dashboards, policy definitions, user authentication, etc. 

What’s desperately been needed is an open underlying platform – essentially like a universal API gateway scaled across the cloud that leverages messaging fabrics like DXL that facilitate easy bi-lateral exchange between many security functions – where vendors and partner technologies create tight integrations and synergies to support specific use cases benefitting SOC ecosystems. 

Is XDR, then, a solution or product to be procured? Or just a security strategy to be adopted? Potentially, it’s both. Some vendors are releasing XDR solutions that complement their portfolio strengths, and others are just flaunting XDR-like capabilities.  

 Closing Thoughts

SIEMs still deliver specific outcomes to organisations and SOCs which cannot be replaced by XDR. In fact, with XDR, a SIEM can be even more valuable. 

For most organisations, XDR will be a journey, not a destination. Their ability to become more effective through XDR will depend on their maturity and readiness to embrace all the required processes. In terms of cybersecurity maturity, if you’d rate your organisation at a medium to high level, the question becomes how and when. 

Most organisations using an Endpoint Detection and Response (EDR) solution are likely quite ready to embrace XDR’s capabilities. They are already investigating and resolving endpoint threats, and they’re ready to expand this effort to understand how their adversaries move across their infrastructure, too. 

If you’d like to know more about how McAfee addresses these challenges with MVISION XDR, feel free to reach out! 

The post XDR – Please Explain? appeared first on McAfee Blogs.

6 Best Practices for SecOps in the Wake of the Sunburst Threat Campaign

By Ismael Valenzuela

1. Attackers have a plan, with clear objectives and outcomes in mind. Do you have one?

Clearly this was a motivated and patient adversary. They spent many months in the planning and execution of an attack that was not incredibly sophisticated in its tactics, but rather used multiple semi-novel attack methods combined with persistent, stealthy and well-orchestrated processes. In a world where we always need to find ways to stay even one step ahead of adversaries, how well is your SOC prepared to bring the same level of consistent, methodical and well-orchestrated visibility and response when such an adversary comes knocking at your door? 

Plan, test and continuously improve your SecOps processes with effective purple-teaming exercises. Try to think like a stealthy attacker and predict what sources of telemetry will be necessary to detect suspicious usage of legitimate applications and trusted software solutions.

2. Modern attacks abuse trust, not necessarily vulnerabilities. Be threat-focused. Do threat modeling and identify where the risks are. Leverage BCP data and think of your identity providers (AD Domain Controllers, Azure AD, etc.) as ‘crown jewels’.

Assume that your most critical assets are under attack, especially those that leverage third-party applications where elevated privileges are a requirement for their effective operation. Granting service accounts unrestricted administrative privileges sounds like a bad idea – because it is. Least-privilege access, micro segmentation and ingress/egress traffic filtering should be implemented in support of a Zero-Trust program for those assets specifically that allow outside access by a ‘trusted’ 3rd-party.

3. IOCs are becoming less useful as attackers don’t reuse them, sometimes even inside the same victim. Focus on TTPs & behaviors.

The threat research world has moved beyond atomic indicators, file hashes and watchlists of malicious IPs and domains upon which most threat intelligence providers still rely. Think beyond Indicators of Compromise. We should rely less on static lists of artifacts and instead focus on heuristics and behavioral indicators. Event-only analysis can easily identify the low-hanging fruit of commodity attack patterns, but more sophisticated adversaries are going to make it more difficult. Ephemeral C2 servers and single-use DNS entries per asset (not target enterprise) were some of the more well-planned (yet relatively simple) behaviors seen in the Sunburst attack. Monitor carefully for changes in asset configuration like logging output/location or even the absence of new audit messages in a given polling period.  
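
A behavioral check such as “the absence of new audit messages in a given polling period” is simple to express. The sketch below is illustrative only, with made-up asset names and an assumed 30-minute threshold.

```python
# Illustrative sketch: flag assets whose audit/telemetry stream has gone quiet.
from datetime import datetime, timedelta

last_audit_event = {           # asset -> time of the most recent audit message seen
    "dc01": datetime(2021, 2, 1, 9, 55),
    "web01": datetime(2021, 2, 1, 8, 10),
}
now = datetime(2021, 2, 1, 10, 0)
max_silence = timedelta(minutes=30)  # assumed polling threshold

for asset, last_seen in last_audit_event.items():
    if now - last_seen > max_silence:
        print(f"ALERT: no audit messages from {asset} for {now - last_seen}")
```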

4. Beware of the perfect attack fallacy. Attackers can’t innovate across the entire attack chain. Identify places where you have more chances to detect their presence (i.e. privilege escalation, persistency, discovery, defense evasion, etc.)

All telemetry is NOT created equal. Behavioral analysis of authentication events in support of UEBA detections can be incredibly effective, but that assumes identity data is available in the event stream. Based on my experience, SIEM data typically yields only 15-20% of events that include useful identity data, whereas almost 85% of cloud access events contain this rich contextual data, a byproduct of growing IAM adoption and SSO practices. Events generated from critical assets (crown jewels) are of obvious interest to SecOps analysts for both detection and investigation, but don’t lose sight of those assets on the periphery; perhaps an RDP jump box sitting in the DMZ that also synchronizes trust with enterprise AD servers either on-premises or in the cloud. Find ways to isolate assets with elevated privilege or those running ‘trusted’ third-party applications using micro segmentation where behavioral analysis can more easily be performed. Leverage volumetric analysis of network traffic to identify potentially abnormal patterns; monitor inbound and outbound requests (DNS, HTTP, FTP, etc) to detect when a new session has been made to/from an unknown source/destination – or where the registration age of the target domain seems suspiciously new. Learn what ‘normal’ looks like from these assets by baselining and fingerprinting, so that unusual activity can be blocked or at the very least escalated to an analyst for review. 
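
As an illustration of the baselining idea described above, here is a hedged sketch only, with hypothetical domains and thresholds; domain registration ages are assumed to be pre-enriched from a WHOIS feed rather than looked up live.

```python
# Illustrative sketch: flag outbound DNS destinations that are absent from the
# baseline or whose registration age looks suspiciously new.
from datetime import date

baseline_domains = {"updates.vendor.example", "cdn.corp.example"}
observed = {                                   # domain -> assumed registration date
    "updates.vendor.example": date(2015, 6, 1),
    "newly-seen-c2.example": date(2020, 11, 20),
}
today = date(2021, 1, 25)
min_age_days = 90  # assumed "suspiciously new" threshold

for domain, registered in observed.items():
    age_days = (today - registered).days
    if domain not in baseline_domains or age_days < min_age_days:
        print(f"Review {domain}: not in baseline or only {age_days} days old")
```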

5. Architect your defenses for visibility, detection & response to augment protection capabilities. Leverage EDR, XDR & SIEM for historical and real-time threat hunting.

The only way to gain insight into attacker behaviors – and any chance of detecting and disrupting attacks of this style – requires extensive telemetry from a wide array of sensors. Endpoint sensor grids provide high-fidelity telemetry about all things on-device, but they are rarely deployed on server assets and tend to be network-blind. SIEMs have traditionally been leveraged to consume and correlate data from all third-party data sources, but they likely do not have the ability (or scale) to consume all EDR/endpoint events, leaving them largely endpoint-blind. As more enterprise assets and applications move to the cloud, we have yet a third source of high-value telemetry that must be available to SOC analysts for detection and investigation. Threat hunting can only effectively be performed when SecOps practitioners have access to a broad range of real-time and historical telemetry from a diverse sensor grid that spans the entire enterprise. They need the ability to look for behaviors – not just events or artifacts – across the full spectrum of enterprise assets and data.
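
As a hedged sketch of what hunting across a diverse sensor grid can look like, the snippet below normalizes events from endpoint, SIEM and cloud sources into a common record and hunts for one behavior (connections to never-before-seen domains) across all of them. Every field name shown is an assumption, not any product's schema.

```python
# Minimal sketch of cross-telemetry hunting: normalize events from different
# sensor grids into a common {host, user, domain} record, then hunt one behavior.

FIELD_MAPS = {
    "edr":   {"host": "device_name", "user": "user_name", "domain": "remote_domain"},
    "siem":  {"host": "src_host",    "user": "account",   "domain": "query"},
    "cloud": {"host": "resource_id", "user": "principal", "domain": "destination"},
}

def normalize(source, event):
    """Map a raw event from one source into a common record."""
    fields = FIELD_MAPS[source]
    record = {key: event.get(raw) for key, raw in fields.items()}
    record["source"] = source
    return record

def hunt_new_domains(events, known_domains):
    """Yield normalized events whose destination domain has never been seen before."""
    for source, event in events:
        record = normalize(source, event)
        if record["domain"] and record["domain"] not in known_domains:
            yield record

if __name__ == "__main__":
    events = [
        ("edr",   {"device_name": "dc-01", "user_name": "svc_backup",
                   "remote_domain": "avsvm-lookalike.example"}),
        ("cloud", {"resource_id": "vm-7", "principal": "ops-admin",
                   "destination": "portal.office.com"}),
    ]
    for hit in hunt_new_domains(events, known_domains={"portal.office.com"}):
        print("Hunt hit:", hit)
```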

6. In today’s #cyberdefensegame it’s all about TIME. 

Time can be an attacker's best offense, sometimes because of the speed with which they can penetrate, reconnoiter, locate and exfiltrate sensitive data – a proverbial 'smash-and-grab' looting, hardly subtle and quickly noticed for the highly visible crime that it is. In the case of Sunburst, however, the adversary used time differently and to their advantage: making painstakingly small and subtle changes to code in the software supply chain to weaponize a trusted application, waiting for it to be deployed across a wide spectrum of enterprises and governmental agencies, quietly performing reconnaissance on the affected asset and those around it, and leveraging low-and-slow C2 communications over a trusted protocol like DNS. Any one of these activities might easily be overlooked by even the most observant SOC. This creates an even longer detection cycle, allowing attackers a longer dwell time.

This blog is a summary of the SOCwise Conversation on January 25th, 2021. Watch for the next one!

For more information on the Sunburst attack, please visit our other resources on the subject: 

Blogs:

McAfee Knowledge-base Article (Product Coverage)

McAfee Knowledge-base Article (Insights Visibility)

 

The post 6 Best Practices for SecOps in the Wake of the Sunburst Threat Campaign appeared first on McAfee Blogs.

SOCwise Series: Practical Considerations on SUNBURST

By McAfee

This blog is part of our SOCwise series where we’ll be digging into all things related to SecOps from a practitioner’s point of view, helping us enable defenders to both build context and confidence in what they do. 

Although there’s been a lot of chatter about supply chain attacks, we’re going to bring you a slightly different perspective. Instead of talking about the technique, let’s talk about what it means to a SOC and more importantly focusing on the SUNBURST attack, where the adversary leveraged a trusted application from SolarWinds. 

Below you are going to see the riveting discussion between our very own Ismael Valenzuela and Michael Leland where they’ll talk about the supply chain hacks and the premise behind them. More importantly, why this one in particular was so successful. And lastly, they’ll cover best practices, hardening prevention, and early detection. 

Michael: Ismael, let's start by talking a little bit about the common types of supply chain attacks. We know from past experience that they've primarily been software-based, though it's not unheard of to have hardware-based supply chain attacks as well. But really, it's about hijacking or masquerading as a vendor or a trusted supplier and injecting malicious code into trusted, authorized applications, sometimes even hijacking the certificate to make it look legitimate. And this last one was about injecting into third-party libraries.

In relation to SUNBURST, it was a long game, right? This was an adversary's long-game attack where they had over 12 months to plan, stage, deploy, weaponize and reap the benefits. We're going to talk more about what they did, but more importantly, also how we as practitioners can leverage the sources of telemetry we have for both detection and, hopefully, future prevention. The first question most people ask is: is this new? Clearly it is not a new technique or tactic, but let's talk a little bit about why this one was different.

Ismael: Right! The most interesting piece about SolarWinds is not so much that it is a supply chain attack because, as you said, it's true, it's not new. We've seen similar things in the past. I know there's a lot of controversy around some of them, like Supermicro, which we and many others have looked at over the last few years, and it's difficult to prove these types of attacks. But to me, the most interesting piece is not just how it got into the environment; we're talking about malicious updates to legitimate applications. For example, we've seen some of that in the past with modifying code on GitHub, right? Unprotected repos, where attackers and threat actors modify the code.

We're going to talk a little bit about what organizations can do to identify these, but what I really want to highlight out of this is that the attackers have a plan, right? They compromised the environment carefully, they stayed dormant for about two weeks, and after that, as we have seen in recent research, they started to deploy second-stage payloads. The way they did that was very, very interesting, and it's changing the game. It's not radically new, but there's always something new that we may not have seen before. And it's important for defenders to understand these behaviors so they can start trying to detect them. In summary, they have a plan, and we should ask ourselves whether we have a plan for these types of attacks. Not only for the initial vector but also for what happens after that.

Michael: Let's take a look at the timeline (figure 1 below) and talk about the story arc of what took place. I think the important thing is, again, that the adversary had this planned out long before the attack, long before the weaponization of the application, long before the deployment. They knew they were going after a very specific vendor, in this case SolarWinds. We know that as far back as 2018, early 2019, they already had a domain registered for it, and they didn't even give it a DNS lookup until almost a year later. The code was placed in the application in 2019; weaponization came in 2020. We're talking about months, almost a year of time passing, and they knew very well going into it what their intent was.

Ismael: Yep, absolutely. And as I mentioned before, even once they have the backdoor in place, the infamous DLL, it stays dormant for two weeks. Then they start careful reconnaissance and discovery, trying to find out where they are, what type of information is around them, the users, and the identity management. In some cases, we have seen them stealing tokens and credentials and then pivoting to the cloud. All of that takes time, right? Which indicates that the attacker has a lot of knowledge on how to do this in a stealthy way. But if we think in terms of attack chains, it also helps us understand where we could have better opportunities to catch these types of activities.

Michael: We've set the stage to understand what exactly took place, and a lot of people have talked about the methodology and the attack life cycle. They had a plan; they weren't especially advanced in the way they leveraged the tools, but they were very deliberate about combining multiple somewhat novel methods to make use of the vulnerability. More important was the amount of effort they put into planning and the amount of time they spent trying not to be seen. We look at telemetry all the time, whether it's in a SIEM tool or an EDR tool, and we need those pieces of telemetry that tell us what's happening, and they were very stealthy in the way they leveraged their techniques.

Let’s talk a little bit about what they did that was unique to this specific attack and then we’ll talk more about how we can better define our defenses and prevention around what we learned. 

Ismael: Yep, absolutely! One of the interesting things we have seen recently is how they disassociated stage one and stage two to make sure that stage one, the backdoor DLL, wasn't going to be detected or burnt. So once again, you were talking about the long game. They were planning, they were architecting their attack for the long game. Even if you found an artifact on a specific machine, it would be harder for you to trace it back to the original backdoor. So they could maintain persistence in the environment for quite some time. I know this is not necessarily new. We have been telling defenders for a long time: you need to focus on finding persistence, because attackers need to stay in the environment.

We need to look at command and control, but obviously these techniques are evolving. They went to great lengths to ensure that the artifacts, the indicators of compromise, were different on each of these systems for stage two, and at this point we know they used Cobalt Strike beacons. Each of these beacons was unique, not just for each organization, which would make sense, but for each computer within each organization. What does that mean for a SOC? Well, imagine you're responding to this: you find some odd behavior coming out of a machine, you look at the indicators, and what are you going to do next? Scoping, right? Let's see where else in my network I'm seeing activity going to that domain, to those IPs, or those registry keys, or that, you know, WMI consumer, for example. But the truth is that those indicators were not used anywhere else, not even in your own environment. So that was interesting.

Michael: Given that we don't have specific indicators we could attribute to something malicious at that stage, what we do know is that they leveraged common protocols in an uncommon way. Much of the C2 activity, including partial exfiltration, was done using DNS. For organizations that aren't effectively monitoring DNS traffic – DNS taking place on non-standard ports or, more importantly, the volume of DNS originating from machines that don't typically generate it – volumetric analysis can tell us a lot; there is real heuristic value we can leverage for detection. What else should we be thinking about in terms of the protection side of things, this abuse of trust?
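
As one hedged illustration of spotting a common protocol used in an uncommon way, the sketch below scores DNS query names for possible tunneling or exfiltration using label length and Shannon entropy. The thresholds and sample queries are illustrative only.

```python
# Minimal sketch: score DNS query names for possible tunneling/exfiltration by
# looking at label length and Shannon entropy. Thresholds are illustrative only.
import math
from collections import Counter

def shannon_entropy(text):
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_exfil(qname, max_label_len=40, entropy_threshold=4.0):
    """Flag queries with unusually long, high-entropy leftmost labels."""
    label = qname.split(".")[0]
    return len(label) > max_label_len and shannon_entropy(label) > entropy_threshold

if __name__ == "__main__":
    queries = [
        "www.example.com",
        "7gq0vz1kq9x2r4m8t6p0a3c5e7h9j1l3n5p7r9t1v3x5z7b9d1f3h5j7.updates.example",
    ]
    for q in queries:
        if looks_like_exfil(q):
            print("Suspicious DNS query:", q)
```

A check like this is only a heuristic; in a real deployment it would sit alongside the volumetric and domain-age checks discussed earlier rather than replace them.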

We trusted an application; we trusted a vendor. This was a clear abuse of that. Zero Trust would be one methodology to apply here, incorporating micro-segmentation, explicit verification and, most importantly, least privilege. I also think about the fact that we're giving these applications rights and privileges in our environment, including administrative privileges. We need to make sure that we're monitoring both the user accounts and the service accounts being utilized by these applications, specifically so that we can prescribe boundaries, walls and barriers around what they have access to. What else can we do in terms of detection or providing visibility for these types of attacks?

Ismael: When we're talking about a complicated or advanced attack, I like to think in terms of frameworks like the NIST Cybersecurity Framework, for example, which talks about prevention, detection, and response but also about identifying the risks and assets first. If you look at it from that perspective and look at an attack chain, even though some aspects of this attack were very advanced, there are always limitations from the attacker's perspective. There's no such thing as the perfect attack, so be aware of the perfect attack fallacy. There's always something the attacker is going to do that can help you detect them. With that in mind, think about putting the MITRE ATT&CK behaviors, tactics and techniques on one side of a matrix and, on the other side, the NIST Cybersecurity Framework functions: identify, protect, detect.
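
As a toy illustration of the matrix Ismael describes (ATT&CK tactics on one axis, framework functions on the other), the sketch below records capabilities per cell and lists the gaps. The entries are placeholders, not recommendations.

```python
# Toy matrix: MITRE ATT&CK tactics vs. a few NIST CSF functions, with your own
# capability entries in each cell. Entries below are placeholders only.
COVERAGE = {
    "initial-access":  {"identify": "BCP asset list",    "protect": "email/web filtering", "detect": "phishing analytics"},
    "persistence":     {"identify": "crown-jewel map",   "protect": "app allow-listing",   "detect": "autorun/WMI monitoring"},
    "defense-evasion": {"identify": "logging inventory", "protect": "tamper protection",   "detect": ""},  # a gap
}

def gaps(coverage):
    """List (tactic, function) cells with no capability recorded."""
    functions = ("identify", "protect", "detect")
    return [(t, f) for t, caps in coverage.items() for f in functions if not caps.get(f)]

print(gaps(COVERAGE))   # -> [('defense-evasion', 'detect')]
```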

Some of the things I would suggest start with identifying the assets at risk, and I always talk about BCP, business continuity planning. Sometimes we work in silos and we don't leverage information that is already in the organization and that can point you to the crown jewels. You can't protect everything, but you need to know what to protect and how the information flows. For example, where are your soft spots, where are your vendors' products located on the network, and how do they get updated? That will help you determine or define a defensible, secure architecture that protects what matters: the flow of the data.

When protection fails, and it could be a firewall rule or any other type of protection, the attempts to bypass it can be turned into detections. Visibility across your environment is very important; that doesn't mean just managed devices, it also means the network, endpoints, and servers. Attackers are going to go after the servers, the domain controllers, right? Why? Because they want to steal those credentials, those identities used elsewhere, and maybe pivot to the cloud. So having enough visibility across the network is important, which means having the cameras pointed at the right places. That is where EDR or XDR can come into play, products that keep that telemetry and give you visibility of what's going on and potentially detect the attack.

Michael: I think it's important, as we conclude our discussion, to chat about the fact that telemetry comes in various flavors; more importantly, both real-time and historical telemetry are of significant value, not only on the detection side, but on the forensic investigation and scoping side, to understand exactly where an adversary may have landed. It's not just having the telemetry accessible; sometimes it's the lack of telemetry that is the indicator, telling us when logging gets disabled on a device, when we stop hearing from it, and when the SIEM starts seeing a gap in its visibility to a specific asset. That's why the combination of real-time endpoint protection technologies deployed on both endpoints and servers, and the historical telemetry we typically consume in analytics frameworks and technologies like SIEM, is so valuable.

Ismael: Absolutely, and to reiterate the point: find those places where attackers are going to be and where they can be spotted more easily. If you look at the whole attack chain, maybe the initial vector is harder to find, but start looking at how they got privileges, their escalation, and their persistence. Michael, you mentioned cleaning logs; apparently they were disabling the audit logs by using auditpol on the endpoint, or creating new firewall rules on the endpoints. If you consume these events, ask yourself: why would somebody temporarily disable event logging, turning it off and then back on again after some time? Well, they were doing it for a reason.
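
As a hedged sketch of turning that observation into a detection, the snippet below scans process-creation style records for auditpol being used to turn auditing off, plus 'audit log cleared' events. The event IDs (4688, 1102) follow standard Windows Security log conventions, but the record format shown here is an assumption for illustration.

```python
# Minimal sketch: look for auditpol being used to disable auditing and for
# 'security log cleared' events. Record fields (event_id, host, command_line)
# are assumptions about how your pipeline presents the data.
import re

DISABLE_PATTERN = re.compile(r"auditpol(\.exe)?\s+/set\s+.*?/(success|failure):disable",
                             re.IGNORECASE)

def tampering_events(events):
    """Yield events that look like audit tampering."""
    for ev in events:
        if ev.get("event_id") == 1102:                      # security log cleared
            yield ev
        elif DISABLE_PATTERN.search(ev.get("command_line", "")):
            yield ev

if __name__ == "__main__":
    sample = [
        {"event_id": 4688, "host": "dc-01",
         "command_line": 'auditpol.exe /set /category:"Detailed Tracking" /success:disable'},
        {"event_id": 1102, "host": "dc-01", "command_line": ""},
    ]
    for ev in tampering_events(sample):
        print("Possible audit tampering on", ev["host"], "event", ev["event_id"])
```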

Michael: Right. So we’re going to conclude our discussion, hopefully this was informative. Please subscribe to our Securing Tomorrow blog where you can keep up to date with all things SOC related and feel free to visit McAfee.com/SOCwise for more SOC material from our experts. 

 

The post SOCwise Series: Practical Considerations on SUNBURST appeared first on McAfee Blogs.

The Road to XDR

By Kathy Trahan

XDR (eXtended Detection and Response) is a cybersecurity acronym being used by most vendors today.  It is not a new strategy. It’s been around for a while but the journey for customers and vendors has been slow for many reasons. For McAfee, XDR has been integral to our vision, strategy and design philosophy that has guided our solution development for many years. Understanding our road to XDR can help your organization map your XDR journey.

The Building Pressure for XDR

Let's start with why XDR? The cry for XDR reflects where cybersecurity is today, with fragmented, cumbersome and ineffective security, and where folks want to go. In my CISO conversations it is well noted that security operations centers (SOCs) are struggling. Disjointed control points and disparate tools lead to ineffective security teams. They allow adversaries to move laterally across the infrastructure undetected, moving intentionally erratically to avoid detection. Analysts only know this if they manually connect a thousand dots, which is time consuming and leaves adversaries with ample dwell time to do damage. It's no secret: there is a shortage of security expertise, and the teams that do exist are regularly put to the test. Their investigations are cumbersome, highly manual, and riddled with blind spots. It's nearly impossible to prioritize efforts, leaving the SOC simply buried in reactive cycles and alert fatigue. Bottom line: SOC metrics are getting worse, while adversaries are becoming more sophisticated and creative in carrying out their mission.

XDR has the potential to be a one-stop solution to alleviating these SOC issues and improving operational inefficiencies.

XDR Options

Many cybersecurity providers are trying to offer an XDR capability of some sort. They promise to provide visibility and control across all vectors, and offer more analysis, context and automation to obtain a faster and better response when reacting to a threat. Point players are limited to expertise in their domain (endpoint or network) and can't offer a critical, proven cross-portfolio platform. After all, can your endpoint platform offer true XDR functionality if it's not also connected to network, cloud and web?

McAfee’s long-time mantra has been Better Together. That mantra underscores our commitment to deliver comprehensive security that works cohesively across all threat vectors – device, network, web and cloud and with non-McAfee products.  Industry analysts and customers agree that McAfee is well positioned to deliver a solid XDR offering given our platform strategy and portfolio.

There is more to the McAfee XDR Story

Now, what if you had that same comprehensive XDR capability that not only offered visibility and control across the vectors, but also allowed you to get ahead of the adversary, empowering you to be more proactive? It could give you a heads-up on threats that are likely to target you based on global and industry trends and on what your local environment looks like. With this highly credible prediction comes prescribed guidance on how to counter the threat before it hits you. Imagine it also supplies prescriptive actions you can take to protect your users, data, applications and devices, spanning from device to cloud. Other XDR conversations can't take the discussion to this level of proactivity. McAfee can, with our recently announced MVISION XDR.

Not only does McAfee take XDR to the next level, but it also helps you better mitigate cyber risk by enabling you to prioritize and focus on what matters most. What if your threat response were prioritized based on the impact to the organization? You need to understand what the attackers are targeting. How close are they to the most sensitive data, based on the users and devices involved? MVISION XDR offers this context and data-awareness to focus your analysts on what counts. For example, a threat that jeopardizes sensitive data on a finance executive's device will automatically take priority over a possible threat on a general-purpose device with no sensitive data. This data-awareness is not noted well in other XDR conversations, but it is with the recently announced MVISION XDR.

Let’s look at McAfee’s journey and investment with XDR and how we got to this exceptional XDR approach.

McAfee XDR Journey

McAfee's XDR journey did not start recently simply because a buzzword appeared that needed to be addressed. As noted earlier, McAfee's mantra of Better Together sets the stage for a unified security approach, which is core to the XDR promise. McAfee recognized early on that a multi-vendor security ecosystem is a key requirement for building a defense-in-depth security practice. The OpenDXL open-source community delivered the Data Exchange Layer (DXL) message bus architecture. This enabled our diverse ecosystem of partners, from threat intelligence platforms to orchestration tools, to use a common transport mechanism and information exchange protocol. Most enterprise security architectures will be a heterogeneous mix of various security solutions. McAfee is one of the founding members of the Open Cybersecurity Alliance (OCA), where we contributed our DXL ontology, enabling participating vendors not only to communicate vital threat details but also to inform all connected multi-vendor security solutions what to do.

Realizing that EDR is network-blind and SIEM is endpoint-blind, we integrated McAfee EDR and SIEM. McAfee continues to deliver XDR capabilities by bringing multiple telemetry sources onto a platform with a single console for analytics and investigation, driving remediation decisions with automatic enforcement across the enterprise. When you combine MVISION XDR, the first proactive, data-aware and open XDR, with the released MVISION Marketplace and APIs that further support the open security ecosystem for XDR capabilities, organizations have a solid starting point to advance their visibility and control across their entire cyber infrastructure.

Even before all the XDR hype, McAfee customers were already on the XDR path. They have already gained XDR capabilities and are positioned to grow into more. I encourage you to check out the accompanying video.

The post The Road to XDR appeared first on McAfee Blogs.

How OCA Empowers Your XDR Journey

By Kathy Trahan

eXtended Detection & Response (XDR) has become an industry buzzword promising to take detection and response to new heights and improving security operations effectiveness. Not only are customers and vendors behind this but industry groups like Open Cybersecurity Alliance (OCA) share this same goal and there are some open projects to leverage for this effort.

XDR Promise

Let's start with an understanding of XDR. There is a range of XDR definitions, but at the end of the day there are core desired capabilities and outcomes.

  • Go beyond the endpoint with advanced and automated detection and response capabilities, and cover all vectors—endpoints, networks, cloud, etc. automatically aggregating and correlating insights in a unified view.

Benefit: Remove the siloes and reduce complexity.  Empower security operations to respond and protect more quickly.

  • Enable security functions to work together to share intelligence and insights, and coordinate actions.

Benefit: Deliver faster and better security outcomes.

This requires security functions to be connected to create a shared data lake of insights and to synchronize detection and response capabilities across the enterprise.  The Open Cybersecurity Alliance (OCA) shares this vision to easily bring interoperability between security products and simplify integration across the threat lifecycle.   OCA enables this with several open source projects available to the industry.

OCA Projects Enabling XDR

Create a Simple Pathway for Security to Work Together

In order to connect security solutions, a consistent and easy-to-use pathway is needed. Contributed by McAfee, OpenDXL Ontology is a common messaging format that enables real-time data exchange and allows disparate security functions to coordinate and orchestrate actions. It builds on other common open standards for message content (OpenC2, STIX, etc.). Vendors and organizations can use the categorized set of messages to perform actions on cybersecurity products, and notifications to signal when significant security-related events occur. There are multiple communication modes, one-to-one or one-to-many. In addition, there is a centralized authentication and authorization model between security functions. Some examples include but are not limited to:

  • Endpoint solution alerts all network security solutions to block a verified malicious IP and URL addresses.
  • Both endpoint and web security solutions detect suspicious behavior on certain devices calling out to a URL address. Investigation is desired but more time is needed to do so. A ticket is automatically created on the IT service desk and select devices are temporarily quarantined from the main network to minimize risk.

Sample code on OCA site demonstrates how to integrate the ontology into existing security products and related solutions. The whole mantra here is to integrate once and be able to share information with all the tools/products that are leveraging OpenDXL Ontology.
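
As a rough sketch of what such an integration can look like (this is not the OCA sample code itself), the snippet below publishes a notification using the publicly documented OpenDXL Python client. The topic name, payload and the path to dxlclient.config are hypothetical; adjust them to your own fabric.

```python
# Hedged sketch: publish an event on the DXL fabric so subscribed tools can act.
# Topic and payload are invented for illustration; config path is assumed.
from dxlclient.client import DxlClient
from dxlclient.client_config import DxlClientConfig
from dxlclient.message import Event

config = DxlClientConfig.create_dxl_config_from_file("dxlclient.config")

with DxlClient(config) as client:
    client.connect()
    # Hypothetical topic: tell subscribed network tools to block a verified malicious IP.
    event = Event("/example/threat/v1/block")
    event.payload = '{"ip": "198.51.100.7", "action": "block"}'.encode()
    client.send_event(event)
```

Because the ontology standardizes the message categories, any other tool subscribed to that topic can consume the same notification without a point-to-point integration.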

OpenDXL is the open initiative from which OpenDXL Ontology was initially derived. The Data Exchange Layer (DXL) technology developed by McAfee is being used by 3,000 organizations today and is the transport layer used to share information in near real time. OpenDXL technology is also the foundation of McAfee's MVISION Marketplace, where organizations may easily compose their security actions and fulfill the XDR promise of working together.

Someone who has followed DXL may ask what makes OpenDXL Ontology different from DXL. DXL is the communication bus; OpenDXL Ontology is the common language that enables easy and consistent sharing and collaboration between the many different tools on the DXL pathway.

Normalize Cyber Threat Data for a Better Exchange

To make the exchange of threat intelligence between security tools easier, one needs to homogenize the data so it may be easily read and analyzed. Contributed by IBM, STIX-Shifter is an open-source Python patterning library that normalizes data across domains. Structured Threat Information Expression (STIX™) is a language and serialization format used to exchange cyber threat intelligence (CTI). Many organizations have adopted STIX to make better sense of cyber threat intelligence.

STIX enables organizations to share CTI with one another in a consistent and machine-readable manner represented with objects and relationships stored in JavaScript Object Notation (JSON).  STIX-Shifter uses STIX Patterning to return results as STIX Observations.  This allows security communities to better understand what computer-based attacks they are most likely to see, anticipate and/or respond to those attacks faster and more effectively.  What is unique is STIX-Shifter’s ability to search for all three data types—network, file, and log.  This allows you to create complex queries and analytics across many domains like Security Information and Event Management (SIEM), endpoint, network and file levels.
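
As a small illustration of what STIX content looks like in practice, here is a hedged example built with the open-source stix2 Python library; the indicator values are made up, and this is not output from STIX-Shifter itself. STIX-Shifter's role is to translate patterns like this one into native queries for SIEM, endpoint and other data sources.

```python
# Illustrative only: create a STIX indicator with the open-source `stix2` library.
# The domain and description are invented for this example.
from stix2 import Indicator

indicator = Indicator(
    name="Possible SUNBURST-style C2 domain",
    description="Outbound DNS to a suspicious, newly registered domain",
    pattern="[domain-name:value = 'suspicious-updates.example']",
    pattern_type="stix",
)
print(indicator.serialize(pretty=True))   # machine-readable JSON, ready to share
```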

STIX is designed to improve many different capabilities, such as collaborative threat analysis, automated threat exchange, automated detection and response, and more.  Here is a great Introduction to STIX-Shifter video (just under 7 minutes) to watch.

Achieve Compliance with Critical Interoperable Communication

Security Content Automation Protocol Version 2 (SCAP v2) is a data collection architecture that allows continuous, real-time monitoring for configuration compliance and detects the presence of vulnerable versions of software on cyber assets. It offers transport protocols to enable secure, interoperable communication of security automation information, allowing more active responses to security posture changes as they occur. SCAP v2 originates from the National Institute of Standards and Technology (NIST).

To fully realize the benefits of an evolving XDR strategy, enterprises must ensure the platform they select is built atop an open and flexible architecture with a broad ecosystem of integrated security vendors. McAfee’s innovation and leadership in the Open Cybersecurity Alliance provides customers the confidence that as their security environment evolves, so too will their ability to effectively integrate all relevant technologies, the telemetry they generate and the security outcomes they provide.

If your organization aspires to XDR, the OCA projects bring the technologies to help unite your security functions.  Many vendors are leveraging the OCA in their XDR ecosystems. Leverage the projects and join OCA if you want to influence and contribute to open security working together with ease.

The post How OCA Empowers Your XDR Journey appeared first on McAfee Blogs.

SOCwise: A Security Operation Center (SOC) Resource to Bookmark

By Michael Leland

Core to any organization is managing cyber risk with a security operations function, whether it be in-house or outsourced. McAfee has been, and continues to be, committed to protecting cyber assets. We are dedicated to empowering security operations, and with this dedication comes expertise and passion. Introducing SOCwise, a monthly series of blogs, podcasts and talks driven by two highly experienced and devoted security operations professionals. This is an ongoing resource of helpful advice on SOC issues, distinct SOC functional lessons, best practices learned from a range of projects and customers, and perspectives on the future of security operations. In addition, we will invite guests to contribute to this series.

Meet the SOCwise

From Michael Leland, Technical Director of Security Operations, McAfee

From the perspective of a ‘legacy SIEM’ guy I can tell you that there’s nothing more important to a security analyst than intelligence. Notice I didn’t say ‘data’ or ‘information’ – I didn’t even say ‘threat intelligence’. I’m talking about ‘Situational Awareness’. I’m specifically talking about business, user and data context that adds critical understanding and guidance in support of making more timely, accurate or informed decisions related to a given security event. A typical SOC analyst might deal with dozens of incidents each shift – some requiring no more than a few minutes and even fewer clicks to quickly and accurately determine the risk and impact of potential malicious activities. Some incidents require much more effort to triage in hopes to understand intent, impact and attribution.

More often we find the role of SOC analyst to be one of data wrangler – asking and answering key questions of the ‘data’ to determine if an attack is evident and if so, what is the scope and impact of the adversarial engagement. Today’s modern SOC is evolving from one of centralized data collection, information dissemination and coordination of intelligence – one where each stakeholder in security was a part of the pre-determined set of expectations throughout the evaluation and implementation process – to a fully distributed cast of owners/creators (application development, operations, analysts, transformation architects, management) where the lines of authority, expectation and accountability have blurred sometimes beyond recognition.

How can a modern SOC maintain the highest levels of advanced threat detection, incident response and compliance efficacy when it may no longer have all (or sometimes even some) of the necessary context with which to turn data into intelligence? Will the Security Operations Centers of the future resemble anything like the ones we built in previous years? From the massive work-from-home migration brought on by an unexpected pandemic to cloud transformation initiatives that are revolutionizing the modern enterprise, the entire premise of the SOC as we know it is being slowly eroded. These are just some of the questions we will try to answer in this blog series.

From Ismael Valenzuela, Senior Principal Engineer, McAfee

I have worked for 20 years in this industry that we once used to call, information security. During this time, I have had the opportunity to be both on the offense and the defense side of the cyber security coin, as a practitioner and as a consultant, as an architect and as an engineer, as a student as well as a SANS author & instructor. I want to believe that I have learned a few things along the way. For example, as a penetration tester and a red teamer, I have learned that there is always a way in, that prevention is ideal, and that detection is a must. As a security architect I have learned that a defensible architecture is all about the right balance between prevention, monitoring, detection and response. As an incident responder I learned that containing an adversary is all about timing, planning and strategy. As a security analyst I have learned the power of automation and of human-machine teaming, to do more analysis and less data gathering. As a threat hunter I have learned to be laser focused on adversarial behaviors, and not on vulnerabilities. And as a governance, risk and compliance consultant, that security is all about tradeoffs, about cost and benefit, about being flexible, adaptable and realizing that for most of our customers, security is not their core business, but something they do to stay in business. To summarize 20 years in a few phrases is challenging, but no one has summarized it better than Bruce Schneier in my opinion, who wrote, precisely 20 years ago: “security is a process, not a product”.

And I am sure you will agree with me that processes have changed a lot over the last 20 years. This transformation, which had already started with the adoption of cloud and DevOps technologies, is now creating an interesting and unforeseen circumstance. Just when security operations barely found its footing, right when it was finally coming out from under the realm of IT, garnering respect and budget to achieve desired outcomes, just when we felt that we had made it, we are told to pack our things, leave the physical boundaries of the SOC and have everyone work remotely.

If this didn’t introduce enough uncertainty, I read that Gartner predicts that 85% of data centers will be gone by 2025. So, I can’t help but wonder: is this the end of it? Is the SOC dead as we know it? What is the future of SecOps in this new paradigm? How will roles change?  Will developers own security in a ‘you code it, you own it’ fashion? Is it realistic to expect a fully automated SOC anytime soon?

Please join us in this new SOCwise series as Michael and I explore answers to these and more questions on the future and the democratization of SOC and SecOps.

The post SOCwise: A Security Operation Center (SOC) Resource to Bookmark appeared first on McAfee Blogs.

The Deepfakes Lab: Detecting & Defending Against Deepfakes with Advanced AI

By Sherin Mathews

Detrimental lies are not new. Even misleading headlines and text can fool a reader.  However, the ability to alter reality has taken a leap forward with “deepfake” technology which allows for the creation of images and videos of real people saying and doing things they never said or did. Deep learning techniques are escalating the technology’s finesse, producing even more realistic content that is increasingly difficult to detect.

Deepfakes began to gain attention when a fake pornography video featuring a “Wonder Woman” actress was released on Reddit in late 2017 by a user with the pseudonym “deepfakes.” Several doctored videos have since been released featuring high-profile celebrities, some of which were purely for entertainment value and others which have portrayed public figures in a demeaning light. This presents a real threat. The internet already distorts the truth as information on social media is presented and consumed through the filter of our own cognitive biases.

Deepfakes will intensify this problem significantly. Celebrities, politicians and even commercial brands can face unique forms of threat tactics, intimidation, and personal image sabotage. The risks to our democracy, justice, politics and national security are serious as well. Imagine a dark web economy where deepfakers produce misleading content that can be released to the world to influence which car we buy, which supermarket we frequent, and even which political candidate receives our vote. Deepfakes can touch all areas of our lives; hence, basic protection is essential.

How are Deepfakes Created?

Deepfakes are a cutting-edge advancement of Artificial Intelligence (AI), often leveraged by bad actors who use the technology to generate increasingly realistic and convincing fake images, videos, voice, and text. These videos are created by the superimposition of existing images, audio, and videos onto source media files by leveraging an advanced deep learning technique called "Generative Adversarial Networks" (GANs). GANs are a relatively recent concept in AI which aims to synthesize artificial images that are indistinguishable from authentic ones. The GAN approach brings two neural networks to work simultaneously: one network, called the "generator", draws on a dataset to produce a sample that mimics it. The other network, known as the "discriminator", assesses the degree to which the generator succeeded. Iteratively, the assessments of the discriminator inform the adjustments of the generator. The increasing sophistication of GAN approaches has led to the production of ever more convincing and nearly impossible-to-expose deepfakes, and the result far exceeds the speed, scale, and nuance of what human reviewers could achieve.
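
To make the generator/discriminator loop concrete, here is a toy GAN sketch in PyTorch that learns a simple one-dimensional distribution. It is nothing like deepfake-scale image synthesis, and all hyperparameters are illustrative.

```python
# Toy GAN: the generator learns to mimic samples from a 1-D Gaussian while the
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0      # "authentic" samples
noise = lambda n: torch.randn(n, 8)                      # generator input

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real, fake = real_data(64), generator(noise(64)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call its output real.
    fake = generator(noise(64))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("generated mean:", generator(noise(1000)).mean().item())  # should approach 2.0
```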

McAfee Deepfakes Lab Applies Data Science Expertise to Detect Bogus Videos

To mitigate this threat, McAfee today announced the launch of the McAfee Deepfakes Lab to focus the company’s world-class data science expertise and tools on countering the  deepfake menace to individuals, organizations, democracy and the overall integrity of information across our society. The Deepfakes Lab combines computer vision and deep learning techniques to exploit hidden patterns and detect manipulated video elements that play a key role in authenticating original media files.  

To ensure that the prediction results of the deep learning framework, and the reasoning behind each prediction, are understandable, we spent a significant amount of time visualizing the layers and filters of our networks, then added a model-agnostic explainability framework on top of the detection framework. Having explanations for each prediction helps us make an informed decision about how much we trust the image and the model, as well as providing insights that can be used to improve the latter.
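
The post does not name the specific explainability technique used. As one hedged illustration of the model-agnostic idea, the sketch below uses occlusion sensitivity: slide a grey patch over a frame and record how much the detection score drops, producing a coarse heatmap. The detector here is a placeholder, not the Deepfakes Lab model.

```python
# Illustrative occlusion-sensitivity sketch: regions whose occlusion changes the
# score the most are the regions the (placeholder) detector relies on.
import numpy as np

def detector(frame):
    """Placeholder: returns a fake 'deepfake probability' for one frame.
    A real pipeline would call the trained detection network here."""
    return float(frame[40:90, 40:90].mean() / 255.0)   # pretend the model keys on this region

def occlusion_heatmap(frame, patch=16, stride=16):
    base = detector(frame)
    h, w = frame.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = frame.copy()
            occluded[y:y + patch, x:x + patch] = 128       # grey out one patch
            heat[i, j] = base - detector(occluded)          # drop in score = importance
    return heat

frame = np.random.randint(0, 255, size=(128, 128, 3)).astype(np.uint8)  # stand-in video frame
heat = occlusion_heatmap(frame)
print("Most influential patch (row, col):", np.unravel_index(heat.argmax(), heat.shape))
```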

We also performed detailed validation and verification of the detection framework on a large dataset and tested detection capability on deepfake content found in the wild. Our detection framework was able to detect a recent deepfake video of Facebook’s Mark Zuckerberg giving a brief speech about the power of big data. The tool not only provided an accurate detection score but generated heatmaps via the model-agnostic explainability module highlighting the parts of his face contributing to the decision, thereby adding trust in our predictions.

Such easily available deepfakes reiterate the challenges that social networks face when it comes to policing manipulated content. As advancements in GAN techniques produce very realistic looking fake images, advanced computer vision techniques will need to be developed to identify and detect advanced forms of deepfakes. Additionally, steps need to be taken to defend against deepfakes by making use of watermarks or authentication trails.

Sounding the Alarm

We realize that news media do have considerable power in shaping people’s beliefs and opinions. As a consequence, their truthfulness is often compromised to maximize impact. The dictum “a picture is worth a thousand words” accentuates the significance of the deepfake phenomenon. Credible yet fraudulent audio, video, and text will have a much larger impact that can be used to ruin celebrity and brand reputations as well as influence political opinion with terrifying implications. Computer vision and deep learning detection frameworks can authenticate and detect fake visual media and text content, but the damage to reputations and influencing opinion remains.

In launching the Deepfakes Lab, McAfee will work with traditional news and social media organizations to identify malicious deepfake videos during this crucial 2020 national election season and help combat this new wave of disinformation associated with deepfakes.

In our next blog on deepfakes, we will demonstrate our detailed detection framework. With this framework, we will be helping to battle disinformation and minimize the growing challenge of deepfakes.

To engage the services of the McAfee Deepfakes Lab, news and social media organizations may submit suspect video for analysis by sending content links to media@mcafee.com.

 

The post The Deepfakes Lab: Detecting & Defending Against Deepfakes with Advanced AI appeared first on McAfee Blogs.

IoT Security Fundamentals: IoT vs OT (Operational Technology)

By Dimitar Kostadinov

Introduction: Knowing the Notions  Industrial Internet of Things (IIoT) incorporates technologies such as machine learning, machine-to-machine (M2M) communication, sensor data, Big Data, etc. This article will focus predominantly on the consumer Internet of Things (IoT) and how it relates to Operational Technology (OT). Operational Technology (OT) is a term that defines a specific category of […]

The post IoT Security Fundamentals: IoT vs OT (Operational Technology) appeared first on Infosec Resources.

