
What the MITRE Engenuity ATT&CK® Evaluations Means to SOC Teams

By Kathy Trahan

SOCwise Weighs In

When the infamous Carbanak cyberattack rattled an East European bank three years ago this month, few would have guessed it would later play a starring role in the MITRE Engenuity™ enterprise evaluations of cybersecurity products from ourselves and 28 other vendors. We recently shared the results of this extensive testing, and in a SOCwise discussion we turned to our SOCwise experts for insights into what this unprecedented exercise may mean for SOC teams assessing both strategy concerns and their tactical effectiveness.

Carbanak is a clever opponent known for innovative attacks on banks. FIN7 uses similar malware and a strategy of effective espionage and stealth to target U.S. retail, restaurant and hospitality sectors, according to MITRE Engenuity™, and both were highlighted in this emulation. These notorious actors have reportedly stolen more than $1 billion worldwide over the past five years. An annual event, the four-day ATT&CK Evaluation spanned 20 major steps and 174 sub-steps of the MITRE framework.

The first thing to realize about this exercise is that few enterprises could ever hope to match its scope. What do you get when you match up red and blue teams? “I have not been through an exercise like that in an organization with both the red team and blue teams operationally trying to determine what their strengths and weaknesses are,” said Colby Burkett, McAfee XDR architect, a participant in the event, on our recent SOCwise episode. “And that was fantastic.”

A lot of SOC teams conduct vulnerability assessments and penetration testing, but never emulate these types of behaviors, noted Ismael Valenzuela, McAfee’s Sr. Principal Engineer and co-host of SOCwise. And, he adds that many organizations lack the resources and skills to do purple-teaming exercises.

While our SOCwise team raved about the value of conducting broad-scale purple-team exercises, they cautioned that “visibility” should not be emphasized over “actionability.” McAfee, which scored 87% on visibility, one of the industry’s best results, turned in a remarkable 100% on prevention in the MITRE Engenuity™ evaluations.

Illuminating Visibility

When we think about visibility, we think about how much useful information we can provide to SOC analysts when an attack is underway. There may be a tsunami of attack data entering SOCs, but it’s only actionable when the data that’s presented to analysts is relevant, noted Jesse Netz, Principal Engineer at McAfee.

A well-informed SOC finds a sweet spot on an axis where the number of false positives is low enough and the true positives are high enough “where you can actually do something about it,” added Netz.

He believes that for SOC practitioners, visibility is only part of the conversation. “How actionable is the data you’re getting? How usable is the platform in which that data is being presented to you?”

For example, in the evaluation we saw McAfee’s MVISION EDR preserve actionability and reduce alert fatigue. We excelled in the five capabilities that matter most to SOC teams: time-based security, alert actionability, detection in depth, protection, and visibility.

If you can’t do anything about the information you obtain, your results aren’t really useful in any way. In this regard, prevention also trumps visibility. “It’s great that we can see and gain visibility into what’s happening,” explained Netz. “But it’s even better at the end of the day as a security practitioner to be able to prevent it.”

Expanding the Scope

The SOCwise team overall applauded the progressively sophisticated approach taken by the MITRE Engenuity™ enterprise evaluations of cybersecurity products—now in its third year. However, our panel of experts noted that this round of testing was more about defending endpoints, rather than cloud-based operations, which are fairly central to defending today’s enterprise. They expect that focus may change in the future.

The MITRE Engenuity™ enterprise evaluations provide a lot of useful data, but they should never be the single deciding factor in a cybersecurity product purchase decision. “Use it as a component of your evaluation arsenal,” advises Netz. “It’ll help to provide kind of statistics around visibility capabilities in this latest round, including some detection capabilities as well, but be focused on the details and make sure you’re getting your information from multiple sources.”

For instance, Carbanak and FIN7 attacks may not be relevant to your particular organization, especially if your operations are centered on the cloud.

While no emulation can perfectly replicate the experience of battling real-time, zero-day threats, McAfee’s Valenzuela believes these evaluations deliver tremendous value to both our customers and our threat content engineers.

 


The post What the MITRE Engenuity ATT&CK® Evaluations Means to SOC Teams appeared first on McAfee Blogs.

SOCwise Series: A Tale of Two SOCs with Chris Crowley

By Ismael Valenzuela

In a recent episode of McAfee’s SOCwise Series, guest security expert Chris Crowley revealed findings of his recent survey of security efforts within SOCs. His questions were designed to gain insight into all things SOC, including how SOCs can accomplish their full potential and how they assess their ability to keep up with security technology. 

Hosts Ismael Valenzuela and Michael Leland tapped into Chris’ security operations expertise as he told “A Tale of Two SOCs.”

“Chris has tremendous experience in security operations,” Ismael said. “I always like people who have experience both on the offensive side and the defensive side. Think red, act blue, right? . . . but I think that’s very important for SOCs. Where does ‘A Tale of Two SOCs’ come from?”

In a reference to the Charles Dickens classic, Chris explained how survey responses fell into two categories: SOCs that had management support and those that did not.

“It’s not just this idea of does management support us. It’s are we effectively aligned with the organization?” Chris said. “And I think that is manifest in the perception of management support or no management support, right? So, I think when people working in a SOC have the sense that they’re doing good things for the organization, their perception is that management is supporting them.”

In this case, Chris explains “A Tale of Two SOCs” also relates to the compliance SOC versus the real security SOC. 

“A lot of it has to do with what the goals were when management set up to fund the SOC, right? Maybe the compliance SOC versus the SOC that’s focused on the security outcomes, on defending, right? There are some organizations that are funding for basic compliance,” Chris said. “[If the] law says we have to do this, we’re doing that. We’re not really going to invest in your training and your understanding and your comprehension. We’re not going to hire really great analysts. We’re just going to buy the tools that we need to buy. We’re going to buy some people to look at monitors, and that’s kind of the end of it.”

One of the easiest and most telling methods of assessing where a SOC sees itself in this tale is having conversations with staff. Chris recommends asking staff whether they feel aligned with management and whether they feel empowered.

“If you feel like you’re being turned into a robot and you pick stuff from here and drop it over there, you’re probably in a place where management doesn’t really support you. Because they’re not using the human being’s capability of synthesis of information and that notion of driving consensus and making things work,” Chris said. “They’re looking more for people who are replaceable to put the bits in the bucket and move through.” 

Chris shared other survey takeaways including how SOCs gauge their value, metrics and tools. 

SOC INDICATORS AND PERCEIVED VALUE 

The survey included hypotheses designed to measure how organizations classify the value of a SOC: 

  • Budget – The majority of respondents did not list budget as a sign of how their organization values them.
  • Skilled Staff – Many valued the hiring of skilled workers as a sign of support for their SOC.
  • Automation and Orchestration – The SOC teams that believed their organizations already supported them through the hiring of skilled staff reported that their biggest challenge was implementing automation and orchestration.

“This showed that as SOC teams met the challenge of skilled staffing, they moved on to their next order of business: Let’s make the computers compute well,” Chris said.

SOC METRICS 

Ismael asked about the tendency for some SOC managers not to report any metrics, and for those that simply report the number of incidents not to report the right metrics. Chris reported that most people said they do provide metrics, but a still-surprising number of people said that they don’t provide metrics at all.

Here’s the breakdown of how respondents answered, “Do you provide metrics to your management?” 

  • Yes – 69
  • No – 24
  • We don’t know – 6

 That roughly a third of respondents either do not report metrics or don’t know if they report metrics was telling to the survey’s author. 

“In which case [metrics] obviously don’t have a central place of importance for your SOC,” Chris said.

Regarding the most frequently used metric – number of incidents – Chris speculated that several SOCs he surveyed are attempting to meet a metric goal of zero incidents, even if it means they’re likely not getting a true reading of their cyber security effectiveness.  

“You’re allowed to have zero incidents in the environment. And if you consistently meet that, then you’re consistently doing a great job,” Chris said. “Which is insane to me, right? Because we want to have the right number of incidents. If you actually have a cyber security problem … you should want to know about it, okay?”

Among the respondents who said their most common metric is informational, the information sought from a “zero incidents” metric doesn’t actually have much bearing on the performance or the value of what the SOC is doing.

“The metrics tend to be focused on what can we easily show, as opposed to what truly depicts the value that the SOC has been providing for the org,” Chris said. “And at that point you have something you can show to get more funding and more support over time.”

Chris suggests better use of metrics can truly depict the value that the SOC is providing the organization and justify the desired support it seeks. 

“One which I like, which is not an easy metric to develop, is actually loss prevention. If I can actually depict quantitatively – it will not be precise, there will be some speculation in that,” Chris said. “But if I can depict quantitatively what the SOC did this month or quarter, where our efforts actually prevented or intervened in things which were going wrong and we stopped damage, that’s loss prevention, right? That’s what the SOC is there for, right? If I just report we had 13 incidents, there’s not a lot of demonstration of value in that. And so always the metrics tend to be focused on what we can easily show as opposed to what truly depicts the value that the SOC has been providing for the org.”
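The loss-prevention metric Chris describes is essentially an expected-value calculation. A minimal sketch of the idea, with entirely hypothetical incident categories, dollar impacts, and likelihoods (none of these figures come from the survey):

```python
# Hypothetical sketch of a "loss prevention" metric: estimate the damage
# the SOC prevented this quarter. All figures below are invented for
# illustration, not real incident data.

# Each intervention: (description, estimated loss had it succeeded,
# likelihood it would have succeeded without SOC action)
interventions = [
    ("ransomware contained pre-encryption",   500_000, 0.6),
    ("BEC wire-fraud attempt blocked",        120_000, 0.8),
    ("credential-stuffing campaign stopped",   40_000, 0.5),
]

def estimated_loss_prevented(events):
    """Expected-value estimate: loss avoided = impact x likelihood."""
    return sum(impact * likelihood for _, impact, likelihood in events)

total = estimated_loss_prevented(interventions)
print(f"Estimated loss prevented this quarter: ${total:,.0f}")
```

As Chris notes, the likelihoods are speculative, but even a rough figure like this demonstrates value in a way that "we had 13 incidents" does not.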

SOC TOOLS 

Michael steered the discussion to the value conversation around incident metrics and their relationship with SOC capacity: How many incidents can you handle? Is it a tools issue, a people issue, or a combination of both? Chris’ study also revealed a subset of tools that respondents leveraged more frequently and that added value by enabling a higher capacity for incident closure.

One question on the survey asked, “Do you use it?”

 “Not whether you like it or not, but do you use it? And do you use it in a way where you have full coverage or partial coverage? Because another thing about technology, and this is kind of a dirty secret in technology applications, is a lot of people buy it but actually never get it deployed fully,” Chris said. 

His survey allowed respondents to reveal their most-used technologies and to grade tools. 

The most commonly used technologies reported in the survey were:

  1. SIEM 
  2. Malware Protection Systems 
  3. Next-gen Firewall 
  4. VPN 
  5. Log management  

Tools receiving the most A grades: 

  • EDR 
  • VPN 
  • Host-based Malware Protection 
  • SIEM 
  • Network Distributed Denial of Service 

Tools receiving the most F grades: 

  • Full PCAP (full packet capture) 
  • Network-Based Application Control 
  • Artificial Intelligence 
  • TLS Intercept 

Chris pointed out that the reasoning behind the F grades may be less a case of failing and more a case of not meeting their full potential. 

“Some of these are newer in this space and some of them just feel like they’re failures for people,” Chris said. “Now, whether they’re technology failures or not, this is what people are reporting that they don’t like in terms of the tech.”

For more findings read or download Chris Crowley’s 2020 survey here. 

Watch this entire episode of SOCwise below.

 

The post SOCwise Series: A Tale of Two SOCs with Chris Crowley appeared first on McAfee Blogs.

Why MITRE ATT&CK Matters?

By Carlos Diaz

MITRE ATT&CK Enterprise is a “knowledge base of adversarial techniques.” In a Security Operations Center (SOC), this resource serves as a progressive framework for practitioners to make sense of the behaviors (techniques) leading to system intrusions on enterprise networks. It is centered on how SOC practitioners of all levels can craft purposeful defense strategies and assess the efficacy of their security investments against that knowledge base.

To enable practitioners to operationalize these strategies, the knowledge base provides the “why” and the “what” with comprehensive documentation that includes the descriptions and relational mappings of the behaviors observed during the execution of malware, or when those weapons were used by known adversaries targeting different victims, as reported by security vendors. It goes a step further by introducing the “how” in the form of adversary emulation plans, which streamline both the design of threat models and the technical resources necessary to test those models – i.e., emulating the behavior of the adversary.
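To make those relational mappings concrete, here is a minimal sketch using a tiny hand-coded slice of the knowledge base. The technique IDs are real ATT&CK identifiers, but the tactic and group attributions shown are simplified for illustration; the full knowledge base is published by MITRE as STIX data.

```python
# A miniature, hand-coded slice of the ATT&CK knowledge base showing the
# shape of its relational mappings: techniques linked to tactics and to
# groups reported to use them. Attributions here are simplified examples.
techniques = {
    "T1059": {"name": "Command and Scripting Interpreter",
              "tactic": "Execution", "groups": ["FIN7", "Carbanak"]},
    "T1566": {"name": "Phishing",
              "tactic": "Initial Access", "groups": ["FIN7"]},
    "T1078": {"name": "Valid Accounts",
              "tactic": "Defense Evasion", "groups": ["Carbanak"]},
}

def techniques_used_by(group):
    """Behaviors attributed to a group -- the seed of an emulation plan."""
    return sorted(tid for tid, t in techniques.items() if group in t["groups"])

print(techniques_used_by("FIN7"))  # ['T1059', 'T1566']
```

An emulation plan starts from exactly this kind of query: enumerate the behaviors attributed to an adversary, then build test cases that exercise each one.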

For scenarios where SOCs may not have the capacity to do this testing themselves, the MITRE Corporation conducts annual evaluations of security vendors and their products against a carefully crafted adversary emulation plan, and it publishes the results for public consumption.  The evaluations can help SOC teams assess both strategy concerns and tactical effectiveness for their defensive needs as they explore market solutions.

This approach is transformative for cybersecurity: it provides an effective way to evolve from the constraints of being solely dependent on IOC-centric or signature-driven defense models to having a behavior-driven capability that lets SOCs tailor their strategic objectives into realistic security outcomes measured through defensive efficacy goals. With a behavior-driven paradigm, the emphasis is on the value of visibility surrounding the events of a detection or prevention action taken by a security sensor – this effectively places context as the essential resource a defender must have available to pursue actionable outcomes.

Cool! So what is this “efficacy” thing all about?

I believe that to achieve meaningful security outcomes our products (defenses) must demonstrate how effective they are (efficacy) at enabling or preserving the security mission we are pursuing in our organizations. For example, to view efficacy in a SOC, let’s see it as a foundation of 5 dimensions:

Detection – Gives SOC Analysts higher event actionability and alert-handling efficiencies with a focus on the most prevalent adversarial behaviors – i.e., let’s tackle the alert-fatigue constraint!
Prevention – Gives SOC Leaders/Sponsors confidence to show risk reduction with minimized impact/severity from incidents with credible concerns – e.g., ransomware or destructive threats.
Response – Gives SOC Responders a capacity to shorten the time between detection and activating the relevant response actions – i.e., knowing when and how to start containing, mitigating or eradicating.
Investigative – Gives SOC Managers a capability to improve the quality and speed of investigations by correlating low-signal clues for TIER 1 staff and streamlining escalation to limited but advanced resources.
Hunting – Enables SOC Hunters to rewind the clock as much as possible and expand discovery across environments for high-value indicators stemming from anomalous security events.

 

So how does “efficacy” relate to my SOC?

Efficacy at the security and technical leadership levels confirms how portfolio investments are expected to deliver the defensive posture of the security strategy. For example, compare your investments today to any of the following:

Strategy (Investment) – Portfolio Focus – Efficacy Goals

Balanced Security

Ability to:
  • Focus on prevalent behaviors
  • Confidently prevent attack chains with relevant impact/severity
  • Provide alert actionability
  • Increase flexibility in response plans based on alert type and impact situation

Caveats:

  • Needs efficacy testing program with adversary emulation plans
 

Detection Focus

Ability to:
  • Focus on prevalent behaviors
  • Provide alert actionability
  • Proactively discover indicators with hunting

Caveats:

  • Requires humans
  • Minimal prevention maturity
  • Requires solid incident response expertise
  • Hard to scale to proactive phases due to prevention maturity

Prevention Focus

Ability to:
  • Confidently prevent attack chains with relevant impact/severity
  • Lean incident response plans
  • Provide alert actionability and Lean monitoring plans

Caveats:

  • Hard to implement across the business without disrupting user experience and productivity
  • Typically for regulated or low tolerance network zones like PCI systems
  • Needs high TCO for the management of prevention products

Response Focus

Ability to:
  • Respond effectively to different scenarios identified by products or reported to the SOC

Caveats:

  • Always reacting
  • Requires humans
  • Hard to retain work staff
  • Unable to spot prevalent behaviors
  • Underdeveloped detection
  • Underdeveloped prevention

 

MITRE ATT&CK matters because it introduces the practical sense-making SOC professionals need to discern attack chains from isolated security events through visibility of the most prevalent behaviors.

Consequently, it allows practitioners to overcome the crucial limitations of reliance on indicator-driven defense models that skew realistic efficacy goals, thereby maximizing the value of a security portfolio investment.

The post Why MITRE ATT&CK Matters? appeared first on McAfee Blogs.

Are You Ready for XDR?

By Kathy Trahan

What is your organization’s readiness for the emerging eXtended Detection and Response (XDR) technology? McAfee just released the first iteration of this technology: MVISION XDR. As XDR capabilities become available, organizations need to think through how to embrace the new security operations technology destined to empower detection and response capabilities. XDR is a journey for people and organizations.

The cool thing about McAfee’s offering is that its XDR capability is built on the McAfee platform of MVISION EDR and MVISION Insights and extends to other McAfee products and third-party offerings. This means that, as a McAfee customer, your XDR journey has already begun.

The core value proposition behind XDR is to empower the SecOps function, which is still heavily burdened with limited staff and resources while the threat landscape roars. This cry is not new. As duly noted in the book Ten Strategies of a World-Class Cybersecurity Operations Center, written quite a few moons ago: “With the right tools, one good analyst can do the job of 100 mediocre ones.” XDR is the right tool.

SecOps empowerment means impacting and changing people and process in a positive manner, resulting in better security outcomes. Organizations must consider and prepare for this helpful shift. Here are three key considerations organizations need to be aware of and ready for:

The Wonder of Harmonizing Security Controls and Data Across all Vectors  

A baseline requirement for XDR is to unify and aggregate security controls and data to elevate situational awareness. Now consider what this means for siloed functions like endpoint, network and web. Let’s say you are an analyst who typically pulls telemetry from separate control points (endpoint, network, web), moving from each tool with a login, to another tool with another login, and so on. Or maybe you only have access to the endpoint tool. To gain insight into the network, you email the network folks with the artifacts you are seeing on the endpoint and ask if there is anything similar they have seen on the edge, and what they make of it. Often there is a delayed response from the network folks, given their priorities. And you call the web folks for their input on what they are seeing.

Enter XDR. What if this information and these insights were automatically given to you on a unified dashboard where situational-awareness analysis has already begun? This reduces the manual pivoting of copying and pasting, emailing, and phone calls. It removes the multiple data sets to manage and the cognitive strain to make sense of them. The collection, triaging, and initial investigative analysis are automated and streamlined. This empowers analysts to reach a quicker validation and assessment. The skilled analyst will still use experience and human intuition to respond to the adversary, but the initial triaging, investigation, and analysis have already been done. In addition, XDR fosters the critical collaboration between network operations and security operations, since adversary movement is erratic across the entire infrastructure.
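The unified-dashboard idea boils down to correlating events from every control point by a shared artifact. A minimal sketch follows; the field names, file hash, IP address, and events are all invented for illustration, and a real XDR does this at scale over a common ontology:

```python
# Sketch of unified telemetry: merge events the analyst used to collect by
# hand from endpoint, network, and web tools into one timeline keyed by a
# shared artifact (here, a file hash). All data is illustrative.
from collections import defaultdict

endpoint = [{"ts": 100, "artifact": "abc123", "event": "process spawned"}]
network  = [{"ts": 102, "artifact": "abc123", "event": "beacon to 203.0.113.7"}]
web      = [{"ts":  98, "artifact": "abc123", "event": "file downloaded"}]

def unify(*sources):
    """Group events from every control point by shared artifact, in time order."""
    timeline = defaultdict(list)
    for source in sources:
        for evt in source:
            timeline[evt["artifact"]].append(evt)
    for events in timeline.values():
        events.sort(key=lambda e: e["ts"])
    return dict(timeline)

story = unify(endpoint, network, web)
for evt in story["abc123"]:
    print(evt["ts"], evt["event"])  # download -> spawn -> beacon, one view
```

The three emails and phone calls described above collapse into one ordered attack story per artifact.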

Actionable Intelligence Fosters Proactive SecOps Efforts (MVISION XDR note-worthy distinction) 

Imagine if your SecOps gained high-priority threat intelligence before the adversary hits and enters your environment. What does that mean for your daily SecOps processes and policy? It removes a significant amount of hunting, triaging and investigation cycles. It simply prioritizes and accelerates the investigation. It answers the questions that matter. Any associated campaign is bubbled up immediately. You may be getting over a hundred high alerts, but one is related to a threat campaign that is likely to hit. It removes the guesswork and prioritizes SecOps efforts. It assesses your environment and the likely impact – what is vulnerable. More importantly, it suggests countermeasures you can take. It moves you from swimming in context to action in minutes.

This brings SecOps to a decision moment faster – do they have the authority to respond? Are they a participant in prevention efforts? Note this topic is Strategy Three in Ten Strategies of a World-Class Cybersecurity Operations Center, where empowering SecOps to make and/or participate in such decisions is highly encouraged. Policies for response decisions and actions vary by organization; the takeaway here is that decision moments come faster and more often with significant research and credible context from MVISION XDR.

Enjoy the Dance Between Security and IT  

XDR is an open, integrated platform. So, what does it mean for people and process if all the pieces are integrated and security functions coordinate efforts? It depends on the pieces that are connected. For example, if SecOps can automatically place a recommendation to update certain systems in the IT service system, it removes the need to log in to the IT system and place a request, or in some cases to call or email IT (eliminating a time-consuming step). There is a heightened need for what-if scenario policies driven by Security Orchestration, Automation and Response (SOAR) solutions. These policies are typically reflected in a manual playbook or a SOAR playbook.

Let’s consider an example: when an email phishing alert comes in, the SOAR automatically (as required by policy/play) compares the alert against others to see if there are commonalities worth noting. If so, the common artifacts are assigned to one analyst rather than distributing separate alerts to many analysts. This streamlines the investigation and response to be more effective and less time-consuming. There are many more examples, but the point is that when you coordinate security functions, the organization must think through how it wants each function to act under specific circumstances – what is your policy for those circumstances?
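As a hedged sketch of that phishing play, the grouping step might look like the following; the alert fields, sender addresses, and URLs are all hypothetical:

```python
# Sketch of the SOAR play described above: instead of routing every phishing
# alert to a different analyst, cluster alerts that share an artifact (here,
# the sender address) so one analyst handles the whole cluster.
from collections import defaultdict

alerts = [
    {"id": 1, "sender": "pay@evil.example", "url": "http://evil.example/a"},
    {"id": 2, "sender": "pay@evil.example", "url": "http://evil.example/b"},
    {"id": 3, "sender": "hr@other.example", "url": "http://other.example/x"},
]

def cluster_by_artifact(alerts, key="sender"):
    """One cluster per shared artifact -> one investigation, not three."""
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[alert[key]].append(alert["id"])
    return dict(clusters)

assignments = cluster_by_artifact(alerts)
# {'pay@evil.example': [1, 2], 'hr@other.example': [3]} -- two queues, not three
```

In a real playbook the clustering key would be richer (payload hash, URL domain, subject fingerprint), but the policy question is the same: under what circumstances do related alerts collapse into one investigation?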

These are just a few areas to consider when you embrace XDR. I hope this initial discussion has started you thinking. We have an online SOC audit where you can assess your SOC maturity and plan where you want to go. Join us for a webinar on XDR readiness, where experts will examine how to prepare to optimize XDR capabilities. We also have a SOC best-practices series, SOCwise, that offers regular advice and tips for your SOC efforts!

 

 

The post Are You Ready for XDR? appeared first on McAfee Blogs.

XDR – Please Explain?

By Rodman Ramezanian

SIEM, we need to talk! 

Albert Einstein once said, “We cannot solve our problems with the same thinking we used when we created them.”

Security vendors have spent the last two decades providing more of the same orchestration, detection, and response capabilities, while promising different results. And as the old adage goes, doing the same thing over and over again whilst expecting different results is? I’ll let you fill in the blank yourself.

Figure 1: The Impact of XDR in the Modern SOC: Biggest SIEM challenges – ESG Research 2020

SIEM! SOAR! Next-Generation SIEM! The names changed, while the same fundamental challenges remained: they all required heavy lifting and ongoing manual maintenance. As noted by ESG Research, SIEM – a baseline capability within SOC environments – continues to present challenges to organisations: it is too costly, exceedingly resource-intensive, requires far too much expertise, and raises various other concerns. A common example of this is how SOC teams still must create manual correlation rules to find the bad connections between logs from different products, applications and networks. Too often, these rules flood analysts with information and false alerts and render the product too noisy to be effective.
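To illustrate why hand-written correlation rules get noisy, here is a deliberately naive sketch of one such rule; the log shapes, host names, and 60-second window are invented for illustration:

```python
# A naive SIEM-style correlation rule: flag any firewall deny within 60
# seconds of a failed login on the same host. Every coincidence in the
# window becomes an alert, true positive or not -- this is the flood the
# text describes. All log data is illustrative.
failed_logins = [{"host": "srv1", "ts": 1000}, {"host": "srv2", "ts": 1010}]
fw_denies     = [{"host": "srv1", "ts": 1030}, {"host": "srv1", "ts": 5000},
                 {"host": "srv2", "ts": 1015}]

def correlate(logins, denies, window=60):
    """Pairwise time-window join across two log sources."""
    return [(l["host"], l["ts"], d["ts"])
            for l in logins for d in denies
            if l["host"] == d["host"] and 0 <= d["ts"] - l["ts"] <= window]

alerts = correlate(failed_logins, fw_denies)
print(len(alerts), "alerts")  # fires on every overlap, malicious or benign
```

Multiply this by hundreds of rules and thousands of hosts and the maintenance and false-positive burden the paragraph describes follows directly.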

The expanding attack surface, which now spans Web, Cloud, Data, Network and more, has also added a layer of complexity. The security industry cannot rely solely on its customers’ analysts to properly configure a security solution with such a wide scope. Implementing only the correct configurations, fine-tuning hundreds of custom log parsers and interpreters, defining very specific correlation rules, developing the necessary remediation workflows, and so much more – it’s all a bit too much.

Detections now bubble up from many siloed tools, too, including Intrusion Prevention Systems (IPS) for network protection, Endpoint Protection Platforms (EPP) deployed across managed systems, and Cloud Access Security Broker (CASB) solutions for your SaaS applications. Correlating those detections to paint a complete picture is now an even bigger challenge.

There is also no R in SIEM – that is, there is no inherent response built into SIEM. You can almost liken it to a fire alarm that isn’t connected to the sprinklers.

SIEMs have been the foundation of security operations for decades, and that should be acknowledged. Thankfully, they’re now being used more appropriately, i.e. for logging, aggregation, and archiving.

Now, Endpoint Detection and Response (EDR) solutions are absolutely on the right track – enabling analysts to sharpen their skills through guided investigations and streamline remediation efforts – but they ultimately suffer from a network blind spot. Similarly, network security solutions don’t offer the necessary telemetry and visibility across your endpoint assets.

Considering the alternatives

Of Gartner’s Top 9 Security and Risk Trends for 2020, “Extended detection and response capabilities emerge to improve accuracy and productivity” ranked as the #1 trend. Gartner noted: “Extended detection and response (XDR) solutions are emerging that automatically collect and correlate data from multiple security products to improve threat detection and provide an incident response capability. The primary goals of an XDR solution are to increase detection accuracy and improve security operations efficiency and productivity.”

That sounds awfully similar to SIEM, so how is an XDR any different from all the previous security orchestration, detection, and response solutions? 

The answer is: XDR is a converged platform leveraging a common ontology and a unifying language. An effective XDR must bring together numerous heterogeneous signals and return a homogeneous visual and analytical representation. XDR must clearly show the potential security correlations (in other words, attack stories) that the SOC should focus on. Such a solution would de-duplicate information on one hand, and on the other would emphasize the truly high-risk attacks while filtering out the mountains of noise. The desired outcome would not require excessive amounts of manual work, allowing SOC analysts to stop serving as an army of translators and focus on the real work – leading investigations and mitigating attacks. This normalized presentation of data would be aware of context and content, technologically advanced, yet simple for analysts to understand and act upon.

SIEMs are data-driven, meaning they need data definitions, custom parsing rules and pre-baked content packs to retrospectively provide context. In contrast, XDR is hypothesis-driven, harnessing the power of Machine Learning and Artificial Intelligence engines to analyse high-fidelity threat data from a multitude of sources across the environment, supporting specific lines of investigation mapped to the MITRE ATT&CK framework.

The MITRE ATT&CK framework is effective at highlighting what the bad guys do, and how they do it. While traditional prevention measures are great at “spot it and stop it” protections, MITRE ATT&CK demonstrates there are many steps in the attack lifecycle that aren’t obvious. These actions don’t trigger sufficient alerting to generate the confidence required to support a reaction.

XDR isn’t a single product. Rather, it refers to an assembly of multiple security products (and services) that comprise a unified platform. An XDR approach will shift processes and likely merge and encourage tighter coordination between different functions like SOC analysts, hunters, incident responders and IT administrators.

The ideal XDR solution must provide enhanced detection and response capabilities across endpoints, networks, and cloud infrastructures. It needs to prioritise and predict the threats that matter BEFORE the attack and prescribe the necessary countermeasures, allowing the organisation to proactively harden its environment.

Figure 2: Where current XDR approaches are failing

McAfee’s MVISION XDR solution does just that, empowering the SOC to do more with unified visibility and control across endpoints, network, and cloud. McAfee XDR orchestrates both McAfee and non-McAfee security assets to deliver actionable cyber threat management and support both guided and automated investigations.

What if you could find out whether you’re in the crosshairs of a top threat campaign, using global telemetry from over 1 billion sensors that automatically tracks new campaigns by geography and industry vertical? Wouldn’t that be insightful?

“Many firms want to be more proactive but do not have the resources or talent to execute. McAfee can help bridge this gap by offering organisations a global outlook across the entire threat landscape with local context to respond appropriately. In this way, McAfee can support a CISO-level strategy that combines risk and threat operations.” 

– Jon Oltsik, ESG Senior Principal Analyst and Fellow
 

But, hang on… Is this all just another ‘platform’ play?

Take a moment to consider how platform offerings have evolved over the years. Initially designed to compensate for the heterogeneity and volume of internal data sources and external threat intelligence feeds, the core objective has predominantly been to manifest data centrally from across a range of vectors in order to streamline security operations efforts. We then saw the introduction of case management capabilities. 

Over the past decade, the security industry proposed solving many of the challenges presented in SOC contexts through integrations. You would buy products from a few different vendors who promised it would all work together through API integration, basically giving you some form of the pseudo-XDR outcomes we're exploring here.

Frankly, there are significant limitations in that approach. There is no data persistence; you basically make requests to the lowest API denominator on a one-to-one basis. The information sharing model was one-way question and answer leveraging a scheduled push-pull methodology. The other big issue was the inability to pull information in whatever form you needed – you were limited to the API available between the participating parties, with the result ultimately only as good as the dumbest API.

And what about the lack of any shared ontology, meaning little to no common objects or attributes? There were no shared components, such as UI/UX, incident management, logging, dashboards, policy definitions, user authentication, etc. 

What’s desperately been needed is an open underlying platform – essentially a universal API gateway scaled across the cloud, leveraging messaging fabrics like DXL to facilitate easy bilateral exchange between many security functions – where vendors and partner technologies create tight integrations and synergies to support specific use cases benefitting SOC ecosystems.
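The difference between one-to-one API polling and a messaging fabric can be sketched with a toy publish/subscribe model. This is not the real DXL API – the class, topic names and payloads are invented for illustration – but it shows how many security functions can react to one published event instead of each product scheduling pulls against every other product's API.

```python
from collections import defaultdict

# Toy message fabric (NOT the real DXL API): producers publish to shared
# topics and every subscriber reacts immediately, instead of each product
# polling every other product's REST API on a one-to-one schedule.
class Fabric:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        for cb in self._subs[topic]:
            cb(event)

fabric = Fabric()
quarantined, enriched = [], []

# Two independent functions consume the same indicator event.
fabric.subscribe("/threat/indicator", lambda e: quarantined.append(e["ip"]))
fabric.subscribe("/threat/indicator", lambda e: enriched.append(e))

# One EDR detection fans out to both subscribers in a single publish.
fabric.publish("/threat/indicator", {"ip": "203.0.113.7", "source": "EDR"})
```

The design point is the bilateral, many-to-many exchange: adding a new consumer is one `subscribe` call, with no new pairwise API contract.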

Is XDR, then, a solution or product to be procured? Or just a security strategy to be adopted? Potentially, it's both. Some vendors are releasing XDR solutions that complement their portfolio strengths, and others are just flaunting XDR-like capabilities.

 Closing Thoughts

SIEMs still deliver specific outcomes to organisations and SOCs which cannot be replaced by XDR. In fact, with XDR, a SIEM can be even more valuable.

For most organisations, XDR will be a journey, not a destination. Their ability to become more effective through XDR will depend on their maturity and readiness to embrace all the required processes. In terms of cybersecurity maturity, if you'd rate your organisation at a medium to high level, the question becomes how and when.

Most organisations using an Endpoint Detection and Response (EDR) solution are likely quite ready to embrace XDR's capabilities. They are already investigating and resolving endpoint threats, and they're ready to expand this effort to understand how their adversaries move across their infrastructure, too.

If you'd like to know more about how McAfee addresses these challenges with MVISION XDR, feel free to reach out!

The post XDR – Please Explain? appeared first on McAfee Blogs.

6 Best Practices for SecOps in the Wake of the Sunburst Threat Campaign

By Ismael Valenzuela

1. Attackers have a plan, with clear objectives and outcomes in mind. Do you have one?

Clearly this was a motivated and patient adversary. They spent many months in the planning and execution of an attack that was not incredibly sophisticated in its tactics, but rather used multiple semi-novel attack methods combined with persistent, stealthy and well-orchestrated processes. In a world where we always need to find ways to stay even one step ahead of adversaries, how well is your SOC prepared to bring the same level of consistent, methodical and well-orchestrated visibility and response when such an adversary comes knocking at your door? 

Plan, test and continuously improve your SecOps processes with effective purple-teaming exercises. Try to think like a stealthy attacker and predict what sources of telemetry will be necessary to detect suspicious usage of legitimate applications and trusted software solutions.

2. Modern attacks abuse trust, not necessarily vulnerabilities. Be threat focused. Do threat modeling and identify where the risks are. Leverage BCP data and think of your identity providers (AD Domain Controllers, Azure AD, etc.) as ‘crown jewels’.

Assume that your most critical assets are under attack, especially those that leverage third-party applications where elevated privileges are a requirement for their effective operation. Granting service accounts unrestricted administrative privileges sounds like a bad idea – because it is. Least-privilege access, micro segmentation and ingress/egress traffic filtering should be implemented in support of a Zero-Trust program for those assets specifically that allow outside access by a ‘trusted’ 3rd-party.

3. IOCs are becoming less useful as attackers don’t reuse them, sometimes even inside the same victim. Focus on TTPs & behaviors.

The threat research world has moved beyond the atomic indicators, file hashes and watchlists of malicious IPs and domains upon which most threat intelligence providers still rely. Think beyond Indicators of Compromise. We should rely less on static lists of artifacts and instead focus on heuristics and behavioral indicators. Event-only analysis can easily identify the low-hanging fruit of commodity attack patterns, but more sophisticated adversaries are going to make it more difficult. Ephemeral C2 servers and single-use DNS entries per asset (not per target enterprise) were some of the more well-planned (yet relatively simple) behaviors seen in the Sunburst attack. Monitor carefully for changes in asset configuration like logging output/location, or even the absence of new audit messages in a given polling period.
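The "absence of new audit messages" heuristic can be sketched in a few lines: track the last time each asset emitted an audit event and flag any host that has gone quiet for longer than one polling period. The host names, timestamps and threshold below are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Sketch of the absence-of-telemetry heuristic: a host whose last audit
# event is older than one polling period may have had logging disabled.
def silent_hosts(last_seen, now, polling_period=timedelta(minutes=15)):
    """Return hosts with no audit events within the last polling period."""
    return sorted(h for h, ts in last_seen.items() if now - ts > polling_period)

# Hypothetical last-event timestamps per asset.
now = datetime(2021, 1, 25, 12, 0)
last_seen = {
    "dc01": now - timedelta(minutes=3),    # still reporting
    "web01": now - timedelta(minutes=47),  # quiet too long: investigate
}
quiet = silent_hosts(last_seen, now)
```

An attacker toggling auditing off (e.g. via auditpol) produces exactly this kind of gap, which is why the missing events are themselves a detection signal.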

4. Beware of the perfect attack fallacy. Attackers can’t innovate across the entire attack chain. Identify places where you have more chances to detect their presence (i.e. privilege escalation, persistence, discovery, defense evasion, etc.)

All telemetry is NOT created equal. Behavioral analysis of authentication events in support of UEBA detections can be incredibly effective, but that assumes identity data is available in the event stream. Based on my experience, SIEM data typically yields only 15-20% of events that include useful identity data, whereas almost 85% of cloud access events contain this rich contextual data, a byproduct of growing IAM adoption and SSO practices. Events generated from critical assets (crown jewels) are of obvious interest to SecOps analysts for both detection and investigation, but don’t lose sight of those assets on the periphery; perhaps an RDP jump box sitting in the DMZ that also synchronizes trust with enterprise AD servers either on-premises or in the cloud. Find ways to isolate assets with elevated privilege or those running ‘trusted’ third-party applications using micro segmentation where behavioral analysis can more easily be performed. Leverage volumetric analysis of network traffic to identify potentially abnormal patterns; monitor inbound and outbound requests (DNS, HTTP, FTP, etc) to detect when a new session has been made to/from an unknown source/destination – or where the registration age of the target domain seems suspiciously new. Learn what ‘normal’ looks like from these assets by baselining and fingerprinting, so that unusual activity can be blocked or at the very least escalated to an analyst for review. 
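The domain-age check mentioned above can be illustrated with a short sketch. In practice the registration date would come from a WHOIS/RDAP lookup; here it is a hypothetical in-memory table, and the domain names and threshold are invented for the example.

```python
from datetime import date

# Hypothetical registration-date lookup standing in for WHOIS/RDAP data.
REGISTRATION_DATES = {
    "example.com": date(1995, 8, 14),
    "fresh-c2-lookalike.com": date(2020, 12, 1),  # invented, newly registered
}

def is_suspiciously_new(domain, today, max_age_days=30):
    """Escalate outbound requests to unknown or recently registered domains."""
    registered = REGISTRATION_DATES.get(domain)
    if registered is None:
        return True  # never seen before: escalate for analyst review
    return (today - registered).days < max_age_days

today = date(2020, 12, 14)
old_ok = is_suspiciously_new("example.com", today)
new_bad = is_suspiciously_new("fresh-c2-lookalike.com", today)
```

A check like this is cheap to run against outbound DNS/HTTP logs and pairs naturally with the volumetric baselining described above.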

5. Architect your defenses for visibility, detection & response to augment protection capabilities. Leverage EDR, XDR & SIEM for historical and real-time threat hunting.

The only way to gain insight into attacker behaviors – and any chance of detecting and disrupting attacks of this style – requires extensive telemetry from a wide array of sensors. Endpoint sensor grids provide high-fidelity telemetry about all things on-device but are rarely deployed on server assets and tend to be network-blind. SIEMs have traditionally been leveraged to consume and correlate data from all 3rd-party data sources, but they likely do not have the ability (or scale) to consume all EDR/endpoint events, leaving them largely endpoint-blind. As more enterprise assets and applications move to the cloud, we have yet a third source of high-value telemetry that must be available to SOC analysts for detection and investigation. Threat hunting can only effectively be performed when SecOps practitioners have access to a broad range of real-time and historical telemetry from a diverse sensor grid that spans the entire enterprise. They need the ability to look for behaviors – not just events or artifacts – across the full spectrum of enterprise assets and data.

6. In today’s #cyberdefensegame it’s all about TIME. 

Time can be an attacker’s best offense, sometimes because of the speed with which they can penetrate, reconnoiter, locate and exfiltrate sensitive data – a proverbial ‘smash-and-grab’ looting. Hardly subtle, and quickly noticed for the highly visible crime that it is. In the case of Sunburst, however, the adversary used time to their advantage: making painstakingly small and subtle changes to code in the software supply chain to weaponize a trusted application, waiting for it to be deployed across a wide spectrum of enterprises and governmental agencies, quietly performing reconnaissance on the affected asset and those around it, and leveraging low-and-slow C2 communications over a trusted protocol like DNS. Any one of these activities might easily be overlooked by even the most observant SOC. This creates an even longer detection cycle, allowing attackers a longer dwell time.

This blog is a summary of the SOCwise Conversation on January 25th, 2021. Watch for the next one!

For more information on the Sunburst attack, please visit our other resources on the subject: 

Blogs:

McAfee Knowledge-base Article (Product Coverage)

McAfee Knowledge-base Article (Insights Visibility)

 

The post 6 Best Practices for SecOps in the Wake of the Sunburst Threat Campaign appeared first on McAfee Blogs.

SOCwise Series: Practical Considerations on SUNBURST

By McAfee

This blog is part of our SOCwise series where we’ll be digging into all things related to SecOps from a practitioner’s point of view, helping us enable defenders to both build context and confidence in what they do. 

Although there’s been a lot of chatter about supply chain attacks, we’re going to bring you a slightly different perspective. Instead of talking about the technique, let’s talk about what it means to a SOC and more importantly focusing on the SUNBURST attack, where the adversary leveraged a trusted application from SolarWinds. 

Below you are going to see the riveting discussion between our very own Ismael Valenzuela and Michael Leland where they’ll talk about the supply chain hacks and the premise behind them. More importantly, why this one in particular was so successful. And lastly, they’ll cover best practices, hardening prevention, and early detection. 

Michael: Ismael, let’s start by talking a little bit about what the common types of supply chain attacks are. We know from past experience that they’ve primarily been software-based, though it’s not unheard of to have hardware-based supply chain attacks as well. But really, it’s about hijacking or masquerading as a vendor or a trusted supplier and injecting malicious code into trusted, authorized applications. Sometimes even hijacking the certificate to make it look legitimate. And this last one was about injecting into third party libraries.

In relation to SUNBURST, it was a long game, right? This was an adversary long game attack where they had over 12 months to plan, stage, deploy, weaponize and reap the benefits. And we’re going to talk more about what they did, but more importantly, also how we as practitioners can leverage the sources of telemetry we have for both detection and hopefully future prevention. The first question that most people ask is, is this new and clearly this is not a new technique or tactic, but let’s talk a little bit about why this one was different. 

Ismael: Right! The most interesting piece about SolarWinds is not so much that it is a supply chain attack, because as you said, it’s true, it’s not new. We’ve seen similar things in the past. I know there’s a lot of controversy around some of them, like Supermicro and many others over the last few years, and it’s difficult to prove these types of attacks. But to me, the most interesting piece is not just how it got into the environment – we talked about malicious updates into legitimate applications. For example, we’ve seen some of that in the past with modifying code on GitHub, right? Unprotected repos, where attackers, threat actors, are modifying the code.

We’re going to talk a little bit about what organizations can do to identify these, but what I really want to highlight out of this is that the attackers have a plan, right? They compromised the environment carefully, they stayed dormant for about two weeks, and after that, as we have seen in recent research, they started to deploy second stage payloads. The way they did that was very, very interesting, and it’s changing the game. It’s not radically new, but there’s always something new that we may not have seen before. And it’s important for defenders to understand these behaviors so they can start trying to detect them. In summary, they have a plan, and we should ask ourselves if we have a plan for these types of attacks – not only the initial vector but also what happens after that.

Michael: Let’s take a look at the timeline (figure 1 below) and talk about the story arc of what took place. I think the important thing is, again, the adversary knew long before the attack, long before the weaponization of the application, long before the deployment, they had this planned out. They knew they were going after a very specific vendor – in this case, SolarWinds. We know as far back as late 2018, early 2019, they already had a domain registered for it, and they didn’t even give it a DNS lookup until almost a year later. The code modification was in 2019 and the weaponization in 2020. We’re talking about months, almost a year, of time passed, and they knew very well going into it what their intent was.

Ismael: Yep, absolutely. And as I mentioned before, even once they had the backdoor in place, the infamous DLL, it stayed dormant for two weeks. And then they started a careful reconnaissance discovery, trying to find out where they were, what type of information they had around them, the users, and identity management. In some cases we have seen them pivoting, stealing tokens and credentials and then pivoting to the cloud. All of that takes time, right? Which indicates that the attacker has a lot of knowledge of how to do this in a stealthy way. But if we think in terms of attack chains, it also helps us to understand where we could have better opportunities to catch these types of activities.

Michael: We’ve set the stage to understand what exactly took place, and a lot of people have talked about the methodology and the attack life cycle. But they had a plan; they weren’t especially advanced in the way they leveraged the tools. They were very specific about leveraging multiple somewhat novel methods to make use of the vulnerability. More importantly, it was the amount of effort they put into planning, and also the amount of time they spent trying not to be seen. We look at telemetry all the time, whether it’s in a SIEM tool or an EDR tool, and we need those pieces of telemetry that tell us what’s happening – and they were very stealthy in the way they were leveraging these techniques.

Let’s talk a little bit about what they did that was unique to this specific attack and then we’ll talk more about how we can better define our defenses and prevention around what we learned. 

Ismael: Yep, absolutely! And one of the interesting things that we have seen recently is how they disassociated stage one and stage two to make sure that stage one, the backdoor DLL, wasn’t going to be detected or burnt. So once again, you were talking about the long game. They were planning, they were architecting their attack for the long game. Even if you would find an artifact on a specific machine, it would be harder for you to trace that back to the original backdoor, so they could maintain persistence in the environment for quite some time. I know that this is not necessarily new. We have been telling defenders for a long time: you need to focus on finding persistence, because attackers need to stay in the environment.

We need to look at command and control, but obviously these techniques are evolving. They went to great lengths to ensure that the artifacts, the indicators of compromise, on each of these different systems for stage two – and at this point we know they used Cobalt Strike beacons – were unique. Each of these beacons was unique, not just for each organization, which would make sense, but for each computer within each organization. What does that mean for a SOC? Well, imagine you’re doing incident response and you find some odd behavior coming out of a machine. You look at the indicators, and what are you going to do next? Scoping, right? Let’s see where else in my network I’m seeing activity going to that domain, to those IPs, or those registry keys, or that WMI consumer, for example. But the truth is that those indicators were not used anywhere else – not even in your own environment. So that was interesting.

Michael: Given that we don’t have specific indicators that we could attribute to something malicious in that stage, what we do know is that they’re leveraging common protocols in an uncommon way. The majority of this tactic took place from a C2 perspective, with the partial exfiltration being done using DNS. For organizations that aren’t effectively monitoring DNS traffic – DNS taking place on non-standard ports or, more importantly, the volume of DNS originating from machines that don’t typically generate it – volumetric analysis can tell us a lot, if in fact there’s some heuristic value that we can leverage to detect it. What else should we be thinking about in terms of the protection side of things, an abuse of trust?

We trusted an application; we trusted a vendor. This was a clear abuse of that. Zero trust would be one methodology that can incorporate both micro-segmentation as well as explicit verification and, more importantly, a least-trust methodology that we can enforce. I also think about the fact that we’re giving these applications rights and privileges to our environment, including administrative privileges. We need to make sure that we’re monitoring both those accounts and the service accounts being utilized by these applications, specifically so that we can prescribe walls and barriers around what they have access to. What else can we do in terms of detection or providing visibility for these types of attacks?

Ismael: When we’re talking about a complicated or advanced attack, I like to think in terms of frameworks like the NIST Cybersecurity Framework, for example, which talks about prevention, detection, and response, but also about identifying the risks and assets first. If you look at it from that perspective and look at an attack chain, even though some aspects of this attack were very advanced, there are always limitations from the attacker’s perspective. There’s no such thing as the perfect attack, so be aware of the perfect attack fallacy. There’s always something the attackers are going to do that can help you to detect them. With that in mind, think about putting the MITRE ATT&CK behaviors, tactics and techniques on one side of a matrix and, on the other side, the NIST Cybersecurity Framework functions: identify, protect, detect.

Some of the things I would suggest include identifying the assets at risk, and I always talk about BCP – business continuity planning. Sometimes we work in silos and we don’t leverage information already in the organization that can point you to the crown jewels. You can’t protect everything, but you need to know what to protect and how the information flows. For example, where are your soft spots, where are your vendors located on the network, where are their products, and how do they get updated? It will be helpful for you to define a defensible, secure architecture that enforces this by protecting the flow of the data.

When protection fails – it could be a firewall rule, it can be any type of protection – the attempts to bypass the firewalls can be turned into detections. Visibility is very important to have across your environment, and that doesn’t mean just managed devices; it also means the network, and endpoints, and servers. Attackers are going to go after the servers, the domain controllers, right? Why? Because they want to steal those credentials, those identities used somewhere else, and maybe pivot to the cloud. So having enough visibility across the network is important, which means having the cameras pointed at the right places. That is where EDR or XDR can come into play – products that keep that telemetry and give you visibility of what’s going on and can potentially detect the attack.

Michael: I think it’s important as we conclude our discussion to chat about the fact that telemetry can come in various flavors – more importantly, both real-time and historical telemetry that’s of significant value, not only on the detection side, but on the forensic investigation/scoping side, to understand exactly where an adversary may have landed. It’s not just having the telemetry accessible; sometimes it’s the lack of telemetry that’s the indicator – when logging gets disabled on a device and we stop hearing from it, the SIEM starts seeing a gap in its visibility to a specific asset. That’s why we need a combination of both real-time endpoint protection technologies deployed on endpoints and servers, and the historical telemetry that we’re typically consuming in our analytics frameworks and technologies like SIEM.

Ismael: Absolutely, and to reiterate the point: find those places where attackers are going to be, where they can be spotted more easily. If you look at the whole attack chain, maybe the initial vector is harder to find, but start looking at how they got privileges, their escalation, and their persistence. Michael, you mentioned cleaning logs – apparently they were disabling the audit logs by using auditpol on the endpoint, or creating new firewall rules on the endpoints. If you consume these events, ask yourself: why would somebody disable event logging temporarily, turning it off and then back on again after some time? Well, they were doing this for a reason.

Michael: Right. So we’re going to conclude our discussion, hopefully this was informative. Please subscribe to our Securing Tomorrow blog where you can keep up to date with all things SOC related and feel free to visit McAfee.com/SOCwise for more SOC material from our experts. 

 

The post SOCwise Series: Practical Considerations on SUNBURST appeared first on McAfee Blogs.
