FreshRSS


How to Interpret the 2023 MITRE ATT&CK Evaluation Results

By The Hacker News
Thorough, independent tests are a vital resource for analyzing a provider's ability to guard against increasingly sophisticated threats to an organization. And perhaps no assessment is more widely trusted than the annual MITRE Engenuity ATT&CK Evaluation. This testing is critical for evaluating vendors because it's virtually impossible to evaluate cybersecurity vendors based on their own

Identity Threat Detection and Response: Rips in Your Identity Fabric

By The Hacker News
Why SaaS Security Is a Challenge In today's digital landscape, organizations are increasingly relying on Software-as-a-Service (SaaS) applications to drive their operations. However, this widespread adoption has also opened the doors to new security risks and vulnerabilities. The SaaS security attack surface continues to widen. It started with managing misconfigurations and now requires a

How to Apply MITRE ATT&CK to Your Organization

By The Hacker News
Discover all the ways MITRE ATT&CK can help you defend your organization. Build your security strategy and policies by making the most of this important framework. What is the MITRE ATT&CK Framework? MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is a widely adopted framework and knowledge base that outlines and categorizes the tactics, techniques, and procedures (TTPs)

MITRE Unveils Top 25 Most Dangerous Software Weaknesses of 2023: Are You at Risk?

By Ravie Lakshmanan
MITRE has released its annual list of the Top 25 "most dangerous software weaknesses" for the year 2023. "These weaknesses lead to serious vulnerabilities in software," the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said. "An attacker can often exploit these vulnerabilities to take control of an affected system, steal data, or prevent applications from working." The list is

Protecting your business with Wazuh: The open source security platform

By The Hacker News
Today, businesses face a variety of security challenges like cyber attacks, compliance requirements, and endpoint security administration. The threat landscape constantly evolves, and it can be overwhelming for businesses to keep up with the latest security trends. Security teams use processes and security solutions to curb these challenges. These solutions include firewalls, antiviruses, data

Feds warn about right Royal ransomware rampage that runs the gamut of TTPs

By Paul Ducklin
Wondering which cybercrime tools, techniques and procedures to focus on? How about any and all of them?

Threat hunting with MITRE ATT&CK and Wazuh

By The Hacker News
Threat hunting is the process of looking for malicious activity and its artifacts in a computer system or network. Threat hunting is carried out intermittently in an environment regardless of whether or not threats have been discovered by automated security solutions. Some threat actors may stay dormant in an organization's infrastructure, extending their access while waiting for the right

Re-Focusing Cyber Insurance with Security Validation

By The Hacker News
The rise in the costs of data breaches, ransomware, and other cyber attacks leads to rising cyber insurance premiums and more limited cyber insurance coverage. This cyber insurance situation increases risks for organizations struggling to find coverage or facing steep increases. Some clients of the law firm Akin Gump Strauss Hauer & Feld LLP, for example, reported a three-fold increase in insurance

Get Comprehensive Insights into Your Network with Secure Analytics and MITRE Mappings

By Claudio Lener

A deep dive into the latest updates from Secure Network and Cloud Analytics that show Cisco's leadership in the security industry.

The year 2022 has been rather hectic for many reasons, and as the world navigates its various challenges and opportunities, we at Cisco Security have buckled up and focused on improving the world in the way we know best: by making it more secure.

In an increasingly vulnerable Internet environment, where attackers rapidly develop new techniques to compromise organizations around the world, ensuring a robust security infrastructure becomes ever more imperative. Across the Cisco Security Portfolio, Secure Network Analytics (SNA) and Secure Cloud Analytics (SCA) have continued to add value for their customers since their inception by innovating their products and enhancing their capabilities.

In the latest SNA 7.4.1 release, four core features have been added to hit important milestones on our roadmap. As a first addition, SNA has greatly expanded its Data Store deployment options: introducing a single-node Data Store; supporting expansion with existing Flow Collectors (FCs) and new Data Stores from the Manager; and adding the capacity to mix and match virtual and physical appliances to build a Data Store deployment.

The SNA Data Store started as a simple concept, and while it has maintained that simplicity, it has become increasingly robust and performant over recent releases. In essence, it represents a new and improved database architecture that can be built from virtual or physical appliances to provide industry-leading horizontal scaling, with telemetry and event retention for over a year. Additionally, flow ingest on the Flow Collectors is now separate from data storage, which allows them to scale to 500K+ flows per second (FPS). Queries against this new database design are now optimized for performance, which has improved considerably across all metrics.

For the second major addition, SNA now supports multi-telemetry collection within a single deployment. Such data encompasses network telemetry, firewall logging, and remote worker telemetry. Firewall logs can now be stored on premises with the Data Store, making the data available to the Firepower Management Center (FMC) via APIs that support remote queries. From the FMC, users can pivot directly to the Data Store interface and look at detailed events, optimizing SecOps workflows by, for example, automatically filtering on events of interest.
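As a rough illustration of what such a remote-query workflow could look like, here is a minimal sketch in Python; the endpoint URL, parameter names, and token handling are hypothetical placeholders for illustration, not the documented FMC or Data Store API.

```python
import requests

# Hypothetical base URL and endpoint; the real FMC/Data Store API
# paths, parameters, and auth flow will differ.
DATASTORE_URL = "https://datastore.example.internal/api/v1/firewall-events"
API_TOKEN = "REPLACE_WITH_TOKEN"  # placeholder credential

def query_firewall_events(start, end, event_type=None):
    """Fetch stored firewall events for a time window (illustrative only)."""
    params = {"start": start, "end": end}
    if event_type:
        params["type"] = event_type  # e.g. filter to intrusion events
    resp = requests.get(
        DATASTORE_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params=params,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    events = query_firewall_events(
        "2022-11-01T00:00:00Z", "2022-11-02T00:00:00Z", event_type="intrusion"
    )
    print(f"Retrieved {len(events)} events")
```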

On the topic of interfaces, users can now benefit from an intelligent viewer that presents all firewall data. This feature lets users select custom timeframes, apply unique filters to security events, create custom views based on relevant subsets of data, visualize trends from summary reports, and finally export any such view in CSV format for archiving or further forensic investigation.

With respect to VPN telemetry, the AnyConnect Secure Mobility Client can now record all network traffic even when users are not on their VPN at a given moment. Once a VPN connection is restored, the data is sent to the Flow Collector, and, with a Data Store deployment, off-network flow updates can bypass FC flow caches, allowing Network Visibility Module (NVM) historical data to be stored correctly.

Continuing down the Data Store journey (and what a journey it has been), users can now monitor and evaluate its performance in a simple and intuitive way, with charts and trends available directly in the Manager, which can now support both traditional non-Data Store FCs and a single Data Store. The division of Flow Collectors is made possible by SNA Domains: a Data Store Domain can be created, with new FCs added to it when desired. This comes as part of a series of robust enhancements to the Flow Collector, which can now be delivered as a single image (NetFlow + sFlow) and switched between the two options. As yet another perk of the new database design, any FC can send its data to the Data Store.

As can be seen, the Data Store has been the star of the latest SNA release, and for good reason. Before wrapping up, though, it has one more feature up its sleeve: Converged Analytics. This SNA feature brings a simplified, intuitive, and clear analytics experience to Secure Network Analytics users. It comes with out-of-the-box detections mapped to MITRE, with clearly defined tactics and techniques, self-taught baselining and graduated alerting, and the ability to quiet non-relevant alerts, leading to more relevant detections.

This new analytics feature is a strong step toward giving users confidence in their network security awareness thanks to an intuitive workflow and 43 new alerts. It also gives them a deep understanding of each alert, with observations and mappings tied to the industry-standard MITRE tactics and techniques. Just when you think it couldn't get any better, the Secure Network and Cloud Analytics teams have worked hard to add even more value to this release, ensuring the same workflows, functionality, and user experience are also available in the SCA portal. Yes, this is the first step toward a more cohesive experience across both SNA and SCA, where users of either platform will start to benefit from more consistent outcomes regardless of their deployment model. As some would say, it's like a birthday coming early.
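To make the idea of MITRE-mapped detections concrete, here is a minimal sketch of what such an alert record might look like; the field names are an illustrative assumption, not the product's actual schema (the T1003 technique ID is a real ATT&CK entry, used here only as an example).

```python
from dataclasses import dataclass, field

@dataclass
class MitreMappedAlert:
    """Illustrative alert record carrying its ATT&CK mapping (hypothetical schema)."""
    name: str
    severity: str
    tactic: str          # ATT&CK tactic, e.g. "Credential Access"
    technique_id: str    # ATT&CK technique ID, e.g. "T1003"
    technique_name: str
    observations: list = field(default_factory=list)

# Example: an alert pre-mapped to a well-known ATT&CK technique.
alert = MitreMappedAlert(
    name="Possible credential dumping on host",
    severity="high",
    tactic="Credential Access",
    technique_id="T1003",
    technique_name="OS Credential Dumping",
    observations=["lsass.exe memory read by unsigned process"],
)
print(f"[{alert.severity}] {alert.name} -> {alert.tactic} / "
      f"{alert.technique_id} {alert.technique_name}")
```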

Pivoting to Secure Cloud Analytics: like its Network sibling, the product received several enhancements over the last months of development. The core additions revolve around additional detections and context, as well as usability and integration enhancements, including those in Secure Cloud Insights. In parallel with SNA's Converged Analytics, SCA benefits from detections mapped to the MITRE ATT&CK framework. Additionally, several detections underwent algorithm improvements, while four new ones were added, such as Worm Propagation, which was native to SNA. As for the backbone of SCA's alerts, a multitude of new roles and observations were added to the platform to further optimize and tune alerts for users.

Additionally, alerts now offer a pivot directly to AWS' load balancer and VPC, as well as direct access to Azure Security Groups, to allow for further investigation through streamlined workflows. The two public cloud providers are now also included in coverage reports that provide a gap analysis, giving insight into which logs may be missing.

Focusing more on the detection workflows, the Alert Details view also gained additional device context, giving insight into hostnames, subnets, and role metrics. The ingest mechanism has also become more robust thanks to data now coming from the Talos intelligence feed and ISE, shown in the Event Viewer for expanded forensics and visibility use cases.

While on the subject of integrations, the highly requested SecureX integration can now be enabled in one click, with no API keys needed and a workflow that is seamless across the two platforms. Among other improvements around graphs and visualizations, the Encrypted Traffic widget now allows an hourly breakdown of the data, while the Event Viewer now displays bi-directional session traffic, bringing even greater context to SCA flows.

In the context of pivots, as users navigate through devices that have, for example, raised an alert, they will now also see new functionality to pivot directly into the Secure Cloud Insights (SCI) Knowledge Graph to learn how various sources are connected to one another. Another SCI integration is present within the Device Outline of an alert, providing more posture context, and from a configuration menu it is now possible to run cloud posture assessments on demand for immediate results and recommendations.

With all this said, we on the Secure Analytics team are extremely excited about the adoption and usage of these features, so that we can keep improving the product and iterating to solve even more use cases. As we look ahead, the world has never needed a comprehensive answer to one of the most pressing problems in our society more than it does now: cyber threats in a continuously evolving Internet. And Secure Analytics will be there to pioneer and lead the effort for a safer world.


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!


Trend Micro’s Top Ten MITRE Evaluation Considerations

By Trend Micro

The introduction of the MITRE ATT&CK evaluations is a welcome addition to the third-party testing arena. The ATT&CK framework, and the evaluations in particular, have gone a long way toward advancing the security industry as a whole, as well as the individual security products serving the market.

The insight garnered from these evaluations is incredibly useful. But let's admit it: for everyone except those steeped in the analysis, it can be hard to understand. The information is valuable, but dense. There are multiple ways to look at the data and even more ways to interpret and present the results (as no doubt you've already come to realize after reading all the vendor blogs and industry articles!). We have been looking at the data for the week since it was published, and still have more to examine over the coming days and weeks.

The more we assess the information, the clearer the story becomes, so we wanted to share with you Trend Micro’s 10 key takeaways for our results:

1. Looking at the results of the first run of the evaluation is important:

  • Trend Micro ranked first in initial overall detection. We are the leader in detections based on initial product configurations. This evaluation enabled vendors to make product adjustments after a first run of the test to boost detection rates on a re-test. The MITRE results show the final results after all product changes. If you assess what the product could detect as originally provided, we had the best detection coverage among the pool of 21 vendors.
  • This is important to consider because product adjustments can vary in significance, and may or may not be immediately available in a vendor's current product. We also believe it is easier to do better once you know what the attacker was doing; in the real world, customers don't get a second try against an attack.
  • Having said that, we too took advantage of the retest opportunity, since it allows us to identify product improvements; but our overall detections were so high that even after removing those associated with a configuration change, we still ranked first overall.

  • And so no one thinks we are just spinning: without making any exclusions to the data at all, and taking the MITRE results in their entirety, Trend Micro had the second-highest detection rate, with 91+% detection coverage.

2. There is a hierarchy in the types of main detections: Techniques is most significant

  • There is a natural hierarchy in the value of the different types of main detections (a small code sketch of this hierarchy follows the reference link below).
    • A general detection indicates that something was deemed suspicious but it was not assigned to a specific tactic or technique.
    • A detection on tactic means the detection can be attributed to a tactical goal (e.g. credential access).
    • Finally, a detection on technique means the detection can be attributed to a specific adversarial action (e.g. credential dumping).
  • We have strong detection on techniques, which is the better detection measure. With the individual MITRE technique identified, the associated tactic can be determined, as typically only a handful of tactics apply to a specific technique. When comparing results, you can see that vendors had lower tactic detections on the whole, demonstrating a general acknowledgement of where the priority should lie.
  • Likewise, the fact that we had fewer general detections than technique detections is a positive. General detections are typically associated with a signature; as such, this demonstrates that we have a low reliance on AV.
  • It is also important to note that we did well in telemetry, which gives security analysts the type and depth of visibility they need when looking into detailed attacker activity across assets.


https://attackevals.mitre.org/APT29/detection-categories.html 
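As promised above, here is a minimal sketch of that detection hierarchy in Python; the numeric ranking is an illustrative assumption for comparing detections, not MITRE's scoring scheme.

```python
from enum import IntEnum

class DetectionType(IntEnum):
    """Main detection categories, ordered by how specific (and valuable) they are.

    The numeric ranking is an illustrative assumption, not MITRE scoring.
    """
    GENERAL = 1    # something suspicious, no tactic/technique attributed
    TACTIC = 2     # attributed to a tactical goal, e.g. Credential Access
    TECHNIQUE = 3  # attributed to a specific action, e.g. Credential Dumping

def best_detection(detections):
    """Return the most specific detection recorded for a test step."""
    return max(detections, key=lambda d: d.value) if detections else None

# Example: a step that produced both a general and a technique detection
# is best characterized by the technique detection.
step_detections = [DetectionType.GENERAL, DetectionType.TECHNIQUE]
print(best_detection(step_detections).name)  # -> TECHNIQUE
```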

3. More alerts does not equal better alerting – quite the opposite

  • At first glance, some may expect one should have the same number of alerts as detections. But not all detections are created equal, and not everything should raise an alert (remember, these detections are for low-level attack steps, not for separate attacks).
  • Too many alerts can lead to alert fatigue and add to the difficulty of sorting through the noise to what is most important.
  • When you consider the alerts associated with our higher-fidelity detections (e.g. detection on technique), you can see that the results show Trend Micro did very well at reducing the noise of all the detections into a minimal volume of meaningful, actionable alerts (see the sketch after this list).
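As a toy illustration of collapsing many low-level detections into a few actionable alerts, consider this sketch; the grouping rule (one alert per tactic per host) is an assumption made up for illustration, not the product's actual alerting logic.

```python
from collections import defaultdict

# Toy detections: (host, tactic, technique) tuples for low-level attack steps.
detections = [
    ("host-1", "Credential Access", "T1003"),
    ("host-1", "Credential Access", "T1555"),
    ("host-1", "Lateral Movement", "T1021"),
    ("host-2", "Credential Access", "T1003"),
]

# Assumed grouping rule: raise one alert per (host, tactic) pair,
# listing the techniques observed, rather than one alert per detection.
grouped = defaultdict(list)
for host, tactic, technique in detections:
    grouped[(host, tactic)].append(technique)

for (host, tactic), techniques in grouped.items():
    print(f"ALERT {host}: {tactic} ({', '.join(techniques)})")
# Four detections collapse to three actionable alerts in this toy example.
```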

4. Managed Service detections are not exclusive

  • Our MDR analysts contributed to the “delayed detection” category. This is where the detection involved human action and may not have been initiated automatically.
  • Our results show the strength of our MDR service as one avenue for detection and enrichment. If an MDR service is included in this evaluation, we believe you would want to see it provide good coverage, as that demonstrates the team is able to detect based on the telemetry collected.
  • What is important to note, though, is that the numbers for delayed detection don't necessarily mean it was the only way a detection was or could have been made; the same detection could be identified by other means. There are overlaps between detection categories.
  • Our detection coverage would have remained strong without this human involvement: approximately 86% detection coverage (with MDR, it rose to 91%).

5. Let’s not forget about the effectiveness and need for blocking!

  • This MITRE evaluation did not test a product's ability to block or protect against an attack; rather, it exclusively looks at how effective a product is at detecting an event that has happened, so no measure of prevention efficacy is included.
  • This is significant for Trend, as our philosophy is to block and prevent as much as possible so customers have less to clean up and mitigate.

6. We need to look through more than the Windows

  • This evaluation looked at Windows endpoints and servers only; it did not look at Linux, for example, where of course Trend has a great deal of strength.
  • We look forward to the expansion of the operating systems in scope. MITRE has already announced that the next round will include a Linux system.

7. The evaluation shows where our product is going

  • We believe the first priority for this evaluation is the main detections (for example, detecting on techniques as discussed above). Correlation falls into the modifier detection category, which looks at what happens above and beyond an initial detection.
  • We are happy with our main detections, and see great opportunity to boost our correlation capabilities with Trend Micro XDR, in which we have been investing heavily and which is at the core of the capabilities we will be delivering in product to customers as of late June 2020.
  • This evaluation did not assess our correlation across email security, so there is correlation value we can deliver to customers beyond what is represented here.

8. This evaluation is helping us make our product better

  • The insight this evaluation has provided has been invaluable; it has helped us identify areas for improvement, and we have initiated product updates as a result.
  • As well, having a product with a "detection only" mode option helps augment SOC intel, so our participation in this evaluation has enabled us to make our product even more flexible to configure, and therefore a more powerful tool for the SOC.
  • While some vendors try to use it against us, our extra detections after the config change show that we can adapt quickly to the changing threat landscape when needed.

9. MITRE is more than the evaluation

  • While the evaluation itself is valuable, it is important to recognize MITRE ATT&CK as a knowledge base that the security industry can both align with and contribute to.
  • Having a common language and framework to better explain how adversaries behave, what they are trying to do, and how they are trying to do it, makes the entire industry more powerful.
  • Among the many things we do with and around MITRE, Trend has contributed, and continues to contribute, new techniques to the framework matrices, and we leverage ATT&CK within our products as a common language for alerts and detection descriptions, and for search parameters.

10. It is hard not to get confused by the FUD!

  • MITRE does not score, rank or provide side by side comparison of products, so unlike other tests or industry analyst reports, there is no set of “leaders” identified.
  • As this evaluation assesses multiple factors, there are many different ways to view, interpret and present the results (as we did here in this blog).
  • It is important that individual organizations understand the framework, the evaluation, and most importantly what their own priorities and needs are, as this is the only way to map the results to the individual use cases.
  • Look to your vendors to help explain the results, in the context that makes sense for you. It should be our responsibility to help educate, not exploit.


Getting ATT&CKed By A Cozy Bear And Being Really Happy About It: What MITRE Evaluations Are, and How To Read Them

By Greg Young (Vice President for Cybersecurity)

Full disclosure: I am a security product testing nerd*.

 

I've been following the MITRE ATT&CK Framework for a while, and this week the results were released of the most recent evaluation, which used APT29, otherwise known as COZY BEAR.

First, here’s a snapshot of the Trend eval results as I understand them (rounded down):

91.79% on overall detection.  That’s in the top 2 of 21.

91.04% without config changes.  The test allows for config changes after the start – that wasn’t required to achieve the high overall results.

107 Telemetry.  That’s very high.  Capturing events is good.  Not capturing them is not-good.

28 Alerts. That's in the middle, where it should be. Not too noisy, not too quiet. Telemetry, I feel, is critical, whereas alerting is configurable, but only on top of detections and telemetry.

 

So our Apex One product ran into a mean and ruthless bear and came away healthy.  But that summary is a simplification and doesn’t capture all the nuance to the testing.  Below are my takeaways for you of what the MITRE ATT&CK Framework is, and how to go about interpreting the results.

 

Takeaway #1 – ATT&CK is Scenario Based

The MITRE ATT&CK Framework is intriguing to me as it mixes real-world attack methods used by specific adversaries with a detection model for use by SOCs and product makers. The ATT&CK Framework Evaluations do this too, but in a lab environment, to assess how security products would likely handle an attack by that adversary and their usual methods. There had always been a clear divide between pen testing and lab testing, and ATT&CK is kind of mixing both. COZY BEAR is super interesting because those attacks were widely known for being quite sophisticated and state-sponsored, and they targeted the White House and US Democratic Party. COZY BEAR and its family of derivatives use backdoors, droppers, obfuscation, and careful exfiltration.

 

Takeaway #2 – Look At All The Threat Group Evals For The Best Picture

I see the tradeoffs: ATT&CK evals look at only one scenario, but that scenario is very reality-based, and with enough evals across enough scenarios there is a narrative for better understanding a product. Trend did great on the most recently released APT29/COZY BEAR evaluation, but my point is that a product is only as good as all of its evaluations. I always advised Magic Quadrant or NSS Value Map readers to look at older versions in order to paint a picture over time of what trajectory a product had.

 

Takeaway #3 – It’s Detection Focused (Only)

The APT29 test, like most ATT&CK evals, is testing detection, not prevention or other parts of a product (e.g. support). The downside is that a product's ability to block the attacks isn't evaluated, at least not yet. In fact, blocking functions have to be disabled for parts of the test to be done. I get that: you can't test the upstairs alarm with the attack dog roaming the downstairs. Starting with poor detection never ends well, so the test methodology seems to be built on "if you can detect it you can block it". Some pen tests are criticized because a specific scenario isn't realistic, since A would stop it before B could ever occur. IPS signature writers everywhere should nod in agreement on that one. I support MITRE in how they constructed the methodology, because there have to be limitations and scope on every lab test, but readers too need to understand those limitations and scopes. I believe that the next round of tests will include protection (blocking) as well, so that is cool.

 

Takeaway #4 – Choose Your Own Weather Forecast

ATT&CK is no magazine-style review. There is no final grade or comparison of products. To fully embrace ATT&CK, imagine being handed dozens of very sound yet complex meteorological measurements and being left to decide what the weather will be. Or having vendors carpet-bomb you with press releases of their interpretations. I've been deep in the numbers of the latest eval scores, and looking at some of the blogs and press releases out there, they almost had me convinced they did well even when the data at hand showed they didn't. I guess a less jaded view is that the results can be interpreted in many ways, some of them quite creative. It brings to mind the great quote from the Lockpicking Lawyer review: "the threat model does not include an attacker with a screwdriver".

 

Josh Zelonis at Forrester provides a great example of the level of work required to parse the test outcomes, and he provides extended analysis on GitHub here that is easier on the eyes than the raw results. Even that great work product requires the context of what the categories mean. I understand that MITRE is taking the stance of "we do the tests, you interpret the data" in order to pick fewer fights and accommodate different use cases and SOC workflows, but that is a lot to put on buyers. I repeat: there's a lot of nuance in the terms and test report categories.

 

In the absence of Josh's work, if I have to pick one metric, Detection Rate is likely the best one. Note that Detection Rate isn't 100% for any product in the APT29 test, because of the meaning of that metric. The best secondary metrics, to my mind, are Techniques and Telemetry. Tactics sounds like a good thing, but in the framework it is lesser than Techniques, as Tactics are generalized bad things ("Something moving outside!") while Techniques are more specific detections ("Healthy adult male lion seen outside door"), so a higher score in Techniques combined with a low score in Tactics is a good thing. Alerting, to me, is best right in the middle: not too many alerts (noisy/fatiguing) and not too few ("about that lion I saw 5 minutes ago").

 

Here's an example of the interpretations that are valuable to me. Looking at the Trend Micro eval source page here, I get info on detections in the steps, i.e. how many of the 134 total steps in the test were detected. I'll start by excluding any human involvement: leaving out the MSSP detections and looking at unassisted results only. The numbers are spread across all 20 test steps, so I'll use Josh's spreadsheet, which shows 115 of 134 steps visible, or 85.82%. I do some averaging on the visibility scores across all the products evaluated and get 66.63%, which is almost 30% less. Besides the lesson that the data needs gathering and interpretation, this highlights that no product spotted 100% across all steps and the spread was wide. I'll now look at the impact of human involvement and add in the MSSP detections: the Trend number goes to 91%. Much clinking of glasses heard from the endpoint dev team. But if I'm not using an MSSP service, that… you see my point about context/use-case/workflow. There's effectively some double counting of the MSSP factor when removing it in the analyses (i.e. a penalty, so that removing MSSP inordinately drops the detection rate), but I'll leave that to a future post. There's no shortage of fodder for security testing nerds.
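For concreteness, here is that arithmetic as a small Python sketch; the 115/134 step counts and the 66.63% and 91.79% figures are taken directly from the analysis above, not computed from new data.

```python
# Detection-rate arithmetic from the figures quoted above.
TOTAL_STEPS = 134

# Unassisted (MSSP/human detections excluded), per Josh Zelonis's spreadsheet.
visible_steps = 115
unassisted_rate = visible_steps / TOTAL_STEPS
print(f"Unassisted visibility: {unassisted_rate:.2%}")   # ~85.82%

# Average visibility across all evaluated products, as computed in the text.
average_rate = 0.6663
print(f"Lead over average: {unassisted_rate - average_rate:.2%}")  # ~19 points

# Adding MSSP (human-assisted) detections back in lifts coverage,
# per the 91.79% overall figure quoted earlier in the post.
with_mssp_rate = 0.9179
print(f"With MSSP: {with_mssp_rate:.2%}")
```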

 

Takeaway #5 – Data Is Always Good

Security test nerdery aside, this eval is a great thing, and the data from it is very valuable. Having this kind of evaluation makes security products, and the uses we put them to, better. So dig into ATT&CK, and read it considering not just product evaluations but also how your organization's framework for detecting and processing attacks maps to the various threat campaigns. We'll no doubt have more posts on APT29 and upcoming evals.

 

*I was a Common Criteria tester in a place that also ran a FIPS 140-2 lab. Did you know that at Level 4 of FIPS, a freezer is used as an exploit attempt? I even dipped my toe into the arcane area of Formal Methods using the GYPSY methodology and ran from it screaming "X just equals X! We don't need to prove that!". The deepest testing rathole I can recall was doing a portability test of the Orange Book B1 rating for MVS RACF when using logical partitions. I'm never getting those months of my life back. I've been pretty active in interacting with most security testing labs, like NSS and ICSA, and their schemes (that's not a pejorative; testing nerds like to use British usages to sound more learned) for decades, because I thought it was important to understand the scope and limits of testing before accepting it in any product buying decision. If you want to make Common Criteria nerds laugh, point out something bad that has happened and just say "that's not bad, it was just mistakenly put in scope"; that will then upset the FIPS testers, because a crypto boundary is a very real thing and not something real testers joke about. And yes, Common Criteria is the MySpace of tests.

