Join the guided tour outside the Security Operations Center, where we’ll discuss real-time network traffic of the RSA Conference, as seen in the NetWitness platform.
We’re excited to announce Cisco Secure Client, formerly AnyConnect, the new version of one of the most widely deployed security agents. As the unified security agent for Cisco Secure, it addresses common operational use cases across Cisco Secure endpoint agents. Those who install Secure Client’s next-generation software will benefit from a shared user interface that tightens and simplifies management of Cisco endpoint security agents.
Now, with Secure Client, you gain improved secure remote access, a suite of modular security services, and a path to enabling Zero Trust Network Access (ZTNA) across the distributed network. The newest capability is Secure Endpoint, delivered as a new module within the unified endpoint agent framework, so you can harness Endpoint Detection & Response (EDR) from within Secure Client. You no longer need to deploy and manage Secure Client and Secure Endpoint as separate agents, which simplifies backend management.
Secure Client lets you deploy, update, and manage your agents from a new cloud management system inside SecureX. If you choose cloud management, Secure Client policy and deployment configuration are done in the Insights section of Cisco SecureX, and the powerful visibility capabilities of SecureX Device Insights show which endpoints have Secure Client installed, along with the module versions and profiles they are using.
The emphasis on interoperability of endpoint security agents provides much-needed visibility and simplification across multiple Cisco security solutions while reducing the complexity of managing multiple endpoints and agents. Application and data visibility is one of the top ways Secure Client can contribute to an effective security resilience strategy.
Visit our homepage to see how Secure Client can help your organization today.
We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!
Cisco Secure Social Channels
Instagram
Facebook
Twitter
LinkedIn
Google’s Android operating system has been a boon for the average consumer. No other operating system has given so much freedom to developers and hardware manufacturers to make quality devices at reasonable prices. The number of Android phones in the world is astounding. That success comes with a price, however.
A recent report from our own McAfee Mobile Research team has found malicious apps with hundreds of thousands of downloads in the Google Play store. This round of apps poses as simple wallpapers, camera filters, and picture editors, but they hide their nature until after they’ve been installed on your device.
Figure 1. Infected Apps on Google Play
On the bright side, Google Play reviews every app to ensure it is legitimate, safe, and free of malware before it’s allowed on the Play store. However, enterprising criminals regularly find ways to sneak malware past Google’s security checks.
Figure 2. Negative reviews on Google Play
When developers upload their apps to the Play store for approval, they have to send supporting documents that tell Google what the app is, what it does, and what age group it’s intended for. By sending Google a “clean” version of their app, attackers can later slip their malicious code into the store via a future update, where it sits and waits for someone to download it. Once installed, the app contacts a remote server controlled by the attackers so it can download new parts of the app that Google has never seen. You can think of it as a malware add-on pack that installs itself on your device without you realizing it. By contacting their own server for the malware files, attackers sneak around Google’s security checks and can put anything they want on your device.
The current round of malware we’re seeing hijacks your SMS messages so it can make purchases through your device without your knowledge. Through a combination of hidden functionality and abuse of permissions, like the ability to read notifications, that simple-looking wallpaper app can send subscription requests and confirm them as if it were you. These apps regularly run up large bills by purchasing subscriptions to premium-rate services. More troubling, they can read any message you receive, potentially exposing your personal information to attackers.
To start, a comprehensive, cross-platform solution like McAfee Total Protection can help detect threats like malware and alert you if your devices have been infected. I’d also like to share some tips our Research team has shared with me.
Before you hit that install button, take a good look at an app’s reviews. Do they look like they were written by real people? Do the account names of the reviewers make sense? Are people leaving real feedback, or are the majority of comments things like, “Works great. Loved it.” with no other information?
Scammers can easily generate fake reviews for an app to make it look like people are engaging with the developers. Look out for vague reviews that don’t mention the app or what it does, nothing but five-star reviews, and generic sounding account names like, “girl345834”. They’re probably bots, so be wary.
Search for the app developers’ company and see if they have a website. Having a website doesn’t guarantee an app is legitimate, but it’s another good indicator of how trustworthy a company’s app is. Through their website, you should be able to find out where their team is based, or at least some personal information about the company. If they’re hiding that information, or there’s no site at all, that might be a good sign to try a different app.
A lot of malicious apps offer features that your phone already provides, like a flashlight or photo viewer. Unless there’s a very specific reason why you need a separate app to do something your device already does, it’s not recommended to use a third-party app. Especially if it’s free.
App permissions must be clearly stated on the app’s page in order to get into the Google Play store. They’re found near the bottom of the page, along with developer information. Check the permissions every app asks for before you install it and ask yourself if they make sense. For example, a photo editor doesn’t need access to your contacts list, and wallpapers don’t need to have access to your location data. If the permissions don’t make sense for the type of app, steer clear.
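The permission sanity check described above can be sketched as a simple heuristic. This is an illustrative assumption, not an official Android taxonomy: the mapping of app categories to plausible permissions below is our own, though the permission names follow Android’s real `android.permission.*` convention.

```python
# Sketch of the permission sanity check described above. The
# category-to-permission mapping is an illustrative assumption,
# not an official Android taxonomy.
EXPECTED_PERMISSIONS = {
    "wallpaper": {"android.permission.SET_WALLPAPER"},
    "photo_editor": {
        "android.permission.READ_EXTERNAL_STORAGE",
        "android.permission.WRITE_EXTERNAL_STORAGE",
    },
}

def suspicious_permissions(category, requested):
    """Return the requested permissions that fall outside what an app
    of this category plausibly needs."""
    expected = EXPECTED_PERMISSIONS.get(category, set())
    return sorted(set(requested) - expected)

# A wallpaper app asking for SMS and contacts access should stand out.
flags = suspicious_permissions(
    "wallpaper",
    [
        "android.permission.SET_WALLPAPER",
        "android.permission.RECEIVE_SMS",
        "android.permission.READ_CONTACTS",
    ],
)
print(flags)  # the two permissions a wallpaper app has no business requesting
```

A real review is a human judgment call, of course; the point of the sketch is simply that “does this permission make sense for this kind of app?” is a mechanical question you can ask of every item in the list.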
Mobile devices are vulnerable to malware and viruses, just like your computer. By installing McAfee protection on your mobile device, you can secure your mobile data, protect your privacy, and even find lost devices.
Android is one of the most popular operating systems on the planet, which means the rewards for creating malware for Android devices are well worth it. It’s unlikely that Android malware is going away any time soon, so staying safe means being cautious with the things you install on your devices.
You can protect yourself by installing McAfee Total Protection on your mobile device and reading the permissions apps ask for when you install them. There’s no good reason for a wallpaper app to have SMS permissions, so that request should ring alarm bells that something isn’t right and stop you from installing it.
The post Fraudulent Apps that Automatically Charge you Money Spotted in Google Play appeared first on McAfee Blogs.
Introduction
In order to keep your AWS environment secure while allowing your users to properly utilize resources, you must ensure that users are correctly created with proper permissions. Also, you must monitor your environment to ensure that unauthorized access does not occur and accounts are up to date.
User Account Creation and Management
AWS IAM […]
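Granting users "proper permissions" in IAM means attaching a least-privilege policy document. As a minimal sketch, the snippet below builds such a document in Python; the bucket name and actions are hypothetical examples, while the policy grammar (`Version`, `Statement`, `Effect`, `Action`, `Resource`) is IAM’s real JSON format.

```python
import json

# Minimal sketch of a least-privilege IAM policy document for a user
# who only needs to read one S3 bucket. The bucket name and the exact
# actions are hypothetical; the policy grammar is IAM's real format.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

A document like this could then be attached to a user with the `aws iam put-user-policy` CLI command or the equivalent SDK call, rather than handing out broad managed policies.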
The post AWS User Management appeared first on Infosec Resources.
Full disclosure: I am a security product testing nerd*.
I’ve been following the MITRE ATT&CK Framework for a while, and this week the results were released of the most recent evaluation, which used APT29, otherwise known as COZY BEAR.
First, here’s a snapshot of the Trend eval results as I understand them (rounded down):
91.79% on overall detection. That’s in the top 2 of 21.
91.04% without config changes. The test allows for config changes after the start – that wasn’t required to achieve the high overall results.
107 Telemetry. That’s very high. Capturing events is good. Not capturing them is not-good.
28 Alerts. That’s in the middle, where it should be. Not too noisy, not too quiet. I feel telemetry is critical, whereas alerting is configurable, but only on top of detections and telemetry.
So our Apex One product ran into a mean and ruthless bear and came away healthy. But that summary is a simplification and doesn’t capture all the nuance to the testing. Below are my takeaways for you of what the MITRE ATT&CK Framework is, and how to go about interpreting the results.
Takeaway #1 – ATT&CK is Scenario Based
The MITRE ATT&CK Framework is intriguing to me because it mixes real-world attack methods used by specific adversaries with a detection model for use by SOCs and product makers. The ATT&CK Framework Evaluations do this in a lab environment, assessing how security products would likely handle an attack by that adversary using their usual methods. There had always been a clear divide between pen testing and lab testing, and ATT&CK mixes both. COZY BEAR is super interesting because its attacks were widely known for being quite sophisticated and state-sponsored, and they targeted the White House and US Democratic Party. COZY BEAR and its family of derivatives use backdoors, droppers, obfuscation, and careful exfiltration.
Takeaway #2 – Look At All The Threat Group Evals For The Best Picture
I see the tradeoffs: ATT&CK evals look at only one scenario, but that scenario is very reality-based, and with enough evals across enough scenarios a narrative emerges that lets you better understand a product. Trend did great on the most recently released APT29/COZY BEAR evaluation, but my point is that a product is only as good as all its evaluations. I always advised Magic Quadrant or NSS Value Map readers to look at older versions in order to paint a picture of a product’s trajectory over time.
Takeaway #3 – It’s Detection Focused (Only)
The APT29 test, like most ATT&CK evals, tests detection, not prevention or other parts of products (e.g. support). The downside is that a product’s ability to block the attacks isn’t evaluated, at least not yet. In fact, blocking functions have to be disabled for parts of the test to be done. I get that – you can’t test the upstairs alarm with the attack dog roaming the downstairs. Starting with poor detection never ends well, so the test methodology seems to rest on “if you can detect it, you can block it”. Some pen tests are criticized because a specific scenario isn’t realistic, since A would stop it before B could ever occur. IPS signature writers everywhere should nod in agreement on that one. I support MITRE in how they constructed the methodology, because every lab test needs limitations and scope, but readers too need to understand those limitations and scopes. I believe the next round of tests will include protection (blocking) as well, so that is cool.
Takeaway #4 – Choose Your Own Weather Forecast
ATT&CK is no magazine-style review. There is no final grade or comparison of products. To fully embrace ATT&CK, imagine being handed dozens of very sound yet complex meteorological measurements and being left to decide what the weather will be. Or having vendors carpet-bomb you with press releases of their interpretations. I’ve been deep into the numbers of the latest eval scores, and some of the blogs and press releases out there almost had me convinced their products did well even when the data at hand showed they didn’t. A less jaded view is that the results can be interpreted in many ways, some of them quite creative. It brings to mind the great quote from the Lockpicking Lawyer review: “the threat model does not include an attacker with a screwdriver”.
Josh Zelonis at Forrester provides a great example of the level of work required to parse the test outcomes, and he provides extended analysis on Github here that is easier on the eyes than the above. Even that great work product requires the context of what the categories mean. I understand that MITRE is taking the stance of “we do the tests, you interpret the data” in order to pick fewer fights and accommodate different use cases and SOC workflows, but that is a lot to put on buyers. I repeat: there’s a lot of nuance in the terms and test report categories.
If, in the absence of Josh’s work, I have to pick one metric, Detection Rate is likely the best one. Note that Detection Rate isn’t 100% for any product in the APT29 test, because of what that metric measures. The secondary metrics I like best are Techniques and Telemetry. Tactics sounds like a good thing, but in the framework it ranks lower than Techniques: Tactics are generalized bad things (“Something moving outside!”) while Techniques are more specific detections (“Healthy adult male lion seen outside door”), so a higher score in Techniques combined with a low score in Tactics is a good thing. Alert scoring is, to me, best right in the middle. Not too many alerts (noisy/fatiguing) and not too few (“about that lion I saw 5 minutes ago”).
Here’s an example of the interpretations that are valuable to me. Looking at the Trend Micro eval source page here, I get info on detections in the steps, or how many of the 134 total steps in the test were detected. I’ll start by excluding any human involvement – that is, excluding the MSSP detections and looking at unassisted detections only. The numbers are spread across all 20 test steps, so I’ll use Josh’s spreadsheet, which shows 115 of 134 steps visible, or 85.82%. Averaging the visibility scores across all the products evaluated gives 66.63%, which is almost 30% less. Besides the lesson that the data needs gathering and interpretation, this highlights that no product spotted 100% across all steps and that the spread was wide. Now I’ll look at the impact of human involvement and add the MSSP detections back in: the Trend number rises to 91%. Much clinking of glasses heard from the endpoint dev team. But if I’m not using an MSSP service that… you see my point about context/use-case/workflow. There’s effectively some double counting of the MSSP factor when removing it in the analyses (i.e. a penalty, so that removing MSSP inordinately drops the detection rate), but I’ll leave that to a future post. There’s no shortage of fodder for security testing nerds.
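The arithmetic above is simple enough to reproduce directly; this is just a sketch using the step counts cited in the text, not an official MITRE calculation.

```python
# Reproducing the unassisted-visibility arithmetic above, using the
# step counts cited in the text (per Josh's spreadsheet).
TOTAL_STEPS = 134       # total test steps in the APT29 evaluation
VISIBLE_STEPS = 115     # steps with unassisted (non-MSSP) visibility
ALL_PRODUCT_AVG = 66.63 # average visibility across all evaluated products

visibility = VISIBLE_STEPS / TOTAL_STEPS * 100
gap = visibility - ALL_PRODUCT_AVG

print(f"Unassisted visibility: {visibility:.2f}%")   # 85.82%
print(f"Points above product average: {gap:.2f}")
```

Trivial as it is, having to do even this much arithmetic yourself is exactly the “we do the tests, you interpret the data” posture described in the next takeaway.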
Takeaway #5 – Data Is Always Good
Security test nerdery aside, this eval is a great thing and the data from it is very valuable. Having this kind of evaluation makes security products and the uses we put them to better. So dig into ATT&CK and read it considering not just product evaluations but how your organization’s framework for detecting and processing attacks maps to the various threat campaigns. We’ll no doubt have more posts on APT29 and upcoming evals.
*I was a Common Criteria tester in a place that also ran a FIPS 140-2 lab. Did you know that at Level 4 of FIPS a freezer is used as an exploit attempt? I even dipped my toe into the arcane area of Formal Methods using the GYPSY methodology and ran from it screaming “X just equals X! We don’t need to prove that!”. The deepest testing rathole I can recall was doing a portability test of the Orange Book B1 rating for MVS RACF when using logical partitions. I’m never getting those months of my life back. I’ve been pretty active in interacting with most security testing labs like NSS and ICSA and their schemes (that’s not a pejorative, but testing nerds like to use British usages to sound more learned) for decades because I thought it was important to understand the scope and limits of testing before accepting it in any product buying decisions. If you want to make Common Criteria nerds laugh point out something bad that has happened and just say “that’s not bad, it was just mistakenly put in scope”, and that will then upset the FIPS testers because a crypto boundary is a very real thing and not something real testers joke about. And yes, Common Criteria is the MySpace of tests.
The post Getting ATT&CKed By A Cozy Bear And Being Really Happy About It: What MITRE Evaluations Are, and How To Read Them appeared first on .