FreshRSS


Hackers Increasingly Using Browser Automation Frameworks for Malicious Activities

By Ravie Lakshmanan
Cybersecurity researchers are calling attention to a free-to-use browser automation framework that's being increasingly used by threat actors as part of their attack campaigns. "The framework contains numerous features which we assess may be utilized in the enablement of malicious activities," researchers from Team Cymru said in a new report published Wednesday. "The technical entry bar for the

380K Kubernetes API Servers Exposed to Public Internet

By Elizabeth Montalbano
More than 380,000 of the 450,000-plus servers hosting the open-source container-orchestration engine for managing cloud deployments allow some form of access.

Russia Is Being Hacked at an Unprecedented Scale

By Matt Burgess
From “IT Army” DDoS attacks to custom malware, the country has become a target like never before.

McAfee Enterprise SSE: Named a Leader In 2022 Gartner Magic Quadrant for SSE

By Gee Rittenhouse

Companies continue to accelerate their digital transformation and hybrid work strategies with security remaining top of mind. For a growing number of enterprises, the solution has been the deployment of a Security Service Edge (SSE). Introduced as a market category by Gartner, SSE is, in our view, the consolidation of Secure Web Gateway (SWG), Cloud Access Security Broker (CASB), and Zero Trust Network Access (ZTNA) within a single, cloud-delivered solution for securing access to web, cloud, and private applications from any corner of the world, mitigating user and cloud threats, and protecting sensitive cloud data at rest, in transit, or in use.

Recognizing the significant role SSE is filling in cybersecurity, Gartner® has published its first-ever Magic Quadrant™ report for SSE. We are honored to announce that the McAfee Enterprise SSE Portfolio has been recognized as a Leader for its solution MVISION Unified Cloud Edge (UCE) in the report, positioned rightmost for “Completeness of Vision.” Our cloud-native platform is architected for the SSE market and boasts a next-gen SWG and the industry’s first data-aware ZTNA solution, empowering our customers in their cloud and network transformations. McAfee Enterprise was also recognized as a Leader in the Gartner Magic Quadrant for Cloud Access Security Brokers for four successive years, 2017–2020.

2022 Gartner Magic Quadrant for Security Service Edge (Source: Gartner)

In 2021, McAfee Enterprise SSE made several updates and additions to its MVISION UCE solution, strengthening its position as an industry expert, including:

  • Highly innovative Remote Browser Isolation (RBI) technology integrated with MVISION UCE for advanced threat protection, data security and visibility through unified policies.
  • Full-featured data security portfolio, including native integration of Enterprise DLP for unified data protection and incident management across cloud, web, private apps and endpoints.
  • Extensive Cloud Security Posture Management (CSPM) capabilities, including Shift Left scanning to detect and correct misconfigurations and drift early in the CI/CD pipeline.
  • Support of SaaS Security Posture Management (SSPM) for continuous assessment of the SaaS security landscape and remediation of misconfigurations.
  • Presence backed by worldwide sales and support, along with a massively upgraded cloud footprint.
  • Includes comprehensive solutions, such as RBI for risky websites, across all the pricing tiers at no additional cost.
  • Rapidly expanding CASB Connect Program, which allows cloud service providers or partners to build lightweight API connections to the MVISION Cloud, leading several new service providers to adopt MVISION Cloud.

As a companion report to the Magic Quadrant, Gartner has also published its Critical Capabilities report for SSE, which shares deep insights into the product capabilities of each vendor based on a specific set of use cases. The below use cases are included in this year’s SSE Critical Capabilities Report:

  1. Secure Web and Cloud Usage
  2. Detect and Mitigate Threats
  3. Connect and Secure Remote Workers
  4. Identify and Protect Sensitive Information

MVISION UCE received the highest score across all four use cases, setting the pace for the SSE market in features and functionality. We believe our rich heritage in DLP, strong CSPM/SSPM, and deep usage of the MITRE ATT&CK framework have been the key contributors to our #1 position across use cases in the Critical Capabilities report.

We are extremely proud of the recognition for our vision and product innovation. Our singular goal is to build a more secure world. To learn more about how Gartner assessed the market and the MVISION UCE solution, download your copy of the report here.

You can also join our webinar on March 9, 2022, for a deep dive into why McAfee Enterprise SSE is a Leader in the 2022 Gartner Magic Quadrant for SSE.

Click here for a free demo of the MVISION UCE solution.

Gee Rittenhouse
CEO, McAfee Enterprise SSE Portfolio

Gartner Disclaimer: Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from McAfee.
Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
Gartner “Magic Quadrant for Security Service Edge” (SSE), John Watts, Craig Lawson, Charlie Winckless, Aaron McQuaid, 15 February 2022
Gartner “Critical Capabilities for Security Service Edge” (SSE), John Watts, Craig Lawson, Charlie Winckless, Aaron McQuaid, 15 February 2022

As of January 28, 2022, McAfee Enterprise is now the McAfee Enterprise SSE Portfolio.


The post McAfee Enterprise SSE: Named a Leader In 2022 Gartner Magic Quadrant for SSE appeared first on McAfee Blog.

Cloud Security: Don’t wait until your next bill to find out about an attack!

By Paul Ducklin
Cloud security is the best sort of altruism: you need to do it to protect yourself, but you help to protect everyone else at the same time.

Global Technology Provider Looks to MVISION Unified Cloud Edge

By McAfee Enterprise

With the acceleration of cloud migration initiatives, partly arising from the need to support a remote workforce during the pandemic and beyond, enterprises are finding that this transformation has introduced new operational complexities and security vulnerabilities. Among these are potential misconfigurations, poorly secured interfaces, Shadow IT (access to unauthorized applications), and an increasing number of connected devices and users. To navigate these challenges, enterprises are relying on managed service providers to monitor and protect their cloud environments.

To better serve its customers and secure its own environments, one global technology provider decided to expand its existing on-premises data loss prevention (DLP) and web protection with a comprehensive and robust cloud security strategy based on the MVISION™ portfolio of solutions. Already a long-time user of McAfee Enterprise on-premises solutions, the global technology provider not only secured its internal cloud infrastructure consisting of more than 5,000 endpoints across over 30 locations worldwide, but also applied the same approach to the millions of endpoints it manages for more than 10,000 customers.

Evolving a Modern Cloud Security Approach

A primary objective for the global technology provider is securing data in the cloud in Software-as-a-Service (SaaS) applications (Microsoft Office 365, OneDrive, Salesforce, and others) and Infrastructure-as-a-Service (IaaS) platforms (Microsoft Azure, Amazon Web Services, Google Cloud Platform).

As a first step in its cloud journey, the global technology provider evaluated a number of cloud access security broker (CASB) solutions. Ultimately, they decided to implement MVISION Cloud for AWS, Office 365, and Shadow IT. In addition to providing comprehensive visibility into cloud app usage, these solutions help with compliance; data loss prevention (DLP), by monitoring the movement of sensitive and confidential data content traveling to or from the cloud, within the cloud, and cloud to cloud; and detection and remediation of threats, primarily through user and entity behavior analytics (UEBA).
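The DLP monitoring described here, inspecting content as it moves to or from the cloud, can be sketched in a few lines. The patterns and verdict logic below are illustrative assumptions, not MVISION Cloud's actual detection rules, which use far richer classifiers:

```python
import re

# Hypothetical detection patterns; a real CASB/DLP engine uses
# fingerprinting, exact data match, and ML-based classifiers.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(content):
    """Return the names of sensitive-data patterns found in content."""
    return [name for name, rx in PATTERNS.items() if rx.search(content)]

def dlp_verdict(content):
    """Block uploads that match any sensitive pattern, else allow."""
    hits = scan_outbound(content)
    return "BLOCK ({})".format(", ".join(hits)) if hits else "ALLOW"
```

The same check can run on traffic in any direction (to the cloud, within the cloud, or cloud to cloud); only the interception point changes.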

Moving to a Consolidated Cloud Security Fabric

But the global technology provider didn’t stop there. When we rolled out MVISION Unified Cloud Edge, the global technology provider tested it and enthusiastically adopted it. MVISION Unified Cloud Edge is an industry-leading example of Secure Access Service Edge (SASE), a security framework defined by Gartner that brings together network connectivity and security in a single, cloud-delivered solution that supports business transformation, edge computing, and workforce mobility.

The global technology provider has reaped multiple advantages from this implementation across its own internal environment and for its customers.

Key advantage #1: Management ease and less overhead

MVISION Unified Cloud Edge combines multiple capabilities under one umbrella: CASB functionality, web proxy, and DLP, with a single administrative hub, ePolicy Orchestrator® (ePO™), for streamlined management.

MVISION Unified Cloud Edge’s capabilities and its ease of integration with the McAfee Enterprise ecosystem have made life easier for the global technology provider’s team, saving time and resources. Now they can set consistent policies from device to cloud and provide users with accelerated, secure access to the tools they use every day, such as Box, Dropbox, and others. As the information security operations manager points out: “A single management console reduces overhead as does being able to set policies that we can sync and apply to multiple data sources on multiple cloud solutions, without having to recreate rules.”

Key advantage #2: Data protection policies in the cloud

MVISION Unified Cloud Edge has also enabled the global technology provider to further boost its cloud data protection. For example, it can detect data that is improperly managed and stored. The organization can now apply its existing on-premises data policies to the cloud, preventing user behaviors that may put both corporate and customer cloud data at risk. These include copying data to cloud apps or USB drives, printing it, taking screen captures, accessing risky websites, and uploading data to unauthorized websites.

Key advantage #3: Improved control over apps

To create a more secure internal environment, MVISION Unified Cloud Edge has been invaluable for the global technology provider. They have a better handle on the applications being used across the company. The solution also provides risk scores for the cloud apps in use, to help steer users away from Shadow IT and toward only authorized apps. When employees propose new apps to help them do their jobs better, the IT security team can check the security of these apps against requirements and make any necessary modifications to ensure compliance.

“The Shadow IT CASB automatically blocks all cloud services that are deemed high risk, both at our on-premises Web Gateway and the built-in cloud web gateway manager. So, when users attempt to use an unsanctioned SaaS application, they see a message explaining that the app is not safe,” notes the information security operations manager.

The global technology provider also sells SaaS solutions to its clients. With MVISION Unified Cloud Edge, it can protect data on any newly sanctioned SaaS applications at no extra cost.

A resounding endorsement

After a successful experience with McAfee Enterprise overall, and specifically with the implementation of MVISION Unified Cloud Edge, the information security operations manager recommends the solution to any organization beginning or in the midst of migrating to the cloud.

“I would advise other companies thinking about their cloud transformation journey to seriously consider MVISION Unified Cloud Edge . . . It has a very user-friendly interface and does so much out of the box,” he asserts. “The level of granularity in policy setting lets you do things you don’t think possible or are much easier to accomplish than you realize. . . I don’t think any other vendor offers such a complete package.”

The post Global Technology Provider Looks to MVISION Unified Cloud Edge appeared first on McAfee Blog.

McAfee Enterprise Continues to be a Leader in CASB and Cloud Security

By Naveen Palavalli

Cloud Security Gateways (CSGs) are one of the hottest and most sought-after technologies in the market today, driven by the adoption of cloud services for business transformation and the acceptance of hybrid workforce policies. CSGs, also commonly known as Cloud Access Security Brokers (CASBs), are responsible for enforcing security policies to protect cloud-hosted corporate assets from advanced threats, while enabling seamless and secure access to these assets from any location and device.  

We have witnessed an exponential growth in cloud usage in the past two years, primarily driven by remote workforce adoption. Based on the data collected by our research team from millions of connected McAfee Enterprise users across the globe, the overall usage of enterprise cloud services spiked by 50% across all industries, while the collaboration services witnessed an increase of up to 600% in usage. This led to an astonishing 630% increase in external attacks on the cloud accounts. Taking all these factors and trends into consideration, CSGs have become a highly essential element of any organization’s cloud security strategy, playing the most critical role for enabling data protection, threat prevention and compliance in the cloud. 

McAfee Enterprise continues to innovate in the cloud security space with a laser-focused strategy towards empowering our customers with the best-in-class cloud security solution. MVISION Cloud, recognized as the industry’s leading CSG solution, has become a vital part of enterprise security, allowing organizations to safely migrate to the cloud while protecting their “crown jewels” – the data. A huge testament to our cybersecurity vision is the IDC MarketScape Worldwide Cloud Security Gateways 2021 Vendor Assessment (Doc # US48334521, November 2021), and we are proud to announce that McAfee Enterprise has been recognized as a leader in the report. 

According to the report, “McAfee has a strong ecosystem of security solutions, including Secure Web Gateway, CSG, and endpoint security that it can integrate to enable customers in their data loss prevention, User Behavior Analytics, XDR, and threat prevention goals. McAfee has focused on providing robust protection and DLP, with the scale and speed necessary to support large user bases.” 

McAfee Enterprise’s multi-vector data protection capabilities go beyond the cloud to uniquely discover and protect sensitive assets on managed endpoints, in-network shares, and on-premises databases, enabling full scope of data protection from device-to-cloud. The industry-leading data protection and threat protection capabilities are tightly integrated with a unified policy framework that allows policy enforcement, data classification and incident management from a centralized console, reducing the cost and complexity of managing hybrid IT deployments, while improving the user experience. 

Figure 1: McAfee Enterprise Multi-Vector Data Protection 

MVISION Cloud is an integral component of our Unified Cloud Edge (UCE) solution, and together with McAfee Enterprise’s Next-Gen Secure Web Gateway (SWG) and MVISION Private Access (ZTNA) delivers the industry’s most comprehensive Security Service Edge (SSE) solution – the security element of the Secure Access Service Edge (SASE) framework. With McAfee Enterprise’s DLP technology as the common denominator across all the core SSE components, organizations can seamlessly utilize a unified, data-centric framework for centralized visibility and control over their entire digital footprint, while riding on an accelerated path for digital transformation and workplace mobility.

Figure 2: MVISION Unified Cloud Edge (UCE) 

Our mission towards building a unified security platform for protecting data from device-to-cloud and defending against advanced threats and adversaries has established McAfee Enterprise as a leader in cybersecurity across multiple forums, and the 2021 IDC MarketScape report is another distinguished feather in our decorated cap. 

The post McAfee Enterprise Continues to be a Leader in CASB and Cloud Security appeared first on McAfee Blog.

Qualities of a Highly Available Cloud

By Mani Kenyan

With enterprises widely adopting hybrid work models to promote a flexible work culture in a post-pandemic world, ensuring critical services are highly available in the cloud is no longer an option but a necessity. McAfee Enterprise’s MVISION Unified Cloud Edge (UCE) is designed to maximize performance, minimize latency, and deliver 99.999% SLA-guaranteed resiliency. It offers blazing-fast connectivity to cloud applications from any location with no service degradation, even when the usage of cloud services spiked 600% during the COVID-19 pandemic, as reported in our Cloud Adoption and Risk Report (Work From Home Edition). This blog shares details on how MVISION UCE is architected to enable uninterrupted access to corporate resources to meet the demands of the hybrid workforce.

MVISION UCE, our data-centric, cloud-native Security Service Edge (SSE) security platform, derives its capabilities from McAfee Enterprise’s industry leading Secure Web Gateway and Enterprise Data Protection solutions. However, this is not a lift and shift of capabilities to the cloud, which would have made it prone to service outages and impossible to have the flexibility that is needed to meet the demands of SSE. Instead, the best of breed functionality was purposefully reconstructed for SSE, using a microservices architecture that can scale elastically, and built on a platform-neutral stack that can run on bare metal and public cloud, equally effectively. A hallmark of the architecture is that the cloud is a single global fabric where service instances are spread throughout the globe. Users automatically access the best instance of any service through policy configuration.

What other alternatives are out there? We have seen some cloud services replicated in each region of their presence. While this makes controlling resources and data simple and keeps everything within a boundary, such an approach loses the flexibility needed to scale on demand and to reduce access latency. With UCE, each point of presence (POP) is part of the global fabric, yet fully featured, with all services housed within the POP. This avoids the need to send traffic back and forth between services located at different sites, a phenomenon known as traffic hairpinning.

By default, user traffic is processed at the POP closest to the user’s physical location, wherever that may be. Consider a user who works at their office in New York 90% of the time and travels to the UK occasionally. When that user connects to MVISION UCE, they reach the New York POP while at the office, and the London POP while in a UK hotel. This is a real advantage: the user’s traffic does not need to trombone from the hotel in the UK to the POP in New York and back to a server in London. MVISION UCE’s out-of-the-box traffic routing scheme favors low latency. The customer can still override this policy and force traffic to be processed at the New York POP, for example when there is a compliance need to process all traffic at a certain location. Many customers also need to store logs in a certain geography even though traffic processing may occur anywhere on the globe; the MVISION UCE architecture decouples log storage from traffic processing and lets customers choose their log storage geography based on criteria they define.
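The routing behavior described above can be sketched as a small selection function. The POP names, latency figures, and the compliance-pin parameter are illustrative assumptions, not actual MVISION UCE configuration:

```python
def choose_pop(latencies_ms, pinned_pop=None):
    """Pick the processing POP: honor a compliance pin if set,
    otherwise favor the POP with the lowest measured latency."""
    if pinned_pop is not None:        # compliance: force a specific location
        return pinned_pop
    return min(latencies_ms, key=latencies_ms.get)

# A traveling user's measured round-trip times to nearby POPs:
latencies = {"new-york": 85.0, "london": 12.0, "frankfurt": 18.0}
print(choose_pop(latencies))                         # -> london
print(choose_pop(latencies, pinned_pop="new-york"))  # -> new-york
```

In the real service, log storage geography would be a separate, independent setting, mirroring how the architecture decouples log storage from traffic processing.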

One of the key considerations when choosing an SSE vendor is how much latency the service adds to users’ requests. Significant latency can negatively affect user experience and be a deterrent to product adoption. Beyond the 85 POPs strategically placed around the globe to provide low-latency access to customers, UCE POPs have direct peering with the biggest SaaS vendors, such as Microsoft, Google, Akamai, and Salesforce, to further reduce latency. In addition, MVISION UCE POPs peer with many ISPs around the globe, enabling high-bandwidth, low-latency connectivity end to end, from the customer’s network to UCE and from UCE to the destination server.

With the number of peering partners growing every day (thousands of peering relationships are already in production), over 70% of traffic served by MVISION UCE uses peering links in some geographies. The whitepaper, How Peering POPs Make Negative Latency Possible, shares details about a study conducted by McAfee Enterprise to measure the efficacy of these peering relationships. The study shows that UCE customers experience faster response times going through our POPs than they would typically get by going directly through their Internet Service Providers. UCE follows a living partnership model when it comes to peering, and we are committed to keeping latency to a minimum.

You may be wondering what the secret sauce is for achieving a reliability of five 9s or higher in MVISION UCE. Several items play a crucial role in preventing unplanned service degradation.

  1. Redundantly provisioned components that allow one or more instances to pick up the work when another goes down. Unexpected system failures and interruptions do occur in the real world, and an architecture that detects failures early and reroutes traffic to another suitable instance is paramount to maintaining availability. A combination of client redirection, server-side redirection, and deep application state tracking is used to seamlessly bypass a failed spot. The global nature of the fabric allows for multiple simultaneous failures without causing a local outage.
  2. State-of-the-art automation and deployment infrastructure, which is key to localizing issues, maintaining redundancy, and reacting automatically when issues are found. Containerized workloads over Kubernetes are the foundation of the cloud infrastructure in MVISION UCE, facilitating fast recovery, canary rollouts of software, and elastic scaling of the infrastructure during peak demand. This is combined with an extensive automation and monitoring framework that monitors the customer’s experience and alerts the operations team of any localized or global service degradation.
  3. Ability to scale up on demand at a global scale. We are not talking about scale-out within a POP here: physical data centers often have a hard limit on resources, and it can take months to add new servers at a physical site. We are talking about bursting out to newly provisioned POPs when traffic demands it, in a matter of hours. Through extensive automation and intelligent traffic routing, a new MVISION UCE POP can be deployed in the public cloud quickly and start absorbing load, providing the cushion needed to avoid traffic peaks that could otherwise cause service degradation when usage patterns change. This capability allowed MVISION UCE to successfully handle increasing demand when customer VPNs could not handle the load created by dramatically increased remote work during the pandemic.
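The failure-bypass idea in point 1 can be sketched as follows; the instance names and the boolean health flag are hypothetical simplifications of what real health probing and state tracking involve:

```python
class AllInstancesDown(Exception):
    """Raised when no healthy instance remains in the fabric."""
    pass

def route_request(request, instances):
    """Try instances in preference order; on failure, reroute to the
    next healthy one (the client-redirection case from the text)."""
    for inst in instances:
        if inst["healthy"]:
            return "{} handled by {}".format(request, inst["name"])
    raise AllInstancesDown("no healthy instance in the fabric")

instances = [
    {"name": "pop-nyc-1", "healthy": False},  # failed spot is bypassed
    {"name": "pop-nyc-2", "healthy": True},
    {"name": "pop-bos-1", "healthy": True},
]
print(route_request("GET /", instances))  # -> GET / handled by pop-nyc-2
```

Because the fabric is global, the candidate list is not limited to one POP, which is why multiple simultaneous failures need not cause a local outage.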

At McAfee Enterprise, security is not an afterthought. From the start, the architecture was designed with zero trust in mind. Services are segmented from one another and follow the least-privilege principle when resources need to be shared between services. Industry-standard protocols and methodologies are used to enforce user and identity access management (UAM/IAM). Strong role-based access controls (RBAC) across the platform keep customers’ data separate and provide self-defense if a service is compromised. None of these features matter if the software is vulnerable, so McAfee Enterprise follows one of the strictest Software Development Life Cycle (SDLC) processes in the industry to eliminate known vulnerabilities and threats in our software as it is written.

Another aspect of security that is gaining momentum these days is data privacy. This is at the forefront of all feature designs in MVISION UCE. Usually, data privacy means tokenization or anonymization of customer private data stored in MVISION UCE, be it logs or other metadata. At McAfee Enterprise, we strive to take this a step further: we do not retrieve private data from the customer environment if it can be avoided. For example, to evaluate a policy that involves customer-premises data, UCE can offload the evaluation to a component on the customer premises. Case in point: McAfee Client Proxy (MCP), installed on the user’s machine, can perform a policy evaluation and avoid sending private data to the cloud. The McAfee Enterprise cloud leverages the results of that evaluation to complete the policy execution. Where this is not possible, private data is anonymized at the earliest entry point in the cloud to minimize data leaks.

Last but not least, a chain is only as strong as its weakest link, and physical data center security must also be considered. Global partners are selected only after careful evaluation of the facilities and infrastructure that will host our data centers, while other vendors in this space work with a larger set of less rigorously qualified regional partners to increase their presence. The McAfee Enterprise approach provides the necessary guardrails against supply chain attacks that our customers demand.

There are other architectural gems hidden within UCE, and failing to mention them would make this article incomplete. First, the policy engine is exposed in the form of code with which the customer can construct complex policies without being constrained by what the UI provides. If you are a user of MVISION UCE, you can see this in action by enabling “Code View” in the Web Policy tree. If you do not like the way policy nodes are ordered in the tree or the evaluations made by default, you can take complete control and process the traffic in any manner you wish. The policy is flexible enough, for example, that one can write a policy to process traffic in one region and store logs in another.
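To illustrate the idea of a policy tree expressed as reorderable code, here is a toy first-match rule engine. This is not the actual Code View syntax, only a sketch of the concept, with made-up categories and actions:

```python
# Toy policy rules evaluated top to bottom; first match wins.
# Reordering this list is the code-level equivalent of reordering
# policy nodes in a policy tree.
rules = [
    ("block",   lambda req: req["category"] == "high-risk"),
    ("isolate", lambda req: req["category"] == "uncategorized"),
    ("allow",   lambda req: True),  # default fall-through
]

def evaluate(req):
    """Return the action of the first rule whose condition matches."""
    for action, condition in rules:
        if condition(req):
            return action
    return "allow"

print(evaluate({"category": "high-risk"}))  # -> block
print(evaluate({"category": "news"}))       # -> allow
```

Because the rules are ordinary code, a customer is not limited to the default ordering or to the conditions a UI exposes.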

Second, policy evaluation can be distributed across various components, which allows evaluation at the earliest point in the network and avoids hauling all traffic to the cloud to apply policy. For example, if a sensitive document needs to be blocked due to data protection rules, the DLP agent running on the user’s machine can block it instead of hauling the traffic to the cloud for classification and blocking. This strategy reduces load on the cloud and consequently increases the scale at which we can process requests.
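A minimal sketch of this local-first evaluation, with a hypothetical endpoint-side classifier standing in for the DLP agent:

```python
def classify_locally(document):
    """Hypothetical endpoint-side check: True if the document is sensitive.
    A real DLP agent would use the same classifiers as the cloud engine."""
    return "CONFIDENTIAL" in document

def handle_upload(document):
    """Evaluate policy at the earliest point in the network: block on
    the endpoint, and only forward clean traffic to the cloud."""
    if classify_locally(document):
        return "blocked on endpoint"   # no traffic hauled to the cloud
    return "forwarded to cloud"

print(handle_upload("CONFIDENTIAL roadmap"))   # -> blocked on endpoint
print(handle_upload("public press release"))   # -> forwarded to cloud
```

Only the traffic that passes the local check consumes cloud capacity, which is where the scale benefit described above comes from.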

Lastly, all services are automated and require no manual intervention to provision a customer, unlike other vendors that require a support ticket to provision some features. Independent of where your account has been provisioned and where your preferred UI console resides, policies that you author are stored in a global policy system that is synchronized to all POPs around the world, giving you the flexibility to process traffic anywhere in the world.

To conclude, not all clouds are built equally. The architecture of a cloud is a matter of choices and tradeoffs. MVISION UCE implements a global cloud and puts customers in the driver’s seat through programmatic policies that are secure, scalable, and highly available.

To learn more about how MVISION UCE can help ensure your critical services are highly available in the cloud, watch this short video or visit our MVISION UCE page to get started.

The post Qualities of a Highly Available Cloud appeared first on McAfee Blog.

Legendary Entertainment Relies on MVISION CNAPP Across Its Multicloud Environment

By McAfee Enterprise

Becoming a cloud-first company is an exciting and rewarding journey, but it’s also fraught with difficulties when it comes to securing an entire cloud estate. Many forward-thinking companies that have made massive investments in migrating their infrastructure to the cloud are facing challenges with their cloud-native applications. These range from inconsistent security across cloud properties to lack of visibility into the public cloud infrastructure where cloud-native applications are hosted, and more. All of these issues can create vulnerabilities in a sprawling attack surface that can potentially be exploited by cybercriminals.

Legendary Entertainment is a global media company with multiple divisions including film, television, digital studios, and comics. Under the guidance of Dan Meacham, VP of Global Security and Corporate Operations and CSO/CISO, the multi-billion dollar organization transitioned from on-premises data centers to the cloud in 2012.

Meacham points out that it’s been a source of great pride for his security and IT teams to always be “on top of the latest and greatest” technology trends—and migration to the cloud is no exception. That’s why his interest was sparked when he learned about the rollout of the MVISION security product line early in the migration process. Its cloud-native, open architecture was exactly the right fit for Legendary Entertainment’s environment.

The challenges of securing a multi-cloud environment

As a cloud-first organization, Legendary Entertainment encountered challenges that are common to many companies that have migrated their workloads, applications, and data assets to the cloud. At first, the organization attempted to rely on security services natively provided by the individual cloud service providers: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Wasabi for cloud storage. As Meacham notes, “The security from one vendor doesn’t trickle over to the others. They all have different security controls, so our cloud security was not uniform, and security management was complicated.”

Lack of visibility

In their disparate multicloud environment spanning several cloud service providers, it became time-consuming and difficult to monitor and assess the security posture of applications and workloads, such as which systems needed patching or contained critical vulnerabilities.

Inconsistent security policies

With multiple management consoles required for its many cloud environments, applying and enforcing uniform security policy across their cloud estate was nearly impossible without investing a lot of time, effort, and resources.

Risky Shadow IT

Another problem in Legendary Entertainment’s early adoption of cloud-first was shadow IT, where employees or contractors enrolled in cloud collaboration platforms that were not authorized by IT. Although the shadow IT platforms were not connected to core systems, they made it more difficult to tightly monitor data, which sometimes caused cloud-enabled applications to violate security policies. It is understandable that teams with a cloud-first mindset would embrace innovation and new collaborative experiences to accomplish goals faster. However, some shadow IT applications have weak or no security controls, creating opportunities for external collaborator accounts to be compromised or for privileges to be mismanaged.

Unacceptable levels of risk

With high-profile data breaches in the entertainment industry in recent headlines, Legendary Entertainment was concerned about its level of risk and exposure, especially since it has valuable intellectual property such as scripts and marketing strategy plans for film releases among its holdings. The requirement for stronger security has been a boardroom-level conversation at digital media companies since the Sony Pictures hack and other vendor supply chain and workflow hacks. Attacks now extend beyond data leaks and can cause far-reaching business disruptions across an entire supply chain.

How MVISION CNAPP creates a consistent, compliant cloud security posture

By deploying MVISION™ Cloud Native Application Protection Platform (MVISION CNAPP), Legendary Entertainment addressed all of these challenges at once. This unique solution prioritizes alerts and defends against the latest cloud threats and vulnerabilities. MVISION CNAPP combines granular application and data context with cloud security posture management and cloud workload protection in a single-console solution.

Unparalleled visibility

MVISION CNAPP provides Legendary Entertainment with broad and deep visibility across its entire infrastructure. It discovers all of the company’s cloud assets, including compute resources, containers, and storage, and provides continuous visibility into vulnerabilities and security posture for applications and workloads running across multiple clouds.

Thanks to MVISION CNAPP, Meacham’s team can write, apply, and enforce security policies in a consistent fashion for the entire cloud estate. As Meacham points out, policy is continually checked so his team can correct any misconfigurations, disable services, or remove escalated privileges until corrections are made in alignment with internal compliance rules. And in many cases, the remediation can be automated internally in MVISION CNAPP or through workflow initiations.

“MVISION CNAPP gives me manageability and security uniformity for all our cloud platforms so that I can elevate the level of security and make it consistent across the board. Now that I have visibility into all our cloud assets from a high level, I can look at how current controls and configurations compare to our best practices, industry best practices, and to the best practices of peers who are using the same product. Without MVISION CNAPP, management is one to one, whereas with MVISION CNAPP, it’s one to many,” explains Meacham.

The Cloud Security Posture Management (CSPM) component of MVISION CNAPP provides Legendary Entertainment with on-demand scanning, which looks at all services used in the public cloud and checks their security settings against internal benchmarks. “This gives us a security posture score and provides feedback on what we can do to bring ourselves back into compliance,” observes Meacham. “If someone changes a configuration, we get an alert right away. And if it’s not in alignment with policy, we can roll it back to the previous settings. MVISION CNAPP also helps us remediate policy exceptions by clearly stating the risks, the instances impacted, and the necessary step-by-step actions needed for resolution.”
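As an illustration of the scoring idea Meacham describes (not the actual MVISION CNAPP implementation), a posture score can be computed as the fraction of benchmark checks that pass. The check names below are hypothetical:

```python
# Illustrative sketch: compute a security posture score from benchmark
# check results, the way a CSPM scan compares settings against benchmarks.

def posture_score(check_results):
    """Return (score, failed_checks) for a list of (check_name, passed) pairs."""
    if not check_results:
        return 100.0, []
    passed = sum(1 for _, ok in check_results if ok)
    failed = [name for name, ok in check_results if not ok]
    return round(100.0 * passed / len(check_results), 1), failed

# Hypothetical scan results for a handful of public-cloud services
results = [
    ("s3-buckets-not-public", True),
    ("storage-encryption-at-rest", True),
    ("no-wildcard-iam-policies", False),
    ("db-firewall-configured", True),
]
score, failures = posture_score(results)
print(score, failures)  # 75.0 ['no-wildcard-iam-policies']
```

The failed-check list is what drives the “feedback on what we can do to bring ourselves back into compliance” in the quote above.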

Banishing Shadow IT

MVISION CNAPP also ensures that Legendary Entertainment’s developers operate in a secure environment by alerting the security team when their actions violate security policies or increase the risk of a data breach. This effectively puts a halt to Shadow IT.

“MVISION CNAPP helps me keep my system administrators and developers accountable for what they are doing. We can make sure that they are consistent in how they execute, deploy, and build things. Configuration policies, on-demand scans, and different types of checks in MVISION CNAPP can help force that compliance. I am able to keep tabs on my developers to make sure they are operating according to these guidelines in any platform,” remarks Meacham.

Risk reduction through contextual entitlements

MVISION CNAPP reduces risk associated with operating in the cloud, enabling Legendary Entertainment to run mission-critical applications and develop blockbuster movies such as “The Dark Knight Rises” and “Dune” securely across a heterogeneous multicloud environment. The solution also enables contextual entitlements so that users can be identified and assigned selective access to and permissions for applications and resources based on the security profile of the devices they are using at any given time.

Data protection with user and entity behavior analytics (UEBA)

Legendary Entertainment leverages MVISION CNAPP’s data loss prevention (DLP) capabilities to monitor activity in cloud data stores in order to help prevent data breaches. Unusual or suspicious activity, or unauthorized movement of data in transit, is tracked and flagged immediately by leveraging built-in UEBA capabilities.

“If I see 2,000 files change in 30 seconds, that’s a huge red flag indicating ransomware or some other type of attack. The solution’s monitoring tool detects suspicious behavior and immediately brings that to our awareness. If we see something like that happening on multiple platforms, we know that immediate action is required. The UEBA capability is invaluable for identifying external collaborators who may have compromised accounts, which we find on a regular basis.”
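The threshold rule Meacham describes can be sketched as a simple sliding-window counter. This is only an illustration of the counting idea: real UEBA engines model per-user behavioral baselines rather than a single fixed threshold.

```python
# Sketch of a burst detector: flag a burst of file modifications
# (e.g., 2,000 changes in 30 seconds) as a possible ransomware indicator.
from collections import deque

class BurstDetector:
    def __init__(self, max_events=2000, window_seconds=30):
        self.max_events = max_events
        self.window = window_seconds
        self.events = deque()  # timestamps of recent file-change events

    def record(self, timestamp):
        """Record a file-change event; return True if the burst threshold is hit."""
        self.events.append(timestamp)
        # Drop events that have aged out of the window
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.max_events

detector = BurstDetector(max_events=5, window_seconds=30)  # small numbers for demo
alerts = [detector.record(t) for t in [0, 1, 2, 3, 4]]
print(alerts[-1])  # True: 5 events within 30 seconds
```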

Learn more

If you are looking for a simple-to-manage, high-visibility solution to secure your multicloud environment against the latest threats and vulnerabilities such as ChaosDB, take a look at MVISION CNAPP. For more information, visit: https://www.mcafee.com/enterprise/en-us/solutions/mvision-cnapp.html.

 

The post Legendary Entertainment Relies on MVISION CNAPP Across Its Multicloud Environment appeared first on McAfee Blog.

Realize Your SASE Vision with Security Service Edge and McAfee Enterprise

By Sadik Al-Abdulla

Many people are excited about Gartner’s Secure Access Service Edge (SASE) framework and the cloud-native convergence of networks and security. While originally proposed as a fully unified architecture delivering network and security capabilities, the reality soon dawned that the enterprise transition to a complete SASE model would be a decade-long journey due to factors such as existing investments, customer operational silos, and vendor consolidation. Consequently, Gartner introduced a new two-vendor approach to SASE that brought together a highly converged WAN Edge Infrastructure platform alongside a highly converged security platform – known as Security Service Edge (SSE).

Figure 1: SASE convergence.

SSE brings together Secure Web Gateway (SWG), Cloud Access Security Broker (CASB), and Zero Trust Network Access (ZTNA) to secure access to the web, cloud services, and private applications, resulting in reduced risk, cost and complexity. McAfee Enterprise has long been a proponent of this approach: we embarked on a project to build the industry’s best SASE security solution over three years ago, introduced our MVISION Unified Cloud Edge solution in early 2020, and have since continued to innovate and set the standard for the Security Service Edge space.

How Did We Get Here?

The fundamental problem that SSE sets out to solve is that enterprises must adequately secure their personnel and their data. This became increasingly difficult as digital transformation spurred widespread cloud adoption and empowered remote and mobile workers. Just a few short years ago we would talk about remote access for short periods of time due to travel, and typically for a small proportion of the workforce. Today we speak in the context of COVID-19 and a vast, permanent “Work From Anywhere” (WFA) cultural shift. Supporting this shift is an accelerated migration into the cloud, where the vast majority of workloads and applications will soon reside.

All of this has taken down the walls that formed the perimeter we relied on heavily in the past. Today our people and our data are outside of that perimeter but inside of cloud applications. Cloud applications run from many locations, sometimes around the globe. Yet our objectives must remain the same. We still must secure our people, we must secure our devices, and we must secure our data on any device, at any time, using any service.

Secure web gateways were one of the gatekeepers to the old perimeter, fundamentally appliances that existed at the border of a network. Cloud access security brokers (CASB) were fundamentally built to secure the inside of cloud services. Virtual Private Networks (VPNs) enabled you to securely interconnect offices and remote users onto a single network. Managing these technologies separately became increasingly problematic as the boundaries between networks, the web, and the cloud began to blur. Organizational policies and compliance requirements must be translated to the administrative setup of a specific vendor’s management consoles. At first pass, this results in more errors in the implementation of these policies. Maintenance is difficult as policy changes must be rolled out and implemented within multiple vendor management interfaces. And when you position these traditional technologies against the problem statement of a “perimeterless” world, they fail. The logical answer to these problems is to converge these technologies together and bring them to the cloud.

The Power of Unification

For more than three years, McAfee Enterprise has invested deeply in a unified policy framework. We’ve unified threat engines and data engines, and we’ve built a unified user experience and a unified administrative experience to deliver on the promise of cloud-native security.

A closely integrated SSE infrastructure can address the management challenge of setting up policies in multiple vendor management interfaces by deeply integrating security controls to reduce overhead, complexity, and cost while increasing performance. But looking at the competitive landscape, this has proven to be easier said than done. Many fall short when it comes to securing data within the cloud, but McAfee Enterprise’s industry-leading Multi-Vector Data Protection capabilities make it easy to keep data safe no matter where it resides, with unified data classification, policy enforcement, and incident management.

Figure 2: McAfee Enterprise Multi-Vector Data Protection.

Other vendors grew up in the cloud but fall short when it comes to connecting to the private resources all large enterprises still use today. Some vendors are attempting to build out the entire SSE product set from scratch, perhaps as part of a larger SASE offering. Most of these offerings deliver only baseline functional capability, coupled with the considerable instability of a complex and very new product.

The McAfee Enterprise Security Service Edge Vision

McAfee Enterprise has planned and executed a multi-year strategy that takes MVISION Unified Cloud Edge’s complete set of converged SSE security services and ties them closely to other highly integrated network services, such as those offered by SD-WAN vendors, to implement SASE. This approach lets most large enterprises pull a SASE architecture together using the technology partners and infrastructure they already have in place.

Figure 3: Enable secure access to web, cloud, and private apps with MVISION Unified Cloud Edge.

The increased efficiency of an integrated environment reduces the investment in administration, enhances the precision of policy enforcement, and improves the speed with which security control processes can be applied to data and activity in one single pass, improving security efficiency and efficacy. This earlier published blog demonstrates how our integration of Remote Browser Isolation (RBI) greatly improves security protection in a seamless, cost-effective manner.

Figure 4: MVISION Unified Cloud Edge threat protection stack with integrated Remote Browser Isolation.

The convergence and integration of cloud security technologies such as SWG, CASB, ZTNA, DLP, RBI and FWaaS substantially enhance operations, reduce cost, minimize errors, and enable more precise enforcement of organizational policy and management. Expenses are lower as experts in the administration and management of separate security controls are no longer required.

In conclusion, McAfee Enterprise has delivered the best and most rapid path to a comprehensive integrated SSE offering available in the market. Our Unified Cloud Edge (UCE) architecture completes that vision of unified and completely integrated policy management today. MVISION UCE is the security fabric that delivers data and threat protection to any location so you can enable fast and secure direct-to-internet access for your distributed workforce. This results in a transformation to a cloud-delivered SSE that converges with connectivity to reduce cost and complexity while increasing the speed and flexibility of your workforce.

Click here to see more about how MVISION Unified Cloud Edge can get you on the fastest route to SASE, or visit the MVISION UCE homepage to learn more and contact us to get started on your journey.

The post Realize Your SASE Vision with Security Service Edge and McAfee Enterprise appeared first on McAfee Blog.

What to Expect from the Next Generation of Secure Web Gateways

By Thyaga Vasudevan

After more than a century of technological innovation since the first units rolled off Henry Ford’s assembly lines, automobiles and transportation have little in common with the Model T era. This evolution will continue as society finds better ways to achieve the same outcome: moving people from point A to point B.

While Secure Web Gateways (SWGs) have operated on a far more compressed timetable, a similarly drastic evolution has taken place. SWGs are still largely focused on ensuring users are protected from unsafe or non-compliant corners of the internet, but the transition to a cloud and remote-working world has created new security challenges that the traditional SWG is no longer equipped to handle. It’s time for the next generation of SWGs that can empower users to thrive safely in an increasingly decentralized and dangerous world.

How We Got Here

The SWG actually started out as a URL filtering solution that enabled organizations to ensure that employees’ web browsing complied with corporate internet access policy.

URL filtering then transitioned to proxy servers sitting behind corporate firewalls. Since proxies terminate traffic coming from users and complete the connection to the desired websites, security experts quickly saw the potential to perform more thorough inspection than just comparing URLs to existing blacklists. By incorporating anti-virus and other security capabilities, the “Secure Web Gateway” became a critical part of modern security architectures. However, the traditional SWG could only play this role if it was the chokepoint for all internet traffic, sitting at the edge of every corporate network perimeter and having remote users “hairpin” back through that network via VPN or MPLS links.

Next-Generation SWG

The transition to a cloud and remote-working world has put new burdens on the traditional perimeter-based SWG. Users can now directly access IT infrastructure and connected resources from virtually any location from a variety of different devices, and many of those resources no longer reside within the network perimeter on corporate servers.

This remarkable transformation also expands the requirements for data and threat protection, leaving security teams to grapple with a number of new sophisticated threats and compliance challenges. Unfortunately, traditional SWGs haven’t been able to keep pace with this evolving threat landscape, resulting in an inefficient architecture that fails to deliver the potential of the distributed workforce.

Just about every major breach now involves sophisticated multi-level web components that can’t be stopped by a static engine. The traditional SWG approach has been to coordinate with other parts of the security infrastructure, including malware sandboxes. But as threats have become more advanced and complex, doing this has resulted in slowing down performance or letting threats get through. This is where Remote Browser Isolation (RBI) brings in a paradigm shift to advanced threat protection. When RBI is implemented as an integral component of SWG traffic inspection, it can deliver real-time, zero-day protection against ransomware, phishing attacks, and other advanced malware so that even the most sophisticated threats can’t get through, without hindering the browsing experience.

Another issue with most traditional SWGs is that they aren’t able to sufficiently protect data as it flows from distributed users to cloud apps, due to lacking advanced data protection and cloud app intelligence. Without Data Loss Prevention (DLP) technology that is advanced enough to understand the nature of cloud apps and to keep up with evolved safety demands, organizations can find data protection gaps in their SWG solutions that keep them vulnerable to risks.

Finally, there is the question of cloud applications. While cloud applications operate on the same internet as traditional websites, they function in a fundamentally different way that traditional SWGs simply can’t understand. Cloud Access Security Brokers (CASBs) are designed to provide visibility and control over cloud applications, and if the SWG doesn’t have access to a comprehensive CASB application database and sophisticated CASB controls, it is effectively blind to the cloud. It’s only a cloud-aware SWG with integrated CASB functionality that can extend data protection to all websites and cloud applications, empowering organizations and their users to be better protected against advanced threats.

What we need from Next-Gen SWGs

Figure: Next-Generation Secure Web Gateway Capabilities

A next-gen SWG should help simplify the implementation of Secure Access Service Edge (SASE) architecture and help accelerate secure cloud adoption. At the same time, it needs to provide advanced threat protection, unified data control, and efficiently enable a remote and distributed workforce.

Here are some of the use cases:

  • Enable a remote workforce with a direct-to-cloud architecture that delivers 99.999% availability. As countries and states slowly came out of the shelter-in-place orders, many organizations indicated that supporting a remote and distributed workforce will likely be the new norm. Keeping remote workers productive, data secured, and endpoints protected can be overwhelming at times. A next-gen SWG should provide organizations with the scalability and security to support today’s remote workforce and distributed digital ecosystem. A cloud-native architecture helps ensure availability, lower latency, and maintain user productivity from wherever your team is working. A true cloud-grade service should offer five nines (99.999%) availability consistently.
  • Reduce administrative complexity and lower cost – Today, with increased cloud adoption, more than 80% of traffic is destined for the internet. Backhauling internet traffic through a traditional “hub and spoke” architecture that requires expensive MPLS links can be very costly. The network slows to a crawl as traffic spikes, and VPNs for remote workers have proven ineffective. A next-gen SWG should support the SASE framework and provide a direct-to-cloud architecture that lowers total operating costs by reducing the need for expensive MPLS links. With a SaaS delivery model, next-gen SWGs remove the need to deploy and maintain hardware infrastructure, reducing hardware and operating costs while increasing performance, reliability, and scalability.
  • Lock down your data, not your business – More than 95% of companies today use cloud services, yet only 36% of companies can enforce DLP rules in the cloud at all. Additionally, most traditional SWGs are not able to sufficiently protect data as it flows from distributed users to cloud applications, due to the lack of advanced data protection and cloud app intelligence. A next-gen SWG should offer a more effective way to enforce protection with built-in Data Loss Prevention templates and in-line data protection workflows to prevent restricted data from flowing out of the organization. Device-to-cloud data protection offers comprehensive data visibility and consistent controls across endpoints, users, clouds, and networks. With built-in DLP technology, next-gen SWGs ensure organizations remain compliant with corporate security policies, as well as industry and government regulations.
  • Defend against known and unknown threats – As the web continues to grow and evolve, web-borne malware attacks also grow and evolve, beyond the protection that traditional SWGs can provide. Ransomware, phishing, and other advanced web-based threats are putting users and endpoints at risk. A next-gen SWG should feature the most advanced integrated security controls, including global threat intelligence and sandboxing, so that even the most sophisticated threats can’t get through. A next-gen SWG with threat protection solutions that work together is able to ensure consistent policies, data protection, and visibility across isolated and non-isolated traffic. A next-gen SWG should also include integrated Remote Browser Isolation to prevent unknown threats from ever reaching the endpoints.

SWGs have clearly come a long way from just being URL filtering devices to the point where they are essential to furthering the safe and accelerated adoption of the cloud. But we need to push the proverbial envelope a lot further. Digital transformation demands nothing less.

 

The post What to Expect from the Next Generation of Secure Web Gateways appeared first on McAfee Blog.

How MVISION CNAPP Helps Protect Against ChaosDB

By Rich Vorwaller

Attackers have made it known that Microsoft is clearly in their crosshairs when it comes to potential targets. Just last month the US Justice Department disclosed that Solorigate continues to compromise security, confirming that over 80% of Microsoft email accounts were breached across four different federal prosecutors’ offices. In August Microsoft released another security patch (the second of two) for PrintNightmare, which allows remote attackers system-level privilege escalation on all Windows clients and servers. Since Microsoft still holds the dominant market share for desktop operating systems and email/office services, along with the second-largest market share in cloud computing, any security vulnerability found within the Microsoft ecosystem has cascading effects across the board.

Based on this, we wanted to let our customers know our response to the latest Microsoft security vulnerability. On August 12, Microsoft confirmed a security vulnerability dubbed ChaosDB whereby attackers can download, delete, or modify all data stored within the Azure Cosmos DB service. In response to the vulnerability, Microsoft has since disabled the feature that can be exploited and notified potentially affected customers. However, the research team that identified the vulnerability believes the actual number of affected customers is much higher, with the potential to expose thousands of companies dating back to 2019.

Cosmos DB is Microsoft’s fully managed NoSQL database service hosted on Azure, which boasts customers such as Mars, Mercedes-Benz, and Chipotle. The ChaosDB vulnerability affects customers that use the Jupyter Notebook feature. This built-in feature allows customers to share data visualizations and narrative text based on the data stored in Cosmos DB. Unfortunately, the Jupyter Notebook feature has been enabled by default for customers since February 2021, and fixing the vulnerability is no easy task. Because the vulnerability exposes primary keys that can be used to access other Cosmos databases, the resolution requires that customers manually rotate their Cosmos DB primary keys – which are typically long-lived and used across multiple services or applications.

For customers using Cosmos DB, we highly recommend following Microsoft’s guidance and rotating their keys, but we also recognize that business can’t stop, and unless you’ve automated key rotation, that task may take time and coordination across multiple teams. This blog will provide some assistance on how one of our newest services can help identify and mitigate ChaosDB.
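At a high level, the zero-downtime rotation pattern for a primary/secondary key pair looks like the sketch below. The CosmosAccount class and app records here are illustrative stand-ins, not the Azure SDK; the point is the sequence: move consumers to the secondary key, regenerate the primary, then move them back.

```python
# Hypothetical sketch of primary-key rotation for a service with a
# primary/secondary key pair. Not the Azure SDK: a stand-in for illustration.

class CosmosAccount:
    def __init__(self):
        self.keys = {"primary": "key-A", "secondary": "key-B"}
        self._counter = 0

    def regenerate(self, kind):
        """Replace one key with a freshly generated value."""
        self._counter += 1
        self.keys[kind] = f"key-new-{self._counter}"

def rotate_primary(account, apps):
    """Rotate the primary key without breaking the apps that use it."""
    # 1. Point every application at the secondary key
    for app in apps:
        app["key_in_use"] = account.keys["secondary"]
    # 2. Regenerate the primary key (the old primary stops working everywhere)
    account.regenerate("primary")
    # 3. Point applications back at the fresh primary key
    for app in apps:
        app["key_in_use"] = account.keys["primary"]

account = CosmosAccount()
apps = [{"name": "web", "key_in_use": "key-A"}]
rotate_primary(account, apps)
print(apps[0]["key_in_use"])  # the newly generated primary key
```

The coordination cost mentioned above comes from step 1: every service or application holding the old key has to be repointed before the regeneration.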

MVISION Cloud Native Application Protection Platform (CNAPP) is a new service we launched this year that provides complete visibility and security for services and applications built on top of cloud-native solutions. MVISION CNAPP helps customers secure the underlying platforms used to build applications, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, and also provides complete build and runtime protection for applications using virtual machines, Docker, and Kubernetes.

As part of this service, MVISION CNAPP has a feature called the custom policy builder. The custom policy builder is a great way for customers to audit services across their entire cloud environment in real time to identify risky configurations, but it can also be used to tailor a policy to the customer’s unique environment based on several API properties.

How does the custom policy builder work? Once MVISION CNAPP is connected to a customer’s AWS, Azure, or GCP account, the custom policy builder will list all the supported services within each cloud platform. Along with all the supported services, the custom policy builder will also list all the available API attributes for each of those services – attributes that customers can use as triggers for creating security incidents and automatic responses. A good example of the capability would be: “if MVISION CNAPP identifies a public Amazon S3 bucket, perform a scan of the bucket objects to identify any sensitive data and alert teams via an SNS notification.” When new vulnerabilities like ChaosDB hit the wire, the custom policy builder is purpose-built to help customers identify and understand their exposure to anything new.
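Conceptually, an attribute-triggered policy like the S3 example can be modeled as a set of conditions matched against resource attributes. The sketch below uses made-up attribute names and sample records for illustration, not the actual MVISION CNAPP schema:

```python
# Illustrative sketch of an attribute-triggered policy rule: match resources
# on API attributes; matches would then fire the rule's response action.

def evaluate(rule, resources):
    """Return the resources whose attributes match every condition in the rule."""
    return [
        r for r in resources
        if all(r.get(attr) == expected for attr, expected in rule["conditions"].items())
    ]

rule = {
    "name": "public-s3-bucket",
    "conditions": {"service": "s3", "acl": "public-read"},
    "response": "scan-objects-and-notify-sns",  # hypothetical action label
}
resources = [
    {"id": "bucket-1", "service": "s3", "acl": "private"},
    {"id": "bucket-2", "service": "s3", "acl": "public-read"},
]
matches = evaluate(rule, resources)
print([r["id"] for r in matches])  # ['bucket-2']
```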

So how can CNAPP help identify if you’re at risk for ChaosDB? Essentially, you’ll want to answer three questions to understand your risk:

  • Are we using Cosmos DB?
  • If so, do our Cosmos databases have unrestricted access?
  • If an attacker did have access to our Cosmos DB keys, what level of access would they have with those keys?

To find answers to these questions, I’ll show how you can create several custom policies using the MVISION CNAPP custom policy builder, but you can combine and mix these rules based on your needs.

In the first example, I’m going to answer the first two questions: whether we’re running Cosmos DB and whether the service has unrestricted network access. Under the MVISION CNAPP menu I’ll click on Policy | Configuration Audit | Actions | Create Policy. From there I’ll give my policy a name and select Microsoft Azure | Next. The custom policy builder automatically prepopulates all the available services in Azure when I click Select Resource Type. Select Azure Cosmos DB, and the custom policy builder will show all the available API attributes for that service. Search for the property properties.publicNetworkAccess, set the condition to equals Enabled, and assign a severity level. Click Test Rule, and the custom policy builder will check whether you’re running any Cosmos databases that allow access from any source.

Figure 1: Custom Policy Builder Screenshot

If the results of the custom policy show any incidents where Cosmos DB has unrestricted access, you’ll want to immediately change that setting by Configuring an IP firewall in Azure Cosmos DB.

Now let’s see if we have any Cosmos databases where we haven’t set firewall rules. These rules can be based on a set of IP addresses or private endpoints and should have been set when you created the databases, but let’s confirm. You’ll follow the same steps as before, but select the following criteria for the policy using AND statements:

  • ipRangeFilter is not set
  • virtualNetworkRules is not set
  • privateEndpointConnections is not set

Figure 2: Custom Policy Builder Screenshot 2

If you see any results from the custom policy, you’ll want to review the IP address and endpoints to make sure you’re familiar with access from those sources. If you’re not familiar with those sources or the sources are too broad, follow Configuring an IP firewall in Azure Cosmos DB to make the necessary changes.
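The three AND-ed criteria above amount to flagging any database where none of the network-restriction properties is set. A minimal sketch, applied to made-up records using the Azure property names referenced above:

```python
# Sketch of the combined "is not set" check on sample Cosmos DB records.

def no_firewall_rules(db):
    """True when none of the three network-restriction properties is set."""
    return (
        not db.get("ipRangeFilter")
        and not db.get("virtualNetworkRules")
        and not db.get("privateEndpointConnections")
    )

databases = [
    {"name": "orders-db", "ipRangeFilter": "10.0.0.0/24"},
    {"name": "legacy-db"},  # nothing set: flagged for review
]
flagged = [db["name"] for db in databases if no_firewall_rules(db)]
print(flagged)  # ['legacy-db']
```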

Finally, let’s show how MVISION CNAPP can audit what would be possible if your keys were exposed. In general, database keys are issued to applications so they can access data. Rarely would you issue keys to make configuration changes or write changes to your database services. If you granted keys that can make changes, you may have issued an overly permissive key. Eventually you’ll want to regenerate those keys, but in the meantime let’s identify whether the keys can make write changes.

We’ll follow the same procedure as before, but use the property properties.disableKeyBasedMetadataWriteAccess with a condition of equals to false.

Figure 3: Custom Policy Builder Screenshot 3

Like in the previous examples, if you find any results here that show you’ve issued keys that can make write changes, you’ll want to disable the feature by following Disable key based metadata write access.
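Putting the three questions together, a single check can classify each database by the ChaosDB-related risks discussed above. The records below are illustrative; the property names follow the Azure API attributes referenced in this post:

```python
# Sketch combining the three ChaosDB checks on a sample record.

def chaosdb_risks(db):
    """Return the list of ChaosDB-related risk labels for one database record."""
    risks = []
    if db.get("publicNetworkAccess") == "Enabled":
        risks.append("unrestricted-network-access")
    if not (db.get("ipRangeFilter") or db.get("virtualNetworkRules")
            or db.get("privateEndpointConnections")):
        risks.append("no-firewall-rules")
    if db.get("disableKeyBasedMetadataWriteAccess") is False:
        risks.append("keys-allow-write-access")
    return risks

db = {
    "name": "analytics-db",
    "publicNetworkAccess": "Enabled",
    "ipRangeFilter": "",
    "disableKeyBasedMetadataWriteAccess": False,
}
print(chaosdb_risks(db))
# ['unrestricted-network-access', 'no-firewall-rules', 'keys-allow-write-access']
```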

Our custom policy builder is just one of the many features we’ve introduced with MVISION CNAPP. I invite you to check out the solution by visiting http://mcafee.com/CNAPP for more information or request a demo at https://mcafee.com/demo.

The post How MVISION CNAPP Helps Protect Against ChaosDB appeared first on McAfee Blog.

Remote Browser Isolation: The Next Great Security Technology is Finally Attainable

By Tony Frum

Security professionals and technologists old enough to remember renting movies at Blockbuster on Friday nights likely also remember a time when the internet was a new phenomenon full of wonder and promise.  These same individuals probably view it through a more skeptical lens seeing it now as a cesspool of malware and great risk.  It’s also widely understood that no web security solution can offer perfect protection against the metaphorical minefield that is the internet.  This last statement, however, is being challenged by a new technology that is grasping at the title of perfect web security.  This mythical technology is Remote Browser Isolation, or RBI, and it can be argued that it does, in fact, provide its users with invincibility against web-based threats.

Remote Browser Isolation changes the playbook on web security in one very fundamental way: it doesn’t rely on detecting threats.  When a user tries to browse to a website, the RBI solution instantiates an ephemeral browser in a remote datacenter which loads all the requested content.  The RBI solution then renders the website into a dynamic visual stream that enables the user to see and safely interact with it.

Figure 1: How Remote Browser Isolation works.

User behavior can be controlled at a granular level, preventing uploads, downloads, and even copy & paste using the local clipboard.  When properly configured, absolutely none of the content from the requested site is loaded on the local client.  For this reason, it can be argued that it’s literally impossible for malware to be delivered to the local client.  Of course, the RBI solution’s ephemeral browser instance may be compromised, but it will be fully isolated from the organization’s valuable assets and data, rendering the attack harmless.  As soon as the user closes their local browser tab, the ephemeral browser is destroyed.

The value of this cannot be overstated.  The world is increasingly conducting its affairs through web browsers, and the challenge of detecting threats continues to increase at an exponential rate.  While there is great efficacy and value in the threat intelligence and malware detection capabilities of today’s web security solutions, the “cat & mouse” game being played with cybercriminals means that they’re simply never going to offer perfect protection.  Attackers often use zero-day threats coupled with domains registered perhaps within the past few minutes to compromise their victims, and these methods will too often succeed in circumventing any detection-based security measures.  The game-changing efficacy of RBI, and the fact that it was introduced more than 10 years ago, should bring an obvious question to mind: if it’s so great, why doesn’t every organization in the world use RBI today?  There are a few relevant answers to this, but one rises above all the rest: cost.

RBI’s method of instantiating remote web browsers for all users precludes the possibility of any implementation that is not expensive to deliver.  Consider the size of a modern enterprise, the number of users, the number of web browser tabs an average user keeps open, and then consider the amount of memory and CPU consumed by each of those tabs.  To mirror these resources in a remote datacenter will always be a costly proposition.  For this reason, many RBI solutions on the market today may literally consume the entire security budget allocated for each licensed user.  As prevalent as web-based threats are today and as effective as RBI’s protection may be, no security organization can dedicate most or all of their security budget to a single technology or even a single threat vector.

To better understand the cost problem and how it may be solved, let’s take a closer look at the two most common use cases for RBI.  The first and most common use case, known as selective isolation, is handling uncategorized sites or sites with unknown risk.  As mentioned before, attackers will often use a site that was registered very recently to deliver their web-based threats to victims.  Therefore, organizations often want to block any site that has not yet been categorized by their web security vendor.  The problem is that many legitimate sites can be uncategorized, resulting in unnecessary blocking that may impact business.  Managing such a policy is very tedious, and the user experience tends to suffer greatly.  RBI is an ideal solution to this problem, granting users access to these sites while maintaining a high level of security.  This situation calls for a selective use of RBI in which trusted sites are filtered through more traditional means while only unknown or high-risk sites are isolated.
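A selective-isolation policy like the one just described can be sketched as a simple routing decision. The category names, risk threshold, and verdict labels below are illustrative assumptions, not any vendor's actual policy engine:

```python
from typing import Optional

def route_request(category: Optional[str], risk_score: int) -> str:
    """Return the handling verdict for a web request under selective isolation."""
    if category in {"malware", "phishing"}:
        return "block"        # confirmed bad: no need to isolate
    if category is None or risk_score >= 70:
        # Uncategorized (possibly just-registered) or high-risk sites are
        # rendered in a remote browser instead of being blocked outright.
        return "isolate"
    return "allow"            # trusted traffic uses the local browser

print(route_request(None, 0))        # isolate
print(route_request("news", 10))     # allow
print(route_request("phishing", 95)) # block
```

The key design point is the middle branch: instead of forcing a block/allow choice on unknown sites, the policy has a third outcome that preserves both access and safety.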

The other common use case for RBI involves groups of high-risk users.  Consider C-level executives who have access to highly sensitive information relating to business strategies, intellectual property, and other information that must remain private.  Another common example is IT administrators who have elevated privileges that could be devastating if their accounts were compromised.  In these scenarios, organizations may look to isolate all of the traffic for these users, including even sites that are trusted.  Typically, this full isolation approach is reserved for only a subset of users who pose a particularly high risk if compromised.

In light of these two use cases, selective isolation and full isolation, let’s take a closer look at the cost of this invincibility-granting technology.  Let’s consider a hypothetical organization, Brycin International, which has a total of 10,000 users.  Brycin has identified 400 users who either have access to critical data or have elevated permissions and therefore require full-time isolation.  We will assume a street price of $100 per user for full-time isolation, totaling $40,000 for these users.  This seems like a reasonable cost considering the elevated risk a compromise would represent for any one of these users.  Brycin would also like to leverage selective isolation for the rest of the user population, or 9,600 users.  Some solutions may require purchasing a full license, but most offer a discounted license for selective isolation.  We will assume a generous discount of 60%, resulting in a cost of $40 per user, or $384,000 for the rest of the organization.  This gives us a total price tag of $424,000 for Brycin, or an average cost of $42.40 per user.
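The licensing arithmetic for this hypothetical example can be checked in a few lines (all prices and discounts are the assumed figures above, not real list prices):

```python
# Hypothetical Brycin International licensing math.
users_total = 10_000
full_isolation_users = 400
selective_users = users_total - full_isolation_users  # 9,600

full_price = 100                     # assumed street price per full-isolation user
selective_price = full_price * 0.4   # 60% discount -> $40 per user

full_cost = full_isolation_users * full_price       # $40,000
selective_cost = selective_users * selective_price  # $384,000
total_cost = full_cost + selective_cost             # $424,000

print(total_cost, total_cost / users_total)  # 424000.0 42.4
```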

Not only is this a steep cost for our 10,000-user enterprise, but the cost does not at all align with the value or the cost to deliver the solution.  The 9,600 selective isolation users may represent 96% of the user population, but when you consider the fact that only a small percentage of their web traffic will actually be isolated – state-of-the-art web threat security stacks can detect as much as 99% of all threats, leaving 1% of all traffic to be isolated – they generate perhaps less than 20% of the isolated web traffic.  The full isolation users, while a minority of the license count, will represent the bulk of the isolated web traffic – a little more than 80%.  However, despite the fact that selective isolation users are responsible for such a small share of all isolated traffic and given the generous 60% discounted licensing, they are still by far the largest expense at over 90% of the total solution cost!  This ratio of cost to value simply will not align with the budget and goals of most security organizations.

Figure 2: The disproportionate relationship between RBI users, traffic load, and solution cost.

McAfee Enterprise has now upended this unfortunate paradigm by incorporating remote browser isolation technology natively into our MVISION Unified Cloud Edge platform.  McAfee Enterprise offers two licensing options for RBI: RBI for Risky Web and Full Isolation.  RBI for Risky Web uses an algorithm built by McAfee Enterprise to automatically trigger browser isolation for any site McAfee Enterprise determines to be potentially malicious.  This is designed to address the most common use case, selective isolation, and it is included at no additional cost for any Unified Cloud Edge customer.  Additionally, Full Isolation licenses can be purchased as an add-on for any users that require isolation at all times.  These Full Isolation licenses allow you to create your own policy dictating which sites are isolated or not for these users.

Now, let’s revisit Brycin International’s cost to deliver enterprise-wide RBI if they chose McAfee Enterprise.  As we saw earlier, despite the fact the selective isolation users generated less than 20% of the traffic, they represented over 90% of the total cost of the solution.  With McAfee Enterprise’s licensing model, these users would not require any additional licenses at all, reducing this cost to zero!  Now, Brycin only has to consider the Full Isolation add-on licenses for their 400 high-risk users, or $40,000 – this is now the entire cost for the enterprise-wide RBI deployment.  While $100 per user still may exceed the per-user security budget for Brycin, it is now diluted by the total user population, reducing the per-user cost of the RBI deployment from $42.40 to only $4.  This is a tremendous reduction in cost for equal or greater value, making RBI much more likely to fit into Brycin’s budget and overall security plans.
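Revisiting the same figures side by side makes the per-user impact clear (the numbers come from the hypothetical example above, not actual pricing):

```python
users_total = 10_000

# Traditional licensing: 400 full-isolation + 9,600 discounted selective users
traditional_total = 400 * 100 + 9_600 * 40   # $424,000

# Bundled model: selective isolation included, only 400 add-on licenses
bundled_total = 400 * 100                    # $40,000

print(traditional_total / users_total)  # 42.4 per user
print(bundled_total / users_total)      # 4.0 per user
```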

This raises the question, “How can McAfee Enterprise do this?”  In short, as one of the most mature security vendors in the world, McAfee Enterprise has the most powerful threat intelligence and anti-malware capabilities in the market today.  McAfee Enterprise’s Global Threat Intelligence service leverages over 1 billion threat sensors around the world, reducing the unknowns to an extremely small fraction of all web traffic.  In addition, its heuristics-based anti-malware technology is able to detect many zero-day malware variants.  More uniquely, the Gateway Anti-Malware engine offers inline, real-time, emulation-based sandboxing using behavioral analysis to identify never-before-seen threats based on their behavior.  After analyzing the combined effectiveness of these technologies, we found that only a small percentage of web traffic could not be confidently identified as either safe or malicious – roughly 0.5%. This made the cost of delivering selective RBI for Risky Web something that could be easily absorbed without any additional cost to our customers.

Remote Browser Isolation is an absolute paradigm shift in how we can protect our most critical assets against web-based threats today.  While the benefits are tremendous, cost has been a significant barrier preventing this powerful defense from becoming a ubiquitous technology.  McAfee Enterprise has broken down this barrier by leveraging our superior threat intelligence to reduce the cost of delivering RBI and then passing this savings on to our customers.


The post Remote Browser Isolation: The Next Great Security Technology is Finally Attainable appeared first on McAfee Blog.

SASE, Cloud Threats and MITRE

By Thyaga Vasudevan

As you know, McAfee Enterprise’s MVISION Unified Cloud Edge (UCE) was the first of the SASE vendors to implement the MITRE ATT&CK Framework for Cloud last year. An important aspect of Gartner’s SASE Framework is the ability to deliver effective Threat Protection and Resolution in the Cloud. MVISION UCE takes this to the next level – the product takes a multi-layered approach to cloud threat investigation that can speed your time to detect adversary activity in your cloud services, identify gaps, and implement targeted changes to your policy and configuration.

As a quick refresher, the MITRE ATT&CK Matrix represents the relationship between attacker Tactics and Techniques:

  • Tactics. A tactic describes the objective, or why the adversaries are performing the attack. In the ATT&CK Matrix, the table headers represent tactics.
  • Techniques. A technique describes how adversaries achieve their tactical objectives – that is, the specific technical methods attackers use to reach their goal. In the ATT&CK Matrix, the table cells represent techniques.
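This header/cell relationship can be modeled as a simple mapping from tactics (the "why") to techniques (the "how"). The entries below are a small illustrative subset of the matrix; in the full ATT&CK knowledge base a technique can belong to more than one tactic:

```python
attack_matrix = {
    "Initial Access": ["Trusted Relationship", "Valid Accounts"],
    "Persistence": ["Account Manipulation"],
    "Exfiltration": ["Transfer Data to Cloud Account"],
}

def tactic_for(technique: str):
    """Return the tactic (matrix column) a detected technique belongs to."""
    for tactic, techniques in attack_matrix.items():
        if technique in techniques:
            return tactic
    return None

print(tactic_for("Trusted Relationship"))  # Initial Access
```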

The MITRE Dashboard is available within the MVISION Cloud console under Dashboards > MITRE Dashboard.

Ever since the launch of this truly differentiated product offering, we have seen a tremendous amount of interest and adoption of this feature within our existing customers. Over the past few months, we have continued to make significant enhancements as part of our MITRE Dashboard.

In this post, I shall summarize some of the significant highlights that we have introduced in the past few releases:

Executive Summary Section

The Executive Summary displays an at-a-glance view of the current count of Threats, Anomalies, Incidents, types of incidents, and Detected Techniques with severity.

Flexible Filters

To suit the needs of the different teams using the dashboard, the MITRE Dashboard can now be filtered by a variety of facets:

  • Service Name. The name of the cloud service.
  • Threat Type. The name of the threat type.
  • Status. The MITRE Threat statuses available are:
    • Executed Threat. Threats that caused risk to your cloud service security.
    • Potential Threat. Threats that have the potential to cause risk to your cloud service security. It is recommended to look into the Potential Threats to reduce the impending risk.
  • Top 20 Users. Top 20 users who are impacted by the attacks.
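A faceted filter like the one above can be sketched as a simple predicate over incident records. The field names and sample values here are illustrative assumptions, not the product's actual schema:

```python
incidents = [
    {"service": "Office 365", "threat_type": "Insider Threat",
     "status": "Executed Threat", "user": "alice"},
    {"service": "AWS", "threat_type": "Compromised Account",
     "status": "Potential Threat", "user": "bob"},
]

def filter_incidents(items, **facets):
    """Keep records whose fields match every supplied facet value."""
    return [i for i in items
            if all(i.get(k) == v for k, v in facets.items())]

print(filter_incidents(incidents, status="Potential Threat"))
```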

Detected Techniques – Risk and Drilldown

When an incident is detected for a technique in MVISION Cloud, a severity is computed. The detected techniques are categorized based on the severity of the incidents. Each detected technique is interactive and leads to more detailed explanations.

To view the details of the detected techniques:

  1. Click any technique on the ATT&CK Matrix table to view the Technique Cloud Card. For example, you can click one of the techniques under the Initial Access category, such as Trusted Relationship, to learn how an attacker gained access to an organization’s third-party partner accounts and view the details of compromised Connected Apps.
  2. Next, click the Connected Apps Mini Card to view an extended cloud card that displays the restricted details of Connected Apps.
  3. Then click the link to the specific restricted Connected App to see an extended view of the compromised Connected Apps incident.
  4. The severity details allow you to investigate and apply a remediation action. As a remediation action, select and assign the Owner and Status from the menu.

With McAfee Enterprise, threat investigation isn’t just for one environment – it is for all of your environments, from cloud to endpoint to your analytics platforms. With MVISION Cloud, MVISION EDR, and MVISION Insights, your enterprise has an extended detection and response (XDR) platform for the heterogeneous attacks you face today.

 


The post SASE, Cloud Threats and MITRE appeared first on McAfee Blog.

Business Results and Better Security with MVISION Cloud for Microsoft Dynamics 365

By Thyaga Vasudevan

We are in the midst of digital transformation to the cloud – these cloud services fuel transformative projects for businesses, empowering employees with powerful tools to do their jobs better and more efficiently. This cloud transformation has meant that a large portion of enterprise data now resides and is being accessed outside of the network perimeter and beyond the reach of traditional data security controls.  

MVISION™ Unified Cloud Edge is the SASE security fabric between an organization’s workforce and its resources that enables fast, direct-to-internet access by eliminating the need to route traffic through the data center for security. Data and threat protection are performed at every control point in a single pass to reduce the cost of security and simplify management.

Sensitive data uploaded to CRM systems has also put these services on the radar of IT security teams, and market share reports place Microsoft Dynamics 365 among the top CRM vendors.

Data in Dynamics 365 represents anything from proprietary business information to sensitive customer data. The high volume and value of this data have made Dynamics 365 security a top priority for companies embarking on cloud security projects.  

Microsoft provides a host of security features for enterprise customers at the infrastructure and software level, but by default customers lack many controls around data and user account security. For example, only 36 percent of cloud customers say they can enforce data loss prevention in the cloud. These are key security controls for organizations using Dynamics 365, especially those who upload regulated data to the service.

MVISION Cloud for Dynamics 365, part of McAfee’s Unified Cloud Edge offering, is a comprehensive solution that allows enterprises to enforce security controls for data in Dynamics 365. It addresses four areas of Dynamics 365 security:

  • Visibility: Receive insights into usage analytics, user groups, privileges, and data content, both monitored dynamically and through an on-demand scan. This allows enterprises to evaluate the types of data and users within Dynamics and understand their unique risks.  
  • Compliance: Consistently enforce existing and new policies with cloud DLP for structured and unstructured data. Multitier remediation options and match highlighting allow for a positive user experience and efficient evaluation of policy violations from security teams.  

By applying cloud DLP policies, you can find sensitive information stored in Dynamics 365 entities in near real time. Through the UI, you can configure the entities where users are known to post sensitive content and apply near-real-time DLP to them. You can also integrate your endpoint DLP engine policies and use the single MVISION console to apply those DLP policies to Dynamics entities. In addition, on-demand scans of Dynamics 365 entities and attachments let you find sensitive data at rest before an auditor does. The MVISION Cloud compliance scan applies to both entities (structured data) and attachments (unstructured data) in Dynamics. Data privacy can save organizations from massive compliance fines, a negative public image, and loss of customer trust; Dynamics 365 data privacy is just as important as security, and the MVISION Cloud compliance scan helps ensure GDPR compliance for data at rest. Whenever non-compliant content is found, incidents are raised and remedial actions are taken for complete visibility. A malware scan detects malware in Dynamics 365 attachments, and can run in near real time or against data at rest.

  • Threat Protection: Monitor threats from a Dynamics 365 security operations center (SOC) based on insights from user behavior analytics. Machine learning algorithms identify account compromises, insider threats, high-risk privileged users, and more. Mapping to the MITRE framework provides visibility into whether Microsoft Dynamics services are being used in the tools and techniques for data compromise and exfiltration. 
  • Data Security: Enforce security controls based on transaction context including the user, device, and data. Block high-risk downloads in real-time.
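The kind of pattern-based compliance scan described above can be sketched with a couple of regular expressions. The patterns and record format are illustrative assumptions; a real cloud DLP engine uses far richer classifiers than two regexes:

```python
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_record(record: dict) -> list:
    """Return (field, pattern_name) pairs for sensitive matches in a record."""
    hits = []
    for field, value in record.items():
        for name, pattern in PATTERNS.items():
            if isinstance(value, str) and pattern.search(value):
                hits.append((field, name))
    return hits

# A hypothetical CRM entity record with sensitive data in a free-text field.
lead = {"name": "A. Customer", "notes": "SSN on file: 123-45-6789"}
print(scan_record(lead))  # [('notes', 'ssn')]
```

The same scan function could run against new records as they arrive (near real time) or iterate over existing entities and attachments (data at rest).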

MVISION Cloud for Dynamics 365 can act as an additional control point between enterprise users and the cloud, providing enhanced analytics into cloud usage, detecting threats from insiders, compromised accounts, and privileged users, enforcing compliance policies with DLP, and applying contextual access controls. It can also detect intentional or inadvertent threats from employees or third parties and enforce granular access controls based on parameters such as role, device, data, and location. 

Finally, the MVISION platform provides benefits that a single-point solution for one cloud service cannot satisfy. A single point for cloud control removes gaps in policy enforcement. Visibility into all cloud traffic allows MVISION to correlate activity occurring across multiple cloud services, identifying high-risk users and cloud-to-cloud threats. And MVISION offers integrations with on-premises security tools to extend existing security policies to the cloud and feed cloud threats into SIEM solutions. Consolidating all cloud security data in one tool is the best way to capture a holistic view of cloud risk, in a snapshot and as it changes over time. 

Moving to the cloud does not have to be a trade-off between business results and security. Improved security in the cloud is a reality for companies that have embraced a cloud-native security approach. Using MVISION Cloud for Dynamics 365 can make data safer than ever before while empowering business teams to become more efficient and dynamic.  

For more information or to test out MVISION Cloud for Dynamics please visit us at:  https://www.mcafee.com/enterprise/en-us/solutions/mvision/marketplace.html 

The post Business Results and Better Security with MVISION Cloud for Microsoft Dynamics 365 appeared first on McAfee Blogs.

Introducing MVISION Private Access

By Shishir Singh

Enabling Zero Trust Access with End-to-end Data Security and Continuous Risk Assessment

The current business transformation and remote workforce expansion require zero trust access to corporate resources, with end-to-end data security and continuous risk assessment to protect applications and data across all locations – public clouds, private data centers, and user devices.  MVISION Private Access is the industry’s first truly integrated Zero Trust Network Access solution that enables blazing fast, granular “Zero Trust” access to private applications and provides best-in-class data security with leading data protection, threat protection, and endpoint protection capabilities, paving the way for accelerated Secure Access Service Edge (SASE) deployments.

We are currently operating in a world where enterprises are borderless, and the workforce is increasingly distributed. With an increasing number of applications, workloads and data moving to the cloud, security practitioners today face a wide array of challenges while ensuring business continuity, including:

  • How do I plan my architecture and deploy assets across multiple strategic locations to reduce network latency and maintain a high-quality user experience?
  • How do I keep a tight control over devices connecting from any location in the world?
  • How do I ensure proper device authorization to prevent over-entitlement of services?
  • How do I maintain security visibility and control as my attack surface increases due to the distributed nature of data, users, and devices?

Cloud-based Software-as-a-Service (SaaS) application adoption has exploded in the last decade, but most organizations still rely heavily on private applications hosted in data centers or Infrastructure-as-a-Service (IaaS) environments. To date, Virtual Private Networks (VPNs) have been a quick and easy fix for providing remote users access to sensitive internal applications and data. However, with remote working becoming the new normal and organizations moving towards cloud-first deployments, VPNs are now challenged with providing secure connectivity for infrastructures they weren’t built for, leading to bandwidth, performance, and scalability issues. VPNs also introduce the risk of excessive data exposure, as any remote user with valid login credentials can get complete access to the entire internal corporate network and all the resources within.

Enter Zero Trust Network Access, or ZTNA! Built on the fundamentals of “Zero Trust”, ZTNAs deny access to private applications unless the user identity is verified, irrespective of whether the user is located inside or outside the enterprise perimeter. Additionally, in contrast to the excessive implicit trust approach adopted by VPNs, ZTNAs enable precise, “least privileged” access to specific applications based upon the user authorization.

We are pleased to announce the launch of MVISION Private Access, an industry-leading Zero Trust Network Access solution with integrated Data Loss Prevention (DLP) and Remote Browser Isolation (RBI) capabilities. With MVISION Private Access, organizations can enable fast, ubiquitous, direct-to-cloud access to private resources from any remote location and device, allow deep visibility into user activity, enforce data protection over the secure sessions to prevent data misuse or theft, isolate private applications from potentially risky user devices, and perform security posture assessment of connecting devices, all from a single, unified platform.

Why does ZTNA matter for remote workforce security and productivity?

Here are the key capabilities offered by ZTNA to provide secure access for your remote workforce:

  • Direct-to-app connectivity: ZTNA facilitates seamless, direct-to-cloud and direct-to-datacenter access to private applications. This eliminates unnecessary traffic backhauling to centralized servers, reducing network latency, improving the user experience and boosting employee productivity.
  • Explicit identity-based policies: ZTNA enforces granular, user identity-aware, and context-aware policies for private application access. By eliminating the implicit trust placed on multiple factors, including users, devices and network location, ZTNA secures organizations from both internal and external threats.
  • Least-privileged access: ZTNA micro-segments the networks to create software-defined perimeters and allows “least privileged” access to specific, authorized applications, and not the entire underlying network. This prevents overentitlement of services and unauthorized data access. Micro-segmentation also significantly reduces the cyberattack surface and prevents lateral movement of threats in case of a breach.
  • Application cloaking: ZTNA shields private applications behind secure gateways and prevents the need to open inbound firewall ports for application access. This creates a virtual darknet and prevents application discovery on public Internet, securing organizations from Internet-based data exposure, malware and DDoS attacks.
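A minimal sketch of the deny-by-default, least-privileged access check these capabilities describe might look like the following. The policy table and field names are illustrative assumptions, not a real ZTNA API:

```python
POLICY = {
    # user group -> the specific private applications that group may reach
    "engineering": {"git.internal", "ci.internal"},
    "finance": {"erp.internal"},
}

def authorize(user_group: str, identity_verified: bool,
              device_compliant: bool, app: str) -> bool:
    """Deny unless identity, device posture, and app entitlement all pass."""
    if not identity_verified or not device_compliant:
        return False
    # Access is granted per application, never to the whole network.
    return app in POLICY.get(user_group, set())

print(authorize("finance", True, True, "erp.internal"))     # True
print(authorize("finance", True, True, "git.internal"))     # False
print(authorize("engineering", False, True, "ci.internal")) # False
```

Note the contrast with a VPN: there is no branch that grants network-wide access, so a compromised account is limited to the applications its group is entitled to.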

Is securing the access enough? How about data protection?

Though ZTNAs are frequently promoted as VPN replacements, nearly all ZTNA solutions share an important drawback with VPNs – a lack of data awareness and risk awareness. First-generation ZTNA solutions have focused squarely on solving the access puzzle, leaving data security and threat prevention problems unattended. Considering that ubiquitous data awareness and risk assessment are key tenets of the SASE framework, this is a major shortcoming given how much traffic flows back and forth between users and private applications.

Moreover, the growing adoption of personal devices for work, oftentimes connecting over unsecure remote networks, significantly expands the threat surface and increases the risk of sensitive data exposure and theft due to lack of endpoint, cloud and web security controls.

Addressing these challenges requires ZTNA solutions to supplement their Zero Trust access capabilities with centralized monitoring and device posture assessment, along with integrated data and threat protection.

MVISION Private Access

MVISION Private Access, from McAfee Enterprise, is designed for organizations in need of an all-encompassing security solution that focuses on protecting their ever-crucial data while enabling remote access to corporate applications. The solution combines the secure access capabilities of ZTNA with the data and threat protection capabilities of Data Loss Prevention (DLP) and Remote Browser Isolation (RBI) to offer the industry’s leading integrated, data-centric solution for private application security. It also utilizes McAfee’s industry-leading Endpoint Security solution to derive deep insights into user devices and validate their security posture before enabling zero trust access.

MVISION Private Access allows customers to immediately apply inline DLP policies to the collaboration happening over the secure sessions for deep data inspection and classification, preventing inappropriate handling of sensitive data and blocking malicious file uploads. Additionally, customers can utilize a highly innovative Remote Browser Isolation solution to protect private applications from risky and untrusted unmanaged devices by isolating the web sessions and allowing read-only access to the applications.

Fig. 1: MVISION Private Access

Private Access further integrates with MVISION Unified Cloud Edge (UCE) to enable defense-in-depth and offer full scope of data and threat protection capabilities to customers from device-to-cloud. Customers can achieve the following benefits from the integrated solution:

  • Complete visibility and control over data across endpoint, web and cloud.
  • Unified incident management across control points with no increase in operational overhead, leading to total cost of ownership (TCO) reduction.
  • Multi-vector data protection, eliminating data visibility gaps and securing collaboration from cloud to third-parties.
  • Defending private applications against cloud-native threats, advanced malware and fileless attacks.
  • Continuous device posture assessment powered by industry-leading endpoint security.

Additionally, UCE’s Hyperscale Service Edge, that operates at 99.999% service uptime and is powered by intelligently peered data centers, provides blazing fast, seamless experience to private access users. Authentication via Identity Providers eliminates the risk of threat actors infiltrating the corporate networks using compromised devices or user credentials.

What Sets MVISION Private Access apart?

With dozens of ZTNA solutions on the market, we’ve made sure that MVISION Private Access stands out from the crowd with the following:

  • Integrated data loss prevention (DLP) and industry-leading Remote Browser Isolation (RBI): Enables advanced threat protection and complete control over data collaborated through private access sessions, preventing inappropriate handling of sensitive data, blocking files with malicious content and securing unknown traffic activity to prevent malware infections on end-user devices.
  • SASE readiness with UCE integration: MVISION Private Access converges with MVISION UCE to deliver complete data and threat protection to any device at any location in combination with other McAfee security offerings, which include Secure Web Gateway (SWG), Cloud Access Security Broker (CASB), and Endpoint Protection, while enabling direct-to-cloud access in partnership with leading SD-WAN vendors. This ensures a consistent user experience across web, public SaaS, and private applications.
  • Endpoint security and posture assessment: MVISION Private Access leverages industry-leading McAfee Endpoint Security powered by proactive threat intelligence from 1 billion sensors to evaluate device and user posture, which informs a risk-based zero trust decision in real-time. The rich set of telemetry, which goes well beyond the basic posture checking performed by competitive solutions, allows organizations to continuously assess the device and user risks, and enforce adaptive policies for private application access.
  • Securing unmanaged devices with clientless deployments: MVISION Private Access secures access from unmanaged devices through agentless, browser-based deployment, enabling frictionless collaboration between employees, external partners, and third-party contractors.

With MVISION Private Access customers can establish granular, least privileged access to their private applications hosted across cloud and IT environments, from any device and location, while availing all the goodness of McAfee’s leading data and threat protection capabilities to accelerate their business transformation and enable the fastest route to SASE. To learn more, visit www.mcafee.com/privateaccess.


The post Introducing MVISION Private Access appeared first on McAfee Blogs.

Introducing MVISION Cloud Firewall – Delivering Protection Across All Ports and Protocols

By Sadik Al-Abdulla

Architected for cloud-first and remote-first deployments, MVISION Cloud Firewall secures access to applications and resources on the internet, accessed from every remote site and location, through a cloud-native service model. The solution inspects end-to-end user traffic across all ports and protocols, enabling unified visibility and policy enforcement across the organizational footprint. Powered by McAfee Enterprise’s industry-leading next-generation intrusion detection and prevention system, contextual policy engine, and advanced threat detection platform, and supported by Global Threat Intelligence feeds, MVISION Cloud Firewall proactively detects and blocks emerging threats and malware with a high degree of accuracy, uniquely addressing the security challenges of the modern remote workforce. MVISION Cloud Firewall is an integral component of McAfee Unified Cloud Edge, offering organizations an all-encompassing, cloud-delivered Secure Access Service Edge (SASE) security solution for accelerating their business transformation.

Wherever networks went, firewalls followed

For a long time, firewalls and computer networks were like conjoined twins. Businesses simply could not afford to run an enterprise network without deploying a security system at the edge to create a secure perimeter around their crown jewels. The growing adoption of web-based protocols and their subsequent employment by cybersecurity adversaries for launching targeted malware attacks, often hidden within encrypted traffic, saw the emergence of next-generation firewall (NGFW) solutions. Apart from including stateful firewall and unified threat management services, NGFWs offered multi-layered protection and performed deep packet inspection, allowing organizations greater awareness and control over the applications to counter web-based threats.

Cloud computing changed the playing field

But things took a dramatic turn with the introduction of cloud computing. Cloud service providers came up with an offer organizations could not refuse – unlimited computing power and storage at significantly lower operating costs, along with the option to seamlessly scale business operations without hosting a single piece of hardware on-premises. Thus began the mass exodus of corporate data and applications to the cloud. Left without a fixed network perimeter to protect, the relationship between firewalls and networks became complicated. While the cloud service providers offered a basic level of security functionality, they lacked the muscle power of on-premises firewalls, particularly NGFWs. This was further exacerbated by the ongoing pandemic and the overnight switch of the workforce to remote locations, which introduced the following challenges:

  • Remote users were required to backhaul the entire outbound traffic to centralized firewalls through expensive MPLS connections, impacting the network performance due to latency and degrading the overall user experience.
  • Remote users connecting direct-to-cloud often bypassed the on-premises security controls. With the firewalls going completely blind to the remote user traffic, security practitioners simply couldn’t protect what they couldn’t see.
  • Deploying security appliances at each remote site and replicating the firewall policies across every site significantly increased capital and operational expenditure. Additionally, these hardware appliances lacked the ability to scale to accommodate the growing volume of user traffic.
  • On-premises firewalls struggled to integrate with cloud-native security solutions, such as Secure Web Gateways (SWG) and Cloud Access Security Brokers (CASB), creating a roadblock in Secure Access Service Edge (SASE) deployments.

Enter Firewall-as-a-Service

The distributed workforce has expanded the threat landscape at an alarming rate. According to the latest McAfee Labs Threats Report, the volume of malware threats observed by McAfee Labs averaged 688 threats per minute in the first quarter of 2021, an increase of 40 threats per minute (3%). While SWGs and CASBs could address the security challenges for web and SaaS traffic, respectively, how could organizations secure the remaining non-web traffic? The answer lies in Firewall-as-a-Service, or FWaaS. FWaaS can be defined as a firewall hosted in the cloud, offering all the NGFW capabilities, including deep packet inspection, application-layer filtering, intrusion prevention and detection, and advanced threat protection. While, at the outset, FWaaS may give the impression of simply lifting and shifting NGFWs to the cloud, its business benefits are far more profound and relevant for the modern workforce, some of which include:

  • Securing remote workers and local internet breakouts, allowing direct-to-cloud connections that reduce network latency and improve user experience. Avoiding traffic backhauls from remote sites to centralized firewalls over expensive VPN and MPLS lines reduces deployment costs.
  • Significant cost savings by eliminating hardware installation at remote branch offices.
  • Aggregating the network traffic from on-premises datacenters, clouds, remote branch offices and remote user locations, allowing centralized visibility and unified policy enforcement across all locations.
  • Seamless scaling to handle the growing volume of traffic and the need for inspecting encrypted traffic for threats and malware.
  • Centralizing the service management, such as patching and upgrades, reducing the operational costs for repetitive tasks.
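To make the "centralized visibility and unified policy enforcement" idea concrete, here is a minimal sketch of the FWaaS model: one rule table, defined once in the cloud service, evaluated in first-match order for any connection from any site. All names, fields, and rules below are hypothetical illustrations, not McAfee APIs or configuration syntax.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                      # "allow" or "block"
    protocol: Optional[str] = None   # None matches any protocol
    port: Optional[int] = None       # None matches any port
    site: Optional[str] = None       # None matches any location

# One rule set for every site (the FWaaS model); rules are
# evaluated top-down and the first match wins.
POLICY = [
    Rule("block", protocol="tcp", port=23),   # no Telnet anywhere
    Rule("allow", protocol="tcp", port=443),  # HTTPS from any site
    Rule("allow", protocol="udp", port=53),   # DNS from any site
    Rule("block"),                            # default deny, all ports/protocols
]

def evaluate(protocol: str, port: int, site: str) -> str:
    """Return the action of the first matching rule."""
    for r in POLICY:
        if r.protocol not in (None, protocol):
            continue
        if r.port not in (None, port):
            continue
        if r.site not in (None, site):
            continue
        return r.action
    return "block"
```

Because the table lives in the cloud service rather than on per-site appliances, onboarding a new branch office changes nothing in the policy itself, which is where the operational savings in the list above come from.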

Introducing MVISION Cloud Firewall

McAfee MVISION Cloud Firewall is a cutting-edge Firewall-as-a-Service solution that enforces centralized security policies for protecting the distributed workforce across all locations, for all ports and protocols. MVISION Cloud Firewall allows organizations to extend comprehensive firewall capabilities to remote sites and remote workers through a cloud-delivered service model, securing data and users across headquarters, branch offices, home networks and mobile networks, with real-time visibility and control over the entire network traffic.

The core value proposition of MVISION Cloud Firewall is characterized by a next-generation intrusion detection and prevention system that utilizes advanced detection and emulation techniques to defend against stealthy threats and malware attacks with industry-best efficacy. A sophisticated next-generation firewall application control system enables organizations to make informed decisions about allowing or blocking applications by correlating threat activity with application awareness, including Layer 7 visibility into more than 2,000 applications and protocols.

Fig. MVISION Cloud Firewall Architecture

What makes MVISION Cloud Firewall special?

Superior IPS efficacy: MVISION Cloud Firewall delivers superior IPS performance through deep inspection of network traffic and seamless detection and blocking of both known and unknown threats across the network perimeter, data center, and cloud environments. The next-generation IPS engine offers 20% better efficacy than competitive solutions, while far exceeding the detection rates of open-source solutions. The solution integrates with MVISION Extended Detection and Response (XDR) to offer superior threat protection by correlating threat intelligence and telemetry across multiple vectors and proactively detecting and resolving adversarial threats before they can lead to enterprise damage or loss. Additional advantages include inbound and outbound SSL decryption, signature-less malware analysis, high availability, and disaster recovery protection.

End-to-end visibility and optimization: The ability to visualize and control remote user sessions allows MVISION Cloud Firewall to proactively monitor end-to-end traffic flow and detect critical issues across user devices, networks, and cloud. This offers network administrators a unified, organization-wide view of deployed assets to pinpoint and troubleshoot issues before overall network performance and user productivity are impacted. Optimizing network performance elevates the user experience through reduced session latency while keeping help desk ticket volumes in check.

Policy sophistication: MVISION Cloud Firewall considers multiple contextual factors – such as device type and the security posture of devices, networks, and users – and pairs them with application intelligence to define a robust, comprehensive policy lexicon better suited to protecting the modern remote workforce. For example, most NGFWs can permit or block user traffic based on a configured rule set, such as permitting accounting users to access files uploaded to a Teams site. McAfee, on the other hand, utilizes its data protection and endpoint protection capabilities to create more powerful NGFW rules, such as permitting accounting users to access a third-party Teams site only if they have endpoint DLP enabled.
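The Teams example above boils down to layering device posture on top of an ordinary user/application rule. The toy function below sketches that decision; the `device` posture record, its field names, and the rule itself are invented for illustration and are not MVISION policy syntax.

```python
def decide(user_group: str, app: str, device: dict) -> str:
    """Toy contextual-policy check combining user, application, and device posture.

    `device` is a hypothetical posture record such as
    {"endpoint_dlp": True, "os_patched": True}.
    """
    # A plain NGFW rule only looks at user group and application...
    if user_group == "accounting" and app == "teams":
        # ...while a context-aware rule additionally requires endpoint DLP
        # (and a patched OS) before allowing the third-party Teams site.
        if device.get("endpoint_dlp") and device.get("os_patched"):
            return "allow"
        return "block"
    return "block"
```

The point of the sketch: the same user making the same request can get different answers depending on endpoint state, which a port/protocol rule set alone cannot express.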

SASE Convergence

MVISION Cloud Firewall converges with MVISION Unified Cloud Edge to offer an integrated solution comprising industry-best Cloud Access Security Broker (CASB), Secure Web Gateway (SWG), Zero Trust Network Access (ZTNA), unified Data Loss Prevention (DLP) across endpoint, cloud, and network, Remote Browser Isolation (RBI), and Firewall-as-a-Service, making McAfee one of the few vendors in the industry to solve the network security puzzle of the SASE framework. With the inclusion of MVISION Cloud Firewall, McAfee Enterprise customers can now use a unified security solution to inspect any type of traffic destined for the cloud, web, or corporate networks, while securing sensitive assets and users across every location.

The post Introducing MVISION Cloud Firewall – Delivering Protection Across All Ports and Protocols appeared first on McAfee Blogs.

Adding Security to Smartsheet with McAfee CASB Connect

By Nick Shelly

The Smartsheet enterprise platform has become an essential part of most organizations, as it has done much to transform the way customers conduct business and collaborate, with numerous services available to increase productivity and innovation. Within the McAfee customer base, customers had expressed their commitment to Smartsheet, but wanted to inject the security pedigree of McAfee to make their Smartsheet environments even stronger.

In June 2021, McAfee MVISION Cloud released support for Smartsheet – providing cornerstone CASB services to Smartsheet through the CASB Connect framework, which makes it possible to provide API-based security controls to cloud services, such as:

  • Data Loss Prevention (find and remediate sensitive data)
  • Activity Monitoring & Behavior Analytics (set baselines for user behavior)
  • Threat Detection (insider, compromised accounts, malicious/anomalous activities)
  • Collaboration Policies (assure sensitive data gets shared properly)
  • Device Access Policies (only authorized devices connect)

How does it work?

Utilizing the CASB Connect framework, McAfee MVISION Cloud becomes an authorized third party to a customer’s Smartsheet Event Reporting service. This is an API-based method for McAfee to ingest event/audit logs from Smartsheet.

These logs contain information about the activities that occur in Smartsheet, and this information has value: McAfee will see user logon activity, sheet creation, user creation, sheet updates, deletions, etc. Overall, more than 120 unique event types are stored in the activity warehouse, where intelligence is inferred from them. When an inference is made (for example, Insider Threat), the platform can show all the forensic data that led to that conclusion. This provides value to the Smartsheet customer because it surfaces potential threats that could lead to data loss, whether caused by a well-meaning end user or not.
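As a rough illustration of how an inference like Insider Threat can fall out of ingested audit events, the sketch below counts bulk deletions per user and keeps the matching events as the forensic trail. The event shape and threshold are invented for illustration; they are not the Smartsheet Event Reporting schema or MVISION Cloud's actual detection logic.

```python
from collections import Counter

def flag_insider_threat(events, delete_threshold=3):
    """Flag users with many sheet deletions in one batch of audit events.

    Each event is a dict like {"user": "...", "action": "SHEET_DELETE"}
    (an illustrative stand-in for a real ingested audit record).
    Returns (flagged_users, supporting_events).
    """
    deletes = Counter(e["user"] for e in events if e["action"] == "SHEET_DELETE")
    flagged = {u for u, n in deletes.items() if n >= delete_threshold}
    # Keep every event from a flagged user as the forensic evidence
    # that led to the conclusion.
    forensics = [e for e in events if e["user"] in flagged]
    return flagged, forensics
```

Real behavior analytics baseline each user first rather than using a fixed threshold, but the shape is the same: raw events in, an inference plus its supporting evidence out.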

Policies for content detection are another important use case. Most McAfee customers utilize Data Loss Prevention (DLP) across their endpoint devices as well as in the cloud, using policies that matter to them. Examples of DLP policies include uncovering credit card numbers, health records, customer lists, specific intellectual property, price lists, and more. Every customer has some kind of data that is critical to their business, and a DLP policy can be crafted to find it.

In Smartsheet, when an event from the Event Reporting service relates to DLP – a field is updated, a file is uploaded, or a sheet is shared – the DLP service in MVISION Cloud inspects the event. Should the content or sharing violate a policy, an incident is raised with forensic details describing which user performed the action and why the violation was flagged. This is important for customers because it operationalizes security in Smartsheet and the other cloud applications that MVISION Cloud protects. The same DLP policies can be used across all of their critical cloud services, including Smartsheet.
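To show what a content-detection rule like "uncover credit card numbers" looks like at its simplest, here is a self-contained sketch of the classic pattern: a digit-run regex filtered by the Luhn checksum to weed out random numbers. This is a generic DLP technique, not MVISION Cloud's actual detection engine or policy format.

```python
import re

# 13-16 digits, optionally separated by spaces or dashes
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used to reject random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Return digit strings that look like cards and pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Production DLP engines add hundreds of classifiers (health records, IP, price lists) plus proximity and validation rules, but each one follows this same detect-then-validate shape.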

Lastly, MVISION Cloud integrates with most popular Identity Providers (IdPs). Through standards-based authentication, MVISION Cloud can enforce policies, such as location and device policies, that assure only authorized users connect to Smartsheet; for regulated industries this can be important to ensure no compliance requirements are violated as they conduct business.

Summary

Smartsheet enterprise customers benefit significantly from MVISION Cloud’s support. Visibility of user activity, threats and sensitive data give users a chance to further entrench their business processes in a cloud app they want to use. Adding security tools to an enterprise platform like Smartsheet reduces overall risk and gives organizations the confidence to more deeply depend on their critical cloud services.

Next Steps:

Trying out Smartsheet and McAfee MVISION Cloud is easy. Contact McAfee directly at cloud@mcafee.com or visit resources related to this blog post:

 

 

The post Adding Security to Smartsheet with McAfee CASB Connect appeared first on McAfee Blogs.

Cloud Native Security Approach Comparisons

By Vishwas Manral

Vinay Khanna, Ashwin Prabhu & Sriranga Seetharamaiah also contributed to this article. 

In the Cloud, security responsibilities are shared between the Cloud Service Provider (CSP) and Enterprise Security teams. To enable Security teams to provide compliance, visibility, and control across the application stack, CSPs and security vendors have added various innovative approaches across the different layers. In this blog we compare the approaches and provide a framework for Enterprises to think of these approaches.

Overview

Cloud Service Providers are launching new services at a breakneck pace to enable enterprise application developers to bring new business value to market faster. For each of these services, the CSPs take on more and more of the security responsibility while letting enterprise security teams focus on the application. To provide visibility and security, and to enhance existing tools in such diverse and fast-changing environments, CSPs expose logs, APIs, native agents, and other technologies that enterprise security teams can use.

Comparison

There are many different approaches to security and each have varying tradeoffs in terms of the depth of visibility and security they provide, the ease of deployment, permissions required, the costs, and the scale they work at.

APIs and logs are the best approach to get started with discovering your cloud accounts and finding anomalous activity of interest to security teams in those accounts. It is easy to get access to data from various accounts using these mechanisms; security teams need little more than cross-account access to the numerous accounts in the organization. The approach provides great visibility but needs to be complemented with protection approaches.
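To make the API-and-logs approach concrete, here is a toy sketch of its two halves: merging per-account inventories into one view, and flagging anomalous activity from audit-log entries. The response shapes, event names, and allow-list are invented stand-ins for real CSP describe/list APIs and audit logs.

```python
def discover(accounts):
    """Merge per-account inventories into one asset list.

    `accounts` maps account id -> list of resource dicts, standing in
    for what a cross-account API call would return.
    """
    inventory = []
    for account_id, resources in accounts.items():
        for res in resources:
            inventory.append({"account": account_id, **res})
    return inventory

def anomalous_logins(log_entries, allowed_countries=frozenset({"US", "DE"})):
    """Flag console-login audit events from unexpected countries."""
    return [e for e in log_entries
            if e["event"] == "ConsoleLogin"
            and e["country"] not in allowed_countries]
```

This is the "great visibility" half of the tradeoff: everything is read-only API access, so it is easy to roll out across many accounts, but nothing here can block anything, which is why protection approaches must complement it.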

Image and snapshot analysis is a good approach to get deeper data on workloads, both before the application starts and as it runs. In this method, the image/snapshot of the disk of the running system is analyzed to detect anomalies, vulnerabilities, configuration incidents, etc. Snapshots provide deep data on workloads but may not detect memory-resident issues like fileless malware. Also, as we move to ephemeral workloads, analyzing snapshots periodically may have limited usefulness, and the mechanism may not work for cloud services for which disk snapshots cannot be obtained. The approach provides deep data from snapshots but needs to be complemented with protection approaches to be useful.

Native agents and scripts are a good approach to enabling deeper visibility and control by providing an easy way to enhance cloud-native agents, like SSM, on a machine. Depending on their functionality, agents can have high resource usage. Native agent support is limited by the CSP-provided capabilities, such as OS support and the features offered. In many cases the native agents run commands that log the information needed, which implies the logging approach must be working in parallel.

DaemonSet and sidecar containers are an approach to deploying agents easily in container and serverless environments. Sidecars run one container per pod, which provides deep data, but resource usage and cost are high as a result, because multiple sidecars run on a single server. Sidecars can work in container-serverless models, where DaemonSet containers do not. As the functionality of a sidecar or DaemonSet is similar to that of an agent, many of the agent limitations mentioned apply here too.

The agent approach provides the deepest visibility and best control of the environment in which an application runs, by running code coresident with the application. This approach is, however, harder, because security teams need deep discovery capabilities beforehand to be able to deploy these agents. There is also friction in adding agents, as one has to run on every machine, and security teams do not have rights to run software on every machine, especially in the cloud. The resource usage and cost of a solution can be high depending on the use cases supported. Newer technologies like extended Berkeley Packet Filter (eBPF) reduce the resource usage of agents, making them more palatable for broader usage.

The built-into-image/built-into-code approach allows security to be built into the application image that is deployed. This lets security functionality ship without deploying an agent with each workload. The approach provides deep visibility into the application and works even for serverless workloads. However, compiling security into code adds immense friction, because code must be added to the build process and code libraries need to be available in every application language.

MVISION CNAPP

MVISION Cloud takes a multi-pronged approach to securing applications and enabling security teams to gain control of their cloud environments.

  1. Security teams often lack visibility into their ephemeral cloud infrastructure; MVISION Cloud provides a seamless on-ramp by using cross-account IAM access, then using APIs and logs to provide visibility into cloud environments.
  2. Using the same access MVISION Cloud can not only provide an Audit of the configuration of customer environment but also do image scans to identify vulnerabilities in the components of the workload.
  3. MVISION Cloud can then help identify risk against resources, so security teams can focus on securing the right resources. All of this without having to deploy an agent.
  4. Then using approaches like Sidecars, DaemonSet containers and agents MVISION CNAPP helps provide deep visibility and protect the applications against the most sophisticated attacks by providing File Integrity Monitoring (FIM), Application Allow Listing (AAL), Anti-Malware, run time Vulnerability analysis and performing hardening checks.
  5. Using the data from all the sources MVISION CNAPP provides a Risk score against incidents to help security teams prioritize incidents and focus on the biggest risks.

Conclusion

The various approaches to security have their own unique tradeoffs and no one approach can satisfy all the requirements for the various teams, for the diverse set of platforms they support.

At any point in time, different cloud services will be at different levels of adoption maturity. Security teams need to take an incremental approach: start by adopting solutions that are easy to insert and can provide basic guardrails of security and visibility at the start of the service adoption cycle. As applications on a service mature and more high-value apps come online, an approach to security that provides deeper discovery and control will be necessary to complement the existing approaches.

No one approach will satisfy all customer use cases, and at any time there will be different sets of security solutions active. We are headed to a world of even more diverse security approaches, all of which have to work seamlessly to help secure the enterprise.

 

The post Cloud Native Security Approach Comparisons appeared first on McAfee Blogs.

McAfee Recognised in 2021 Gartner Solution Scorecard Report

By Nigel Hawthorn

Industry analysts perform a huge service in evaluating markets, technology, vendors and sharing their insights with customers via one-on-one discussions and regular publications and events. Gartner publishes Magic Quadrant reports that review a particular market and evaluate vendors for their Completeness of Vision and Ability to Execute.

Gartner also has a separate team of analysts that evaluates single products in greater depth. Their reports review each product or product family across hundreds of criteria and produce a scorecard, key findings and customer recommendations.

We are proud to read the new Solution Scorecard for McAfee MVISION Cloud by Gartner, where we scored “94 out of 100 against Gartner’s 480-point Solution Criteria for Cloud Access Security Brokers”. MVISION Cloud was the only CASB product to score 94 out of 100 in the 2021 scorecards.

We have licensed it for anyone to read.

We believe, for this review, they evaluated 480 sets of criteria across eleven areas, from architecture and management to functions such as data security, threat protection, and Cloud Security Posture Management. Once they had reviewed and weighted each attribute, MVISION Cloud came out with a blended total score of 94 out of 100.

The framework that they used splits each of the criteria into one of three categories – Required, Preferred and Optional. We are pleased to see that they consider MVISION Cloud provides 97% of the Required functionality.

We have also licensed the Magic Quadrant for Cloud Access Security Brokers report from October 2020 – available here.

 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from McAfee. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner, Solution Scorecard for McAfee MVISION Cloud, 5 April 2021, Sushil Aryal, Dennis Xu, Patrick Hevesi

 

The post McAfee Recognised in 2021 Gartner Solution Scorecard Report appeared first on McAfee Blogs.

New Security Approach to Cloud-Native Applications

By Boubker Elmouttahid

With on-premises infrastructure, securing server workloads and applications involves putting security controls between an organization’s network and the outside world. As organizations migrated workloads (“lift and shift”) to the cloud, the same approach was often used. In contrast to lift and shift, many enterprise businesses have realized that in order to use the cloud efficiently they need to redesign their apps to become cloud-native. Cloud native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model; cloud-native development incorporates the concepts of DevOps, continuous delivery, microservices, and containers.

IDC predicts that by 2025, nearly two-thirds of enterprises will be prolific software producers with code deployed daily, over 90% of new apps cloud native, 80% of code externally sourced, and 1.6 times more developers.

Monolithic Apps vs Cloud Native Apps

So, how do you ensure the security of your cloud-native applications?

Successful protection of cloud-native applications requires a combination of multiple security controls working together, managed from one security platform. First, the cloud infrastructure where the cloud-native application runs (containers, serverless functions, and virtual machines) should be assessed for security misconfigurations (security posture), compliance, and known vulnerabilities. Second, securing the workloads needs a different security approach. Workloads are becoming more granular, with shorter life spans, as development organizations adopt DevOps-style development patterns. DevOps delivers faster software releases – in some cases, several times per day. The best way to secure these rapidly changing and short-lived cloud-native workloads is to protect them proactively and build security into every part of the DevOps lifecycle.
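Building security into the DevOps lifecycle often starts as a gate in the delivery pipeline: the build fails if a scan finds anything above an allowed severity. The toy check below sketches that idea; the finding shape, severity scale, and threshold are invented for illustration and are not an MVISION API.

```python
def ci_gate(vulnerabilities, max_severity="medium"):
    """Fail the pipeline if any scan finding exceeds the allowed severity.

    `vulnerabilities` is a list of dicts like {"severity": "high"},
    a hypothetical stand-in for a scanner's output.
    """
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    # Worst finding in the scan; -1 means a clean scan.
    worst = max((order[v["severity"]] for v in vulnerabilities), default=-1)
    return "pass" if worst <= order[max_severity] else "fail"
```

Because the gate runs before deployment, a vulnerable image never reaches the short-lived workloads where an after-the-fact scan might miss it.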

Cloud Security Posture Management (CSPM):

The biggest cloud breaches are caused by customer misconfiguration, mismanagement, and mistakes. CSPM is a class of security tools to enable compliance monitoring, DevOps integration, incident response, risk assessment, and risk visualization. It is imperative for security and risk management leaders to enable cloud security posture management processes to proactively identify and address data risks.
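At its core, a CSPM check compares a resource's actual configuration against what a benchmark expects. The sketch below audits one storage bucket's settings; the check names and config dict are illustrative placeholders, not a real CSP response format or MVISION policy.

```python
# Each check maps a config key to the value a benchmark expects
# (hypothetical names, loosely modeled on common storage hardening rules).
CHECKS = {
    "public_access_blocked": True,
    "encryption_at_rest": True,
    "versioning": True,
}

def audit_bucket(config: dict):
    """Return the list of checks this bucket's configuration fails."""
    return [name for name, expected in CHECKS.items()
            if config.get(name) != expected]
```

Run continuously across every account, this is how posture drift – a bucket quietly made public, encryption turned off – gets caught before it becomes a breach headline.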

Cloud Workload Protection Platforms (CWPP):

CWPP is an agent-based workload security protection technology. CWPP addresses unique requirements of server workload protection in modern hybrid data center architectures including on-premises, physical and virtual machines (VMs), and multiple public cloud infrastructure. This includes support for container-based application architectures.

 

What is MVISION CNAPP

MVISION CNAPP is the industry’s first platform to bring application and risk context to converge Cloud Security Posture Management (CSPM) for multi-public-cloud infrastructure and Cloud Workload Protection (CWPP) to protect hybrid, multi-cloud workloads, including VMs, containers, and serverless functions. McAfee MVISION CNAPP extends MVISION Cloud’s data protection – both Data Loss Prevention and malware detection – threat prevention, governance, and compliance to comprehensively address the needs of this new cloud-native application world, thereby improving security capabilities and reducing the total cost of ownership of cloud security.

7 Key elements of MVISION CNAPP:

1. Single hybrid multi-cloud security platform: McAfee MVISION Cloud simplifies multi-cloud complexity by using a single, cloud-native enforcement point. It is a comprehensive cloud security solution that protects enterprise and customer data, assets, and applications from advanced security threats and cyberattacks across multiple cloud infrastructures and environments.

2. Cloud Security Posture Management: McAfee MVISION Cloud provides continuous monitoring of multi-cloud IaaS/PaaS environments to identify gaps between an organization’s stated security policy and its actual security posture. At the heart of CSPM is the detection of cloud misconfiguration vulnerabilities that can lead to compliance violations and data breaches.

3. Deep discovery and risk-based prioritization: You can’t protect what you can’t see. MVISION CNAPP uniquely provides deep discovery of all workloads, data, and infrastructure across endpoints, networks, and cloud, and prioritizes resources based on risk. If you can quickly understand those risks relative to each other, you can prioritize your remediation and reduce overall risk as quickly as possible.

4. Shift-left posture and vulnerability management: Moving security into the CI/CD pipeline, making it easy for developers to incorporate into their normal application development processes, and ensuring that applications are secure before they are ever published reduces the chance of introducing new vulnerabilities and minimizes threats to the organization.

5. Zero Trust policy control: McAfee’s CNAPP solution, supported by CWPP, focuses on Zero Trust network and workload policies. This approach not only gives you analytics about who is accessing your environment and how – an important component of your SOC strategy – but also ensures that people and services have appropriate permissions to perform necessary tasks.

6. Unified threat protection: CWPP unifies threat protection across workloads in the cloud and on-premises, including OS hardening, configuration and vulnerability management, application control/allow-listing, and file integrity control. It also brings workload protections and account permissions into the same motion. Finally, by connecting cloud-native application protection to XDR, you gain full visibility, risk management, and remediation across your on-premises and cloud infrastructures.

7. Governance and compliance: The ideal solution for protecting cloud-native applications includes the ability to manage privileged access and address threat protection for both workloads and sensitive data, regardless of where they reside.

Business value:

  • One Cloud Security Platform for all your CSPs
  • Scan workloads and configurations in development and protect workloads and configurations at runtime.
  • Better security by enabling standardization and deeper layered defenses.
  • The convergence of CSPM and CWPP

 

IDC FutureScape: Worldwide IT Industry 2020 Predictions

https://www.idc.com/research/viewtoc.jsp?containerId=US45599219

The post New Security Approach to Cloud-Native Applications appeared first on McAfee Blogs.

You Don’t Have to Give Up Your Crown Jewels in Hopes of Better Cloud Security

By Rich Vorwaller

If you’re like me, you love a good heist film. Movies like The Italian Job, Inception, and Ocean’s 11 are riveting, but outside of cinema these types of heists don’t really happen anymore, right? Think again. In 2019, the Green Vault Museum in Dresden, Germany reported a jewel burglary worthy of its own film.

On November 25, 2019 at 4 a.m., the Berlin clan network started a fire that destroyed the museum’s power box, disabling some of the alarm systems. The clan then cut through iron bars and broke into the vault. Security camera footage published online shows two suspects entering the room with flashlights, moving across a black-and-white-tiled floor. After grabbing 37 sets of jewelry in a couple of minutes, the thieves exited through the same window, replacing the bars to delay detection. They then fled in a car that was later found torched.[1]

Since then, there have been numerous police raids and a couple of arrests, but an international manhunt is still underway and none of the stolen jewels have been recovered. What’s worse, the museum didn’t insure the jewelry, resulting in a $1.2 billion loss. Again, this is a story ripe for Hollywood.

Although we may not read about jewelry heists like this every day, we do see daily headlines about security breaches in which companies lose their own crown jewels – customer data. In fact, the concept of protecting crown jewels is so well known in the cybersecurity industry that MITRE has created a process called Crown Jewels Analysis (CJA), which helps organizations identify their most important cyber assets and create mitigation processes for protecting those assets.[2] Today, exposed sensitive data has become synonymous with cloud storage breaches, and there is no shortage of victims.

To be fair, all of these breaches have a common factor: the humans in charge of managing cloud storage misconfigured it or didn’t enable the correct settings. At the same time, however, we can’t always blame people when security fails. If robbers can so easily access the crown jewels again and again, you can’t keep blaming the security guards; something is wrong with the system.

Some of the most well-versed cloud native companies like Netflix, Twilio, and Uber have suffered security breaches with sensitive data stored in cloud storage.[3] This has gotten to the point that in 2020, the Verizon Data Breach Report listed Errors as the second highest cause for data breaches due “in large part, associated with internet-exposed storage.”[4]

So why is securing cloud storage services so hard? Why do so many different companies struggle with this? When we’ve talked to our customers and asked what makes protecting sensitive data in the cloud so challenging, many simply don’t know whether they have sensitive data in the cloud, or struggle with handling the countless permissions and available overrides for each service.[5] Most have concluded that someone – whether an internal employee, a third-party contractor, or a technology partner – will eventually fail to set the right permissions for their data, and that they need a solution that continuously checks for sensitive data and prevents it from being accessed, regardless of the location or service-level permissions.

Enter the Cloud Native Application Protection Platform (CNAPP). Last month, our new CNAPP service dedicated to securing hybrid cloud infrastructure and cloud-native applications became generally available. One of the core pillars behind CNAPP is Apps & Data – meaning that along with Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP), CNAPP provides a cohesive Data Loss Prevention (DLP) service.

Figure 1: CNAPP Pillars

Typically, the way security vendors perform DLP scans for cloud storage is by copying customer data down to their platform. They do this because, in order to scan for sensitive data, the vendor needs access to your data from a platform that can run its DLP engine. However, this approach presents some challenges:

  • Costs – copying down storage objects means customers incur charges for every bit of data that goes across the wire, including but not limited to request charges, egress charges, and data transfer charges. For some customers these charges are significant enough that they have to pick and choose which objects to scan instead of protecting their entire data store in the cloud.
  • Operational burden – customers who aren’t comfortable sending the data over the public internet have to create tunnels or direct connections to vendor solutions. This means additional overhead, architectural changes, and sometimes backhauling large amounts of data across those connections.
  • Defeats the Purpose of DLP – this was a lesson learned from our MVISION Cloud DLP scanning: for some customers, performing DLP scans over network connections was convenient, but for others it was a huge security risk. Essentially, these solutions require customers to hand over their crown jewels just to determine whether that data contains the crown jewels. Ultimately, we arrived at the conclusion that data should be local, but DLP policies should be global.

This is where we came up with the concept of in-tenant DLP scanning. In-tenant DLP scanning works by launching a small software stack inside the customers’ AWS, Azure, or GCP account. The stack is a headless microservice (called a Micro Point of Presence, or Micro PoP) that pushes out workload protection policies to compute and storage services. The Micro PoP connects to the CNAPP console for management purposes but allows customers to perform local DLP scans within each virtual network segment using direct access. No customer data ever leaves the customers’ tenant.
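As a rough illustration of the in-tenant idea, the sketch below scans object contents locally and reports only verdicts. The regex patterns are invented for illustration; a real Micro PoP uses a full DLP engine and the cloud provider's SDK to enumerate storage objects.

```python
import re

# Illustrative DLP patterns only -- real engines use far richer classification.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_object(data: bytes) -> list:
    """Return the names of patterns matched in a storage object's contents."""
    text = data.decode("utf-8", errors="ignore")
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# Running inside the tenant, the scanner would list objects with the cloud
# provider's SDK and pass each body to scan_object(); only the verdict
# (matched pattern names plus the object key) leaves the tenant -- never
# the data itself.
sample = b"Employee SSN: 123-45-6789 on file"
print(scan_object(sample))  # ['ssn']
```

The key design point is in the last comment: the scan runs next to the data, and only metadata about matches travels back to the management console.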

Figure 2: In-tenant DLP Scanning

Customers can also choose to connect multiple virtual network segments to a single Micro PoP using services like AWS PrivateLink if they want to consolidate DLP scans for multiple S3 buckets. There’s no capacity limit or license limitation to how many Micro PoPs customers can deploy. CNAPP supports in-tenant DLP scanning for Amazon S3, Azure Blob, and GCP storage today with on-prem storage coming soon. Lastly, customers don’t have to pick and choose only one deployment model – they can use our traditional DLP scans (called API scans) over network connections or select our in-tenant DLP scans for more sensitive workloads.

In-tenant DLP scanning is just one of the many innovative features we’ve launched with CNAPP. I invite you to check out the solution for yourself. Visit https://mcafee.com/CNAPP for more information or request a demo at https://mcafee.com/demo. We’d love to get your feedback and see how MVISION CNAPP can help your company stay out of the headlines and make sure your crown jewels are right where they should be.

 

Disclaimer: this blog post contains information on products, services and/or processes in development. All information provided here is subject to change without notice at McAfee’s sole discretion. Contact your McAfee representative to obtain the latest forecast, schedule, specifications, and roadmaps.

[1] https://www.dw.com/en/germanys-heist-that-shocked-the-museum-world-the-green-vault-theft/a-55702898

[2] https://www.mitre.org/publications/systems-engineering-guide/enterprise-engineering/systems-engineering-for-mission-assurance/crown-jewels-analysis

[3] https://www.darkreading.com/cloud/twilio-security-incident-shows-danger-of-misconfigured-s3-buckets/d/d-id/1338447

[4] https://enterprise.verizon.com/resources/reports/dbir/

[5] https://www.upguard.com/blog/s3-security-is-flawed-by-design

The post You Don’t Have to Give Up Your Crown Jewels in Hopes of Better Cloud Security appeared first on McAfee Blogs.

Lessons We Can Learn From Airport Security

By Nigel Hawthorn

Most of us don’t have responsibility for airports, but thinking about airport security can teach us lessons about how we consider, design and execute IT security in our enterprises. Airports have to be constantly vigilant against a multitude of threats: terrorists, criminals and rogue employees. Their security defenses need to combat major attacks, individual threats, stowaways and smuggling, as well as ensuring the safety of passengers – and none of this can be allowed to stop the smooth flow of travelers, as every delay has knock-on business effects. Whew! And this is just the start.

Airport operators are a lesson in supply-chain and third-party communication. They cooperate with airlines, retailers and government agencies, and the threats they face can be catastrophic. They also need to consider mundane problems: how do you move a large number of people around quickly, what do you do when someone leaves a bag to go shopping, and how do you balance risk reduction with traveler comfort? Many needs must be considered and planned for, and when a risk is identified, the response needs to be immediate. All this before thinking about IT-related issues, thefts from retailers, employee assessments and training, building safety, people tracking and … the list seems almost endless.

Our business IT security needs might not seem so complex; however, every enterprise has its external and internal attackers: hackers, ransomware, DDoS attacks to take down your systems, and rogue employees or inadvertent actions by well-meaning employees who don’t realize what link they are clicking on or what data they are over-sharing. At the same time, the business needs to be able to adopt the newest and most effective apps and systems, and employees hate anything that appears to get in their way.

So, let’s see what airports can teach us about thinking about possible threats and appropriate safeguards to deploy a layered approach that protects your data, users and infrastructure.

Take just one threat – terrorism. US airports deploy more than 20 layers of security against it – a mixture of human and technological measures.

There’s no silver bullet – no single piece of security awareness or technology will solve every problem – but when integrated, these measures build together to draw a picture of the possible threat. Our defenses shouldn’t rely on just one technology either; when we have multiple capabilities working together, we can evaluate, identify and address our security needs.

Here’s my table of some of the needs of an airport and equivalent areas in general IT security. Just as in an airport, individual pieces are of limited benefit unless they are brought together. Even though each item improves overall security, a single management console that can correlate all these pieces of knowledge and suggest or make policy decisions is crucial to ensure you get maximum benefit.

Airport → Enterprise IT
Check ticket against passport → Global SSO and multi-factor authentication for every app (including cloud)
X-ray baggage → Scan attachments for malware
Security gates and hand-baggage check → DLP for confidential data loss control
Facial recognition comparing security gate and plane gate with ticket → Zero trust – keep checking at all times
Baggage weight check → Review email attachments – treat previously unseen executables as suspect
CCTV as passengers move around airport → User behavior analytics for risky behavior
Database of travellers, prior travel, destination information → Logging / analytics
Temperature tests for COVID → Block surfing to high-risk web sites
Visa requirements → Access control to sensitive areas or sensitive data
Check expiry date on passport → Reconfirm credentials after a period
History of prior travel → User behavior analytics to understand “normal traffic” for each individual user and alert on unusual patterns
Open Skies Initiative – sharing data with destination, allowing arrest on landing → Insights to check and implement defences before attacks, based on other organizations’ threats
Landing card (where staying, reason, etc.) → Employee justification for actions – feedback loops when challenged
Fingerprints on landing – check against previous travel history → Insights
Security guards, customs agents, check-in staff, people monitoring CCTV → The personal touch – the SOC team investigating threats and defining and implementing policies
Different security lines for additional checks → Remote Browser Isolation
Overall SOC center to correlate all inputs → Global management

 

What have we learned?

Firstly, the job of securing an airport is complex and involves a lot of planning, cooperation with 3rd parties and a vast mixture of people and technology-based security.

Secondly, we cannot rely on one defense, just like airports.

Thirdly, concepts like zero trust, the MITRE ATT&CK framework and the Cyber Kill Chain all aim to look at threats in the round – we need to look at threats from every angle we can and implement the best technology we can.

The best solutions will be integrated: you need to be able to collate activity patterns to evaluate risks and define defenses. McAfee’s Device to Cloud Suites are designed to bring multiple systems together under one umbrella and let you accelerate cloud adoption, improve productivity and combine more than ten different security technologies, all managed by McAfee ePO.

 

Device to Cloud Suites

Easy, comprehensive protection that spans endpoints, web, and cloud

Learn more

 

The post Lessons We Can Learn From Airport Security appeared first on McAfee Blogs.

The Fastest Route to SASE

By Robert Arandjelovic

Shortcuts aren’t always the fastest or safest route from Point A to Point B. Providing faster “direct to cloud” access for your users to critical applications and cloud services can certainly improve productivity and reduce costs, but cutting corners on security can come with huge consequences. The Secure Access Service Edge (SASE) framework shows how to achieve digital transformation without compromising security, but organizations still face a number of difficult choices in how they go about it. Now, McAfee can help your organization take the shortest, fastest, and most secure path to SASE with its MVISION Unified Cloud Edge solution delivered alongside SD-WAN.

Decision makers seek a faster, more efficient high road to cloud and network transformation without compromising security. The need for speed and scalability is crucial, but corners cannot be cut when it comes to maintaining data and threat protection. Safety and security cannot be left behind in a cloud of transformation dust. This blog will look at the major trends driving SASE adoption, and will then discuss how a complete SASE deployment can deliver improved performance, superior threat & data security, lower complexity, and cost savings. We’ll then explain why fast AND secure cloud transformation requires an intelligent, hyperscale platform to accelerate SASE adoption.

Dangerous Detours, Potholes, and Roadblocks

While digital transformation promises substantial gains in productivity and efficiencies, the journey is littered with security and efficiency challenges that can detour your organization from its desired upgrades and safe destination.

Digital transformation challenges that must be addressed include:

  • The Big Shift – Shifting your organization’s applications and data out of corporate data centers and into the cloud.
  • Going More Mobile – The proliferation of mobile devices leaves your corporate resources more vulnerable as they are being accessed by a growing number of devices many of which are personally owned and unmanaged.
  • Work from Anywhere – The seemingly permanent shift towards “Work from Home” creates increased demand for more efficient distributed access to cloud-based corporate resources, one that preserves visibility and control amid the eroding traditional network perimeter.
  • Costly Infrastructure – MPLS connections, VPN concentrators, and huge centralized network security infrastructure represent major investments with significant operational expense. The fact that multiple security solutions typically operate in distinct siloes compounds management effort and costs.
  • Slow Performance, High Latency, and Low Productivity – Dedicated MPLS and VPN lines are also slow and architecturally inefficient, requiring all traffic to go to the data center for security and then all the way back out to internet resources – NOT a straight line.
  • Data Vulnerability – Data resides and moves completely outside the scope of perimeter security through collaboration from the cloud to third parties, between cloud services, and access by unmanaged devices, leaving it prone to incidents without security teams knowing.
  • Evolving Threats and Techniques – Staying ahead of the latest malware remains a priority, but many modern attacks are emerging that use techniques like social engineering to exploit the features of cloud providers and mimic user behavior with legitimate credentials. Detecting these seemingly legitimate behaviors is extremely difficult for traditional security tools.

Feel the Need for Safe, But Less Costly Speed

The increasingly difficult challenge of providing a fast and safe cloud environment to an increasingly distributed workforce has become a major detour in the drive to transform from traditional enterprise networks and local data centers. Companies have had to meet the challenge to “adapt or die” in connecting their employees and devices to corporate resources, but many have generally needed to choose between two unsatisfactory compromises: secure but slow and expensive, or fast and affordable but not secure. Adopting a SASE framework is the way to achieve all of the benefits of cloud transformation without compromise:

  • Reduction in Cost and Complexity – A great benefit for your SOC and IT teams, SASE promotes a network transformation that simplifies your technology stack, reducing costs and complexity.
  • Increased Speed and Productivity – Fast, uninterrupted access to applications and data boosts the user experience and improves productivity. SASE provides ubiquitous, low-latency connectivity for your workforce – even remote workers – via a fast and ubiquitous cloud service, and uses a streamlined “single pass” inspection model that ensures they aren’t bogged down by security.
  • Multi-Vector Data Protection – SASE mandates the protection of data traveling through the internet, within the cloud, and moving cloud to cloud, enabling Zero Trust policy decisions at every control point.
  • Comprehensive Threat Defense – A SASE framework fortifies an organization’s threat defense capabilities for detecting both cloud-native and advanced malware attacks within the cloud and from any web destination.

Selecting the Best Path to Transformation

When network and security decision makers come to the proverbial fork in the road to network transformation, what is the best path that enables fast and affordable access without leading to unacceptable security risk? A recent blog by McAfee detailed four architectural approaches based on the willingness to embrace new technologies and bring them together. After examining the pros and cons of these four paths, the ideal solution to achieve fast, secure, and cost-effective access to web and cloud resources is a SASE model that brings together a ubiquitous, tightly integrated security stack with a robust, direct-to-cloud SD-WAN integrated networking solution. This combination provides a secure network express lane to the cloud, cruising around the latency challenges of slow, expensive MPLS links for connectivity to your applications and resources.

MVISION Unified Cloud Edge (UCE) + SD-WAN: Fast, Furious and Secure

Fast network. Data protection. Threat protection. Speed, security and safety: turbocharged connectivity throughout a hyperscale cloud network, without compromise.

MVISION UCE is the best framework for implementing a SASE architecture to accelerate digital transformation with cloud services, enabling cloud and internet access from any device while empowering ultimate workforce productivity. MVISION UCE brings SASE’s most important security technologies – Cloud Access Security Broker (CASB), Next-gen Secure Web Gateway (SWG), Data Loss Prevention (DLP), and Remote Browser Isolation (RBI) – together in a single cloud-native hyperscale service edge that delivers single-pass security inspection with ultra-low latency and 99.999% availability.

With MVISION Unified Cloud Edge and our SD-WAN integration partners, you can lead a network transformation that reduces costs and speeds up the user experience by using fast, affordable broadband connections instead of expensive MPLS.

MVISION UCE and SD-WAN transform your network architecture by enabling users to access cloud resources directly, without having to go back through their corporate network over an MPLS or VPN connection. Now users can directly access cloud resources, and the McAfee cloud infrastructure is so well-optimized that they can often access resources even FASTER than if there were no intervening security stack! Read how Peering POPs make negative latency possible in this McAfee White Paper.

Because of the way we’ve delivered our product, MVISION UCE + SD-WAN unleashes SASE’s benefits, with data and threat protection that other vendors can’t match.

Reduction in Cost and Complexity, Increased Speed and Agility

  • The resulting converged cloud service is substantially more efficient than building your own SASE by manually integrating separate cloud-based technologies
  • Minimize inefficient traffic backhauling with intelligent, efficient, and secure direct-to-cloud access
  • Protect remote sites using industry-standard dynamic IPsec and GRE protocols, leveraging SD-WAN technology that gets office sites to cloud resources faster and more directly than ever before
  • Enjoy low latency and unlimited scalability with a global cloud footprint and cloud-native architecture that includes global Peering POPs (Point of Presence) reducing delays
  • As a cloud service with 99.999% uptime (Maintained Service Availability) and internet speeds faster than a direct connection, you improve the productivity of your workforce while reducing the cost of your network infrastructure.

Multi-Vector Data Protection

  • The McAfee approach to data protection is unified, meaning each control point works as part of a whole solution.
  • All access points are covered using the same data loss prevention (DLP) engine, giving you an easily traceable path from device to cloud
  • Your data classifications can be set once, and applied in policies that protect the endpoint, web traffic and any cloud interaction
  • All incidents are centralized in one management console for a single view of your data protection practice, giving you a streamlined incident management experience

Comprehensive Threat Defense

  • Intelligence-driven unified protection – CASB, Next-gen SWG, DLP – against the most sophisticated cyberattacks and data loss
  • Remote Browser Isolation (RBI) protection from web-based threats and malware through the remote exclusion and containment of all browsing activities to a remote server hosted in the cloud
  • The industry’s most effective in-line emulation sandbox, capable of removing zero-day malware at line speed
  • User and entity behavior analytics (UEBA) monitoring all cloud activity for anomalies and threats to your data

If you are looking for improved productivity and lower costs of cloud transformation without cutting corners, McAfee MVISION UCE offers the fastest route to SASE — without compromising your data and threat security.

 

The post The Fastest Route to SASE appeared first on McAfee Blogs.

Domain Age as an Internet Filter Criteria

By Jeff Ebeling

Use of “domain age” is a feature being promoted by various firewall and web security vendors as a method to protect users and systems from accessing malicious internet destinations. The concept is to use domain age as a generic traffic-filtering parameter: hosts associated with newly registered domains should be completely blocked, isolated, or treated with high suspicion. This blog will describe what domain age is, how domains are created and registered, the value of domain age, and how domain age can be used most effectively as a complement to other web security tools.

Domain Age Feature Definition

The sites and domains of the internet are constantly changing and evolving. In the first quarter of 2020, an average of over 40,000 domains were registered per day. If the domain of a target host is known, that domain has a registration date available for lookup from various sources. Domain age is a simple calculation of the time between initial domain registration and the current date.

A domain age feature is designed for use in policy control, where an administrator can set a minimum domain age that should be necessary to allow access to a given internet destination. The idea is that since domains are so easy and cheap to establish, new domains should be treated with great care, if not blocked outright. Unfortunately, with most protocols and implementations, domain age policy selection is a binary decision to allow or block. This is not very useful when the ultimate destinations are hosts, subdomains, and destination addresses that can be rapidly activated, changed, and deactivated without ever changing the domain age. As a result, binary security decisions based solely on domain name or domain age will naturally result in both false positives and false negatives that are detrimental to security, user experience, and productivity.
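A minimal sketch of the calculation and the binary policy it feeds, assuming the WHOIS/RDAP creation date has already been retrieved for the domain in question (the 30-day threshold is invented for illustration):

```python
from datetime import datetime, timezone

def domain_age_days(creation_date, now=None):
    """Age in days, given a WHOIS-style creation date string (e.g. '1992-08-05')."""
    created = datetime.strptime(creation_date, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (now - created).days

def allow_by_age(creation_date, min_age_days=30):
    """The binary allow/block decision described above: block young domains."""
    return domain_age_days(creation_date) >= min_age_days

print(domain_age_days("1992-08-05"))  # tens of thousands of days for mcafee.com
print(allow_by_age("1992-08-05"))     # True
```

As the surrounding text notes, this yes/no decision is exactly what produces both false positives (brand-new legitimate sites blocked) and false negatives (old domains repurposed by attackers allowed).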

Domain Registration

IANA (Internet Assigned Numbers Authority) is the department of ICANN (Internet Corporation for Assigned Names and Numbers) responsible for managing the registries of protocol parameters, domain names, IP addresses, and Autonomous System Numbers.

IANA manages the DNS root zone and TLDs (Top Level Domains like .com, .org, .edu, etc.) and registrars are responsible for working with the Internet Registry and IANA to register individual subdomains within the top-level domains.

Details of the registration process and definitions can be found on the IANA site (iana.org). Additional details can be found at https://whois.icann.org/en/domain-name-registration-process, which includes the following statement:

“In some cases, a person or organization who does not wish to have their information listed in WHOIS may contract with a proxy service provider to register domain names on their behalf. In this case, the service provider is the domain name registrant, not the end customer.”

This means that service providers and end customers are free to register a domain once and then reuse, reassign, or sell that domain without changing the registration date or any other registration information. Registrars can and do auction addresses, creating a vast market for domain “squatters and trolls.” An attacker can cheaply purchase an established domain of a defunct business, or register a completely new, legitimate-sounding domain and leave it unused for weeks, months, or years. For example, as of this writing airnigeria.com is up for sale on godaddy.com for just $65 USD. The domain airnigeria.com was originally registered in 2003. IANA and the registrars have no responsibility or control over the usage of domains.

Determining Domain Age

Domain age is determined from the domain record in the Internet Registry managed by the registry operator for a TLD (Top Level Domain). Ultimately the registrar is responsible for the establishment of a domain registration and updating related data. The record in the registry will have an original creation date but that date doesn’t change unless the registration for a specific domain expires and the domain name is re-registered. Because of this, domain age is an extremely inaccurate measure of when an individual destination became active.

And what if only the destination IP address is known at the time of the filtering decision? This could be the case for filtering the first packet sent to a specific destination (TCP SYN or first UDP packet of some other network or transport level protocol). One way to get the domain for the destination would be a reverse DNS lookup, but the domain for the host may not match the domain that was originally submitted for resolution, so what value is domain age there?

For example, www.mcafee.com can currently resolve to 172.224.15.98 which reverse resolves to a172-224-15-98.deploy.static.akamaitechnologies.com. While the mcafee.com domain was registered on 1992-08-05, akamaitechnologies.com was registered on 1998-08-18. Both are long established domains, but just because this destination, in the well-established mcafee.com domain, is hosted on the well-established akamaitechnologies.com domain, this doesn’t provide any indication of when the www.mcafee.com, or 172.224.15.98 destination became active, or the risk of communicating with that IP address. Domain age becomes even less useful when we consider destinations hosted in the public cloud (IaaS and SaaS) using the providers’ domains.
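The mismatch above can be sketched with a naive registrable-domain extraction. The helper is only illustrative: production code should consult the Public Suffix List, since two labels are not enough for TLDs like .co.uk.

```python
def registrable_domain(hostname):
    """Naive registrable-domain extraction: keep the last two labels.
    Real code should use the Public Suffix List instead."""
    return ".".join(hostname.rstrip(".").split(".")[-2:])

# The hostname originally requested vs. the name a reverse DNS lookup returns:
requested = "www.mcafee.com"
reverse = "a172-224-15-98.deploy.static.akamaitechnologies.com"

print(registrable_domain(requested))  # mcafee.com
print(registrable_domain(reverse))    # akamaitechnologies.com
# Different domains: a domain-age lookup keyed on the reverse-DNS name would
# measure the CDN's registration date, not the site's.
```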

Obtaining the wrong domain and therefore wrong domain age from reverse lookup could be somewhat mitigated by tracking the DNS queries of the client and attempting to map those domains back to the requested destination IP. However, doing this would also be dependent on having full visibility into all DNS requests from the client, and assumes that the destination IP address was determined using standard DNS or by the system providing the domain age filtering.

Challenges with Using Domain Age as a Generic Filter Criteria

Even if the correct domain for the transmission can be established, and the domain age can be accurately retrieved, there are still issues that should be considered.

Registrars are free to maintain, change, and reassign established domains to any customer, and resellers can do the same. This greatly diminishes the usefulness of domain age as a stand-alone filtering parameter because a malicious actor can easily acquire an existing well-established domain with a neutral or even positive reputation. A malicious actor can also register a new domain long before it is put into use as a command and control or attack domain.

Legitimate and perfectly safe sites are constantly being registered and established, in many cases only days or even hours before being put into use. When using domain age as a filter criterion, there will always be a tradeoff between false positive and false negative rates.

It should also be noted that domain age provides little value relative to when an individual hostname record was created within a domain. Well established domains can have an infinite number of subdomains and individual hosts within those domains, and there is no way to accurately determine hostname age or even when the name was associated with an active IP. All that could possibly be determined is that the destination hostname is part of a domain that was registered at some earlier date.

The bottom line is that domain age is not nearly granular or substantive enough to make a useful filtering decision on its own. However, domain age could provide some limited security value in the complete absence of more specific criteria, provided the false positive rate and false negative rate associated with the selected recency threshold can be tolerated. Domain age can provide supplemental value when combined with other more definitive filter criteria for example protocol, content type, host category, host reputation, host first seen, frequency of host access, web service attributes, and others.
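To make "supplemental, not stand-alone" concrete, the toy policy below lets reputation and categorization decide first and consults domain age only for unverified destinations. All thresholds, score ranges, and verdict names are invented for illustration.

```python
def access_decision(reputation, categorized, domain_age_days):
    """Toy filtering policy: reputation and categorization lead; domain age
    is only a tie-breaker for otherwise-unknown destinations.
    reputation: hypothetical 0-100 score (higher = more trustworthy)."""
    if reputation <= 20:
        return "block"            # known-bad: block regardless of age
    if categorized and reputation >= 70:
        return "allow"            # verified, well-reputed destination
    # Unverified destination: age supplements the decision, it doesn't make it.
    return "isolate" if domain_age_days < 30 else "coach"

print(access_decision(90, True, 5000))  # allow
print(access_decision(50, False, 5))    # isolate
```

Note that even here a "new" domain is never allowed purely because it is old, which is the false-negative trap the text warns about.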

Domain Age in the Context of HTTP/S and Proxy Based Filtering

More specific criteria are always available when the HTTP protocol is in use. HTTP and HTTPS filtering is most effectively handled via explicit or transparent proxy. If the protocol is followed (enforced by the device or service), information cannot be transferred, and a compromise or attack cannot be initiated, until after TCP connection establishment.

Given that the traffic is being proxied, and HTTPS can be decrypted, the accurate Fully Qualified Domain Name (FQDN) of the host, the URL path, and the URL parameters can all be identified and verified by the proxy for use in filtering decisions. The ability to look up information on the FQDN, full URL path, and URL parameters provides much more valuable information about the history, risk level, and usage of the specific site, destination, and service, independent of the domain or the domain’s date of registration. Such contextual data can be further enhanced when the proxy associates the request with a specific service and its data security attributes (such as type of service, intellectual property ownership, breach history, etc.).
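Python's standard urllib.parse shows how much more granular the proxied view is than a bare domain name. The URL below is hypothetical, invented for illustration:

```python
from urllib.parse import urlparse, parse_qs

# Once HTTPS is decrypted at the proxy, the full request becomes visible:
url = "https://files.example-service.com/share/download?id=abc123&token=xyz"
parts = urlparse(url)

print(parts.hostname)         # files.example-service.com  (exact FQDN)
print(parts.path)             # /share/download            (URL path)
print(parse_qs(parts.query))  # {'id': ['abc123'], 'token': ['xyz']}
# Each element can be checked against threat intelligence independently --
# far more specific than the registration date of example-service.com.
```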

Industry leading web proxy vendors maintain extensive and comprehensive databases of the most frequently used sites, domains, applications, services, and URLs. The McAfee Global Threat Intelligence and Cloud Registry databases associate sites, domains, and URLs with geolocation, category, service, service attributes, applications, data risk reputations, threat reputations and more. As a side benefit, lack of an entry in the databases for a specific host, domain, service, or URL is an extremely strong, and much more accurate, indication that the site is newly established or little used and therefore should not be inherently trusted. Such sites should be treated with caution and blocked or coached or isolated (the latter two options are uniquely available with proxied HTTP/S) based on that criteria alone, regardless of domain age.

McAfee’s Unified Cloud Edge provides all of the above functionality and includes remote browser isolation (RBI) for uncategorized, unverified, and otherwise risky sites. This virtually eliminates the risks of browsers or other applications accessing uncategorized sites, without adding the complications of false positives and false negatives from a domain age filter.

When using HTTP/S, hostname age, or even first and/or last hostname seen date could provide additional value, but domain age is pretty much useless when the FQDN and more specific site or service related information is available. Best practice is to block, isolate, or at a minimum, coach unverified sites and services without regard to domain age. Allowing unverified sites or services based on domain age adds significant risk of false negatives (risky sites and services being allowed simply because the domain was not recently registered). Generically blocking sites and services based on domain age alone would lead to over-blocking sites that have established good reputations and should not be blocked.

Conclusion

Domain age can be somewhat useful for supplementing filter decisions in situations where no other more accurate and specific information is available about the destination of a network packet. When considering use of domain age for HTTP/S filtering, it is an extremely poor substitute for a more comprehensive threat intelligence and service database. If the decision is made to deviate from best practice and allow HTTP/S connections to unverified sites, without isolation, then domain age can provide limited supplemental value by blocking unverified sites that are in newly registered domains. This comes at the expense of a false sense of security and much greater risk of false negatives when compared to the best practice of using comprehensive web threat intelligence, performing thorough request and response analysis, and simply blocking, isolating, or coaching unverified sites.

 

The post Domain Age as an Internet Filter Criteria appeared first on McAfee Blogs.

Finally, True Unified Multi-Vector Data Protection in a Cloud World

By Suhaas Kodagali

This week, we announced the latest release of MVISION Unified Cloud Edge, which included a number of great data protection enhancements. With working patterns and data workflows dramatically changed in 2020, this release couldn’t be more timely.

According to a report by Gartner earlier in 2020, 88% of organizations have encouraged or required employees to work from home. And a report from PwC found that corporations have, by and large, deemed the 2020 remote work effort a success. Many executives are reconfiguring office layouts to cut capacity by half or more, indicating that remote work is here to stay as a part of work life even after we come out of the restrictions placed on us by the pandemic.

Security teams, scrambling to keep pace with the work from home changes, are grappling with multiple challenges, a key one being how to protect corporate data from exfiltration and maintain compliance in this new work from home paradigm. Employees are working in less secure environments and using multiple applications and communication tools that may not have been permitted within the corporate environment. What if they upload sensitive corporate data to a less than secure cloud service? What if employees use their personal devices to download company email content or Salesforce contacts?

McAfee’s Unified Cloud Edge provides enterprises with comprehensive data and threat protection by bringing together its flagship secure web gateway, CASB, and endpoint DLP offerings into a single integrated Secure Access Service Edge (SASE) solution. The unified security solution offered by UCE features unified data classification and incident management across the network, sanctioned and unsanctioned (Shadow IT) cloud applications, web traffic, and endpoints, thereby covering multiple key exfiltration vectors.

UCE Protects Against Multiple Data Exfiltration Vectors

1. Exfiltration to High Risk Cloud Services

According to a recent McAfee report, 91% of cloud services do not encrypt data at rest and 87% of cloud services do not delete data upon account termination, allowing the cloud service to own customer data in perpetuity. McAfee UCE detects the usage of risky cloud services using over 75 security attributes and enforces policies, such as blocking all services with a risk score over 7, which helps prevent exfiltration of data into high risk cloud services.
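A policy like "block services with a risk score over 7" can be sketched as below. The attribute names and weights are purely illustrative stand-ins for the 75+ security attributes mentioned above, not McAfee's actual scoring model:

```python
# Hypothetical risk scoring over a few attributes the text mentions.
def risk_score(attrs: dict) -> float:
    score = 0.0
    if not attrs.get("encrypts_at_rest"):
        score += 4.0      # 91% of services fail this check
    if not attrs.get("deletes_on_termination"):
        score += 3.0      # 87% of services fail this check
    if attrs.get("anonymous_use_allowed"):
        score += 2.0
    return min(score, 10.0)

def policy_action(attrs: dict, threshold: float = 7.0) -> str:
    """Block any service whose computed risk exceeds the policy threshold."""
    return "block" if risk_score(attrs) > threshold else "allow"
```

A service failing all three checks scores 9.0 and is blocked; a service passing them scores 0.0 and is allowed.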

2. Exfiltration to permitted cloud services

Some cloud services, especially the high risk ones, can be blocked. But there are others which may not be fully sanctioned by IT, but fulfill a business need or improve productivity and thus may have to be allowed. To protect data while enabling these services, security teams can enforce partial controls, such as allowing users to download data from these services but blocking uploads. This way, employees remain productive while company data remains protected.

3. Exfiltration from sanctioned cloud services

Digital transformation and cloud-first initiatives have led to significant amounts of data moving to cloud data stores such as Office 365 and G Suite. So, companies are comfortable with sensitive corporate data living in these data stores but are worried about it being exfiltrated to unauthorized users. For example, a file in OneDrive can be shared with an unauthorized external user, or a user can download data from a corporate SharePoint account and then upload it to a personal OneDrive account. MVISION Cloud customers commonly apply collaboration controls to block unauthorized third party sharing and use inline controls like Tenant Restrictions to ensure employees always login with their corporate accounts and not with their personal accounts.

4. Exfiltration from endpoint devices

An important consideration for all security teams, especially given most employees are now working from home, is the plethora of unmanaged devices such as storage drives, printers, and peripherals that data can be exfiltrated into. In addition, services that enable remote working, like Zoom, WebEx, and Dropbox, have desktop apps that enable file sharing and syncing actions that cannot be controlled by network policies because of web socket or certificate pinning considerations. The ability to enforce data protection policies on endpoint devices becomes crucial to protect against data leakage to unauthorized devices and maintain compliance in a WFH world.

5. Exfiltration via email

Outbound email is one of the critical vectors for data loss. The ability to extend and enforce DLP policies to email is an important consideration for security teams. Many enterprises choose to apply inline email controls, while some choose to use the off-band method, which surfaces policy violations in a monitoring mode only.

UCE provides a Unified and Comprehensive Data Protection Offering

Using point security solutions for data protection raises multiple challenges. Managing policy workflows in multiple consoles, rewriting policies, and aligning incident information in multiple security products result in operational overhead and coordination challenges that slow down the teams involved and hurt the company’s ability to respond to a security incident. UCE brings web, CASB, and endpoint DLP into a converged offering for data protection. By providing a unified experience, UCE increases consistency and efficiencies for security teams in multiple ways.

1. Reusable classifications

A single set of classifications can be reused across different McAfee platforms, including ePO, MVISION Cloud, and Unified Cloud Edge. For example, if a classification is implemented to identify Brazilian driver's license information for DLP policies on endpoint devices, the same classification can be applied to collaboration policies in Office 365 or outgoing emails in Exchange Online. If the endpoint and cloud were instead secured by two separate products, it would require creating disparate classifications and policies on both platforms and then ensuring the two policies have the same underlying regex rules to keep policy violations consistent. This increases operational complexity and overhead for security teams.
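The reuse idea can be sketched as one classification backing multiple policies. The regex below is an illustrative stand-in (eleven consecutive digits), not the actual rule a DLP product would use to detect Brazilian driver's license numbers, and the function names are hypothetical:

```python
import re

# One shared classification library, consumed by every DLP policy.
CLASSIFICATIONS = {
    "br_drivers_license": re.compile(r"\b\d{11}\b"),  # illustrative pattern only
}

def matches(classification: str, text: str) -> bool:
    return bool(CLASSIFICATIONS[classification].search(text))

# Both policies reference the same classification, so a single regex change
# stays consistent across the endpoint and cloud vectors.
def endpoint_policy(text: str) -> str:
    return "block_copy" if matches("br_drivers_license", text) else "allow"

def cloud_policy(text: str) -> str:
    return "block_share" if matches("br_drivers_license", text) else "allow"
```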

2. Converged incident infrastructure

Customers using MVISION Cloud have a unified view of cloud, web, and endpoint DLP incidents in a single unified console. This can be extremely helpful in scenarios where a single exfiltration act by an employee is spread across multiple vectors. For example, an employee attempts to share a company document with his personal email address, and then tries to upload it to a shadow service like WeTransfer. When both attempts fail, he uses a USB drive to copy the document from his office laptop. Each of these attempts fires an incident, but when a consolidated view of these incidents is presented based on the file, admins gain a unique perspective and can choose a different remediation action than they would when trying to parse these incidents from separate solutions.
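The correlation described above amounts to grouping incidents by user and file. The sketch below is a hypothetical illustration of that idea; the incident fields and function names are assumptions for the example:

```python
from collections import defaultdict

def consolidate(incidents):
    """Group DLP incidents by (user, file_hash) and flag multi-vector attempts."""
    grouped = defaultdict(list)
    for inc in incidents:
        grouped[(inc["user"], inc["file_hash"])].append(inc["vector"])
    # Escalate only when the same user hit multiple vectors with the same file.
    return {key: vectors for key, vectors in grouped.items() if len(vectors) > 1}

# The email -> shadow IT -> USB scenario from the text:
incidents = [
    {"user": "alice", "file_hash": "abc123", "vector": "email"},
    {"user": "alice", "file_hash": "abc123", "vector": "shadow_it_upload"},
    {"user": "alice", "file_hash": "abc123", "vector": "usb_copy"},
    {"user": "bob",   "file_hash": "def456", "vector": "email"},
]
escalations = consolidate(incidents)
```

Here only alice's three correlated attempts surface as an escalation; bob's single incident stays a routine event.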

3. Consistent experience

McAfee data protection platforms provide customers with a consistent experience in creating a DLP policy, whether it is securing sanctioned cloud services, protecting against malware, or preventing data exfiltration to shadow cloud services. Having a familiar workflow makes it easy for multiple teams to create and manage policies and remediate incidents.

As the report from PwC states, the work from home paradigm is likely not going away anytime soon. As enterprises prepare for the new normal, a solution like Unified Cloud Edge enables the security transformation they need to gain success in a remote world.

The post Finally, True Unified Multi-Vector Data Protection in a Cloud World appeared first on McAfee Blogs.

3 Reasons Why Connected Apps are Critical to Enterprise Security

By McAfee

Every day, new apps are developed to solve problems and create efficiency in individuals' lives. Employees are continually experimenting with new apps to enhance productivity and simplify complex matters. When in a pinch, using Dropbox to share large files or an online PDF editor for quick modifications is common among employees. However, these apps, although useful, may not be sanctioned or observable by an IT department. This rapid adoption, while bringing the benefit of increased productivity and agility, also raises the "shadow IT" problem, where IT has little to no visibility into the cloud services that employees are using or the risk associated with those services. Without visibility, it becomes very difficult for IT to manage both cost expenditure and risk in the cloud. Per the McAfee Cloud Adoption and Risk report, the average enterprise today uses 1,950 cloud services, of which less than 10% are enterprise ready. To avert a data breach (the average cost of a data breach in the US being $7.9 million), enterprises must exercise governance and control over their unsanctioned cloud usage. Does this sound all too familiar? It's because these are many of the issues we face with shadow IT, and we are facing a similar security risk today with connected apps.

What are Connected Apps?

Collaboration platforms such as Office 365 enable teams and end-users to install and connect third-party apps, or create their own custom apps, to help solve new and existing business problems. For example, Microsoft hosts the Microsoft Store, where end-users can browse through thousands of apps and install them into their company's Office 365 environment. These apps augment native Microsoft Office capabilities and help increase end-user productivity. Examples include the WebEx add-in to set up meetings from Outlook or the SurveyMonkey add-in to initiate surveys from Microsoft Teams. When these apps are added, they will often ask the end-user to authorize access to their cloud app resources. This could be data stored in the app, as in SharePoint, or calendar information or email content. Authorizing access to third-party apps creates concerns for many organizations.

Reason 1: Risky Data Exfiltrated to 3rd Party Apps 

What if the app itself is risky? For example, PDF converter apps ask for access to all data so they can generate PDF versions for sharing; corporate data moves out of the corporate cloud app and into these risky applications. Even if the app is not risky, it may be accessing cloud resources such as mail, drive, or calendar, which contain data the company considers highly sensitive. For example, the Evernote app for Outlook can be used for saving email data. The app itself is not risky, but the company may not have approved it for employees to use. In that case, introducing apps in this manner represents exfiltration of corporate data.

Reason 2: No Coverage with Existing Controls 

Connected apps establish cloud-to-cloud connections with your sanctioned cloud services that are not visible to existing network policies and controls. So, even if a company has put controls in place on the web gateway or firewall to block unauthorized file sharing services, employees can still add a connected app from the marketplace and bypass these existing controls. Even API-based DLP policies do not apply to data moving into connected apps. All of this means that organizations need to exercise more oversight and control over their employees' usage of connected apps.

Reason 3: Shared Responsibility 

The Shared Responsibility model applies to connected apps as well. Cloud services like Google and Microsoft provide a marketplace for customers to add apps, but they expect companies to take responsibility for their data and users and to ensure that the usage of these connected apps is in line with security and compliance policies.

MVISION Cloud provides comprehensive security through visibility, control, and troubleshooting for third-party applications connected to sanctioned cloud services, such as these marketplace apps. With a database of over 30,000 cloud services, MVISION Cloud provides comprehensive and up-to-date information on connected apps plugged into corporate cloud services such as Microsoft 365 and G Suite. Customers can use this visibility to apply controls that block, allow, or selectively allow apps for some users. As large customers deploy connected apps to hundreds of thousands of users, MVISION Cloud also provides troubleshooting tools to track activities and add notes, allowing quick diagnosis and resolution of support issues. The brief video below provides a deeper look into securing connected apps with MVISION Cloud.
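The block / allow / selectively-allow model described above can be sketched as a simple policy table. The app names, policy structure, and default-deny behavior below are hypothetical illustrations, not the product's actual configuration format:

```python
# Hypothetical connected-app governance policy.
POLICY = {
    "pdf-converter": {"action": "block"},                              # risky app
    "webex":         {"action": "allow"},                              # sanctioned
    "evernote":      {"action": "allow_users",
                      "users": {"alice", "bob"}},                      # selective
}

def app_decision(app: str, user: str) -> str:
    # Unknown marketplace apps default to block (default-deny posture).
    rule = POLICY.get(app, {"action": "block"})
    if rule["action"] == "allow_users":
        return "allow" if user in rule["users"] else "block"
    return rule["action"]
```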

The post 3 Reasons Why Connected Apps are Critical to Enterprise Security appeared first on McAfee Blogs.

Securing Containers with NIST 800-190 and MVISION CNAPP

By Sunny Suneja

Government and Private Sector organizations are transforming their businesses by embracing DevOps principles, microservice design patterns, and container technologies across on-premises, cloud, and hybrid environments. Container adoption is becoming mainstream to drive digital transformation and business growth and to accelerate product and feature velocity. Companies have moved quickly to embrace cloud native applications and infrastructure to take advantage of cloud provider systems and to align their design decisions with cloud properties of scalability, resilience, and security first architectures. The declarative nature of these systems enables numerous advantages in application development and deployment, like faster development and deployment cycles, quicker bug fixes and patches, and consistent build and monitoring workflows. These streamlined and well controlled design principles in automation pipelines lead to faster feature delivery and drive competitive differentiation.

As more enterprises adapt to cloud-native architectures and embark on multi-cloud strategies, demands are changing usage patterns, processes, and organizational structures. However, the unique methods by which application containers are created, deployed, networked, and operated present unique challenges when designing, implementing, and operating security systems for these environments. They are ephemeral, often too numerous to count, talk to each other across nodes and clusters more than they communicate with the outside endpoints, and they are typically part of fast-moving continuous integration/continuous deployment (CI/CD) pipelines. Additionally, development toolchains and operations ecosystems continue to present new ways to develop and package code, secrets, and environment variables. Unfortunately, this also compounds supply chain risks and presents an ever-increasing attack surface.

Lack of a comprehensive container security strategy, or often not knowing where to start, can make it challenging to effectively address the risks presented in these unique ecosystems. While teams have recognized the need to evolve their security toolchains and processes to embrace automation, it is imperative that they integrate specific security and compliance checks early into their respective DevOps processes. Legitimate concerns persist about misconfigurations and runtime risks in cloud native applications, and still too few organizations have a robust security plan in place.

These complex problem definitions have led to the development of a special publication from the National Institute of Standards and Technology (NIST): NIST SP 800-190, the Application Container Security Guide. It provides guidelines for securing container applications and infrastructure components, including a sectional review of container fundamentals, key risks presented by core components of application container technologies, countermeasures, threat scenario examples, and actionable information for planning, implementing, operating, and maintaining container technologies.

MVISION Cloud Native Application Protection Platform (CNAPP) is a comprehensive device-to-cloud security platform for visibility and control across SaaS, PaaS, & IaaS platforms.  It provides deep coverage on cloud native security controls that can be implemented throughout the entire application lifecycle. By mapping all the applicable risk elements and countermeasures from Sections 3 and 4 of NIST SP 800-190 to capabilities within the platform, we want to provide an architectural point of reference to help customers and industry partners automate compliance and implement security best practices for containerized application workloads. This mapping and a detailed review of platform capabilities aligned with key countermeasures can be referenced here.

As outlined in one of the supporting charts in the whitepaper, CNAPP has capabilities that effectively address all the risk elements described in the NIST special publication guidance.

While the breadth of coverage is critical, it is worth noting that the most effective way to secure containerized applications is to embed security controls into each phase of the container lifecycle. If we leverage the Department of Defense's Enterprise DevSecOps Reference Design guidance as a point of reference, it describes the DevSecOps lifecycle in terms of nine transition stages: plan, develop, build, test, release, deliver, deploy, operate, and monitor.

DevSecOps Software Lifecycle: Referenced in DoD Enterprise DevSecOps Reference Design v1.0 Guidance

The foundational principle of DevSecOps implementations is that the software development lifecycle is not a monolithic linear process.  The “big bang” style delivery of the Waterfall SDLC process is replaced with small but more frequent deliveries, so that it is easier to change course as necessary. Each small delivery is accomplished through a fully automated process or semi-automated process with minimal human intervention to accelerate continuous integration and delivery. The DevSecOps lifecycle is adaptable and has many feedback loops for continuous improvement.

Specific to containerized applications and workloads, a more abstract view of a container’s lifecycle spans across three high-level phases of Build, Deploy, and Run.

Build

The “Build” phase centers on what ends up inside the container images in terms of the components and layers that make up an application. Images are usually created by developers, and security efforts in this phase typically focus on reducing business risk later in the container lifecycle by applying best practices and identifying and eliminating known vulnerabilities early. These assessments can be conducted iteratively in an “inner” loop as developers perform incremental builds and add security linting and automated tests, or can be driven via an “outer” feedback loop of operational security reviews and penetration testing efforts.

Deploy

In the “Deploy” phase, developers configure containerized applications for deployment into production. Context grows beyond information about images to include details about configuration options available for orchestrated services. Security efforts in this phase often center around complying with operational best practices, applying least-privilege principles, and identifying misconfigurations to reduce the likelihood and impact of potential compromises.

Runtime

“Runtime” is broadly classified as a separate phase wherein containers go into production with live data, live users, and exposure to networks that could be internal or external in nature. The primary purpose of implementing security during the runtime phase is to protect running applications as well as the underlying container infrastructure by finding and stopping malicious actors in real time.

Docker containerized application life cycle. 

By applying this understanding of container lifecycle stages to respective countermeasures that can be implemented and audited upon within MVISION Cloud, CNAPP customers can establish an optimal security posture and achieve synergies of shift left and runtime security models.   Security assessments are critically important early in planning and design, where important decisions are made about architecture approach, development tooling and technology platforms and where mistakes or misunderstandings can be dangerous and expensive. As DevOps teams move their workloads into the cloud, security teams will need to implement best practices that apply operations, monitoring and runtime security controls across public, private, and hybrid cloud consumption models.

CNAPP first discovers all the cloud-native components mapped to an application, including hosts, IaaS/PaaS services, containers, and the orchestration context that a container operates within.  With the use of native tagging and network flow log analysis, customers can visualize cloud infrastructure interactions including across compute, network, and storage components. Additionally, the platform scans cloud native object and file stores to assess presence of any sensitive data or malware. Depending on the configuration compliance of the underlying resources and data sensitivity, an aggregate risk score is computed per application which provides detailed context for an application owner to understand risks and prioritize mitigation efforts.

As a cloud security posture management platform, CNAPP provides a set of capabilities that ensure that assets comply with industry regulations, best practices, and security policies. This includes proactive scanning for vulnerabilities in container images and VMs and ensuring secure container runtime configurations to prevent non-compliant builds from being pushed to production.  The same principles apply to orchestrator configurations to help secure how containers get deployed using CI/CD tools. These baseline checks can be augmented with other policy types to ensure file integrity monitoring and configuration hardening of hosts (e.g., no insecure ports or unnecessary services), which help apply defense-in-depth by minimizing the overall attack surface.
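The idea of "preventing non-compliant builds from being pushed to production" is, at its core, a CI gate over scanner findings. The sketch below is a hypothetical illustration of such a gate; the severity names, threshold scheme, and function names are assumptions, not any specific scanner's or CNAPP's API:

```python
# Hypothetical CI gate: fail the build when image-scan findings exceed policy.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_build(findings, max_severity="medium"):
    """findings: list of {'cve': str, 'severity': str} from an image scanner.

    Returns ('pass', []) or ('fail', <violations>) so the pipeline can
    block promotion of non-compliant images to production.
    """
    limit = SEVERITY_RANK[max_severity]
    violations = [f for f in findings
                  if SEVERITY_RANK[f["severity"]] > limit]
    return ("fail", violations) if violations else ("pass", [])
```

Wired into a pipeline, a "fail" result would stop the deploy stage, which is the shift-left enforcement point the text describes.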

Finally, the platform enforces policy-based immutability on running container instances (and hosts) to help identify process-, service-, and application-level whitelists. By leveraging the declarative nature of containerized workloads, threats can be detected during the runtime phase, including any exposure created as a result of misconfigurations, application package vulnerabilities, and runtime anomalies such as execution of reverse shell or other remote access tools. While segmentation of workloads can be achieved in the build and deploy phases of a workload using posture checks for constructs like namespaces, network policies, and container runtime configurations to limit system calls, the same should also be enforced in the runtime phase to detect and respond to malicious activity in an automated and scalable way. The platform defines baselines and behavioral models that can be especially effective for investigating attempts at network reconnaissance, remote code execution due to zero-day application library and package vulnerabilities, and malware callbacks. Additionally, by mapping these threats and incidents to the MITRE ATT&CK tactics and techniques, it provides a common taxonomy to cloud security teams regardless of the underlying cloud application or an individual component. This helps them extend their processes and security incident runbooks to the cloud, including their ability to remediate security misconfigurations and preemptively address all the container risk categories outlined in NIST 800-190.

The post Securing Containers with NIST 800-190 and MVISION CNAPP appeared first on McAfee Blogs.

Think Beyond the Edge: Why SASE is Incomplete Without Endpoint DLP

By Shlomi Zrahia

The move to a distributed workforce came suddenly and swiftly. In February 2020, less than 40% of companies allowed most of their employees to work from home one day a week. By April, 77% of companies had most of their employees working exclusively from home.

Organizations have been in the midst of digital transformation projects for years, but this development represented a massive test. Most organizations were pleasantly surprised to see that their employees could remain productive while working from home, thanks to successful cloud migration projects and the adoption of various mobility and remote access technologies. But companies have become more worried that they have far less visibility into data on employees' systems when they work remotely. Traditional network DLP can protect data while it traverses the network up to the corporate edge, but it has little visibility into data once it leaves the corporate network, and its effectiveness is further limited when the workforce is distributed.

Figure 1: Data protection gaps resulting from direct-to-cloud access.

More than three-quarters of CIOs are concerned with the impact that this increased data sprawl is having on security. Despite the fact that roughly half of all corporate data was stored in the cloud last year, only 36% of companies could enforce data protection policies there. Many organizations therefore forced home-based users to hairpin all traffic back to the corporate data center via VPN so that they could be protected by the network data loss prevention (DLP) system. This maintained security, but it came at the cost of poor performance and reduced worker productivity.

Cloud-native security is part of the solution

Organizations that employed cloud-based security technologies like a Cloud Access Security Broker (CASB), DLP, or Secure Web Gateway (SWG) could enable their users to perform their jobs with fast and secure direct-to-cloud access. However, this still leads to headaches: IT organizations have to manage multiple disparate solutions, while users face latency as their traffic bounces between multiple siloed technologies before they can access their data.

The Secure Access Service Edge (SASE) presents a solution to this dilemma by providing a framework for organizations to bring all of these technologies together into a single integrated cloud service. End users enjoy low-latency access to the cloud, while IT management and costs are simplified. So everyone wins, right? Not entirely.

Many SASE proponents posit that the best way to architect a distributed Work From Home environment would be to have all security functionality in the cloud at the “service edge”, while end user devices have only a small agent to redirect traffic to that service edge. However, this model poses a data protection dilemma. While a cloud-delivered service can extend data protection to data centers, cloud applications, and web traffic, there are a number of blind spots:

  • Every remote worker’s home is now a remote office with a range of unmanaged, unsecured devices like printers, storage drives, and peripherals that can be compromised or be used to exfiltrate data.
  • Attached devices like USB keys can be used to get data off of a corporate device and beyond the reach of data protection controls.
  • Cloud applications like Webex, Dropbox, and Zoom all have desktop companion apps that enable actions like file syncing or screen/file sharing; these websocket apps run locally on the user’s system and are not subject to cloud-based data protection policies.

These blind spots can only be addressed by endpoint-based data loss prevention (DLP) that enforces data protection policy on the user’s device. This is not dissimilar to how SASE frameworks rely on SD-WAN customer premises equipment (CPE) that perform essential network flow functionality at branch office locations. Therefore, it’s imperative to look for SASE solutions that include endpoint DLP coverage.

Figure 2: How endpoint DLP uniquely addresses home office security gaps.

Bringing it all together is the key

It is easy to say that, to address the challenges of cloud transformation and the remote workforce, existing network DLP solutions – with their dedicated management interface, data classifications, and policy workflows – need to be accompanied by similar capabilities in the cloud, and then again on the endpoint. In practice, that is completely impractical when IT organizations are already struggling with the status quo on finite budgets and with scarce skilled personnel. Not only is it impractical, but it undermines the consolidation, simplification, and cost reduction promised both by digital transformation and the SASE framework.

The answer to this dilemma is a comprehensive data protection solution that encompasses networks, devices, and the cloud, something that is uniquely delivered by McAfee MVISION Unified Cloud Edge (UCE). MVISION UCE is a cloud-native solution that seamlessly converges core security technologies such as Data Loss Prevention (DLP), cloud access security broker (CASB) and next-gen secure web gateway (SWG) to help accelerate SASE adoption. MVISION UCE features multi-vector data protection that features unified data classification and incident management across the network, sanctioned and unsanctioned Shadow IT cloud applications, web traffic, and equally important, endpoint DLP. This provides corporate information-security teams the necessary visibility, control and management capability to secure home-based and mobile workers as they access data anywhere.

Figure 3: Unified Multi-Vector Data Protection

To manage data security of a distributed workforce, linking device security to corporate policy becomes extremely important. With a managed DLP agent on the device, IT security can know where sensitive data exists, block untrusted services and removable media, protect against cloud services and desktop apps, and educate employees to potential dangers.

Historically, data protection has focused on a central point like the network or the cloud because implementing it on the device has been difficult. However, with McAfee's Unified Cloud Edge (UCE), DLP becomes an easy-to-deliver feature.

Centrally managed by McAfee MVISION ePO, McAfee DLP can be easily deployed to endpoints. With its unique device-to-cloud DLP features, on-prem DLP policies can be extended to the cloud with a single click, in under one minute. Shared data classification tags ensure consistent multi-environment protection for your most sensitive data across endpoints, network, and cloud.

Incorporating security into the cloud and the edge, and delivering data protection at the endpoint, are the only way to really deliver on what SASE promises and unlock your remote workforce. Looking to the future, a widely distributed workforce is here to stay. Companies need to take steps to secure devices and data wherever they are.

To find out more, please visit www.mcafee.com/unifiedcloud.

The post Think Beyond the Edge: Why SASE is Incomplete Without Endpoint DLP appeared first on McAfee Blogs.

How CASB and EDR Protect Federal Agencies in the Age of Work from Home

By John Amorosi

Malicious actors are increasingly taking advantage of the burgeoning at-home workforce and expanding use of cloud services to deliver malware and gain access to sensitive data. According to an Analysis Report (AR20-268A) from the Cybersecurity and Infrastructure Security Agency (CISA), this new normal work environment has put federal agencies at risk of falling victim to cyber-attacks that exploit their use of Microsoft Office 365 (O365) and misuse their VPN remote access services.

McAfee’s global network of over a billion threat sensors affords its threat researchers the unique advantage of being able to thoroughly analyze dozens of cyber-attacks of this kind. Based on this analysis, McAfee supports CISA’s recommendations to help prevent adversaries from successfully establishing persistence in agencies’ networks, executing malware, and exfiltrating data. However, McAfee also asserts that the nature of this environment demands that additional countermeasures be implemented to quickly detect, block and respond to exploits originating from authorized cloud services.

Read on to learn from McAfee’s analysis of these attacks and understand how federal agencies can use cloud access security broker (CASB) and endpoint threat detection and response (EDR) solutions to detect and mitigate such attacks before they have a chance to inflict serious damage upon their organizations.

The Anatomy of a Cloud Services Attack

McAfee’s analysis supports CISA’s findings that adversaries frequently attempt to gain access to organizations’ networks by obtaining valid access credentials for multiple users’ O365 accounts and domain administrator accounts, often via vulnerabilities in unpatched VPN servers. The threat actor then uses the credentials to log into a user’s O365 account from an anomalous IP address, browses pages on SharePoint sites, and attempts to download content. Next, the threat actor connects repeatedly from a different IP address to the agency’s Virtual Private Network (VPN) server until a connection succeeds.

Once inside the network, the attacker could:

  • Begin performing discovery and enumerating the network
  • Establish persistence in the network
  • Execute local command line processes and multi-stage malware on a file server
  • Exfiltrate data

Basic SOC Best Practices

McAfee’s comprehensive analysis of these attacks supports CISA’s proposed best practices to prevent or mitigate such cyber-attacks. These recommendations include:

  • Hardening account credentials with multi-factor authentication,
  • Implementing the principle of “least privilege” for data access,
  • Monitoring network traffic for unusual activity,
  • Patching early and often.

While these recommendations provide a solid foundation for a strong cybersecurity program, these controls by themselves may not go far enough to prevent more sophisticated adversaries from exploiting and weaponizing cloud services to gain a foothold within an enterprise.

Why Best Practices Should Include CASB and EDR

Organizations will gain a running start to identifying and thwarting the attacks in question by implementing a full-featured CASB such as McAfee MVISION Cloud, and an advanced EDR solution, such as McAfee MVISION Endpoint Threat Detection and Response.

Deploying MVISION Cloud for Office 365 enables agencies’ SOC analysts to assert greater control over their data and user activity in Office 365—control that can hasten identification of compromised accounts and resolution of threats. MVISION Cloud takes note of all user and administrative activity occurring within cloud services and compares it to a threshold based either on the user’s specific behavior or the norm for the entire organization. If an activity exceeds the threshold, it generates an anomaly notification. For instance, using geo-location analytics to visualize global access patterns, MVISION Cloud can immediately alert agency analysts to anomalies such as instances of Office 365 access originating from IP addresses located in atypical geographic areas.
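The threshold model described above can be illustrated with a short sketch. This is a simplified illustration of threshold-based behavioral anomaly detection, not MVISION Cloud's actual implementation; the function name and parameters are hypothetical.

```python
from statistics import mean, stdev

def is_anomalous(user_history, org_history, current_count, k=3.0, min_history=20):
    """Flag an activity count that exceeds a behavioral baseline.

    Uses the user's own history when enough data exists; otherwise
    falls back to the organization-wide norm (illustrative sketch).
    """
    history = user_history if len(user_history) >= min_history else org_history
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    threshold = mean(history) + k * stdev(history)
    return current_count > threshold
```

For example, a user who normally downloads a handful of files per hour would trip the threshold on a sudden bulk download, while the same volume from a heavy-usage service account might not.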

When specific anomalies appear concurrently—e.g., a Brute Force anomaly and an unusual Data Access event—MVISION Cloud automatically generates a Threat. In the attacks McAfee analyzed, Threats would have been generated early on since the CASB’s user behavior analytics would have identified the cyber actor’s various activities as suspicious. Using MVISION Cloud’s activity monitoring dashboard and built-in audit trail of all user and administrator activities, SOC analysts can detect and analyze anomalous behaviors across multiple dimensions to more rapidly understand what exactly is occurring when and to what systems—and whether an incident concerns a compromised account, insider threat, privileged user threat, and/or malware—to shrink the gap to remediation.

In addition, with MVISION Cloud, an agency security analyst can clearly see how each cloud security incident maps to MITRE ATT&CK tactics and techniques, which not only accelerates the entire forensics process but also allows security managers to defend against similar attacks with greater precision in the future.

Figure 1. Executed Threat View within McAfee MVISION Cloud

 

Figure 2. Gap Analysis & Investigations – McAfee MVISION Cloud Policy Recommendations

 

Furthermore, using MVISION Cloud for Office 365, agencies can create and enforce policies that prevent the uploading of sensitive data to Office 365 or downloading of sensitive data to unmanaged devices. With such policies in place, an attacker’s attempt to exfiltrate sensitive data will be mitigated.

In addition to deploying a CASB, implementing an EDR solution like McAfee MVISION EDR to monitor endpoints centrally and continuously—including remote devices—helps organizations defend themselves from such attacks. With MVISION EDR, agency SOC analysts have at their fingertips advanced analytics and visualizations that broaden detection of unusual behavior and anomalies on the endpoint. They are also able to grasp the implications of alerts more quickly since the information is presented in a format that reduces noise and simplifies investigation—so much so that even novice analysts can analyze at a higher level. AI-guided investigations within the solution can also provide further insights into attacks.

Figure 3. MITRE ATT&CK Alignment for Detection within McAfee MVISION EDR

With a threat landscape that is constantly evolving and attack surfaces that continue to expand with increased use of the cloud, it is now more important than ever to embrace CASB and EDR solutions. They have become critical tools to actively defend today’s government agencies and other large enterprises.

Learn more about the cloud-native, unified McAfee MVISION product family. Get your questions answered by tweeting @McAfee.

The post How CASB and EDR Protect Federal Agencies in the Age of Work from Home appeared first on McAfee Blogs.

SPOTLIGHT: Women in Cybersecurity

By McAfee

There are new and expanding opportunities for women’s participation in cybersecurity globally, as women are present in greater numbers in leadership. In recent years, the international community has recognized the important contributions of women to cybersecurity; however, equal representation of women is nowhere near a reality, especially at senior levels.

The RSA Conference USA 2019 held in San Francisco — which is the world’s largest cybersecurity event with more than 40,000 people and 740 speakers — is a decent measuring stick for representation of women in this field. “At this year’s Conference 46 percent of all keynote speakers were women,” according to Sandra Toms, VP and curator, RSA Conference, in a blog she posted on the last day of this year’s event. “While RSAC keynotes saw near gender parity this year, women made up 32 percent of our overall speakers,” noted Toms.

Forrester also predicts that the number of women CISOs at Fortune 500 companies will rise to 20 percent in 2019, compared with 13 percent in 2017. This is consistent with new research from Boardroom Insiders which states that 20 percent of Fortune 500 global chief information officers (CIOs) are now women — the largest percentage ever.

Research from Cybersecurity Ventures, which first appeared in the media early last year, predicts that women will represent more than 20 percent of the global cybersecurity workforce by the end of 2019. This prediction is based on in-depth discussions with numerous industry experts in cybersecurity and on analysis and synthesis of third-party reports, surveys, and media sources.

Either way, the 20 percent figure is still way too low, and our industry needs to continue pushing for more women in cyber. Heightened awareness on the topic — led by numerous women in cyber forums and initiatives — has helped move the needle in a positive direction.

Live Panel

Women in Cloud and Security – A Panel with McAfee, AWS, and Our Customers

Thursday, November 5, 2020
10am PT | 12pm CT | 1pm ET

Register Now

 

Join McAfee in our Women in Cloud and Security Panel

Please join McAfee, AWS, and our customers to discuss the impact women are having on information security in the cloud. These remarkable women represent multiple roles in cloud and security, from technical leadership through executive management. Can’t make it? This same panel will reconvene later in the year during AWS re:Invent.

 

Meet the speakers:

Alexandra Heckler
Chief Information Security Officer
Collins Aerospace

Alexandra Heckler is Chief Information Security Officer at Collins Aerospace, where she leads a diverse team of cyber strategy and defense experts to protect against cyber threats and ensure regulatory compliance. Prior to joining Collins, Alexandra led Booz Allen’s Commercial Aerospace practice, building and overseeing multi-disciplinary teams to advise C-level clients on cybersecurity and digital transformation initiatives. Her work centered on helping aerospace manufacturers manage the convergence of cyber risk across their increasingly complex business ecosystem, including IT, OT and connected products. Alexandra also helped build and led the firm’s automotive practice, working with OEMs, suppliers and the Auto-ISAC to drive industry-leading vehicle cyber security capabilities. During her first few years at Booz Allen, she supported technology, innovation and risk analysis initiatives across U.S. government clients. Throughout her tenure, she engaged in Booz Allen’s Women in Cyber—a company-wide initiative to attract, develop and retain female cyber talent—and supported the firm’s partnership with the Executive Women’s Forum. She also served as Finance and Audit Chair on the Executive Committee of the newly-founded Space-ISAC. Alexandra holds a B.S. in Foreign Service with an Honors Certificate in International Business Diplomacy, and a M.A. in Communication, Culture and Technology from Georgetown University.

Diane Brown
Sr. Director/CISO of IT Risk Management
Ulta Beauty

Diane Brown is the Sr. Director/CISO of IT Risk Management at Ulta Beauty located in Bolingbrook, IL. In this role, Diane is accountable for the security of the retail stores, cyber-security, infrastructure, security/network engineering, data protection, third-party risk assessments, Directory Services, SOX & PCI compliance, application security, security awareness and Identity Management. Diane has more than three decades of IT experience in the retail environment and has honed her expertise in information technology leadership with a focus on risk management for the past 15 years. She values her strategic alliances with the business focusing on delivery of secure means to deploy new technologies, motivating people and managing an expanding technology portfolio. She holds a Bachelor’s degree in Information Security and CISSP/ISSAP certifications and is a member of the Executive Security Council for NRF and one of the original members of the RH-ISAC.

Elizabeth Moon
Director, Industry Solutions Americas Solutions Architecture & Customer Success
Amazon Web Services

Elizabeth has been with AWS for 5-1/2 years and leads Industry Solutions within the Americas Solutions Architecture and Customer Success organization. Elizabeth’s team of Specialist Solutions Architects provide industry specific depth for customers in the following segments: Games, Private Equity, Media & Entertainment, Manufacturing/Supply Chain, Healthcare Life Sciences, Financial Services, and Retail. They focus on accelerating cloud migration and building customer confidence and capability on the AWS platform through expert, prescriptive guidance on Foundations (Security, Identity, and Networking), Cost Optimization, Developer Experience, Cloud Migrations and Modernization.

Prior to her role at AWS, Elizabeth led the pre-sales Oracle Enterprise Architecture team within Oracle’s North America Public Sector Consulting organization. She helped customers maximize their investment in Oracle technologies, align business initiatives with the right IT solutions, and mitigate risk of implementations, focused on Oracle Engineered Systems, Database, and Infrastructure solutions.

Elizabeth got her start in technology with Metropolitan Regional Information Systems (MRIS), the nation’s largest Multiple Listing Service (MLS) and real estate information provider. She spent 15 years at this small company across multiple functions: DBA, data architect, system administrator, technical program lead, and operations leader. Most notably, she led design, deployment and growth of the patented database behind the Cornerstone Universal Data Exchange.

She earned a bachelor’s degree in International Business from Eckerd College in St. Petersburg, Florida.

Deana Elizondo
Director of Cyber Risk & Security Services
American Electric Power

Deana Elizondo is the Director of Cyber Risk & Security Services at American Electric Power. She has been with AEP for 16 years and has spent the last 11 years in Cybersecurity. Deana’s organization includes Security Ambassadors, Security Education & Regional Support, Data Protection & Privacy, Enterprise Content Management, and Strategy, Risk & Policies. Deana’s passion is growing and developing her leaders and team members, as well as educating the entire AEP workforce on the value and benefits of reducing Security risk.

Aderonke (Addie) Adeniji
Director Information Assurance Office of Cybersecurity
House of Representatives

Addie Adeniji is a seasoned cybersecurity professional with expertise in Federal IT security governance, risk and compliance (GRC). Currently, she serves as the Director of Information Assurance, within the Office of Cybersecurity, for the U.S. House of Representatives. In this role, she oversees Information Assurance standard and process development and directs risk management and audit compliance efforts across the House. Ms. Adeniji works with House staff to identify, evaluate and report risks to ensure the House maintains a strengthened security risk posture. Her past experience includes security consulting within the Federal health (i.e., FDA, NIH, and HHS headquarters) and energy domains.

Brooke Noelke (Moderator)
Senior Enterprise Cloud Security Strategist/Architect
McAfee

Brooke joins McAfee’s Customer Cloud Security Architecture team after leading McAfee IT’s cloud technical architects and business-facing cloud service management efforts, driving McAfee’s cloud transformation and migration of 70% of our applications to the cloud. She’s spent most of her career in technical leadership roles in cloud strategy, architecture and engineering, spanning professional services strategy through IT delivery leadership. She believes cloud services have already rewritten our IT universe, and we’re all just catching up… but that the cloud “easy buttons” we’re handing developers and business functions aren’t as risk-free as commonly assumed. Her mission is to make the secure path the easy path to deploying new products, solutions and intelligence in the cloud, through enablement of organizational change, agile automation and well-designed, reusable cloud security reference architectures.

Source: https://cybersecurityventures.com/women-in-cybersecurity/

 

The post SPOTLIGHT: Women in Cybersecurity appeared first on McAfee Blogs.

McAfee Named a Leader in the 2020 Gartner Magic Quadrant for CASB

By McAfee

McAfee MVISION Cloud was the first to market with a CASB solution to address the need to secure corporate data in the cloud. Since then, Gartner has published several reports dedicated to the CASB market, a testament to the critical role CASBs play in enabling enterprise cloud adoption. Today, Gartner named McAfee a Leader in the 2020 Gartner Magic Quadrant for Cloud Access Security Brokers (CASB), the fourth time it has evaluated CASB vendors.

Cloud access security brokers have become an essential element of any cloud security strategy, helping organizations govern the use of cloud and protect sensitive data in the cloud. Security and risk management leaders concerned about their organizations’ cloud use should investigate CASBs.

In its fourth Magic Quadrant for Cloud Access Security Brokers, Gartner evaluated eight vendors that met its inclusion criteria. MVISION Cloud, as part of the MVISION family of products at McAfee, is recognized as a Leader in the report for the fourth year in a row. To learn more about how Gartner assessed the market and MVISION Cloud, download your copy of the report here.

This year, Gartner followed a highly rigorous process to compile its Gartner Magic Quadrant for Cloud Access Security Brokers (CASB) report, relying on numerous inputs, including these materials from vendors to understand their product offerings:

  • Questionnaire – A 300+ point questionnaire resulting in hundreds of pages of responses
  • Financials – Detailed company financial data covering CASB revenue
  • Documentation – Access to all product documentation
  • Customer Peer Reviews – Gartner encourages customers to submit anonymized reviews via their Peer Insights program. You can read them here.
  • Demo – Covering over 50 Gartner-defined use cases to validate product capabilities

In 2020, McAfee made several updates and additions to its solutions, strengthening its position as an industry leader, including:

 

McAfee also received recognition as the only vendor to be named the January 2020 Gartner Peer Insights Customers’ Choice for Cloud Access Security Brokers based on customer feedback and ratings for McAfee MVISION Cloud.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The Gartner Peer Insights Logo is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved.

Gartner Peer Insights ‘Voice of the Customer’: Cloud Access Security Brokers, Peer Contributors, 13 March 2020. Gartner Peer Insights reviews constitute the subjective opinions of individual end users based on their own experiences and do not represent the views of Gartner or its affiliates. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.

The post McAfee Named a Leader in the 2020 Gartner Magic Quadrant for CASB appeared first on McAfee Blogs.

Catch the Most Sophisticated Attacks Without Slowing Down Your Users

By Michael Schneider

Most businesses cannot survive without being connected to the internet or the cloud. Websites and cloud services enable employees to communicate, collaborate, research, organize, archive, create, and be productive.

Yet, the digital connection is also a threat. External attacks on cloud accounts increased by an astounding 630% in 2019. Ransomware and phishing remain major headaches for IT security teams, and as users and resources have migrated outside of the traditional network security perimeter, it’s become increasingly difficult to protect users from clicking on a link or opening a malicious file.

This challenge has increased the tension between two IT mandates: allowing unfettered access to necessary services while preventing attacks and blocking access to malicious sites. Automation helps significantly; modern security pipelines block about 99.5% of malicious and suspicious activity by filtering known bad files and sites and by using sophisticated anti-malware scanning and behavioral analytics.

Security is a lot of work

However, the remaining half of 1% still represents a significant number of sites and potential threats that require time for a team of security analysts to triage. Therefore, IT managers are faced with the challenge of devising balanced security policies. Many companies default to blocking unknown traffic, but over-blocking of web sites and content can hinder user productivity while creating a surge in help-desk tickets as users attempt to reach legitimate sites that have not yet been classified. On the flip side, web policies that allow access too freely greatly increase the likelihood of serious, business-threatening security incidents.

With a focus on digital transformation, accelerated by the change in work habits and locations during the pandemic, companies need flexible, transparent security controls that enable safe user access to critical web and cloud resources without overwhelming security teams with constant help desk calls, policy changes, and manual triaging. Remote Browser Isolation – if implemented properly – can help achieve this.

While security solutions leveraging URL categorization, domain reputation, antivirus, and sandboxes can stop 99.5% of threats, remote browser isolation (RBI) can handle the remaining unknown events, rather than the common strategy of choosing to rigidly block or allow everything. RBI allows web content to be delivered and viewed in a safe environment, while analysis is conducted in the background. Using RBI, any request to an unknown site or URL that remains suspicious after traversing the web protection defense-in-depth pipeline will be rendered remotely, preventing any impact to a user’s system in the event the content is malicious.
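The resulting decision flow is simple: allow what the pipeline has cleared, block what it has condemned, and isolate everything that remains unknown. A minimal sketch of that routing logic follows; the verdict labels are hypothetical, not product values.

```python
def dispatch(url, verdict):
    """Route a web request based on the upstream pipeline's verdict.

    The verdict comes from URL categorization, domain reputation,
    antivirus, and sandboxing; the labels here are illustrative only.
    """
    if verdict == "known_good":
        return "allow"    # render directly in the user's browser
    if verdict == "known_bad":
        return "block"    # caught by the 99.5% filtering pipeline
    return "isolate"      # unknown or suspicious: render remotely via RBI
```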

Relying on RBI

Remote browser isolation blocks malicious code from running on an employee’s system just because they clicked a link. The technology also prevents pages from using unprotected cookies to try to gain access to protected web services and sites. Such protections are particularly important in the age of ransomware, when an inadvertent click on a malicious link can lead to significant damage to a company’s digital assets.

Given the benefits of remote browser isolation, some companies have deployed the technology to render every site. While this can very effectively mitigate security risk, isolating all web and cloud traffic demands considerable computing resources and is prohibitively expensive from a license cost point of view.

By building remote browser isolation (RBI) technology directly into our MVISION Unified Cloud Edge (UCE) solution, McAfee connects RBI to the existing triage pipeline. This means that the rest of the threat protection stack – including global threat intelligence, anti-malware, reputation analysis, and emulation sandboxing – can filter out the majority of threats, while only about one out of every 200 requests needs to be handled by RBI. This dramatically reduces overhead. McAfee’s UCE makes this approach dead simple: rather than positioning remote browser isolation as a costly and complicated add-on service, it is included with every MVISION UCE license.

Full Protection for High-Risk Individuals

However, there are specific people inside a company—such as the CEO or the finance department—with whom you cannot take chances. For those privileged users, full isolation from potential internet threats is also available. This approach ensures full virtual segmentation of the user’s system from the internet and shields it against any potential danger, enabling them to use the web and cloud freely and productively.

McAfee’s approach greatly reduces the risk of users being compromised by phishing campaigns or inadvertently getting infected by ransomware – such attacks can incur substantial costs and impact an organization’s ability to operate. At the same time, organizations benefit from a workforce that is freely able to access the web and cloud resources they need to be productive, while IT staff are freed from the burden of rigid web policies and constantly addressing help-desk tickets.

Want to know more? Check out our RBI demonstration.

The post Catch the Most Sophisticated Attacks Without Slowing Down Your Users appeared first on McAfee Blogs.

With No Power Comes More Responsibility

By Rich Vorwaller

You’ve more than likely heard the phrase “with great power comes great responsibility.” Alternatively called the “Peter Parker Principle,” this phrase became well known in popular culture mostly through Spider-Man comics and movies, where Peter Parker is the protagonist. The phrase is so well known today that it has its own Wikipedia article. The gist of the phrase is that if you’ve been empowered to make a change for the better, you have a moral obligation to do so.

However, what I’ve noticed as I talk to customers about cloud security, especially security for Infrastructure as a Service (IaaS), is a phenomenon I’m dubbing the “John McClane Principle” – the name has been changed to protect the innocent 🙂

The John McClane Principle happens when someone has been given responsibility for fixing something but at the same time has not been empowered to make the necessary changes. On the surface this scenario may sound absurd, but I bet many InfoSec teams can sympathize with the problem. The conversation goes something like this:

  • CEO to InfoSec: You need to make sure we’re secure in the cloud. I don’t want to be the next [insert latest breach here].
  • InfoSec to CEO: Yeah, so I’ve looked at how we’re using the cloud and the vast majority of our problems are from a lack of processes and knowledge. We have a ton of teams that are doing their own thing in the cloud, and I don’t have complete visibility into what they’re doing.
  • CEO to InfoSec: Great, go fix it.
  • InfoSec to CEO: Well, the problem is I don’t have any say over those teams. They can do whatever they want. To fix the problem, they’re going to have to change how they use the cloud. We need to get buy-in from managers, but those managers have told me they’re not interested in changing anything because it’ll slow things down.
  • CEO to InfoSec: I’m sure you’ll figure it out. Good luck, and we better not have a breach.

That’s when “with no power comes more responsibility” rings true.

And why is that? The reason is that IaaS has fundamentally changed how we consume IT, and along with that, how we scale security. No longer do we submit purchase requests and go through a lengthy process to spin up infrastructure resources. Now anyone with a credit card can spin up the equivalent of a data center within minutes, across the globe.

This agility, however, introduced some unintended changes for InfoSec, and in order to scale, cloud security cannot be the sole responsibility of one team. Rather, cloud security must be embedded in process and depends on collaboration between development, architects, and operations. These teams now have a more significant role to play in cloud security, and in many cases are the only ones who can implement the changes needed to enhance security. InfoSec now acts as a sherpa instead of a gatekeeper, making sure every team is marching at the same, secure pace.

However, as John McClane can tell you, the fact that more teams look after cloud security doesn’t necessarily mean you have a better solution. In fact, having to coordinate across multiple teams with different priorities can make security even more complex and slow you down. Hence the need for a streamlined security solution that facilitates collaboration between developers, architects, and InfoSec while providing guardrails so nothing slips through the cracks.

With that, I’m excited to announce our new cloud security service built especially for customers moving and developing applications in the cloud. We call it MVISION Cloud Native Application Protection Platform – or just CNAPP because every service deserves an acronym.

What is CNAPP? CNAPP is a new security service, announced today, that combines Cloud Security Posture Management (CSPM), Cloud Workload Protection Platform (CWPP), Data Loss Prevention (DLP), and Application Protection capabilities in a single solution. Now in beta with a target launch date of Q1 2021, CNAPP gives InfoSec teams broad visibility into their cloud-native applications. For us, the goal wasn’t to slow things down to make sure everything is secure; it was to give InfoSec teams the visibility and context they need for cloud security while allowing dev teams to move fast.

Let me briefly describe what features CNAPP has and list some features that are customer favorites.

CSPM

The vast majority of breaches in IaaS today are due to service misconfigurations. Gartner famously said in 2016 that “95% of cloud security failures will be the customer’s fault.” Just last year Gartner updated that prediction to “99% of cloud security failures will be the customers’ fault.” I’m waiting for the day Gartner says “105% will be the customer’s fault.”

Why is the percentage so high? There are multiple reasons, but we hear from many of our customers that there is a huge knowledge gap around how to secure new services. Each cloud provider is releasing new services and capabilities at a dizzying pace, with no blockers to adoption. Unfortunately, the industry hasn’t kept pace in building a workforce that knows and understands how best to configure these new services and capabilities. CNAPP provides customers with the ability to immediately audit all cloud services and benchmark those services against security best practices and industry standards like CIS Foundations, PCI, HIPAA, and NIST.

Within that audit (we call it a security incident), CNAPP provides detailed information on how to reconfigure services to improve security, but the service also provides the ability to assign the security incident to dev teams with SLAs so there’s no ambiguity on who owns what and what needs to change. All of these workflows can be automated so multiple teams are empowered in near real-time to find and fix problems.

Additionally, CNAPP has a custom policy feature where customers can create policies for identifying risky misconfigurations unique to their environments as well as integrations with developer tools like Jenkins, Bitbucket, and GitHub that provide feedback on deployments that don’t meet security standards.
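A custom misconfiguration policy of this kind boils down to evaluating a set of rules against resource configurations. Here is a minimal sketch of that idea; the rule names and configuration fields are hypothetical and do not reflect MVISION CNAPP syntax.

```python
# Hypothetical CSPM-style rules: each maps a rule name to a predicate
# that a compliant resource configuration must satisfy.
RULES = {
    "storage-no-public-access": lambda cfg: not cfg.get("public", False),
    "storage-encryption-enabled": lambda cfg: cfg.get("encrypted", False),
}

def audit(resources):
    """Return (resource_id, failed_rule) findings for non-compliant configs."""
    findings = []
    for cfg in resources:
        for name, check in RULES.items():
            if not check(cfg):
                findings.append((cfg["id"], name))
    return findings
```

Each finding can then be turned into a security incident and assigned to the owning team, which is the workflow the service automates.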

CWPP

IaaS platforms have become catalysts for Open Source Software (OSS) like Linux (OS), Docker (container), and Kubernetes (orchestration). The challenge with using these tools is the inherent risk of Common Vulnerabilities and Exposures (CVE) found in software libraries and misconfigurations in deploying new services. Another famous quote by Gartner is that “70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.” But how does the InfoSec team quickly spot those vulnerabilities and misconfigurations, especially in ephemeral environments with multiple developer teams pushing frequent releases into CI/CD pipelines?

Based on our acquisition of NanoSec last year, CNAPP provides full workload protection by identifying all compute instances, containers, and container services running in IaaS while identifying critical CVEs, misconfigurations in both repository and production container services, and introducing some new protection features. These features include application allow listing, OS hardening, and file integrity monitoring with plans to introduce nano-segmentation and on-prem support soon.

Customer Favorites

We’ve had a great time working jointly with our customers to release CNAPP. I’d like to highlight some of the use cases that have proven to be game changers for our customers.

  • In-tenant DLP scans: many of our customers have legitimate use cases for publicly exposed cloud storage services (sometimes referred to as buckets), but at the same time need to ensure those buckets don’t have sensitive data. The challenge with using DLP for these services is many solutions available in the market copy the data into the vendor’s own environment. This increases customer costs with egress charges and also introduces security challenges with data transit. CNAPP allows customers to perform in-tenant DLP scans where the data never leaves the IaaS environment, making the process more secure and less expensive.
  • MITRE ATT&CK Framework for Cloud: the language of Security Operation Centers (SOC) is MITRE, but there is a lot of nuance in how cloud security incidents fit into this framework. With CNAPP we built an end-to-end process that maps all CSPM and CWPP security incidents to MITRE. Now InfoSec and developer teams can work more effectively together by automatically categorizing every cloud incident to MITRE, facilitating faster responses and better collaboration.
  • Unified Application Security: CNAPP is built on the same platform as our MVISION Cloud service, a Gartner Magic Quadrant Leader for Cloud Access Security Broker (CASB). Customers are now able to get detailed visibility and security control over their SaaS applications along with applications they are building in IaaS with the same solution. Our customers love having one console that provides a holistic picture of application risk across all teams – SaaS for consumers and IaaS for builders.
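The in-tenant scanning idea from the first bullet can be sketched as follows. The reader function and patterns are hypothetical; the point is that only findings metadata, never the content itself, leaves the environment:

```python
import re

# Illustrative detectors; production DLP engines use far richer patterns.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_in_tenant(object_keys, read_object):
    """Scan storage objects in place; return only metadata, not content."""
    findings = []
    for key in object_keys:
        text = read_object(key)  # read happens inside the tenant
        for label, rx in PATTERNS.items():
            hits = rx.findall(text)
            if hits:
                findings.append({"key": key, "type": label, "count": len(hits)})
    return findings

# Simulated bucket standing in for a cloud storage service
bucket = {"report.txt": "Contact: jane@example.com, SSN 123-45-6789"}
print(scan_in_tenant(bucket, bucket.__getitem__))
```

Because the scan runs where the data lives, there are no egress charges and no copy of sensitive content in a vendor environment.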

There are a lot more features I’d love to highlight, but instead I invite you to check out the solution for yourself. Visit https://mcafee.com/CNAPP for more information on our release or request a demo at https://mcafee.com/demo. We’d love to get your feedback and hear how MVISION CNAPP can help you become more empowered and responsible in the cloud.

This post contains information on products, services and/or processes in development. All information provided here is subject to change without notice at McAfee’s sole discretion. Contact your McAfee representative to obtain the latest forecast, schedule, specifications, and roadmaps.

The post With No Power Comes More Responsibility appeared first on McAfee Blogs.

Data-Centric Security for the Cloud, Zero Trust or Advanced Adaptive Trust?

By Ned Miller

Over the last few months, Zero Trust Architecture (ZTA) conversations have been top-of-mind across the DoD. We have been hearing the chatter during industry events, all while sharing conflicting interpretations and various definitions. In a sense, there is uncertainty around how the security model can and should work. From the chatter, one thing is clear – we need more time: time for mission owners to settle on a comprehensive, all-inclusive, and acceptable definition of Zero Trust Architecture.

Today, most entities utilize a multi-phased security approach. Most commonly, the foundation (or first step) is to implement secure access to confidential resources. With the shift to remote and distance work, the question arises: “Are my resources and data safe, and are they safe in the cloud?”

Thankfully, the DoD is in the process of developing a long-term strategy for ZTA. Industry partners, like McAfee, have been briefed along the way. It has been refreshing to see the DoD take the initial steps to clearly define what ZTA is, what security objectives it must meet, and the best approach for implementation in the real-world. A recent DoD briefing states “ZTA is a data-centric security model that eliminates the idea of trusted or untrusted networks, devices, personas, or processes and shifts to a multi-attribute based confidence levels that enable authentication and authorization policies under the concept of least privilege access”.
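As a toy illustration of “multi-attribute based confidence levels,” consider the following sketch; the attributes, weights, and threshold are invented for the example and are not drawn from the DoD model:

```python
# Hypothetical attribute weights; a real ZTA policy engine would derive
# these from device posture, identity assurance, behavior analytics, etc.
WEIGHTS = {
    "managed_device": 0.35,
    "mfa_passed": 0.35,
    "known_location": 0.15,
    "normal_behavior": 0.15,
}

def confidence(attributes):
    """Combine boolean attribute signals into a 0..1 confidence score."""
    return sum(w for k, w in WEIGHTS.items() if attributes.get(k))

def authorize(attributes, threshold=0.7):
    """Least-privilege decision: grant access only above the threshold."""
    return confidence(attributes) >= threshold

print(authorize({"managed_device": True, "mfa_passed": True}))      # score 0.70
print(authorize({"managed_device": True, "known_location": True}))  # score 0.50
```

The key departure from perimeter models is that no single attribute (such as being on a trusted network) is sufficient on its own; access follows the combined confidence level.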

What stands out to me is the data-centric approach to ZTA. Let us explore this concept a bit further. Conditional access to resources (such as networks and data) is a well-recognized challenge, and there are several approaches to solving it, whether the end goal is to limit access or simply segment it. The tougher question we need to ask (and ultimately answer) is: how do we limit contextual access to cloud assets? What data security models should we consider when our traditional security tools and methods do not provide adequate monitoring? And is securing data, or at least watching user behavior, enough when the data stays within multiple cloud infrastructures or transfers from one cloud environment to another?

Increased usage of collaboration tools like Microsoft 365 and Teams, Slack, and WebEx are easily relatable examples of data moving from one cloud environment to another. The challenge with this type of data exchange is that the data flows stay within the cloud, following an East-West traffic model. Similarly, would you know if sensitive information created directly in Office 365 is uploaded to a different cloud service? Collaboration tools by design encourage sharing data in real time between trusted internal users and, more recently with telework, even external or guest users. Take for example a supply chain partner collaborating with an end user. Trust and conditional access potentially create a risk to both parties, inside and outside of their respective organizational boundaries. A data breach, whether intentional or not, can easily occur because of the pre-established trust and access. Few default protection capabilities prevent this situation unless they are intentionally designed in. Data loss protection, activity monitoring, and rights management all come into question. Clearly, new data governance models, tools, and policy enforcement capabilities for this simple collaboration example are required to meet the full objectives of ZTA.

So, as the communities of interest continue to refine the definitions of Zero Trust Architecture based upon deployment, usage, and experience, I believe we will find ourselves shifting from a Zero Trust model to an Advanced Adaptive Trust model. Our experience with multi-attribute-based confidence levels will evolve and so will our thinking around trust and data-centric security models in the cloud.


The post Data-Centric Security for the Cloud, Zero Trust or Advanced Adaptive Trust? appeared first on McAfee Blogs.

“Best of Breed” – CASB/DLP and Rights Management Come Together

By Nick Shelly

Securing documents before cloud

Before the cloud, organizations would collaborate and store documents on desktop/laptop computers, email, and file servers. Private cloud use cases such as accessing and storing documents on intranet web servers and network-attached storage (NAS) improved the end user's experience. The security model followed a layered approach, where keeping this data safe was just as important as keeping unauthorized individuals out of the building or data center. Physical security was followed by a directory service sign-in to protect your personal computer, and then by permissions on files stored on file servers to ensure safe usage.

Enter the cloud

Most organizations now consider cloud services essential to their business. Services like Microsoft 365 (SharePoint, OneDrive, Teams), Box, and Slack are depended upon daily. The same fundamental security concepts still apply – however, many are now covered by the cloud service itself. This is known as the “Shared Security Model”: the Cloud Service Provider handles basic security functions (physical security, network security, operations security), but the end customer must correctly grant access to data and is ultimately responsible for properly protecting it.

The big difference between the two is that in the first security model, the organization owned and controlled the entire process. In the second, cloud model, the customer owns the controls surrounding the data they choose to put in the cloud. This is the risk that collaborating and storing data in the cloud brings: once documents have been stored in M365, what happens if they are mishandled from that point forward? Who is handling these documents? What if my most sensitive information has left the safe confines of the cloud service – how can I protect it once it leaves? Fundamentally: how can I control data that could live anywhere, including places I do not control?

Adding the protection layers that are cloud-native

McAfee and Seclore have extended an integration recently to address these cloud-based use cases. This integration fundamentally answers this question: If I put sensitive data in the cloud that I do not control, can I still protect the data regardless of where it lives?

The solution works like this: it puts guardrails around end-user cloud usage while also adding significant compliance protections, security operations capabilities, and data visibility for the organization.

Data visibility, compliance & security operations

Once an unprotected sensitive file has been uploaded to a cloud service, McAfee MVISION Cloud Data Loss Prevention (DLP) detects the file upload. Customers can assign a DLP policy to find sensitive data such as credit card data (PCI), customer data, personally identifiable information (PII) or any other data they find to be sensitive.

Sample MVISION Cloud DLP Policy

If data is found to be in violation of policy, it must be properly protected. For example, if the DLP engine finds PII, rather than let it sit unprotected in the cloud service, the policy the customer sets in McAfee should enact some protection on the file. This action is known as a “Response,” and MVISION Cloud will show the detection, the violating data, and the actions taken in the incident data. In this case, McAfee will call Seclore to protect the file. These actions can be performed in near real time, or protection can be enacted on data that already exists in the cloud service (via an on-demand scan).
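As an example of the detection half of this flow, a PCI (card number) detector typically pairs a digit pattern with a Luhn checksum to reduce false positives. This standalone sketch is illustrative, not MVISION Cloud's actual engine:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/hyphens.
CARD_RX = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to validate candidate card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_cards(text):
    """Return substrings that look like card numbers and pass Luhn."""
    return [m.group() for m in CARD_RX.finditer(text)
            if luhn_valid(m.group())]

print(find_cards("Order 4111111111111111 shipped; ref 1234567890123456"))
# -> ['4111111111111111']
```

The second 16-digit string fails the checksum and is dropped, which is exactly how a DLP engine avoids flagging every long number as PCI data.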

“Seclore-It” – Protection Beyond Encryption

Now that the file has been protected, downstream access to it is managed by Seclore's policy engine. Policy-based access can consider end-user location, data type, user group, time of day, or any combination of policy choices. The key principle is that the file is protected regardless of where it goes, enforced by a Seclore policy that the organization sets. Whenever a user accesses the file, an audit trail is recorded, giving organizations confidence that data is properly protected. The audit logs show allows and denies, completing the data visibility requirements.
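The policy evaluation described above might be sketched like this; the attribute names and rule structure are invented for illustration and are not Seclore's actual policy language:

```python
from datetime import time

# Hypothetical policy: who may open a protected file, from where, and when.
POLICY = {
    "allowed_groups": {"finance", "legal"},
    "allowed_countries": {"US", "GB"},
    "hours": (time(8, 0), time(18, 0)),
}

def evaluate_access(user, now, policy=POLICY):
    """Return (allowed, audit_record); every decision is logged."""
    start, end = policy["hours"]
    allowed = (
        user["group"] in policy["allowed_groups"]
        and user["country"] in policy["allowed_countries"]
        and start <= now <= end
    )
    audit = {"user": user["name"], "decision": "allow" if allowed else "deny"}
    return allowed, audit

ok, record = evaluate_access(
    {"name": "alice", "group": "finance", "country": "US"}, time(9, 30))
print(ok, record)   # -> True {'user': 'alice', 'decision': 'allow'}
```

Because the policy lives with the rights-management service rather than inside the file, changing `POLICY` immediately changes what every existing copy of the file will allow.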

One last concern: what if a file is “lost,” or access must be restricted to files no longer in direct control – such as when a user leaves the company – or the organization simply wants to update policies on protected files? In all of these cases, the policy on those files can be dynamically updated. This addresses a major data loss concern that companies have for cloud service providers and general data use by remote users. Ensuring files are always protected, regardless of scenario, is simple to achieve with Seclore: just update the policy. Once the policy has been updated, even files on a thumb drive stuffed in a drawer are re-protected from accidental or intentional disclosure.

Conclusion

This article addresses several notable concerns for customers doing business in a cloud model. Important/sensitive data can now be effortlessly protected as it migrates to and through cloud services to its ultimate destination. The organization can prove compliance to auditors that the data was protected and continues to be protected. Security operations can track incidents and follow the access history of files. Finally, the joint solution is easy to use and enables businesses to confidently conduct business in the cloud.

Next Steps

McAfee and Seclore partner both at the endpoint and in the cloud as an integrated solution. To find out more and see this solution running in your environment, send an inquiry to cloud@mcafee.com


The post “Best of Breed” – CASB/DLP and Rights Management Come Together appeared first on McAfee Blogs.

Top 10 Microsoft Teams Security Threats

By Nigel Hawthorn

2020 has seen cloud adoption accelerate, with Microsoft Teams one of the fastest-growing collaboration apps: McAfee customers' use of Teams increased by 300% between January and April 2020. When we looked at Teams use in more detail in June, we found these averages across our customer base:

  • Teams created: 367
  • Members added to Teams: 6,526
  • Number of Teams meetings: 106,000
  • 3rd-party apps added to Teams: 185
  • Guest users added to Teams: 2,906

This means that a typical enterprise has a new guest user added to its teams every few minutes. You wouldn't allow unknown people to walk into an office, straight past security, and wander the building unescorted looking at papers on people's desks – yet you do want to admit the guests you trust. Teams needs the same controls: allow in the guests you trust, but confirm their identity and make sure they don't see confidential information.

Microsoft invests huge amounts of time and money in the security of its systems, but the security of the data in those systems, and of how users use them, is the responsibility of the enterprise.

The breadth of options, including inviting guest users and integrating with 3rd-party applications, can be the Achilles heel of any collaboration technology. It takes just seconds to add an external third party to an internal discussion without realizing the potential for data loss, so the risk of misconfiguration, oversharing, or misuse can sadly be large.

IT security teams need the ability to manage and control use to reduce risk of data loss or malware entering through Teams.

After working with hundreds of enterprises and over 40 million MVISION Cloud users worldwide and discussing with IT security, governance and risk teams how they address their Microsoft Teams security concerns, we have published a paper that outlines the top ten security threats and how to address them.

Microsoft Teams: Top 10 Security Threats

This collaboration potentially increases threats such as data loss and malware distribution. In this paper, McAfee discusses the top threats resulting from Teams use along with recommended actions.
Download Now

A few of the 10 Top Microsoft Teams Security Threats are below, read the paper for the full list.

  1. Microsoft Teams Guest Users: Guests can be added to see internal/sensitive content. By setting allow and/or block list domains, security can be implemented with the flexibility to allow employees to collaborate with authorized guests via Teams.
  2. Screen sharing that includes sensitive data. Screen sharing is very powerful, but can inadvertently share confidential data, especially if communication applications such as email are showing alerts on the screen.
  3. Access from Unmanaged Devices: Teams can be used on unmanaged devices, potentially resulting in data loss. The ability to set policies for unmanaged devices can safeguard Teams content.
  4. Malware Uploaded via Teams: File uploads from guests or from unmanaged devices may contain malware. IT administrators need the ability to either block all file uploads from unmanaged devices or to scan content when it is uploaded and remove it from the channel, informing IT management of any incidents.
  5. Data Loss Via Teams Chat and File Shares: File shares in Teams can lose confidential data. Data loss prevention technologies with strong sensitive content identification and sharing control capabilities should be implemented on Teams chat and file shares.
  6. Data Loss Via Other Apps: Teams App integration can mean data may go to untrusted destinations. As some of these apps may transfer data via their services, IT administrators need a system to discover third-party apps in use, review their risk profile and provide a workflow to remediate, audit, allow, block or notify users on an app’s status and revoke access as needed.
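The allow/block-list control from threat #1 reduces to a few lines of logic – a hedged sketch, not the actual MVISION Cloud implementation:

```python
def guest_allowed(email, allow_domains=None, block_domains=()):
    """Allow/block-list check for a prospective Teams guest.

    If an allow list is given, only those domains pass; the block list
    always wins. Purely illustrative policy logic.
    """
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in block_domains:
        return False
    if allow_domains is not None:
        return domain in allow_domains
    return True

print(guest_allowed("pat@partner.com", allow_domains={"partner.com"}))  # True
print(guest_allowed("eve@unknown.io", allow_domains={"partner.com"}))   # False
```

The same check can run at invite time (blocking the add) or retroactively (flagging existing guests for removal).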

McAfee has a wealth of experience helping customers secure their cloud computing systems, built around the MVISION Cloud CASB and other technologies. We can advise you on Microsoft Teams security and discuss the possible threats of taking no action. Contact us and let us help you.

Teams is just one of the many applications within the Microsoft 365 suite and it is important to deploy common security controls for all cloud apps. MVISION Cloud provides security for Microsoft 365 and other cloud-based applications such as Salesforce, Box, Workday, AWS, Azure, Google Cloud Platform and customers’ own internally developed applications.


The post Top 10 Microsoft Teams Security Threats appeared first on McAfee Blogs.

MITRE ATT&CK for Cloud: Adoption and Value Study by UC Berkeley CLTC

By Daniel Flaherty

Are you prepared to detect and defend against attacks that target your data in cloud services, or apps you’ve built that are hosted in the cloud? 

Background 

Nearly all enterprises and public sector customers we work with have enabled cloud use in their organization, with many seeing a 600%+ increase1 in use in the March-April timeframe of 2020, when the shift to remote work rapidly took shape. 

The first step to developing a strong cloud security posture is visibility over the often hundreds of services your employees use, what data is within these services, and then how they are being used collaboratively with third parties and other destinations outside of your control. 

With that visibility, you can establish full control over end-user activity and data in the cloud, applying your policy at every entry and exit point to the cloud.  

That covers your risk stemming from legitimate use by employees, external collaborators, and even API-connected marketplace apps, but what about your adversaries? If someone phished your CEO, stole their OneDrive credentials and exfiltrated data, would you know? What if your CEO used the same password across multiple accounts, and the adversary had access to apps like Smartsheet, Workday, or Salesforce? Are you set up to detect this kind of multi-cloud attack? 

Our Research to Uncover the Best Solution  

Most enterprise security operations centers (SOCs) use MITRE ATT&CK to map the events they see in their environment to a common language of adversary tactics and techniques. This helps them understand gaps in protection, model how attackers progress from initial access to exfiltration (or encryption/destruction), and plan security policy decisions.

The original ATT&CK framework applied to Windows/Mac/Linux environments, with Android/iOS included as well. For cloud environments, the MITRE ATT&CK framework has a shorter history (released October 2019), but it is quickly gaining adoption as the model for cloud threat investigation.

In collaboration with the University of California Berkeley’s Center for Long-Term Cybersecurity (CLTC) and MITRE, we sought to uncover how enterprises investigate threats in the cloud, with a focus on MITRE ATT&CK. In this initiative, researchers from UC Berkeley CLTC conducted a survey of 325 enterprises in a wide range of industries, with 1K employees or above, split between the US, UK, and Australia. The Berkeley team also conducted 10 in-depth interviews with security leaders in various cybersecurity functions.  

Findings 

MITRE has done an excellent job identifying and categorizing the adversary tactics and techniques used in the cloud. When asked about the prevalence of these tactics in their environment, on average 81% of our survey respondents had experienced each of the tactics in the Cloud Matrix, and 58% had experienced the initial access phase of an attack at least monthly.

Given the frequency in which most enterprises experience these adversary tactics and techniques, we found widespread adoption of the ATT&CK Cloud Matrix, with 97% of our respondents either planning to or already using the Matrix. 

In the full report, we explore deeper implications of using MITRE ATT&CK for Cloud, including consensus on the value it brings to enterprise organizations, challenges with implementation, and many more interesting results from our investigation. Head to the full report here to dive in.  

One of the most promising benefits of MITRE ATT&CK is the unification of events derived from endpoints, network traffic, and the cloud together into a common language. Right now, only 39% of enterprises correlate events from these three environments in their threat investigation. Further adoption of MITRE ATT&CK over time will unlock the ability to efficiently investigate attacks that span multiple environments, such as a compromised endpoint accessing cloud data and exfiltrating to an adversary destination. 
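Correlating endpoint, network, and cloud events into one ATT&CK-tagged timeline could be sketched as follows; the event shapes are illustrative, though T1078 (Valid Accounts), T1530 (Data from Cloud Storage), and T1041 (Exfiltration Over C2 Channel) are real ATT&CK technique IDs:

```python
from collections import defaultdict

def correlate(events):
    """Group endpoint/network/cloud events by user into one ATT&CK timeline."""
    incidents = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        incidents[e["user"]].append((e["source"], e["technique"]))
    return dict(incidents)

events = [
    {"ts": 3, "user": "ceo", "source": "network",
     "technique": "T1041"},  # exfiltration over C2 channel
    {"ts": 1, "user": "ceo", "source": "endpoint",
     "technique": "T1078"},  # valid accounts (stolen credentials)
    {"ts": 2, "user": "ceo", "source": "cloud",
     "technique": "T1530"},  # data from cloud storage
]
print(correlate(events))
# -> {'ceo': [('endpoint', 'T1078'), ('cloud', 'T1530'), ('network', 'T1041')]}
```

Viewed in isolation, each source shows one low-signal event; tagged with a common technique vocabulary and ordered in time, the three together read as a single credential-theft-to-exfiltration attack.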

This research demonstrates promising potential for MITRE ATT&CK in the enterprise SOC, with downstream benefits for the business. 87% of our respondents stated that adoption of MITRE ATT&CK will improve cloud security in their organization, and 79% stated that it would also make them more comfortable with cloud adoption overall. A safer transition to cloud-based collaboration and app development can accelerate businesses, a subject we've investigated in the past.2 MITRE ATT&CK can play a key role in secure cloud adoption, and in the defense of the enterprise overall. 

Dive into the full research report for more on these findings! 

White Paper

MITRE ATT&CK® as a Framework for Cloud Threat Investigation

81% of enterprise organizations told us they experience the adversary techniques identified in the MITRE ATT&CK for Cloud Matrix – but are they defending against them effectively?

Download Now


1https://www.mcafee.com/enterprise/en-us/forms/gated-form.html?docID=3804edf6-fe75-427e-a4fd-4eee7d189265&eid=LAVVPBCF  

2https://www.mcafee.com/enterprise/en-us/forms/gated-form.html?docID=75e3a9dc-793e-488a-8d8a-8dbf31aa5d62&eid=5PES9QHP 

The post MITRE ATT&CK for Cloud: Adoption and Value Study by UC Berkeley CLTC appeared first on McAfee Blogs.

MVISION Cloud for Microsoft Teams

By Gopi Boyinapalli

McAfee MVISION Cloud for Microsoft Teams now offers secure guest user collaboration features, allowing security admins not only to monitor sensitive content posted as messages and files within Teams, but also to monitor guest users joining teams and remove any who are unauthorized. 

Working from home has become the new reality for many, as more and more companies ask their staff to work remotely. Solutions that enable remote work and learning across chat, video, and file collaboration have become central to the way we work. Microsoft has seen an unprecedented spike in Teams usage: more than 75 million daily users as of May 2020, a 70% increase in daily active users since March.1 

What’s New in MVISION Cloud for Microsoft Teams 

MVISION Cloud for Microsoft Teams now provides policy controls for security admins to monitor and remove unauthorized guest users based on their domains, the teams they are joining, and other attributes. As organizations use Microsoft Teams to collaborate with trusted partners – exchanging messages, participating in calls, and sharing files – it is critical to ensure that partners join only the teams designated for external communication, and that only guest users from trusted partner domains join them. 

 Organizations can configure policies in McAfee MVISION Cloud to:

  • Monitor guest users from untrusted domains and remove the guest users automatically. Security admins do not have to reach out to Microsoft Teams admin and ask them to remove any untrusted guest users manually.  
  • Define the list of teams designated for external communication and make sure that users from partner organizations are joining only those teams and not any internal teams. If the partner users join any internal-only teams, they will be removed by McAfee MVISION Cloud automatically.  
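Taken together, the two policies above behave roughly like this sketch (team structures and flags are invented for illustration):

```python
def guests_to_remove(teams, trusted_domains):
    """Return (team, guest) pairs that violate guest policy.

    A guest is flagged for removal if their domain is untrusted, or if
    they joined a team not designated for external communication.
    """
    removals = []
    for team in teams:
        for guest in team["guests"]:
            domain = guest.rsplit("@", 1)[-1].lower()
            if domain not in trusted_domains or not team["external"]:
                removals.append((team["name"], guest))
    return removals

teams = [
    {"name": "Partner Portal", "external": True,
     "guests": ["pat@partner.com", "eve@unknown.io"]},
    {"name": "Internal Finance", "external": False,
     "guests": ["pat@partner.com"]},
]
print(guests_to_remove(teams, {"partner.com"}))
# -> [('Partner Portal', 'eve@unknown.io'), ('Internal Finance', 'pat@partner.com')]
```

Note that even a trusted partner is removed from the internal-only team, which is the second policy in action.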

With these new features, McAfee offers complete data protection and collaboration control capabilities, enabling organizations to safely collaborate with partners without having to worry about exposing confidential data to guest users. 

Here is the comprehensive list of use cases organizations can enable by using MVISION Cloud for Microsoft Teams. 

  • Modern data security. IT can extend existing DLP policies to messages and files in all types of Teams channels, enforcing policies based on keywords, fingerprints, data identifiers, regular expressions and match highlighting for content and metadata. 
  • Collaboration control. Messages or files posted in channels can be restricted to specific users, including blocking the sharing of data to any external location. 
  • Guest user control. Guest users can be restricted to join only teams meant for external communication and unauthorized guest users from any domains other than trusted partner domains can be automatically removed.  
  • Comprehensive remediation. Enables auditing of regulated data uploaded to Microsoft Teams and remediates policy violations by coaching users, notifying administrators, quarantining, tombstoning, restoring and deleting user actions. End users can autonomously correct their actions, removing incidents from IT’s queue. 
  • Threat prevention. Empowers organizations to detect and prevent anomalous behavior indicative of insider threats and compromised accounts. McAfee captures a complete record of all user activity in Teams and leverages machine learning to analyze activity across multiple heuristics to accurately detect threats. 
  • Forensic investigations. With an auto-generated, detailed audit trail of all user activity, MVISION Cloud provides rich capabilities for forensics and investigations. 
  • On-the-go security, for on-the-go policies. Helps secure multiple access modes, including browsers and native apps, and applies controls based on contextual factors, including user, device, data and location. Personal devices lacking adequate control over data can be blocked from access. 

McAfee MVISION Cloud for Microsoft Teams is now in use with a substantial number of large enterprise customers to enable their security, governance and compliance capabilities. The solution fits all industry verticals due to the flexibility of policies and its ease of use. 

The post MVISION Cloud for Microsoft Teams appeared first on McAfee Blogs.

The Life Cycle of a Compromised (Cloud) Server

By Bob McArdle

Trend Micro Research has developed a go-to resource for all things related to cybercriminal underground hosting and infrastructure. Today we released the second in this three-part series of reports which detail the what, how, and why of cybercriminal hosting (see the first part here).

As part of this report, we dive into the common life cycle of a compromised server from initial compromise to the different stages of monetization preferred by criminals. It’s also important to note that regardless of whether a company’s server is on-premise or cloud-based, criminals don’t care what kind of server they compromise.

To a criminal, any server that is exposed or vulnerable is fair game.

Cloud vs. On-Premise Servers

Cybercriminals don’t care where servers are located. They can leverage the storage space, computation resources, or steal data no matter what type of server they access. Whatever is most exposed will most likely be abused.

As digital transformation continues and potentially accelerates to support continued remote working, cloud servers are more likely to be exposed. Many enterprise IT teams, unfortunately, are not equipped to provide the same protection for cloud servers as for on-premise ones.

As a side note, we want to emphasize that this scenario applies only to cloud instances replicating the storage or processing power of an on-premise server. Containers or serverless functions won't fall victim to this same type of compromise. Additionally, if the attacker compromises the cloud account, as opposed to a single running instance, there is an entirely different attack life cycle, as they can spin up computing resources at will. Although possible, that is not our focus here.

Attack Red Flags

Many IT and security teams might not look for earlier stages of abuse. Before getting hit by ransomware, however, there are other red flags that could alert teams to the breach.

If a server is compromised and used for cryptocurrency mining (also known as cryptomining), this can be one of the biggest red flags for a security team. The discovery of cryptomining malware running on any server should result in the company taking immediate action and initiating an incident response to lock down that server.

This indicator of compromise (IOC) is significant because while cryptomining malware is often seen as less serious compared to other malware types, it is also used as a monetization tactic that can run in the background while server access is being sold for further malicious activity. For example, access could be sold for use as a server for underground hosting. Meanwhile, the data could be exfiltrated and sold as personally identifiable information (PII) or for industrial espionage, or it could be sold for a targeted ransomware attack. It’s possible to think of the presence of cryptomining malware as the proverbial canary in a coal mine: This is the case, at least, for several access-as-a-service (AaaS) criminals who use this as part of their business model.
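A simple host-side heuristic for this IOC – matching running process names against known miner binaries – might look like the following sketch; the name list is an illustrative subset, and real detection would also weigh CPU usage and mining-pool connections:

```python
# Known cryptominer binary names (illustrative subset).
MINER_NAMES = {"xmrig", "minerd", "cpuminer", "xmr-stak"}

def flag_mining(processes):
    """Flag processes whose executable name matches a known miner.

    `processes` is a list of {"pid": int, "name": str} dicts, e.g. as
    gathered by psutil or an EDR agent.
    """
    hits = []
    for p in processes:
        name = p["name"].lower()
        if name.endswith(".exe"):
            name = name[:-4]
        if name in MINER_NAMES:
            hits.append(p)
    return hits

procs = [{"pid": 101, "name": "nginx"}, {"pid": 202, "name": "XMRig.exe"}]
print(flag_mining(procs))   # -> [{'pid': 202, 'name': 'XMRig.exe'}]
```

Any hit from a check like this on a server should trigger incident response immediately, since, as discussed above, the miner may only be the visible monetization layer of a deeper compromise.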

Attack Life Cycle

Attacks on compromised servers follow a common path:

  1. Initial compromise: At this stage, whether a cloud-based instance or an on-premise server, it is clear that a criminal has taken over.
  2. Asset categorization: This is the inventory stage. Here a criminal makes their assessment based on questions such as, what data is on that server? Is there an opportunity for lateral movement to something more lucrative? Who is the victim?
  3. Sensitive data exfiltration: At this stage, the criminal steals corporate emails, client databases, and confidential documents, among others. This stage can happen any time after asset categorization if criminals managed to find something valuable.
  4. Cryptocurrency mining: While the attacker looks for a customer for the server space, a target attack, or other means of monetization, cryptomining is used to covertly make money.
  5. Resale or use for targeted attack or further monetization: Based on what the criminal finds during asset categorization, they might plan their own targeted ransomware attack, sell server access for industrial espionage, or sell the access for someone else to monetize further.

The monetization lifecycle of a compromised server

Often, targeted ransomware is the final stage. In most cases, asset categorization reveals data that is valuable to the business but not necessarily valuable for espionage.

A deep understanding of the servers and network allows the criminals behind a targeted ransomware attack to hit the company where it hurts the most. These criminals know the datasets, where they live, whether there are backups of the data, and more. With such a detailed blueprint of the organization in hand, cybercriminals can lock down critical systems and demand a higher ransom, as we saw in our 2020 midyear security roundup report.

In addition, while a ransomware attack would be the visible, urgent issue for the defender to solve in such an incident, the same attack could also indicate that something far more serious has likely already taken place: the theft of company data, which should be factored into the company's response planning. More importantly, once a company finds an IOC for cryptocurrency mining, stopping the attacker right then and there could save it considerable time and money in the future.

Ultimately, no matter where a company’s data is stored, hybrid cloud security is critical to preventing this life cycle.


The post The Life Cycle of a Compromised (Cloud) Server appeared first on the Trend Micro blog.

Removing Open Source Visibility Challenges for Security Operations Teams

By Trend Micro

 

Identifying security threats early can be difficult, especially when you’re running multiple security tools across disparate business units and cloud projects. When it comes to protecting cloud-native applications, separating legitimate risks from noise and distractions is often a real challenge.

 

That’s why forward-thinking organizations look at things a little differently. They want to help their application developers and security operations (SecOps) teams implement unified strategies for optimal protection. This is where a newly expanded partnership from Trend Micro and Snyk can help.

 

Dependencies create risk

 

In today’s cloud-native development streams, the insatiable need for faster iterations and time-to-market can impact both downstream and upstream workflows. As a result, code reuse and dependence on third-party libraries have grown, and with them the potential security, compliance, and reputational risks organizations are exposing themselves to.

 

Just how much risk is associated with open source software today? According to Snyk research (https://info.snyk.io/sooss-report-2020), vulnerabilities in open source software have increased 2.5x in the past three years. What’s more, a recent report claimed to have detected a 430% year-on-year increase in attacks targeting open source components, with the end goal of infecting the software supply chain. While open source code is therefore being used to accelerate time-to-market, security teams are often unaware of the scope and impact this can have on their environments.

 

Managing open source risk

 

This is why cloud security leader Trend Micro, and Snyk, a specialist in developer-first open source security, have extended their partnership with a new joint solution. It’s designed to help security teams manage the risk of open source vulnerabilities from the moment code is introduced, without interrupting the software delivery process.

 

This ambitious achievement helps improve security for your operations teams without changing the way your developer teams work. Trend Micro and Snyk are addressing open source risks by simplifying a bottom-up approach to risk mitigation that brings together developer and SecOps teams under one unified solution. It combines state-of-the-art security technology with collaborative features and processes to eliminate the security blind spots that can impact development lifecycles and business outcomes.

 

Available as part of Trend Micro Cloud One, the new solution currently being co-developed with Snyk will:

  • Scan all code repositories for vulnerabilities using Snyk’s world-class vulnerability scanning and database
  • Bridge the organizational gap between DevOps & SecOps, to help influence secure DevOps practices
  • Deliver continuous visibility of code vulnerabilities, from the earliest code to code running in production
  • Integrate seamlessly into the complete Trend Micro Cloud One security platform
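As a rough illustration of how scan results might gate a build in a pipeline, here is a sketch that triages vulnerability findings by severity. The JSON shape is a simplified assumption loosely modeled on Snyk CLI output (`snyk test --json`), not its exact schema.

```python
# Sketch of triaging vulnerability-scan results in a CI step.
# The result format is a simplified assumption, not Snyk's exact schema.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate_build(scan_result, fail_at="high"):
    """Return (ok, offending): ok is False if any vulnerability meets or
    exceeds the `fail_at` severity threshold."""
    threshold = SEVERITY_ORDER[fail_at]
    offending = [
        v for v in scan_result.get("vulnerabilities", [])
        if SEVERITY_ORDER.get(v.get("severity", "low"), 0) >= threshold
    ]
    return (not offending, offending)

if __name__ == "__main__":
    sample = {"vulnerabilities": [
        {"id": "EX-1", "severity": "medium", "title": "example finding"},
        {"id": "EX-2", "severity": "critical", "title": "example finding"},
    ]}
    ok, bad = gate_build(sample)
    print("build passes" if ok else f"build fails: {[v['id'] for v in bad]}")
```

The point of a gate like this is continuous visibility: findings surface at commit time rather than in production, without changing how developers work.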


 

 

This unified solution closes the gap between security teams and developers, providing immediate visibility across modern cloud architectures. Trend Micro and Snyk continue to deliver world class protection that fits the cloud-native development and security requirements of today’s application-focused organizations.

 

 

 


This Week in Security News: Microsoft Patches 120 Vulnerabilities, Including Two Zero-Days and Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions

By Jon Clay (Global Threat Communications)

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, read about one of Microsoft’s largest Patch Tuesday updates ever, including fixes for 120 vulnerabilities and two zero-days. Also, learn about Trend Micro’s new integrations with Amazon Web Services (AWS).

 

Read on:

 

Microsoft Patches 120 Vulnerabilities, Two Zero-Days

This week Microsoft released fixes for 120 vulnerabilities, including two zero-days, in 13 products and services as part of its monthly Patch Tuesday rollout. The August release marks its third-largest Patch Tuesday update, bringing the total number of security fixes for 2020 to 862. “If they maintain this pace, it’s quite possible for them to ship more than 1,300 patches this year,” says Dustin Childs of Trend Micro’s Zero-Day Initiative (ZDI).

 

XCSSET Mac Malware: Infects Xcode Projects, Performs UXSS Attack on Safari, Other Browsers, Leverages Zero-day Exploits

Trend Micro has discovered an unusual infection related to Xcode developer projects. Upon further investigation, it was discovered that a developer’s Xcode project contained the source malware, which led to a rabbit hole of malicious payloads. Most notable in our investigation is the discovery of two zero-day exploits: one is used to steal cookies via a flaw in the behavior of Data Vaults; the other is used to abuse the development version of Safari.

 

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 1

We’re all now living in a post-COVID-19 world characterized by uncertainty, mass home working and remote learning. To help you adapt to these new conditions while protecting what matters most, Trend Micro has developed a two-part blog series on ‘the new normal’. Part one identifies the scope and specific cyber-threats of the new normal. 

 

Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions

Trend Micro enhances agility and automation in cloud security through integrations with Amazon Web Services (AWS). Through this collaboration, Trend Micro Cloud One offers the broadest platform support and API integration to protect AWS infrastructure whether building with Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Lambda, AWS Fargate, containers, Amazon Simple Storage Service (Amazon S3), or Amazon Virtual Private Cloud (Amazon VPC) networking.

 

Shedding Light on Security Considerations in Serverless Cloud Architectures

The big shift to serverless computing is imminent. According to a 2019 survey, 21% of enterprises have already adopted serverless technology, while 39% are considering it. Trend Micro’s new research on serverless computing aims to shed light on the security considerations in serverless environments and help adopters in keeping their serverless deployments as secure as possible.

 

In One Click: Amazon Alexa Could be Exploited for Theft of Voice History, PII, Skill Tampering

Amazon’s Alexa voice assistant could be exploited to hand over user data due to security vulnerabilities in the service’s subdomains. The smart assistant, which is found in devices such as the Amazon Echo and Echo Dot — with over 200 million shipments worldwide — was vulnerable to attackers seeking user personally identifiable information (PII) and voice recordings.

 

New Attack Lets Hackers Decrypt VoLTE Encryption to Spy on Phone Calls

A team of academic researchers presented a new attack called ‘ReVoLTE,’ that could let remote attackers break the encryption used by VoLTE voice calls and spy on targeted phone calls. The attack doesn’t exploit any flaw in the Voice over LTE (VoLTE) protocol; instead, it leverages weak implementation of the LTE mobile network by most telecommunication providers in practice, allowing an attacker to eavesdrop on the encrypted phone calls made by targeted victims.

 

An Advanced Group Specializing in Corporate Espionage is on a Hacking Spree

A Russian-speaking hacking group specializing in corporate espionage has carried out 26 campaigns since 2018 in attempts to steal vast amounts of data from the private sector, according to new findings. The hacking group, dubbed RedCurl, stole confidential corporate documents including contracts, financial documents, employee records and legal records, according to research published this week by the security firm Group-IB.

 

Walgreens Discloses Data Breach Impacting Personal Health Information of More Than 72,000 Customers

The second-largest pharmacy chain in the U.S. recently disclosed a data breach that may have compromised the personal health information (PHI) of more than 72,000 individuals across the United States. According to Walgreens spokesman Jim Cohn, prescription information of customers was stolen during May protests, when around 180 of the company’s 9,277 locations were looted.

 

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 2

The past few months have seen radical changes to our work and home life under the Coronavirus threat, upending norms and confining millions of American families within just four walls. In this context, it’s not surprising that more of us are spending an increasing portion of our lives online. In the final blog of this two-part series, Trend Micro discusses what you can do to protect your family, your data, and access to your corporate accounts.

 

What are your thoughts on Trend Micro’s tips to make your home cybersecurity and privacy stronger in the COVID-19-impacted world? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.


ESG Findings on Trend Micro Cloud-Powered XDR Drives Monumental Business Value

By Trend Micro

This material was published in the ESG Research Insights Report, Validating Trend Micro’s Approach and Enhancing GTM Intelligence, 2020.

 

 

 


Have You Considered Your Organization’s Technical Debt?

By Madeline Van Der Paelt

TL;DR Deal with your dirty laundry.

Have you ever skipped doing your laundry and watched as that pile of dirty clothes kept growing, just waiting for you to get around to it? You’re busy, you’re tired and you keep saying you’ll get to it tomorrow. Then suddenly, you realize that it’s been three weeks and now you’re running around frantically, late for work because you have no clean socks!

That is technical debt.

Those little things that you put off, which can grow from a minor inconvenience into a full-blown emergency when they’re ignored long enough.

Piling Up

How many times have you had an alarm go off, or a customer issue arise from something you already knew about and meant to fix, but “haven’t had the time”? How many times have you been working on something and thought, “wow, this would be so much easier if I just had the time to …”?

That is technical debt.

But back to you. In your craze to leave for work you manage to find two old mismatched socks. One of them has a hole in it. You don’t have time for this! You throw them on and run out the door, on your way to solve real problems. Throughout the day, that hole grows and your foot starts to hurt.

This is really not your day. In your panicked state this morning you actually managed to add more pain to your already stressed system, plus you still have to do your laundry when you get home! If only you’d taken the time a few days ago…

Coming Back to Bite You

In the tech world where one seemingly small hole – one tiny vulnerability – can bring down your whole system, managing technical debt is critical. Fixing issues before they become emergent situations is necessary in order to succeed.

If you’re always running at full speed to solve the latest issue in production, you’ll never get ahead of your competition and only fall further behind.

It’s very easy to get into a pattern of leaving the little things for another day. Build optimizations, that random unit test that’s missing, that playbook you meant to write up after the last incident – technical debt is a real problem too! By spending just a little time each day to tidy up a few things, you can make your system more stable and provide a better experience for both your customers and your fellow developers.

Cleaning Up

Picture your code as that mountain of dirty laundry. Each day that passes, you add just a little more to it. The more debt you add on, the more daunting your task seems. It becomes a thing of legend. You joke about how you haven’t dealt with it, but really you’re growing increasingly anxious and wary about actually tackling it, and what you’ll find when you do.

Maybe if you put it off just a little bit longer a hero will swoop in and clean up for you! (A woman can dream, right?) The more debt you add, the longer it will take to conquer it, and the harder it will be and the higher the risk is of introducing a new issue.

This added stress and complexity doesn’t sound too appealing, so why do we do it? It’s usually caused by things like having too much work in progress, conflicting priorities and (surprise!) neglected work.

Managing technical debt requires only one important thing – a cultural change.

As much as possible we need to stop creating technical debt, otherwise we will never be able to get it under control. To do that, we need to shift our mindset. We need to step back and take the time to see and make visible all of the technical debt we’re drowning in. Then we can start to chip away at it.

Culture Shift

My team took a page out of “The Unicorn Project” (Kim, 2019) and started by running “debt days” when we caught our breath between projects. Each person chose a pain point, something they were interested in fixing, and we started there. We dedicated two days to removing debt and came out the other side having completed tickets that were on the backlog for over a year.

We also added new metrics and dashboards for better incident response, and improved developer tools.

Now, with each new code change, we’re on the lookout. Does this change introduce any debt? Do we have the ability to fix that now? We encourage each other to fix issues as we find them whether it’s with the way our builds work, our communication processes or a bug in the code.

We need to give ourselves the time to breathe, in both our personal lives and our work day. Taking a pause between tasks not only allows us to mentally prepare for the next one, but it also gives us time to learn and reflect. It’s in these pauses that we can see whether we’ve created technical debt in any form and potentially go about fixing it right away.

What’s Next?

The improvement of daily work ultimately enables developers to focus on what’s really important, delivering value. It enables them to move faster and find more joy in their work.

So how do you stay on top of your never-ending laundry? Your family chooses to make a cultural change and decides to dedicate time to it. You declare Saturday as laundry day!

Make the time to deal with technical debt – your developers, security teams, and customers will thank you for it.

 


Fixing cloud migration: What goes wrong and why?

By Trend Micro

 

The cloud space has been evolving for almost a decade. As a company we’re a major cloud user ourselves. That means we’ve built up a huge amount of in-house expertise over the years around cloud migration — including common challenges and perspectives on how organizations can best approach projects to improve success rates.

As part of our #LetsTalkCloud series, we’ve focused on sharing some of this expertise through conversations with our own experts and folks from the industry. To kick off the series, we discussed some of the security challenges solution architects and security engineers face with customers when discussing cloud migrations. Spoiler…these challenges may not be what you expect.

 

Drag and drop

 

This lack of strategy and planning from the start is symptomatic of a broader challenge in many organizations: There’s no big-picture thinking around cloud, only short-term tactical efforts. Sometimes we get the impression that a senior exec has just seen a ‘cool’ demo at a cloud vendor’s conference and now wants to migrate a host of apps onto that platform. There’s no consideration of how difficult or otherwise this would be, or even whether it’s necessary and desirable.

 

These issues are compounded by organizational siloes. The larger the customer, the larger and more established their individual teams are likely to be, which can make communication a major challenge. Even if you have a dedicated cloud team to work on a project, they may not be talking to other key stakeholders in DevOps or security, for example.

 

The result is that, in many cases, tools, applications, policies, and more are forklifted over from on-premises environments to the cloud. This ends up becoming incredibly expensive, as these organizations are not really changing anything. All they are doing is adding an extra middleman, without taking advantage of the benefits of cloud-native tools like microservices, containers, and serverless.

 

There’s often no visibility or control. Organizations don’t understand that they need to lock down all their containers and sanitize APIs, for example. Plus, there’s no authority given to cloud teams around governance, cost management, and policy assignment, so things just run out of control. Often, shared responsibility isn’t well understood, especially in the new world of DevOps pipelines, so security isn’t applied to the right areas.

 

Getting it right

 

These aren’t easy problems to solve. From a security perspective, it seems we still have a job to do in educating the market about shared responsibility in the cloud, especially when it comes to newer technologies, like serverless and containers. Every time there’s a new way of deploying an app, it seems like people make the same mistakes all over again — presuming the vendors are in charge of security.

 

Automation is a key ingredient of successful migrations. Organizations should be automating everywhere, including policies and governance, to bring more consistency to projects and keep costs under control. In doing so, they must realize that this may require a redesign of apps, and a change in the tools they use to deploy and manage those apps.

 

Ultimately, you can migrate apps to the cloud in a couple of clicks. But the governance, policy, and management that must go along with this is often forgotten. That’s why you need clear strategic objectives and careful planning to secure more successful outcomes. It may not be very sexy, but it’s the best way forward.

 

To learn more about cloud migration, check out our blog series. And catch up on all of the latest trends in DevOps to learn more about securing your cloud environment.


Are You Promoting Security Fluency in your Organization?

By Trend Micro

 

Migrating to the cloud is hard. The PowerPoint deck and pretty architectures are drawn up quickly but the work required to make the move will take months and possibly years.

 

The early stages require significant effort by teams to learn new technologies (the cloud services themselves) and new ways of the working (the shared responsibility model).

 

In the early days of your cloud efforts, the cloud center of expertise is a logical model to follow.

 

Center of Excellence

 

A cloud center of excellence is exactly what it sounds like. Your organization forms a new team—or an existing team grows into the role—that focuses on setting cloud standards and architectures.

 

They are often the “go-to” team for any cloud questions. From the simple (“What’s an Amazon S3 bucket?”), to the nuanced (“What are the advantages of Amazon Aurora over RDS?”), to the complex (“What’s the optimum index/sort keying for this DynamoDB table?”).

 

The cloud center of excellence is the one-stop shop for cloud in your organization. At the beginning, this organizational design choice can greatly accelerate the adoption of cloud technologies.

 

Too Central

 

The problem is that accelerated adoption doesn’t necessarily correlate with accelerated understanding and learning.

 

In fact, as the center of excellence continues to grow its success, there is an inverse failure in organizational learning, which creates a general lack of cloud fluency.

 

Cloud fluency is an idea introduced by Forrest Brazeal at A Cloud Guru that describes the general ability of all teams within the organization to discuss cloud technologies and solutions. Forrest’s blog post shines a light on this situation and sums it up nicely in a cartoon.

 

Our own Mark Nunnikhoven also spoke to Forrest on episode 2 of season 2 for #LetsTalkCloud.

 

Even though the cloud center of excellence team sets out to teach everyone and raise the bar, the work soon piles up and the team quickly shifts away from an educational mandate to a “fix everything” one.

 

What was once a cloud accelerator is now a place of burnout for your top, hard-to-replace cloud talent.

 

Security’s Past

 

If you’ve paid attention to how cybersecurity teams operate within organizations, you have probably spotted a number of very concerning similarities.

 

Cybersecurity teams are also considered a center of excellence and the central team within the organization for security knowledge.

 

Most requests for security architecture, advice, operations, and generally anything that includes the prefix “cyber,” the word “risk,” or hints of “hacking” get routed to this team.

 

This isn’t the security team’s fault. Over the years, systems have increased in complexity, more and more incidents have occurred, and security teams rarely get the opportunity to look ahead. They are too busy stuck in “firefighting mode” to take a step back and re-evaluate the organizational design structure they work within.

 

According to Gartner, for every 750 employees in an organization, only one is dedicated to cybersecurity. Those are impossible odds that have led to the massive security skills gap.

 

Fluency Is The Way Forward

 

Security needs to follow the example of cloud fluency. We need “security fluency” in order to improve the security posture of the systems we build and to reduce the risk our organizations face.

 

This is the reason that security teams need to turn their efforts to educating development teams. DevSecOps is a term chock-full of misconceptions, and it lacks the context to drive the needed changes, but it is handy for raising awareness of the lack of security fluency.

 

Successful adoption of a DevOps philosophy is all about removing barriers to customer success. Providing teams with the tools and autonomy they require is a critical factor in their success.

 

Security is just one aspect of the development team’s toolkit. It’s up to the current security team to help educate them on the principles driving modern cybersecurity and how to ensure that the systems they build work as intended…and only as intended.


Cloud Security Is Simple, Absolutely Simple.

By Mark Nunnikhoven (Vice President, Cloud Research)

“Cloud security is simple, absolutely simple. Stop over complicating it.”

This is how I kicked off a presentation I gave at the CyberRisk Alliance, Cloud Security Summit on Apr 17 of this year. And I truly believe that cloud security is simple, but that does not mean easy. You need the right strategy.

As I am often asked about strategies for the cloud, and the complexities that come with it, I decided to share my recent talk with you all. Depending on your preference, you can either watch the video below or read the transcript of my talk that’s posted just below the video. I hope you find it useful and will enjoy it. And, as always, I’d love to hear from you, find me @marknca.

For those of you who prefer to read rather than watch a video, here’s the transcript of my talk:

Cloud security is simple, absolutely simple. Stop over complicating it.

Now, I know you’re probably thinking, “Wait a minute, what is this guy talking about? He is just off his rocker.”

Remember, simple doesn’t mean easy. I think we make things way more complicated than they need to be when it comes to securing the cloud, and this makes our lives a lot harder than they need to be. There’s some massive advantages when it comes to security in the cloud. Primarily, I think we can simplify our security approach because of three major reasons.

The first is integrated identity and access management. All three major cloud providers, AWS, Google and Microsoft offer fantastic identity, and access management systems. These are things that security, and [inaudible 00:00:48] professionals have been clamouring for, for decades.

We finally have this ability, we need to take advantage of it.

The second main area is the shared responsibility model. We’ll cover that more in a minute, but it’s an absolutely wonderful tool to understand your mental model, to realize where you need to focus your security efforts, and the third area that simplifies security for us is the universal application of APIs or application programming interfaces.

These give us as security professionals the ability to orchestrate and automate a huge amount of the grunt work away. These three things add up to, uh, the ability for us to execute a very sophisticated, uh, or very difficult to pull off, uh, security practice, but one that ultimately is actually pretty simple in its approach.

It’s just all the details are hard and we’re going to use these three advantages to make those details simpler. So, let’s take a step back for a second and look at what our goal is.
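[Editor’s note: the “universal APIs” advantage above can be sketched with a small audit. The rule format mirrors what an AWS `describe_security_groups` call returns, but the data here is inline sample data so the check itself is the focus; in practice you would fetch it via boto3.]

```python
# Sketch: use API-shaped data to flag security groups with world-open
# ingress (0.0.0.0/0). Sample data stands in for a live boto3 call.

def open_to_world(security_groups):
    """Return the IDs of groups with an ingress rule allowing 0.0.0.0/0."""
    flagged = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
                flagged.append(sg["GroupId"])
                break  # one open rule is enough to flag this group
    return flagged

if __name__ == "__main__":
    groups = [
        {"GroupId": "sg-open", "IpPermissions": [
            {"FromPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
        {"GroupId": "sg-tight", "IpPermissions": [
            {"FromPort": 22, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}]},
    ]
    print(open_to_world(groups))
```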

What is the goal of cybersecurity? That’s not something you hear quite often as a question.

A lot of the time you’ll hear the definition of cybersecurity is, uh, about, uh, securing the confidentiality, integrity, and availability of information or data. The CIA triad, different CIA, but I like to phrase this in a different way. I think the goal is much clearer, and the goal’s much simpler.

It is to make sure that whatever you’re building works as intended and only as intended. Now, you’ll realize you can’t accomplish this goal just as a security team. You need to work with your, uh, developers, you need to work with operations, you need to work with the business units, with the end users of your application as well.

This is a wonderful way of phrasing our goal, and realizing that we’re all in this together to make sure whatever you’re building works as intended, and only as intended.

Now, if we move forward, and we look at who are we up against, who’s preventing our stuff from working, uh, well?

You look at, normally, you think of, uh, who’s attacking our systems? Who are the risks? Is it nation states? Is it maybe insider threats? While these are valid threats, they’re really overblown. You don’t have to worry about nation state attacks.

If you’re a nation state, worry about it. If you’re not a nation state, you don’t have to worry about it because frankly, there’s nothing you can do to stop them. You can slow them down a little bit, but by definition, they’re going to get through your resources.

As far as insider attacks, this is an HR problem. Treat your people well. Um, check in with them, and have a strong information management policy in place, and you’re going to reduce this threat naturally. If you go hunting for people, you’re going to create the very threats that you’re looking at.

So, it brings us to the next set. What about cyber criminals? You know, we do have to worry about cyber criminals.

Cyber criminals are targeting systems simply because these systems are online, these are profit motivated criminals who are organized, and have a good set of tools, so we absolutely need to worry about them, but there’s a more insidious or more commonplace, maybe a simpler threat that we need to worry about, and that’s one of mistakes.

The vast majority of issues that happen around data breaches around security vulnerabilities in the cloud are mistake driven. In fact, to the point where I would not even worry about cyber criminals simply because all the work we’re going to do to focus on, uh, preventing mistakes.

And catching, and rectifying the mistakes really, really quickly is going to, uh, cover all the stuff that we would have done to block out cyber criminals as well, so mistakes are very common because people are using a lot more services in the cloud.

You have a lot more, um, moving parts and, uh, complexity in your deployment, um, and you’re going to make a mistake, which is why you need to put automated systems in place to make sure that those mistakes don’t happen, or if they do happen, that they’re caught very, very quickly.

This applies to standard DevOps, the philosophies for building. It also applies to security very, very wonderfully, so this is the main thing we’re going to focus on.

So, if we look at that sum up together, we have our goal of making sure whatever we’re building works as intended, and only as intended, and our major issue here, the biggest risk to this is simple mistakes and misconfigurations.

Okay, so we’re not starting from ground zero here. We can learn from others, and the first place we’re going to learn is the shared responsibility model. The shared responsibility model applies to all cloud service providers.

If you look on the left hand side of the slide here, you’ll see the traditional on premise model. We roughly have six areas where something has to be done roughly daily, whether it’s patching, maintenance, uh, just operational visibility, monitoring, that kind of thing, and in a traditional on premise environment, you’re responsible for all of it, whether it’s your team, or a team underneath your organization.

Somewhere within your tree, people are on the hook for doing stuff daily. Here when we move into an infrastructure, so getting a virtual machine from a cloud provider right off the bat, half of the responsibilities are pushed away.

That’s a huge, huge win.

And, as we move further and further to the right to more managed services, or SaaS-level services, we have less and less daily responsibilities.

Now, of course, you always still have to verify that the cloud service provider’s doing what they, uh, say they’re doing, which is why certifications and compliance frameworks come into play, uh, but the bottom line is you’re doing less work, so you can focus on fewer areas.

Um, that is, or I should say not less work, but you’re doing, uh, less broad of a work.

So you can have that deeper focus, and of course, you always have to worry about service configuration. You are given knobs and dials to turn to lock things down. You should use them like things like encrypting, uh, all your data at rest.

Most of the time it’s an easy check box, but it’s up to you to check it ‘cause it’s your responsibility.
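[Editor’s note: as a rough illustration of “checking the box” for encryption at rest, here is a sketch that builds a default-encryption configuration for an S3 bucket. The dict matches the shape boto3’s `put_bucket_encryption` expects; the bucket name is a placeholder and the live call is commented out so this stays a dry run.]

```python
# Sketch: build a bucket default-encryption rule (SSE-KMS if a key ID is
# supplied, otherwise SSE-S3 / AES256). Applying it is left commented out.

def default_encryption_config(kms_key_id=None):
    """Return a ServerSideEncryptionConfiguration dict for an S3 bucket."""
    if kms_key_id:
        sse = {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": kms_key_id}
    else:
        sse = {"SSEAlgorithm": "AES256"}
    return {"Rules": [{"ApplyServerSideEncryptionByDefault": sse}]}

if __name__ == "__main__":
    config = default_encryption_config()
    # import boto3  # uncomment to apply for real
    # boto3.client("s3").put_bucket_encryption(
    #     Bucket="my-bucket", ServerSideEncryptionConfiguration=config)
    print(config)
```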

We also have the idea of an adoption framework, and this applies for Azure, for AWS and for Google, uh, and what they do is they help you map out your business processes.

This is important to security, because it gives you the understanding of where your data is, what’s important to the business, where does it lie, who needs to touch it, and access it and process it.

That also gives us the idea, uh, or the ability to identify the stakeholders, so that we know, uh, you know, who’s concerned about this data, who is, has an investment in this data, and finally it helps to, to deliver an action plan.

The output of all of these frameworks is to deliver an action plan to help you migrate into the cloud and help you to continuously evolve. Well, it’s also a phenomenal map for your security efforts.

You want to prioritize security, this is how you do it. You get it through the adoption framework, understanding what’s important to the business, and that lets you identify critical systems and areas for your security.

Again, we want to keep things simple, right? And the other thing we want to look at is the CIS Foundations Benchmarks. They have them for AWS, Azure, and GCP, and these provide prescriptive guidance.

They’re really a strong baseline, and a checklist of tasks that you can take on, on your own, in order to cover off the real basics: is encryption at rest on, do I make sure that I don’t have things needlessly exposed to the internet, that type of thing.

Really fantastic reference point and a starting point for your security practice.

Again, with this idea of keeping things as simple as possible: when it comes to looking at our security policy, we’ve used the frameworks and the baseline to set up a strong start, to understand where the business is concerned, and to prioritize.

And the first question we need to ask ourselves as security practitioners is: what happened? If something happens and we ask what happened, do we have the ability to answer that question?

So that starts us off with logging and auditing. This needs to be in place before something happens. Let me just say that again: before something happens, you need [laughs] to have this information in place.

Now, this is really to ask these key questions of what happened in my account, and who, or what, made that thing happen?

So, this starts in the cloud with some basic services. For AWS it’s CloudTrail, for Azure it’s Azure Monitor, and for Google Cloud it used to be called Stackdriver and is now the Google Cloud operations suite. These need to be enabled at full volume.

Don’t worry, you can use some lifecycle rules on the data store to keep your costs low.

But, this gives you that layer, that basic auditing and logging layer, so that you can answer that question of what happened?
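In AWS, for example, turning on that audit layer can be a couple of API calls. A hedged sketch with boto3 follows; the trail and bucket names are hypothetical, and the bucket needs a policy that allows CloudTrail to write to it:

```python
# Sketch: enabling CloudTrail "at full volume" (all regions) with boto3.
# Names are hypothetical; AWS credentials are required to actually run this.

def trail_settings(trail_name, bucket):
    """Parameters for a multi-region trail capturing account activity."""
    return {
        "Name": trail_name,
        "S3BucketName": bucket,
        "IsMultiRegionTrail": True,          # capture every region
        "IncludeGlobalServiceEvents": True,  # IAM, STS, and other global calls
    }

def enable_audit_logging(trail_name="org-audit", bucket="my-cloudtrail-logs"):
    import boto3
    ct = boto3.client("cloudtrail")
    ct.create_trail(**trail_settings(trail_name, bucket))
    ct.start_logging(Name=trail_name)  # create_trail alone doesn't start logging
```

Note the second call: creating a trail and actually logging with it are separate steps, which is an easy thing to miss.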

So, the next question you want to ask yourself or have the ability to answer is who’s there, right? Who’s doing what in my account? And, that comes down to identity.

We’ve already mentioned this is one of the key pillars of keeping security simple, and getting that highly effective security in your cloud.

[00:09:00] So here you’re answering the questions of who are you, and what are you allowed to do? This is where we get a very simple principle in security, which is the principle of least privilege.

You want to give an identity, whether that’s a user, a role, or a service, only the privileges that are essential to perform the task it is intended to do.

Okay?

So, basically, if I need to write a file into a storage folder or a bucket, I should only have the ability to write that file. I don’t need to read it, I don’t need to delete it, I just need to write it, so only give me that ability.
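In AWS terms, that write-only idea fits in a single IAM policy statement. Here is a sketch in Python with a hypothetical bucket name; the resulting document could then be attached to a role with, for example, `iam.put_role_policy`:

```python
import json

def write_only_policy(bucket):
    """Least privilege in practice: this identity may put objects into one
    bucket, and nothing else -- no read, no delete, no list."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            }
        ],
    }

# Inspect the document (hypothetical bucket name):
# print(json.dumps(write_only_policy("report-drop-bucket"), indent=2))
```

Anything not explicitly allowed here is denied by default, which is what makes the statement so short.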

Remember, that comes back to the other pillar of simple, key cloud security: integrated identity.

This is where it really takes off: we start to assign very granular access permissions. And don’t worry, we’re going to use the APIs to automate all this stuff so that it’s not a management headache, but the principle of least privilege is absolutely critical here.

The services you’re going to be using, amazingly, all three cloud providers got in line and named the same thing. It’s IAM, identity and access management, whether that’s AWS, Azure, or Google Cloud.

Now, the next question we’re going to ask ourselves is really: where should I be focusing security controls? Where should I be putting things in place?

Because up until now we’ve really talked about leveraging what’s available from the cloud service providers, and you absolutely should maximize your usage of their native and primitive structures (primitive as in base concepts, not as in unrefined).

They’re very advanced controls, but there are times where you’re going to need to put in your own controls, and these are the areas you’re going to focus on. So you’re going to start with networking, right?

So, in your networking, you’re going to maximize the native structures that are available in the cloud that you’re in, whether that’s a project structure in Google Cloud or a service like Transit Gateway in AWS, and all of them have this idea of a VPC, or virtual private cloud, or virtual network, which is a very strong boundary for you to use.

Remember, most of the time you’re not charged for the creation of those. You have limits in your accounts, but accounts are free, and you can keep adding more, uh, virtual networks. You may be saying, wait a minute, I’m trying to simplify things.

Actually, having multiple virtual networks or virtual private clouds ends up being far simpler, because each of them has a task. You go: this application runs in this specific VPC, not in a big shared one. That gives you wonderfully strong security boundaries and a very simple way of looking at it, one VPC, one action, very much the Unix philosophy in play.

Key here, though, is understanding what all of the security controls your service provider gives you, whether it’s VPCs, routing tables, access control lists, security groups, all the SDN features they’ve got in place, actually do.

These really help you figure out whether service A or system A is allowed to talk to B, but they don’t tell you what they’re saying.

And that’s where an additional control called an IPS, or intrusion prevention system, comes into play, and you may want to look at getting a third-party control in to do that, because none of the big three cloud providers offer an IPS at this point.

[00:12:00] But that gives you the ability to not just say, “Hey, you’re allowed to talk to each other,” but to monitor that conversation, to ensure that there’s no malicious code being passed back and forth between systems and that nobody’s trying a denial-of-service attack.

A whole bunch of extra things in there, so that’s where an IPS comes into play in your network defense. Now, we look at compute, right?

We can have compute in various forms, whether that’s in serverless functions, whether that’s in containers or managed containers, whether that’s in traditional virtual machines, but all the principles are the same.

You want to understand where the shared responsibility line is, how much is on your plate, how much is on the CSPs?

You want to understand that you need to harden the OS, or the service, or both in some cases. Make sure that’s locked down, and have administrator passwords that are very, very complicated.

Don’t log into these systems, because you want to be fixing things upstream. You want to be fixing things in the build pipeline, not logging into these systems directly, and that’s a huge thing for systems people to get over, but it’s absolutely essential for security, and you know what?

It’s going to take a while, but there are some tricks there you can follow with me. You can see on the slides @marknca, that is my social everywhere, happy to walk you through the next steps.

This presentation is really just the simple basics to start with, to give you that overview of where to focus your time, and to dispel that myth that cloud security is complicating things.

A huge part of it is simplicity, which is a massive win for security.

So, the last area you want to focus on here is data and storage. Whether this is databases, whether this is blob storage or buckets in AWS, it doesn’t really matter; the principles, again, are all the same.

You want to encrypt your data at rest using the native cloud service provider functionality, because most of the time it’s just: point it at a key, give it a checkbox, and you’re good to go.

It’s never been easier to encrypt things, and there is no excuse not to. None of the providers charge extra for encryption, which is amazing, and you absolutely want to be taking advantage of that. And you want to be as granular as possible, and as reasonable, with your IAM, okay?

So, there’s a line here, and a lot of the data stores that are native to the cloud service providers, you can go right down to the data cell level and say, Mark has access, or Mark doesn’t have access to this cell.

That can be highly effective, and maybe right for your use case. It might be too much as well.

But the nice thing is that you have that option. It’s integrated, it’s pretty straightforward to implement. And then, finally, you want to be looking at lifecycle strategies to keep your costs under control.

Data really spins out of control when you don’t have to worry about capacity. All of the cloud service providers have some fantastic automations in place.

Basically, they give you very simple rules to say, “Okay, after 90 days, move this over to cheaper storage. After 180 days, get rid of it completely, or put it in cold storage.”
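On S3, for instance, those rules are one lifecycle configuration. A hedged sketch with boto3, using a hypothetical bucket name:

```python
# Sketch: "after 90 days, move to cheaper storage; after 180 days, expire."
# Bucket name is hypothetical; AWS credentials are needed to apply it.

def tier_then_expire_rules():
    """After 90 days transition to cheaper storage; after 180 days delete.
    (Swap Expiration for a GLACIER transition to keep cold copies instead.)"""
    return {
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 180},
            }
        ]
    }

def apply_lifecycle(bucket="my-log-archive"):
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=tier_then_expire_rules()
    )
```

Once applied, S3 enforces the tiering and expiry on its own; there is nothing to run on a schedule.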

Take advantage of those or your bill’s going to spiral out of control. And that relates to availability and reliability, because the more you’re spending on that kind of stuff, the less you have to spend on other areas like security and operational efficiency.

So, that brings us to our next big security question. Is this working?

[00:15:00] How do you know if any of this stuff is working? Well, you want to talk about the concept of traceability. Traceability has a somewhat formal definition, but for me it really comes down to: where did this come from, who can access it, and when did they access it?

That ties very closely with the concept of observability: basically, the ability to look at closed systems and to infer what’s going on inside based on what’s coming into that system and what’s leaving it.

There are some great tools here from the service providers. Again, you want to look at Amazon CloudWatch, Azure Monitor, and the Google Cloud operations suite. And here, this leads us to the key, okay?

This is the key to simplifying everything, and I know we’ve covered a ton in this presentation, but I really want you to take a good look at this slide, and again, hit me up, @marknca, happy to answer any questions afterwards as well. This will really make things simple, and this will really take your security practice to the next level.

The idea is: something happens in your cloud system, right? In your deployment, there’s a trigger, and then it generates either an event or a log.

If you follow the bottom row here, you’ve got a log, which you can then react to with a function to deliver some sort of result. That’s the slow lane on the bottom.

We’re talking minutes here. You also have the top lane where your trigger fires off an event, and then, you react to that with a function, and then, you get a result in the fast lane.

These things happen in seconds, sub-second time. You start to build out your security practice based on this model.

You start automating more and more in these functions, whether it’s Lambda, whether it’s Cloud Functions, whether it’s Azure Functions, it doesn’t matter.

The CSPs all offer the same core functionality here. This is the critical, critical success metric: reacting in the fast lane automatically to things. So if you see a security event triggered, like malware on your virtual machine, you can lock that machine off and have a new one spin up automatically.
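As a sketch of that fast lane, here is a hypothetical AWS Lambda handler that reacts to a finding about an EC2 instance by swapping it into a quarantine security group. The event shape and the group ID are assumptions for illustration, not a real integration:

```python
# Hypothetical "fast lane" responder: on a finding about an EC2 instance,
# swap its security groups for a quarantine group, cutting it off while a
# clean replacement spins up elsewhere. Event shape and IDs are assumed.

QUARANTINE_SG = "sg-0123456789quarantine"  # hypothetical group ID

def plan_response(instance_id):
    """Pure decision step, kept separate from AWS calls so it's testable."""
    return {"instance": instance_id, "action": "quarantine", "group": QUARANTINE_SG}

def handler(event, context):
    instance_id = event["detail"]["instance-id"]  # assumed event layout
    plan = plan_response(instance_id)
    import boto3
    ec2 = boto3.client("ec2")
    # Replacing all security groups severs every connection the old groups
    # allowed, isolating the instance for later forensics.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[plan["group"]])
    return plan
```

The whole response lives in version control as a function, which is exactly the "security actions as code" model the slide describes.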

If you’re looking for compliance stuff, the slow lane is the place to go, because it takes minutes.

Reactions happen up top; more stately or more sedate things happen below. So somebody logging into a system is both up top and down low. Up top, if you logged into an instance or a virtual machine, you’d have a trigger fire off and maybe ask me immediately, “Mark, did you log into the system? Because you’re not supposed to be.”

But then I’d respond and say, “Yeah, I did log in.” So immediately you know you don’t have to respond; it’s not an incident response scenario. But on the bottom track, maybe you’re tracking how many times I’ve logged in.

And after the third or fourth time, maybe someone comes by and has a chat with me and says, “Hey, you keep logging into these systems. Can’t you fix it upstream in the deployment and build pipeline, because that’s where we need to be moving?”

So you’ll find this balance, and this is the concept I just wanted to get into your heads right now: automating your security practice. If you have a checklist, it should be sitting in a model like this, because it’ll help you reduce your workload, right?

The idea is to get as much automated as possible, and keep things in very clear and simple boundaries, and what’s more simple than having every security action listed as an automated function sitting in a code repository somewhere?

[00:18:00] Fantastic approach to modern security practice in the cloud. Very simple, very clear. Is it difficult to implement? It can be, but it’s an awesome, simple mental model to keep in your head: everything gets automated as a function based on a trigger somewhere.

So, what are the keys to success? What are the keys to keeping this cloud security thing simple? Hopefully you’ve realized the difference between a simple mental model and the challenges in implementation.

It can be difficult. It’s not easy to implement, but the mental model needs to be kept simple, right? Keep things in their own VPCs and their own accounts, automate everything. Very, very simple approach. Everything fits into this structure, so the keys here are remembering the goal.

Cybersecurity is making sure that whatever you build works as intended and only as intended. It’s understanding the shared responsibility model, and it’s having a plan, through cloud adoption frameworks, for how to build well, which is a concept called the Well-Architected Framework.

It’s specific to AWS, but it’s generic in its principles; it can be applied everywhere. We didn’t cover it here, but I’ll put the links in the materials for you. As well as remembering systems over people, right?

Adding the right controls at the right time, and then, finally, observing and reacting. Be vigilant, practice. You’re not going to get this perfect right out of the gate.

You’re going to have to refine and iterate, and that’s extremely cloud friendly. That is the cloud model: get it out there, iterate quickly, but put the structures in place to make sure that you’re not doing it in an insecure manner.

Thank you very much. Here are a couple of links that’ll help you out before we take some Q&A. trendmicro.com/cloud will get you to the products to learn more. We’re also doing this really cool streaming show.

I host a show called Let’s Talk Cloud. We interview experts and have a great conversation around what they’re talking about in the cloud and what they’re working on, not just around security, but building in general.

You can hit that up at trendtalks.fyi. And again, hit me up on social @marknca.

So, we have a couple of questions to kick this off, and you can put more questions in the webinar here, and they will send them along, or answer them in kind if they can.

And that’s really what these are about: the interaction, getting that back and forth. So, the first question that I wanted to tackle is an interesting one, and it’s really that systems over people.

You heard me mention it at the end, and the question is really: what does systems over people mean? Isn’t security really about people’s expertise?

And, yes and no. If you are a SOC analyst, if you are working in a security role right now, I am really confident saying that 80% to 90% of what you do right now could be delegated out to a system.

If you’re looking at log lines, that’s stuff that should be done by systems, which bubble up just the anomalies for you to investigate, so you do what people are good at and systems are bad at. So systems means, for example, putting container scanning into the build pipeline, so that you don’t have to manually scan stuff, to get rid of the basics. Is that a pen test? 100% no.

But it gets rid of that “hey, you didn’t upgrade to this version of this library.”

[00:21:00] That’s all automated, and the more systems you get in place, the more you as a security professional, or your security team, will be able to focus on where you can really deliver value and, frankly, where the work is more interesting. So that’s what systems over people means: automate as much as you can, get people doing what people are really good at, and make sure the systems catch the mistakes we all make.

If you accidentally try to push an old build out, the systems should stop that. If you push a build that hasn’t been checked by container scanning, or that doesn’t have the appropriate security policy in place, systems should catch all that; humans shouldn’t have to worry about it at all.

That’s systems over people. You saw that on the keys to success slide here; I’ll just pull it up. That’s absolutely key.

Another question that we had was about what we didn’t get into here, which is the Well-Architected Framework. Now, this is a document that was published by AWS a number of years back, and they’ve kept it going.

They’ve evolved it, and essentially it has five pillars: performance efficiency, reliability, security, cost optimization, and operational excellence. Hey, I’ve got all five.

And really [laughs] what it is, is it’s about how to take advantage of these cloud tools.

Now, AWS publishes it, but honestly it applies to Azure and it applies to Google Cloud as well. It’s not service specific. It teaches you how to build in the cloud, and obviously security is one of those big pillars, but it’s also about teaching you how to make those trade-offs, how to build an innovation flywheel, so that you have an idea, test it, get the feedback from it, and move forward.

And that’s really, really key. You should be reading it even if you are an Azure or GCP customer, or that’s where you’re putting most of your stuff, because it’s really about the principles. Everything we do to encourage people to build well means that there are fewer security issues, right?

Especially we know that the number one problem is mistakes.

That leads to the last question we have here, which is: how can I say that you don’t need to worry about cybercriminals, you need to worry about mistakes?

That’s a good question. It’s valid, and Trend Micro does a huge amount of research around cybercriminals. I do a huge amount of research around cybercriminals.

By training and by professional experience, I’m a forensic investigator. This is what I do: take down cybercrime. But I think mistakes are the number one thing that we deal with in the cloud, simply because of the underlying complexity.

I know it’s ironic to talk about complexity in a talk about simplicity, but the idea is that if you look at all the major breaches, especially around S3 buckets, those are all based on mistakes.

There have been billions and billions of records, and millions of dollars of damage, exposed because of simple mistakes, and that is far more common than cybercriminals.

And yes, cybercriminals, you have [inaudible 00:23:32] worry. You have to worry about them, but everything you’re going to do to fix mistakes, and to put systems in place to stop those mistakes from happening, is also going to be protection against cybercriminals. And honestly, if you’re the person who runs around your organization screaming about cybercriminals all the time, you’re far less credible than if you’re saying, “Hey, I want to make sure that we build really, really well, and don’t make mistakes.”

Thank you for taking the time. My name’s Mark Nunnikhoven. I’m the vice president of cloud research at Trend Micro. I’m also an AWS community hero, and I love this stuff. Hit me up on social @marknca. Happy to chat more.

The post Cloud Security Is Simple, Absolutely Simple. appeared first on .

Automatic Visibility And Immediate Security with Trend Micro + AWS Control Tower

By Trend Micro

Things fail. It happens. A core principle of building well in the AWS Cloud is reliability. Dr. Vogels said it best, “How can you reduce the impact of failure on your customers?” He uses the term “blast radius” to describe this principle.

One of the key methods for reducing blast radius is the AWS account itself. Accounts are free and provide a strong barrier between resources, and thus, failures or other issues. This type of protection and peace of mind helps teams innovate by reducing the risk of running into another team’s work. The challenge is managing all of these accounts in a reasonable manner. You need to strike a balance between providing security guardrails for teams while also ensuring that each team gets access to the resources they need.
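Since accounts are the blast-radius boundary, standing one up per team or workload is worth automating. A hedged sketch with boto3 and AWS Organizations; the account name and email are hypothetical, and the call must run from the organization's management account:

```python
# Sketch: creating a new member account to shrink the blast radius.
# Name and email are hypothetical; requires an Organizations management account.

def account_request(name, email):
    """Parameters for a new member account in the organization."""
    return {"AccountName": name, "Email": email}

def create_team_account(name="payments-team", email="aws+payments@example.com"):
    import boto3
    org = boto3.client("organizations")
    # create_account is asynchronous; poll describe_create_account_status
    # with the returned Id to learn when the account is ready.
    resp = org.create_account(**account_request(name, email))
    return resp["CreateAccountStatus"]["Id"]
```

Tools like AWS Control Tower, discussed below, wrap this same primitive with guardrails and defaults.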

AWS Services & Features

There are a number of AWS services and features that help address this need: AWS Organizations, AWS Firewall Manager, IAM roles, tagging, AWS Resource Access Manager, AWS Control Tower, and more all play a role in helping your team manage multiple accounts.

For this post, we’ll look at AWS Control Tower a little closer. AWS Control Tower was made generally available at AWS re:Inforce. The service provides an easy way to set up and govern AWS accounts in your environment. You can configure strong defaults for all new accounts, pre-populate IAM roles, and more. Essentially, AWS Control Tower makes sure that any new account starts off on the right foot.

For more on the service, check out this excellent talk from the launch.

Partner Integrations

With almost a year under its belt, AWS Control Tower is now expanding to provide partner integrations. Now, in addition to setting up AWS services and features, you can pre-configure supported APN solutions as well. Trend Micro is among the first partners to support this integration by providing the ability to add Trend Micro Cloud One™ Workload Security and Trend Micro Cloud One™ Conformity to your Control Tower account factory. Once configured, any new account that is created via the factory will automatically be configured in your Trend Micro Cloud One account.

Integration Advantage

This integration not only reduces the friction in getting these key security tools set up, it also provides immediate visibility into your environment. Workload Security will now be able to show you any Amazon EC2 instances or Amazon ECS hosts within your accounts. You’ll still need to install the Workload Security agent and apply a policy to protect these instances, but this initial visibility provides a map for your teams, reducing the time to protection. Conformity will start generating information within minutes. This information from Conformity will allow your teams to get a quick handle on their security posture with fast and ongoing security and compliance checks.

Integrating this from the beginning of every new account will allow each team to track their progress against a huge set of recommended practices across all five pillars of the Well-Architected Framework.

What’s Next?

One of the biggest challenges in cloud security is integrating it early in the development process. We know that the earlier security is factored into your builds, the better the result. You can’t get much earlier than the initial creation of an account. That’s why this new integration with AWS Control Tower is so exciting. Having security in every account within your organization from day zero provides much-needed visibility and a fantastic head start.


Beyond the Endpoint: Why Organizations are Choosing XDR for Holistic Detection and Response

By Trend Micro

The endpoint has long been a major focal point for attackers targeting enterprise IT environments. Yet increasingly, security bosses are being forced to protect data across the organization, whether it’s in the cloud, on IoT devices, in email, or on-premises servers. Attackers may jump from one environment to the next in multi-stage attacks and even hide between the layers. So, it pays to have holistic visibility, in order to detect and respond more effectively.

This is where XDR solutions offer a convincing alternative to EDR and point solutions. But unfortunately, not all providers are created equal. Trend Micro separates itself from the pack by providing mature security capabilities across all layers, industry-leading threat intelligence, and an AI-powered analytical approach that produces fewer, higher-fidelity alerts.

Under pressure

It’s no secret that IT security teams today are under extreme pressure. They’re faced with an enemy able to tap into a growing range of tools and techniques from the cybercrime underground. Ransomware, social engineering, fileless malware, vulnerability exploits, and drive-by downloads are just the tip of the iceberg. There are “several hundred thousand new malicious programs or unwanted apps registered every day,” according to a new Osterman Research report. It argues that, while endpoint protection must be a “key component” of corporate security strategy, “It can only be one strand,” complemented with protection in the cloud, on the network, and elsewhere.

There’s more. Best-of-breed approaches have saddled organizations with too many disparate tools over the years, creating extra cost, complexity, management headaches, and security gaps. This adds to the workload for overwhelmed security teams.

According to Gartner, “Two of the biggest challenges for all security organizations are hiring and retaining technically savvy security operations staff, and building a security operations capability that can confidently configure and maintain a defensive posture as well as provide a rapid detection and response capacity. Mainstream organizations are often overwhelmed by the intersectionality of these two problems.”

XDR appeals to organizations struggling with all of these challenges as well as those unable to gain value from, or who don’t have the resources to invest in, SIEM or SOAR solutions. So what does it involve?

What to look for

As reported by Gartner, all XDR solutions should fundamentally achieve the following:

  • Improve protection, detection, and response
  • Enhance overall productivity of operational security staff
  • Lower total cost of ownership (TCO) to create an effective detection and response capability

However, the analyst urges IT buyers to think carefully before choosing which provider to invest in. That’s because, in some cases, underlying threat intelligence may be underpowered, and vendors have gaps in their product portfolio which could create dangerous IT blind spots. Efficacy will be a key metric. As Gartner says, “You will not only have to answer the question of does it find things, but also is it actually finding things that your existing tooling is not.”

A leader in XDR

This is where Trend Micro XDR excels. It has been designed to go beyond the endpoint, collecting and correlating data from across the organization, including email, endpoints, servers, cloud workloads, and networks. With this enhanced context, and the power of Trend Micro’s AI algorithms and expert security analytics, the platform is able to identify threats more easily and contain them more effectively.

Forrester recently recognized Trend Micro as a leader in enterprise detection and response, saying of XDR, “Trend Micro has a forward-thinking approach and is an excellent choice for organizations wanting to centralize reporting and detection with XDR but have less capacity for proactively performing threat hunting.”

According to Gartner, fewer than 5% of organizations currently employ XDR. This means there’s a huge need to improve enterprise-wide protection. At a time when corporate resources are being stretched to the limit, Trend Micro XDR offers global organizations an invaluable chance to minimize enterprise risk exposure whilst maximizing the productivity of security teams.


Survey: Employee Security Training is Essential to Remote Working Success

By Trend Micro

Organisations have been forced to adapt rapidly over the past few months as government lockdowns kept most workers at home. For many, the changes they’ve made may even become permanent as more distributed working becomes the norm. This has major implications for cybersecurity. Employees are often described as the weakest link in the corporate security chain, so do they become an even greater liability when working from home?

Unfortunately, a major new study from Trend Micro finds that, although many have become more cyber-aware during lockdown, bad habits persist. CISOs looking to ramp up user awareness training may get a better return on investment if they try to personalize strategies according to specific user personas.

What we found

We polled 13,200 remote workers across 27 countries to compile the Head in the Clouds study. It reveals that 72% feel more conscious of their organisation’s cybersecurity policies since lockdown began, 85% claim they take IT instructions seriously, and 81% agree that cybersecurity is partly their responsibility. Nearly two-thirds (64%) even admit that using non-work apps on a corporate device is a risk.

Yet in spite of these lockdown learnings, many employees are more preoccupied by productivity. Over half (56%) admit using a non-work app on a corporate device, and 66% have uploaded corporate data to it; 39% of respondents “often” or “always” access corporate data from a personal device; and 29% feel they can get away with using a non-work app, as IT-backed solutions are “nonsense.”

This is a recipe for shadow IT and escalating levels of cyber-risk. It also illustrates that current approaches to user awareness training are falling short. In fact, many employees seem to be aware of what best practice looks like; they just choose not to follow it.

Four security personas

This is where the second part of the research comes in. Trend Micro commissioned Dr Linda Kaye, Cyberpsychology Academic at Edge Hill University, to profile four employee personas based on their cybersecurity behaviors: fearful, conscientious, ignorant and daredevil.

In this way: Fearful employees may benefit from training simulation tools like Trend Micro’s Phish Insight, with real-time feedback from security controls and mentoring.

Conscientious staff require very little training but can be used as exemplars of good behavior, and to team up with “buddies” from the other groups.

Ignorant users need gamification techniques and simulation exercises to keep them engaged in training, and may also require additional interventions to truly understand the consequences of risky behavior.

Daredevil employees are perhaps the most challenging because their wrongdoing is the result not of ignorance but a perceived superiority to others. Organisations may need to use award schemes to promote compliance, and, in extreme circumstances, step up data loss prevention and security controls to mitigate their risky behavior.

By understanding that no two employees are the same, security leaders can tailor their approach in a more nuanced way. Splitting staff into four camps should ensure a more personalized approach than the one-size-fits-all training sessions most organisations run today.

Ultimately, remote working only works if there is a high degree of trust between managers and their teams. Once the pandemic recedes and staff are technically allowed back in the office, that trust will have to be re-earned if they are to continue benefiting from a Work From Home environment.

The post Survey: Employee Security Training is Essential to Remote Working Success appeared first on .

Risk Decisions in an Imperfect World

By Mark Nunnikhoven (Vice President, Cloud Research)

Risk decisions are the foundation of information security. Sadly, they are also one of the most often misunderstood parts of information security.

This is bad enough on its own but can sink any effort at education as an organization moves towards a DevOps philosophy.

To properly evaluate the risk of an event, two components are required:

  1. An assessment of the impact of the event
  2. The likelihood of the event

Unfortunately, teams—and humans in general—are reasonably good at the first part and unreasonably bad at the second.
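The two components above can be made concrete with a toy scoring sketch. Everything here is a hypothetical illustration (the event names and numbers are invented, not taken from the talk), but it shows why a bad likelihood estimate is so damaging: it reorders the entire ranking that drives the decision.

```python
# A minimal sketch of the two-component risk model: risk = impact x likelihood.
# Event names and numbers are hypothetical illustrations.

def risk_score(impact: int, likelihood: float) -> float:
    """impact on a 1-5 scale, likelihood as an annual probability in [0, 1]."""
    return impact * likelihood

events = {
    "laptop theft":        risk_score(impact=3, likelihood=0.10),  # 0.30
    "public bucket leak":  risk_score(impact=5, likelihood=0.05),  # 0.25
    "phishing compromise": risk_score(impact=4, likelihood=0.30),  # 1.20
}

# Rank by score: misjudging a single likelihood reorders the whole list,
# which is why the second component matters as much as the first.
ranked = sorted(events, key=events.get, reverse=True)
print(ranked)  # ['phishing compromise', 'laptop theft', 'public bucket leak']
```

Note that the highest-impact event (the bucket leak) ranks last here; teams that only assess impact would have prioritized it first.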

This is a problem.

It’s a problem that is amplified when security starts to integrate with teams in a DevOps environment. Originally presented as part of AllTheTalks.online, this talk examines the ins and outs of risk decisions and how we can start to improve how our teams handle them.

 


Perspectives Summary – What You Said

By William "Bill" Malik (CISA VP Infrastructure Strategies)

 

On Thursday, June 25, Trend Micro hosted our two-hour Perspectives virtual event. As the session progressed, we asked our attendees, more than 5,000 global registrants, two key questions. This blog analyzes their answers.

 

First, what is your current strategy for securing the cloud?

Rely completely on native cloud platform security capabilities (AWS, Azure, Google…) 33%

Add on single-purpose security capabilities (workload protection, container security…) 13%

Add on security platform with multiple security capabilities for reduced complexity 54%

 

This result affirms IDC analyst Frank Dickson’s observation that most cloud customers will benefit from a suite offering a range of security capabilities covering multiple cloud environments. For the 15% to 20% of organizations that rely on one cloud provider, purchasing a security solution from that vendor may provide sufficient coverage. The quest for point products (which may be best-of-breed, as well) introduces additional complexity across multiple cloud platforms, which can obscure problems, confuse cybersecurity analysts and business users, increase costs, and reduce efficiency. The comprehensive suite strategy complements most organizations’ hybrid, multi-cloud approach.

Second, and this is multiple choice, how are you enabling secure digital transformation in the cloud today?

 

This shows that cloud users are open to many available solutions for improving cloud security. The adoption pattern follows traditional on-premises security deployment models. The most commonly cited solution, Network Security/Cloud IPS, recognizes that communication with anything in the cloud requires a trustworthy network. This is a very familiar technique, dating back to the introduction of firewalls in the early 1990s by vendors like Check Point and supported by academic research such as Cheswick and Bellovin’s Firewalls and Internet Security (Addison-Wesley, 1994).

 

The frequency of data exposure due to misconfigured cloud instances surely drives the adoption of Cloud Security Posture Management, certainly aided by the ease of deployment of tools like Cloud One Conformity.

 

The newness of containers in the production environment most likely explains the relatively lower deployment of container security today.

 

The good news is that organizations do not have to deploy and manage a multitude of point products addressing one problem on one environment. The suite approach simplifies today’s reality and positions the organization for tomorrow’s challenges.

 

Looking ahead, future growth in industrial IoT and increasing deployments of 5G-based public and non-public networks will drive further innovations, increasing the breadth of the suite approach to securing hybrid, multi-cloud environments.

 

What do you think? Let me know @WilliamMalikTM.

 


Principles of a Cloud Migration

By Jason Dablow
cloud

Development and application teams can be the initial entry point of a cloud migration as they start looking for faster ways to accelerate value delivery. One of the main techniques they adopt is “Infrastructure as Code,” creating the cloud resources that run their applications using lines of code.

In the below video, as part of a NADOG (North American DevOps Group) event, I describe some additional techniques on how your development staff can incorporate the Well Architected Framework and other compliance scanning against their Infrastructure as Code prior to it being launched into your cloud environment.
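As a rough illustration of that kind of pre-launch scanning (a hand-rolled sketch, not Conformity or any real scanner; the template shape loosely mimics a CloudFormation security group, and the rule set is invented), a check can walk an infrastructure-as-code template before deployment and flag world-open management ports:

```python
# Hedged sketch of compliance scanning over infrastructure as code.
# Template and rules are illustrative only.

WORLD = "0.0.0.0/0"
RISKY_PORTS = {22, 3389}  # SSH and RDP should never be open to the world

template = {
    "Resources": {
        "WebSG": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "CidrIp": WORLD},
                    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "CidrIp": WORLD},
                ]
            },
        }
    }
}

def find_violations(tmpl: dict) -> list[str]:
    """Return 'resource:port' strings for risky ports open to the world."""
    hits = []
    for name, res in tmpl["Resources"].items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res["Properties"].get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == WORLD and rule.get("FromPort") in RISKY_PORTS:
                hits.append(f"{name}:{rule['FromPort']}")
    return hits

print(find_violations(template))  # ['WebSG:22']
```

A CI pipeline would fail the build on a non-empty result, so the misconfiguration never reaches the cloud environment.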

If this content has sparked additional questions, please feel free to reach out to me on my LinkedIn. Always happy to share my knowledge of working with large customers on their cloud and transformation journeys!


8 Cloud Myths Debunked

By Trend Micro

Many businesses have misperceptions about cloud environments, providers, and how to secure it all. We want to help you separate fact from fiction when it comes to your cloud environment.

This list debunks 8 myths to help you confidently take the next steps in the cloud.


Knowing your shared security responsibility in Microsoft Azure and avoiding misconfigurations

By Trend Micro

 

Trend Micro is excited to launch new Trend Micro Cloud One™ – Conformity capabilities that will strengthen protection for Azure resources.

 

As with any launch, there is a lot of new information, so we decided to sit down with one of the founders of Conformity, Mike Rahmati. Mike is a technologist at heart, with a proven track record of success in the development of software systems that are resilient to failure and grow and scale dynamically through cloud, open-source, agile, and lean disciplines. In the interview, we picked Mike’s brain on how these new capabilities can help customers prevent or easily remediate misconfigurations on Azure. Let’s dive in.

 

What are the common business problems that customers encounter when building on or moving their applications to Azure or Amazon Web Services (AWS)?

The common problem is that there are a lot of tools and cloud services out there. Organizations are looking for tool consolidation and visibility into their cloud environment. Shadow IT, with business units spinning up their own cloud accounts, is a real challenge for IT organizations to keep on top of. Compliance, security, and governance controls are not necessarily top of mind for business units that are innovating at incredible speeds. That is why it is so powerful to have a tool that can provide visibility into your cloud environment and show where you are potentially vulnerable from a security and compliance perspective.

 

Common misconfigurations on AWS include an exposed Amazon Elastic Compute Cloud (EC2) instance or a misconfigured IAM policy. What is the equivalent for Microsoft?

The common misconfigurations are actually quite similar to what we’ve seen with AWS. During the product preview phase, we saw customers with many of the same kinds of misconfiguration issues. For example, Microsoft Azure Blob Storage is the equivalent of Amazon S3, and it is a common source of misconfigurations. We have observed misconfiguration in two main areas: Firewall and Web Application Firewall (WAF), the latter equivalent to AWS WAF. The Firewall is similar to networking configuration in AWS, providing inbound protection for non-HTTP protocols and network-related protection for all ports and protocols. It is important to note that this is based on the 100 best practices across the 15 Azure services we currently support (and growing), whereas for AWS we have over 600 best practices in total, with over 70 controls offering auto-remediation.

 

Can you tell me about the CIS Microsoft Azure Foundations Benchmark?

We are thrilled to support the CIS Microsoft Azure Foundations Benchmark. It includes automated checks and remediation recommendations for the following: Identity and Access Management, Security Center, Storage Accounts, Database Services, Logging and Monitoring, Networking, Virtual Machines, and App Service. There are over 100 best practices in this framework, and we have rules built to check for all of them to ensure cloud builders are avoiding risk in their Azure environments.

Can you tell me a little bit about the Microsoft Shared Responsibility Model?

The shared responsibility model is very similar to AWS’s. Security OF the cloud is Microsoft’s responsibility, but security IN the cloud is the customer’s responsibility. Microsoft’s ecosystem is growing rapidly, and there are a lot of services that you need to understand in order to configure them properly. With Conformity, customers only need to know how to properly configure the core services, according to best practices, and then we can help you take it to the next level.

Can you give an example of how the shared responsibility model is used?

Yes. Imagine you have a Microsoft Azure Blob Storage that includes sensitive data. Then, by accident, someone makes it public. The customer might not be able to afford an hour, two hours, or even days to close that security gap.

In just a few minutes, Conformity will alert you to your risk status, provide remediation recommendations, and, for our AWS checks, give you the ability to set up auto-remediation. Auto-remediation can be very helpful, as it closes the gap in near-real time.
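To make the scenario concrete, here is a small, self-contained sketch of the detect-and-decide step behind that kind of auto-remediation. This is an illustration, not Conformity’s actual implementation; the ACL shape loosely follows S3-style grants, and the grantee URI is the well-known “all users” group.

```python
# Illustrative sketch (not any product's real logic): detect a storage
# container accidentally made public and decide on a remediation action.

PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_public(acl: dict) -> bool:
    """True if any grant exposes the container to all users."""
    return any(
        g.get("Grantee", {}).get("URI") == PUBLIC_GRANTEE
        for g in acl.get("Grants", [])
    )

def remediation(acl: dict) -> str:
    """The action an auto-remediation hook would take for this ACL."""
    return "block-public-access" if is_public(acl) else "no-action"

leaky_acl = {"Grants": [{"Grantee": {"URI": PUBLIC_GRANTEE}, "Permission": "READ"}]}
private_acl = {"Grants": [{"Grantee": {"ID": "owner"}, "Permission": "FULL_CONTROL"}]}

print(remediation(leaky_acl))    # block-public-access
print(remediation(private_acl))  # no-action
```

In a real deployment, the “block-public-access” branch would call the provider’s API; the point is that the check-and-fix loop runs in seconds rather than the hours or days a human might need.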

What are next steps for our readers?

I’d say that whether your cloud exploration is just taking shape, you’re midway through a migration, or you’re already running complex workloads in the cloud, we can help. You can gain full visibility of your infrastructure with continuous cloud security and compliance posture management. We can do the heavy lifting so you can focus on innovating and growing. Also, you can ask anyone from our team to set you up with a complimentary cloud health check. Our cloud engineers are happy to provide an AWS and/or Azure assessment to see if you are building a secure, compliant, and reliable cloud infrastructure. You can find out your risk level in just 10 minutes.

 

Get started today with a 60-day free trial >

Check out our knowledge base of Azure best practice rules>

Learn more >

 

Do you see value in building a security culture that is shifted left?

Yes, we have done this for our customers using AWS and it has been very successful. The more we talk about shifting security left, the better; that is where we help customers build a security culture. Every cloud customer is struggling to implement security earlier in the development cycle, and they need tools. Conformity is DevOps- and DevSecOps-friendly and helps them build a security culture that is shifted left.

We help customers shift security left by integrating the Conformity API into their CI/CD pipeline. The product also has preventative controls, which our API and template scanners provide. The idea is we help customers shift security left to identify those misconfigurations early on, even before they’re actually deployed into their environments.

We also help them scan their infrastructure-as-code templates before being deployed into the cloud. Customers need a tool to bake into their CI/CD pipeline. Shifting left doesn’t simply mean having a reporting tool, but rather a tool that allows them to shift security left. That’s where our product, Conformity, can help.

 


The Fear of Vendor Lock-in Leads to Cloud Failures

By Trend Micro

 

Vendor lock-in has been an often-quoted risk since the mid-1990s: the fear that by investing too much with one vendor, an organization reduces its options in the future.

Was this a valid concern? Is it still today?

 

The Risk

Organizations walk a fine line with their technology vendors. Ideally, you select a set of technologies that not only meet your current needs but also align with your future vision.

This way, as the vendor’s tools mature, they continue to support your business.

The risk is that if you have all of your eggs in one basket, you lose all of the leverage in the relationship with your vendor.

If the vendor changes directions, significantly increases their prices, retires a critical offering, the quality of their product drops, or if any number of other scenarios happen, you are stuck.

Locking in to one vendor means that the cost of switching to another or changing technologies is prohibitively expensive.

All of these scenarios have happened and will happen again. So it’s natural that organizations are concerned about lock-in.

Cloud Maturity

When the cloud started to rise to prominence, the spectre of vendor lock-in reared its ugly head again. CIOs around the world thought that moving the majority of their infrastructure to AWS, Azure, or Google Cloud would lock them into that vendor for the foreseeable future.

Trying to mitigate this risk, organizations regularly adopt a “cloud neutral” approach, using only “generic” cloud services that can be found across all the major providers. Often hidden under the guise of a “multi-cloud” strategy, it’s really a hedge so as not to lose position in the vendor/client relationship.

In isolation, that’s a smart move.

Taking a step back and looking at the bigger picture starts to show some of the issues with this approach.

Automation

The first issue is that the heavy use of automation in cloud deployments means vendor “lock-in” is not nearly as significant a risk as it was in past decades. The manual effort required to change vendors for your storage network used to be monumental.

Now? It’s a couple of API calls and a consumption-based bill adjusted by the megabyte. This pattern is echoed across other resource types.

Automation greatly reduces the cost of switching providers, which reduces the risk of vendor lock-in.
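The mechanics behind that claim can be sketched in a few lines: code against a small storage interface and the provider becomes an interchangeable detail, so moving data is just a loop of API calls. The classes below are in-memory stand-ins, not real cloud SDK clients.

```python
# Sketch of why automation lowers switching costs: with a provider-agnostic
# interface, migration is "a couple of API calls" per object.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class MemoryStore(ObjectStore):
    """In-memory stand-in for a cloud provider's object storage."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def migrate(src: ObjectStore, dst: ObjectStore, keys: list) -> int:
    """Copy every object from one provider to another; returns count moved."""
    for k in keys:
        dst.put(k, src.get(k))
    return len(keys)

old_provider, new_provider = MemoryStore(), MemoryStore()
old_provider.put("report.csv", b"a,b\n1,2")
print(migrate(old_provider, new_provider, ["report.csv"]))  # 1
```

The switching cost collapses into writing one adapter class per provider, which is exactly the kind of work automation tooling already does.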

Missing Out

When your organization sets the mandate to only use the basic services (server-based compute, databases, network, etc.) from a cloud service provider, you’re missing out on one of the biggest advantages of moving to the cloud: doing less.

The goal of a cloud migration is to remove all of the undifferentiated heavy lifting from your teams.

You want your teams directly delivering business value as much of the time as possible. One of the most direct routes to this goal is to leverage more and more managed services.

Using AWS as an example, you don’t want to run your own database servers in Amazon EC2, or even standard RDS, if you can help it. Amazon Aurora and DynamoDB generally offer lower operational impact, higher performance, and lower costs.

When organizations are worried about vendor lock-in, they typically miss out on the true value of cloud: a laser focus on delivering business value.

 

But Multi-cloud…

In this new light, a multi-cloud strategy takes on a different aim. Your teams should be trying to maximize business value (which includes cost, operational burden, development effort, and other aspects) wherever that leads them.

As organizations mature in their cloud usage and use of DevOps philosophies, they generally start to cherry pick managed services from cloud providers that best fit the business problem at hand.

They use automation to reduce the impact if they have to change providers at some point in the future.

This leads to a multi-cloud split that typically falls around 80% in one cloud and 10% in each of the other two. That can vary depending on the situation, but the premise is the same: organizations that thrive have a primary cloud and use other services when and where it makes sense.

 

Cloud Spanning Tools

There are some tools that are more effective when they work in all clouds the organization is using. These tools range from software products (like deployment and security tools) to metrics to operational playbooks.

Following the principles of focusing on delivering business value, you want to actively avoid duplicating a toolset unless it’s absolutely necessary.

The maturity of the tooling in cloud operations has reached the point where it can deliver support to multiple clouds without reducing its effectiveness.

This means automation playbooks can easily support multi-cloud (e.g.,  Terraform). Security tools can easily support multi-cloud (e.g., Trend Micro Cloud One™).  Observability tools can easily support multi-cloud (e.g., Honeycomb.io).

The guiding principle for a multi-cloud strategy is to maximize the amount of business value the team is able to deliver. You accomplish this by becoming more efficient (using the right service and tool at the right time) and by removing work that doesn’t matter to that goal.

In the age of cloud, vendor lock-in should be far down on your list of concerns. Don’t let a long-standing fear slow down your teams.


Principles of a Cloud Migration – Security W5H – The HOW

By Jason Dablow
cloud

“How about… ya!”

Security needs to be treated much like DevOps in evolving organizations: everyone in the company has a responsibility to make sure it is implemented. It is not just a part of operations, but a cultural shift toward doing things right the first time – security by default. Here are a few pointers to get you started:

1. Security should be a focus from the top on down

Executives should be thinking about security as a part of the cloud migration project, and not just as a step of the implementation. Security should be top of mind in planning, building, developing, and deploying applications as part of your cloud migration. This is why the Well Architected Framework has an entire pillar dedicated to security. Use it as a framework to plan and integrate security at each and every phase of your migration.

2. A cloud security policy should be created and/or integrated into existing policy

Start with what you know: least privilege permission models, cloud native network security designs, etc. This will help you start creating a framework for these new cloud resources that will be in use in the future. Your cloud provider and security vendors, like Trend Micro, can help you with these discussions in terms of planning a thorough policy based on the initial migration services that will be used. Remember from my other articles, a migration does not just stop when the workload has been moved. You need to continue to invest in your operation teams and processes as you move to the next phase of cloud native application delivery.

3. Trend Micro’s Cloud One can check off a lot of boxes!

Using a collection of security services, like Trend Micro’s Cloud One, can be a huge relief when it comes to implementing runtime security controls in your new cloud migration project. Workload Security is already protecting thousands of customers and billions of workload hours within AWS with security controls like host-based Intrusion Prevention and Anti-Malware, along with compliance controls like Integrity Monitoring and Application Control. Meanwhile, Network Security can handle all your traffic-inspection needs by integrating directly with your cloud network infrastructure, a huge advantage in performance and design over Layer 4 virtual appliances, which require constant route-table changes and waste money on infrastructure. As you migrate your workloads, continuously check your posture against the Well-Architected Framework using Conformity. You now have your new infrastructure secure and agile, allowing your teams to take full advantage of the newly migrated workloads and begin building the next iteration of your cloud native application design.

This is part of a multi-part blog series on things to keep in mind during a cloud migration project. You can start at the beginning, which kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html. To have a more personalized conversation, please add me on LinkedIn!


Is Cloud Computing Any Safer From Malicious Hackers?

By Rob Maynard

Cloud computing has revolutionized the IT world, making it easier for companies to deploy infrastructure and applications and deliver their services to the public. The idea of not spending millions of dollars on equipment and facilities to host an on-premises data center is a very attractive prospect to many. And certainly, moving resources to the cloud just has to be safer, right? The cloud provider is going to keep our data and applications safe for sure. Hackers won’t stand a chance. Wrong. I hear this misconception from customers more often than I should. The truth of the matter is, without proper configuration, the right skillsets administering the cloud presence, and common-sense security practices, cloud services are just as (if not more) vulnerable.

The Shared Responsibility Model

Before going any further, we need to discuss the shared responsibility model of the cloud service provider and user.

When planning your migration to the cloud, you need to be aware of which responsibilities belong to which entity. As the chart above shows, the cloud service provider is responsible for the security of the cloud infrastructure, including its physical security. By contrast, the customer is responsible for their own data, the security of their workloads (all the way to the OS layer), and the internal network within the company’s VPCs.

One more pretty important aspect that remains in the hands of the customer is access control: who has access to what resources? This is really no different than it has been in the past, the exception being that physical security of the data center is handled by the CSP rather than on-premises staff, but the company (specifically IT and IT security) is responsible for locking down those resources effectively.

Many times, this shared responsibility model is overlooked, and poor assumptions are made about the security of a company’s resources. Chaos ensues, and probably a firing or two.

So now that we have established the shared responsibility model and that the customer is responsible for their own resource and data security, let’s take a look at some of the more common security issues that can affect the cloud.

Amazon S3 

Amazon S3 is a truly great service from Amazon Web Services. Storing data, hosting static sites, and providing storage for applications are common use cases for this service. S3 buckets are also a prime target for malicious actors, since they often end up misconfigured.

One such instance occurred in 2017 when Booz Allen Hamilton, a defense contractor for the United States, was pillaged of battlefield imagery as well as administrator credentials to sensitive systems.

Yet another instance occurred in 2017 when, due to an insecure Amazon S3 bucket, the records of 198 million American voters were exposed. If you’re reading this, there’s a good chance this breach affected you.

A more recent breach of an Amazon S3 bucket (I use the word “breach,” though most of these instances were the result of poor configuration and public exposure, not a hacker breaking in with sophisticated techniques) involved the cloud storage provider “Data Deposit Box.” The provider used Amazon S3 buckets for storage, and a configuration issue leaked more than 270,000 personal files as well as personally identifiable information (PII) of its users.

One last thing on the subject of cloud file storage: many organizations use Amazon S3 to store data uploaded by customers before sending it to other parts of the application for processing. The problem is, how do we know whether what’s being uploaded is malicious? This question comes up more and more as I speak with customers and peers in the IT world.

API

APIs are great. They allow you to interact with programs and services in a programmatic and automated way. When it comes to the cloud, APIs allow administrators to interact with services, and in fact they are a cornerstone of all cloud services, since they allow the different services to communicate. As with anything in this world, this also opens up a world of danger.

Let’s start with the API gateway, a common construct in the cloud that allows communication with backend applications. The API gateway itself is a target, because a hacker can manipulate the gateway to allow unwanted traffic through. API gateways were designed to be integrated into applications; they were not designed for security. This means untrusted connections can reach the gateway and perhaps retrieve data the caller shouldn’t see. Likewise, API requests to the gateway can carry malicious payloads.

Another attack that can affect your API gateway, and likewise the application behind it, is a DDoS attack. The common defense is a Web Application Firewall (WAF). The problem is that WAFs struggle with low-and-slow DDoS attacks, because the steady stream of requests looks like normal traffic. A really great way to deter DDoS attacks at the API gateway, however, is to limit the number of requests for each method.
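The per-method request limiting mentioned above is often implemented as a token bucket. The sketch below is an illustration of the idea, not any specific gateway’s feature; the method names and limits are invented.

```python
# Sketch of per-method request limiting using a simple token bucket.
import time

class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst        # tokens/sec, bucket size
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                               # reject (e.g. HTTP 429)

# One bucket per API method: a flood on POST /orders cannot starve GET /status.
limits = {"GET /status": TokenBucket(rate=50, burst=100),
          "POST /orders": TokenBucket(rate=5, burst=10)}

allowed = sum(limits["POST /orders"].allow() for _ in range(50))
print(allowed)  # roughly the burst size (10): the rest are rejected
```

Because the bucket refills at a steady rate, this scheme also caps a low-and-slow stream at the configured rate per method, which is exactly where a signature-based WAF struggles.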

A great deal of API attack prevention lies in configuration: deny anonymous access; rotate tokens, passwords, and keys to limit the window in which stolen credentials remain effective; and disable any type of clear-text authentication. Furthermore, enforcing SSL/TLS encryption and implementing multi-factor authentication are great deterrents.

Compute

No cloud service would be complete without compute resources. This is where an organization builds out virtual machines to host applications and services. It also introduces yet another attack surface, and once again, this is not protected by the cloud service provider. It is purely the customer’s responsibility.

Many times, in discussing my customers’ migration from an on-premises datacenter to the cloud, one of the most common methods is the “lift-and-shift” approach: customers take the virtual machines they have running in their datacenter and simply migrate those machines to the cloud. Now, the question is, what kind of security assessment was done on those virtual machines prior to migrating? Were those machines patched? Were discovered security flaws fixed? In my personal experience the answer is no. Therefore, these organizations are simply taking their problems from one location to the next. The security holes still exist and could potentially be exploited, especially if the server is public-facing or network policies are improperly applied. A better way to look at this process is “correct, then lift-and-shift.”

Now once organizations have already established their cloud presence, they will eventually need to deploy new resources, and this can mean developing or building upon a machine image. The most important thing to remember here is that these are computers. They are still vulnerable to malware, so regardless of being in the cloud or not, the same security controls are required including things like anti-malware, host IPS, integrity monitoring and application control just to name a few.

Networking

Cloud services make it incredibly easy to deploy networks, divide them into subnets, and even allow cross-network communication. They also give you the ability to lock down the types of traffic that are allowed to traverse those networks to reach resources. This is where security groups come in. Security groups are configured by people, so there is always the chance that a port that shouldn’t be open is, creating a potential vulnerability. It’s incredibly important to have a firm grasp on what a compute resource is talking to, and why, so the proper security measures can be applied.
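One practical way to keep that grasp is to diff a security group’s world-open ports against an approved baseline, so human misconfigurations stand out immediately. This is a hand-rolled sketch with invented data, not a real cloud audit tool.

```python
# Hedged sketch: flag security group rules that expose unapproved ports
# to the internet. Rule data and the baseline are illustrative.

APPROVED_WORLD_OPEN = {443}  # only HTTPS may face the internet

def unexpected_exposures(rules: list) -> set:
    """Ports open to 0.0.0.0/0 that are not on the approved baseline."""
    world_open = {r["port"] for r in rules if r.get("cidr") == "0.0.0.0/0"}
    return world_open - APPROVED_WORLD_OPEN

sg_rules = [
    {"port": 443,  "cidr": "0.0.0.0/0"},   # approved: public HTTPS
    {"port": 22,   "cidr": "10.0.0.0/8"},  # internal-only SSH: fine
    {"port": 9200, "cidr": "0.0.0.0/0"},   # world-open Elasticsearch: flagged
]

print(unexpected_exposures(sg_rules))  # {9200}
```

Run on a schedule or on every rule change, a check like this turns “someone opened a port by accident” from a silent risk into an immediate alert.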

So is the cloud really safe from hackers? It is no safer than anything else unless organizations take security into their own hands and understand where their responsibility begins and the cloud service provider’s ends. The arms race between hackers and security professionals is the same as it ever was; only the battleground has changed.

