
Network security simplified with Amazon VPC Ingress Routing and Trend Micro

By Trend Micro

Today, Amazon Web Services (AWS) announced the availability of a powerful new service, Amazon Virtual Private Cloud (Amazon VPC) Ingress Routing. As a Launch Partner for Amazon VPC Ingress Routing, we at Trend Micro are proud to continue to innovate alongside AWS to provide solutions to customers—enabling new approaches to network security. Trend Micro™ TippingPoint™ and Trend Micro™ Cloud One integrate with Amazon VPC Ingress Routing to deliver network security that allows customers to quickly achieve compliance by inspecting both ingress and egress traffic. This gives you a deployment experience designed to eliminate any disruption to your business.

Cloud network layer security by Trend Micro

A defense-in-depth or layered security approach is important to organizations, especially at the cloud network layer. That being said, customers need to be able to deploy a solution without re-architecting or slowing down their business. The problem is that previous solutions in the marketplace couldn’t meet both requirements.

So, when our customers asked us to bring TippingPoint intrusion prevention system (IPS) capabilities to the cloud, we responded with a solution. Backed by industry leading research from Trend Micro Research, including the Zero Day Initiative™, we created a solution that includes cloud network IPS capabilities, incorporating detection, protection and threat disruption—without any disruption to the network.

At AWS re:Invent 2018, AWS announced the launch of Amazon Transit Gateway. This powerful architecture enables customers to route traffic through a hub-and-spoke topology. We leveraged this as the primary deployment model for Cloud Network Protection, powered by TippingPoint, our cloud IPS solution announced in July 2019. This enabled our customers to quickly gain broad security and compliance without re-architecting. Now, we’re adding a flexible new deployment model.

Enhancing security through partnered innovation

This year we are excited to be a Launch Partner for Amazon VPC Ingress Routing, a new service that allows for customers to gain additional flexibility and control in their network traffic routing. Learn more about this new feature here.

Amazon VPC Ingress Routing is a service that helps customers simplify the integration of network and security appliances within their network topology. With Amazon VPC Ingress Routing, customers can define routing rules at the Internet Gateway (IGW) and Virtual Private Gateway (VGW) to redirect ingress traffic to third-party appliances, before it reaches the final destination. This makes it easier for customers to deploy production-grade applications with the networking and security services they require within their Amazon VPC.
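
To make this concrete, here is a minimal, hypothetical CloudFormation sketch of the pattern (the VPC, internet gateway, appliance ENI, and CIDR are placeholders, not resources from this announcement): an edge route table is associated with the internet gateway so that traffic bound for an application subnet is first steered to a security appliance’s elastic network interface.

```yaml
# Hypothetical sketch: steer ingress traffic arriving at the
# internet gateway through a security appliance for inspection.
Resources:
  EdgeRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVpc                     # placeholder VPC

  # The new capability: a route table associated with the
  # internet gateway itself rather than with a subnet.
  IgwRouteTableAssociation:
    Type: AWS::EC2::GatewayRouteTableAssociation
    Properties:
      GatewayId: !Ref MyInternetGateway     # placeholder IGW
      RouteTableId: !Ref EdgeRouteTable

  # Traffic bound for the application subnet is redirected to the
  # appliance's elastic network interface before final delivery.
  InspectionRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref EdgeRouteTable
      DestinationCidrBlock: 10.0.1.0/24     # placeholder subnet CIDR
      NetworkInterfaceId: !Ref ApplianceEni # placeholder appliance ENI
```

The appliance then forwards inspected traffic on to the subnet, and the subnet’s own route table can point return traffic back at the same interface for egress inspection.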

Amazon VPC Ingress Routing enables customers to redirect north-south traffic flowing in and out of a VPC through the internet gateway or virtual private gateway to the Trend Micro cloud network security solution. Not only does this enable customers to screen all external traffic before it reaches the subnet, but it also allows traffic flowing into different subnets to be intercepted by different instances of the Trend Micro solution.

Trend Micro customers now have access to powerful cloud network layer security in AWS by leveraging Amazon VPC Ingress Routing. With this enhancement, customers can deploy in any VPC without disruptive re-architecture and without introducing additional routing or proxies. Deploying directly inline is the ideal solution and enables simplified network security without disruption in the cloud.

What types of protection can customers expect?

When you think of classic IPS capabilities, of course you think of preventing inbound attacks. Now, with Amazon VPC Ingress Routing and Trend Micro, customers can protect their VPCs in even more scenarios. Here is what our customers are thinking about:

  • Protecting physical and on-premises assets by routing that traffic to AWS via AWS Direct Connect or VPN
  • Detecting compromised cloud workloads (cloud native or otherwise) and disrupting those attacks, including DNS filters and geo-blocking capabilities
  • Preventing lateral movement between multi-tiered applications or between connected partner ecosystems
  • Preventing cloud-native threats, including Kubernetes® and Docker® vulnerabilities and compromised container images and repositories pulled into VPCs

Trend Micro™ Cloud One ­– Network Security

Amazon VPC Ingress Routing will soon be available as a deployment option for Cloud Network Protection, powered by TippingPoint, available in AWS Marketplace. It will also be available upon release of our recently announced Trend Micro™ Cloud One – Network Security, a key service in Trend Micro’s new Cloud One, a cloud security services platform.

How To Get The Most Out Of Industry Analyst Reports

By Trend Micro

Whether you’re trying to inform purchasing decisions or just want to better understand the cybersecurity market and its players, industry analyst reports can be very helpful. Following our recent accolades by Forrester and IDC in their respective cloud security reports, we want to help customers understand how to use this information.

Our VP of cybersecurity, Greg Young, taps into his past experience at Gartner to explain how to get the most value from industry analyst reports.

The Summit of Cybersecurity Sits Among the Clouds

By Trend Micro

Trend Micro Apex One™ as a Service

You have heard it before, but it needs to be said again—threats are constantly evolving and getting sneakier, more malicious, and harder to find than ever before.

It’s a hard job to stay one step ahead of the latest threats and scams organizations come across, but it’s something Trend Micro has done for a long time, and something we do very well! At the heart of Trend Micro security is the understanding that we have to adapt and evolve faster than hackers and their malicious threats. When we released Trend Micro™ OfficeScan™ 11.0, we were facing browser exploits, the start of advanced ransomware and many more new and dangerous threats. That’s why we launched our connected threat defense approach—allowing all Trend Micro solutions to share threat information and research, keeping our customers one step ahead of threats.

With the launch of Trend Micro™ OfficeScan™ XG, we released a set of new capabilities like anti-exploit prevention, ransomware enhancements, and pre-execution and runtime machine learning, protecting customers from a wider range of fileless and file-based threats. Fast forward to last year: we saw a huge shift not only in the threats in the security landscape, but also in how we architected and deployed our endpoint security. This led to Trend Micro Apex One™, our newly redesigned endpoint protection solution, available as a single agent. Trend Micro Apex One brought to the market enhanced fileless attack detection and advanced behavioral analysis, and combined our powerful endpoint threat detection capabilities with our sophisticated endpoint detection and response (EDR) investigative capabilities.

We all know that threats evolve but, as user protection product manager Kris Anderson says, “with Trend Micro, your endpoint protection evolves as well. While we have signatures and behavioral patterns that are constantly being updated through our Smart Protection Network, attackers are discovering new tactics that threaten your company. At Trend Micro, we constantly develop and fine-tune our detection engines to combat these threats, real-time, with the least performance hit to the endpoint. This is why we urge customers to stay updated with the latest version of endpoint security—Apex One.”

Trend Micro Apex One has the broadest set of threat detection capabilities in the industry today, and staying updated with the latest version allows you to benefit from this cross-layered approach to security.

One easy way to ensure you are always protected with the latest version of Trend Micro Apex One is to migrate to Trend Micro Apex One™ as a Service. By deploying a SaaS model of Trend Micro Apex One, you can benefit from automatic updates of the latest Trend Micro Apex One security features without having to go through the upgrade process yourself. Trend Micro Apex One as a Service deployments will automatically get updated as new capabilities are introduced and existing capabilities are enhanced, meaning you will always have the most recent and effective endpoint security protecting your endpoints and users.

Trend Micro takes cloud security seriously, and endpoint security is no different. You can get the same gold standard endpoint protection of Trend Micro Apex One, but delivered as a service, allowing you to benefit from easy management and ongoing maintenance.

Four Reasons Your Cloud Security Is Keeping You Up At Night

By Trend Micro

We are excited to introduce guest posts from our newest Trenders from Cloud Conformity, now Trend Micro Cloud One – Conformity. More insights will be shared from this talented team to help you be confident and in control of the security of your cloud environments!

Why your cloud security is keeping you up at night

We are all moving to the cloud for speed, agility, scalability, and cost-efficiency and have realized that it demands equally powerful security management. As the cloud keeps on attracting more businesses, security teams are spending sleepless nights securing the infrastructure.

Somewhere, a cyber con artist has a target set on you and is patiently waiting to infiltrate your security. Managing your security posture is as critical as wearing sunscreen even if the sun is hiding behind a cloud. You may not feel the heat instantly, but it definitely leaves a rash for you to discover later.

Analyzing the volume of issues across the global Trend Micro Cloud One – Conformity customer base clearly shows that ‘Security’ is the most challenging area within AWS infrastructure.

According to an internal study in June 2019, more than 50% of issues belonged to the ‘Security’ category.

We can definitely reduce the number of security issues affecting cloud infrastructure, but first need to conquer the possible reasons for security vulnerabilities.

1. Not scanning your accounts regularly enough

If you deploy services and resources multiple times a day, you must continuously scan all your environments and instances at regular intervals. A tool like the Conformity Bot scans your accounts against 530 rules across the five pillars of the Well-Architected Framework to help you identify potential security risks and prioritize them. You can even set the frequency of scans or run them manually as required.

2. Not investing in preventative measures

Seemingly harmless misconfigurations can cause enormous damage that can rapidly scale up and result in a security breach. You can prevent potential security risks from entering live environments by investing some time in scanning your staging or test accounts before launching any resources or services. You can use the Template Scanner to check your CloudFormation templates and identify any security and compliance issues before deployment.

3. Not monitoring real-time activity

Catastrophes don’t wait! It may take only a few minutes for someone to barge into your cloud infrastructure while you are away for the weekend. You need to watch activity in real time to act on threats without delay. A tool such as the Real-Time Monitoring Add-on tracks your account’s activity in real time and triggers alerts for suspicious activity based on your configurations. For example, you can set up alerts to monitor account activity from a specific country or region.

4. Not communicating risks in a timely manner

The information trickling from your monitoring controls is fruitless until you get the right people to act quickly. One of the best practices to maintain smooth security operations is to merge the flow of security activity and events into information channels. Conformity allows you to integrate your AWS accounts with communication channels, for example Jira, email, SMS, Slack, PagerDuty, Zendesk, ServiceNow ITSM, and Amazon SNS. Moreover, configuring communication triggers sends notifications and alerts to set teams through the selected channels.

AWS provides you with the services and resources to host your apps and infrastructure, but remember – Security is a shared responsibility in which you must take an active role.

See how Trend Micro can support your part of the shared responsibility model for cloud security: https://www.trendmicro.com/cloudconformity.

Stay Safe!

Trend Micro Cloud App Security Blocked 12.7 Million High-Risk Email Threats in 2019 – in addition to those detected by cloud email services’ built-in security

By Chris Taylor

On March 3, 2020, the cyber division of the Federal Bureau of Investigation (FBI) issued a private industry notification calling out Business Email Compromise (BEC) scams that exploit cloud-based email services. According to FBI complaint information, cyber criminals have targeted Microsoft Office 365 and Google G Suite, the two largest cloud-based email services, since 2014. The scams are initiated through credential phishing attacks in order to compromise business email accounts and request or misdirect transfers of funds. Between January 2014 and October 2019, the Internet Crime Complaint Center (IC3) received complaints totaling over $2.1 billion in actual losses from BEC scams targeting the two cloud services. The popularity of Office 365 and G Suite has made them attractive targets for cybercriminals.

Trend Micro™ Cloud App Security™ is an API-based service protecting Microsoft® Office 365™, Google G Suite, Box, and Dropbox. Using multiple advanced threat protection techniques, it acts as a second layer of protection after emails and files have passed through Office 365 and G Suite’s built-in security.

In 2019, Trend Micro Cloud App Security caught 12.7 million high-risk email threats in addition to what Office 365 and Gmail security blocked. Those threats include close to one million malware detections, 11.3 million phishing attempts, and 386,000 BEC attempts. The blocked threats include 4.8 million credential phishing attempts and 225,000 ransomware attacks. These are potential attacks that could result in monetary, productivity, or even reputation losses for an organization.

Trend Micro has published its Cloud App Security threat report annually since 2018. For the third year in a row, Trend Micro Cloud App Security has proven to provide effective protection for cloud email services. The following customer examples further show how Cloud App Security protects different organizations in different scenarios.

Customer examples: Additional detections after Office 365 built-in security (2019 data)

These five customers, ranging from 550 to 80,000 seats, are across different industries. All of them use E3, which includes basic security (Exchange Online Protection). This data shows the value of adding Cloud App Security (CAS) to enhance Office 365 native security. For example, a transportation company with 80,000 Office 365 E3 users found an additional 16,000 malware files, 510,000 malicious and phishing URLs, and 27,000 BEC attempts in 2019 alone. With the average cost of a BEC attack at $75,000, plus the potential losses and recovery costs from credential phishing and ransomware attacks, Trend Micro Cloud App Security pays for itself very quickly.

Customer examples: Additional Detections after Office 365 Advanced Threat Protection (2019 data)

Customers using Office 365 Advanced Threat Protection (ATP) need an additional layer of filtering as well. For example, an IT services company with 10,000 users of E3 and ATP detected an additional 14,000 malware files, 713,000 malicious and phishing URLs, and 6,000 BEC attempts in 2019 with Trend Micro Cloud App Security.

Customer examples: Additional Detections after third-party email gateway (2019 data)

Many customers use a third-party email gateway to scan emails before they are delivered to their Office 365 environment. Despite these gateway deployments, many of the sneakiest and hardest-to-detect threats still slipped through. Plus, a gateway solution can’t detect internal email threats, which can originate from compromised devices or accounts within Office 365.

For example, a business with 120,000 Office 365 users and a third-party email gateway stopped an additional 27,000 malware files, 195,000 malicious and phishing emails, and almost 6,000 BEC attempts in 2019 with Trend Micro Cloud App Security.

Customer examples: Additional Detections after Gmail built-in security (2019 data)

*Trend Micro Cloud App Security has supported Gmail since April 2019.

For customers choosing G Suite, Trend Micro Cloud App Security provides additional protection as well. For example, a telecommunications company with 12,500 users blocked almost 8,000 high-risk threats with Cloud App Security in just five months.

An email gateway or the built-in security of cloud email services is no longer enough to protect organizations from email-based threats. Businesses of any size are at risk from the plethora of dangers these threats pose. Organizations should consider a comprehensive multilayered security solution such as Trend Micro Cloud App Security, which supplements the included security features in email and collaboration platforms like Office 365 and G Suite.

Check out the Trend Micro Cloud App Security Report 2019 to get more details on the type of threats blocked by this product and common email attacks analyzed by Trend Micro Research in 2019.

Smart Check Validated for New Bottlerocket OS

By Trend Micro

Containers provide a range of benefits to organizations that use them. They’re lightweight, flexible, add consistency across environments, and operate in isolation.

However, security concerns prevent some organizations from employing containers. This is despite containers having an extra layer of security built in – they don’t run directly on the host OS.

To make containers even easier to manage, AWS released Bottlerocket, an open-source Linux-based operating system purpose-built for hosting containers. While Bottlerocket AMIs are provided at no cost, standard Amazon EC2 and AWS charges apply for running Amazon EC2 instances and other services.

Bottlerocket is purpose-built to run containers: by including only the software essential to running them, it improves resource utilization and reduces the attack surface compared to general-purpose OSs.

At Trend Micro, we’re always focused on the security of our customers’ cloud environments. We’re proud to be a launch partner for AWS Bottlerocket, with our Smart Check component validated for the OS prior to launch.

Why use additional security in cloud environments

While an OS specifically for containers that includes native security measures is a huge plus, there is a larger question of why third-party security solutions are even needed in cloud environments. We often hear a misconception about cloud deployment: since the cloud service provider has built-in security, users don’t have to think about the security of their data.

That’s simply not accurate and leaves a false sense of security. (Pun intended.)

Yes, cloud providers like AWS build in security measures and have addressed common problems by adding built-in security controls. But cloud environments operate with a shared responsibility model for security, meaning the provider secures the environment, and users are responsible for their instances and the data hosted therein.

That’s for all cloud-based hosting, whether in containers, serverless or otherwise.

Why Smart Check in Bottlerocket matters

Smooth execution without security roadblocks

DevOps teams leverage containerized applications to deploy fast and don’t have time for separate security roadblocks. Smart Check is built for the DevOps community with real-time image scanning at any point in the pipeline to ensure insecure images aren’t deployed.

Vulnerability scanning before runtime

We have the largest vulnerability data set of any security vendor, which is used to scan images for known software flaws before they can be exploited at runtime. This not only includes known vendor vulnerabilities from the Zero Day Initiative (ZDI), but also vulnerability intelligence for bugs patched outside the ZDI program and open source vulnerability intelligence built in through our partnership with Snyk.

Flexible enough to fit with your pipeline

Container security needs to be as flexible as containers themselves. Smart Check has a simple admin process to implement role-based access rules and multiple concurrent scanning scenarios to fit your specific pipeline needs.

Through our partnership with AWS, Trend Micro is excited to help customers continue to execute their portion of the shared responsibility model through container image scanning: we have validated that Smart Check will be available for customers to run on Bottlerocket at launch.

More information can be found here: https://aws.amazon.com/bottlerocket/

If you are still interested in learning more, check out this AWS blog from Jeff Barr.

The AWS Service to Focus On – Amazon EC2

By Trend Micro

If we ran a contest for Mr. Popular of Amazon Web Services (AWS), Amazon Simple Storage Service (S3) would without a doubt have ‘winner’ written all over it. However, what’s popular is not always what is critical for your business to focus on. There is popularity, and then there is dependability. Let’s acknowledge how reliant we are, as AWS infrastructure-led organizations, on Amazon Elastic Compute Cloud (EC2).

We reflected on our in-house findings for the AWS ‘Security’ pillar in our last blog, Four Reasons Your Cloud Security is Keeping You Up at Night, explicitly leaving out over-caffeination and excessive screen time!

Drilling further down to the most affected AWS services, Amazon EC2-related issues topped the list with 32% of all issues, whereas Mr. Popular, Amazon S3, contributed 12%. While cloud providers like AWS offer a secure infrastructure and best practices, many customers are unaware of their role in the shared responsibility model. The number of issues impacting Amazon EC2 customers demonstrates the security gap that can open up when the customer part of the shared responsibility model is not well understood.

While these AWS services and infrastructure are secure, customers also have a responsibility to secure their data and to configure environments according to AWS best practices. So how do we ensure that we keep our focus on this crucial service and ensure the flexibility, scalability, and security of a growing infrastructure?

Introducing Rules

If you thought you were done with rules after passing high school and moving out of your parents’ house, you soon realized you were living a dream. Rules seem to be everywhere! Rules are important; they keep us safe and secure. While some may still say ‘rules are made to be broken’, you will go into a slump if your cloud infrastructure breaks the rules of the industry and gets exposed to security vulnerabilities.

It is great if you are already following the Best Practices for Amazon EC2, but if not, how do you monitor the performance of your services day in and day out to ensure their adherence to these best practices? How can you track if all your services and resources are running as per the recommended standards?

We’re here to help with that. Trend Micro Cloud One – Conformity ‘Rules’ provide you with that visibility for some of the most critical services like Amazon EC2.

What is the Rule?

A ‘Rule’ is the definition of a best practice used as the basis for an assessment that Conformity runs against a particular piece of your cloud infrastructure. When a rule is run against the infrastructure (resources) associated with your AWS account, the result of the scan is referred to as a Check. For example, an Amazon EC2 instance may have 60 Rules scanning for various risks and vulnerabilities, and each Check is either a SUCCESS or a FAILURE.

Conformity has about 540 Rules, and 60 of them monitor your Amazon EC2 services against best practices. The Conformity Bot scans your cloud accounts against these Rules and presents you with the resulting Checks so you can prioritize and remediate issues, keep your services healthy, and prevent security breaches.

Amazon EC2 Best Practices and Rules

Here are just a few examples of how Conformity Rules have got you covered for some of the most critical Amazon EC2 best practices:

  1. For Security, ensure IAM users and roles are used and management policies are established for access control.
  2. For managing Storage, keep separate EBS volumes for operating systems and data, and check that Amazon EC2 instances provisioned outside of AWS Auto Scaling Groups (ASGs) have the Termination Protection safety feature enabled to protect your instances from being accidentally terminated (see the sketch after this list).
  3. For efficient Resource Management, utilize custom tags to track and identify resources, and keep on top of your stated Amazon EC2 limits.
  4. For confident Backup and Recovery, regularly test the process of recovering instances and EBS volumes should they fail, and create and use approved AMIs for easier and more consistent future instance deployment.
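
As a small illustration of practices 2 and 3, here is a hedged CloudFormation sketch (the AMI ID, instance type, and tag values are placeholders) that provisions an instance with Termination Protection enabled and custom tracking tags:

```yaml
# Hypothetical sketch: an instance with Termination Protection
# enabled and custom tags for resource tracking.
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder approved AMI
      InstanceType: t3.micro           # placeholder instance type
      DisableApiTermination: true      # Termination Protection
      Tags:
        - Key: CostCenter
          Value: web-team              # placeholder tag values
        - Key: Environment
          Value: production
```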

See how Trend Micro can support your part of the shared responsibility model for cloud security: https://www.trendmicro.com/cloudconformity.

Stay Safe!

Cloud-First but Not Cloud-Only: Why Organizations Need to Simplify Cybersecurity

By Wendy Moore

The global public cloud services market is on track to grow 17% this year, topping $266 billion. These are impressive figures, and whatever Covid-19 may do short-term to the macro-economy, they’re a sign of where the world is heading. But while many organizations may describe themselves as “cloud-first”, they’re certainly not “cloud-only.” That is, hybrid cloud is the name of the game today: a blend of multiple cloud providers and multiple datacenters.

Whilst helping to drive agility, differentiation and growth, this new reality also creates cyber risk. As IT leaders try to chart a course for success, they’re crying out for a more holistic, simpler way to manage hybrid cloud security.

Cloud for everyone

Organizations are understandably keen to embrace cloud platforms. Who wouldn’t want to empower employees to be more productive and DevOps to deliver agile, customer-centric services? But digital transformation comes with its own set of challenges. Migration often happens at different rates throughout an organization. That makes it hard to gain unified visibility across the enterprise and manage security policies in a consistent manner — especially when different business units and departments are making siloed decisions. An estimated 85% of organizations are now using multiple clouds, and 76% are using between two and 15 hybrid clouds.

To help manage this complexity, organizations are embracing containers and serverless architectures to develop new applications more efficiently. However, the DevOps teams using these technologies are focused primarily on time-to-market, sometimes at the expense of security. Their use of third-party code is a classic example, potentially exposing the organization to buggy or even malware-laden code.

A shared responsibility

The question is, how do you mitigate these risks in a way that respects the shared responsibility model of cloud security, and do so consistently across the organization? It’s a problem exacerbated by two further concerns.

First, security needs to be embedded in the DevOps process to ensure that the applications delivered are secure, but not in a way that threatens the productivity of teams. They need to be able to use the tools and platforms they want to, but in a way that doesn’t expose the organization to unnecessary extra risk. Second, cloud complexity can often lead to human error: misconfigurations of cloud services that threaten to expose highly regulated customer and corporate data to possible attacks. The Capital One data breach, which affected an estimated 100 million consumers, was caused partly by a misconfigured Web Application Firewall.

Simplifying security

Fortunately, organizations are becoming more mature in their cloud security efforts. We see customers that started off tackling cyber risk with multiple security tools across the enterprise, but in time developed an operational excellence model. By launching what amount to cloud centers of excellence, they’re showing that security policies and processes can be standardized and rolled out in a repeatable way across the organization to good effect.

But what of the tools security teams are using to achieve this? Unfortunately, in too many cases they’re relying on fragmented, point products which add cost, further complexity and dangerous security gaps to the mix. It doesn’t have to be like this.

Cloud One from Trend Micro brings together workload security, container security, application security, network security, file storage security, and cloud security posture management (CSPM). The latter, Cloud One – Conformity, offers a simple, automated way to spot and fix misconfigurations and enhance security compliance and governance in the cloud.

Whatever stage of maturity you are at with your cloud journey, Cloud One offers simple, automated protection from a single console. It’s simply the way cloud security needs to be.

Principles of a Cloud Migration – From Step One to Done

By Jason Dablow

Boiling the ocean with the subject, sous-vide deliciousness with the content.

Cloud migrations are happening every day. Analysts predict over 75% of mid-to-large enterprises will migrate a workload to the cloud by 2021 – but how can you make sure your migration is successful? Success depends not just on IT teams, operations, and security, but also on business leaders, finance, and many other parts of your business. In this multi-part series, I’ll explore best practices, forward thinking, and use cases around creating a successful cloud migration from multiple perspectives. Whether you’re a builder in the cloud or an executive overseeing the transformation, you’ll learn from my firsthand experience and knowledge of how to bring value to your cloud migration project.

Here are just a few advantages of a cloud migration:

  • Technology benefits like scalability, high availability, simplified infrastructure maintenance, and an environment compliant with many industry certifications
  • The ability to switch from a CapEx to an OpEx model
  • Leaving the cost of a data center behind

While there can certainly be several perils associated with your move, with careful planning and a company-wide focus you can make your first step into the cloud a successful one. And that company focus is an important step to understand: the business needs to adopt the same agility the cloud provides by continuing to learn, grow, and adapt to this new environment. The Phoenix Project and The Unicorn Project are excellent examples that show the need for, and the steps of, a successful business transformation.

To start us off, let’s take a look at some security concepts that will help you secure your journey into this new world. My webinar on Principles to Make Your Cloud Migration Journey Secure is a great place to start: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html

Cloud Transformation Is The Biggest Opportunity To Fix Security

By Greg Young (Vice President for Cybersecurity)

This overview builds on the recent report from Trend Micro Research on cloud-specific security gaps, which can be found here.

Don’t be cloud-weary. Hear us out.

Recently, a major tipping point was reached in the IT world: more than half of new IT spending went to cloud over non-cloud. Rather than being the exception, cloud-based operations have become the rule.

However, too many security solutions and vendors still treat the cloud like an exception – or at least not as a primary use case. The approach remains “and cloud” rather than “cloud and.”

Attackers have made this transition. Criminals know that business security is generally behind the curve with its approach to the cloud and take advantage of the lack of security experience surrounding new cloud environments. This leads to ransomware, cryptocurrency mining and data exfiltration attacks targeting cloud environments, to name a few.

Why Cloud?

There are many reasons why companies transition to the cloud. Lower costs, improved efficiencies and faster time to market are some of the primary benefits touted by cloud providers.

These benefits come with common misconceptions. While efficiency and time to market can be greatly improved by transitioning to the cloud, this does not happen overnight. It can take years to move complete data centers and operational applications to the cloud, and the benefits won’t be fully realized until the majority of functional data has been transitioned.

Misconfiguration at the User Level is the Biggest Security Risk in the Cloud

Cloud providers have built-in security measures that leave many system administrators, IT directors, and CTOs feeling content with the security of their data. We’ve heard it many times: “My cloud provider takes care of security, why would I need to do anything additional?”

This way of thinking ignores the shared responsibility model for security in the cloud. While cloud providers secure the platform as a whole, companies are responsible for the security of their data hosted in those platforms.

Misunderstanding the shared responsibility model leads to the No. 1 security risk associated with the cloud: Misconfiguration.

You may be thinking, “But what about ransomware and cryptomining and exploits?” Those other attack types are primarily possible when one of the three misconfigurations below is present.

You can forget about the worst-case, overly complex attacks: misconfigurations are the greatest risk and should be the No. 1 concern. These misconfigurations fall into three categories:

  1. Misconfiguration of the native cloud environment
  2. Not securing equally across multi-cloud environments (i.e. different brands of cloud service providers)
  3. Not securing equally to your on-premises (non-cloud) data centers

How Big is The Misconfiguration Problem?

Trend Micro Cloud One™ – Conformity identifies an average of 230 million misconfigurations per day.

To further understand the state of cloud misconfigurations, Trend Micro Research recently investigated cloud-specific cyber attacks. The report found a large number of websites partially hosted in world-writable cloud-based storage systems. Despite these environments being secure by default, settings can be manually changed to allow more access than actually needed.

These misconfigurations are typically put in place without knowing the potential consequences. But once in place, it is simple to scan the internet to find this type of misconfiguration, and criminals are exploiting them for profit.

Why Do Misconfigurations Happen?

The risk of misconfigurations may seem obvious in theory, but in practice, overloaded IT teams are often simply trying to streamline workflows to make internal processes easier. So, settings are changed to give read and/or write access to anyone in the organization with the necessary credentials. What is not realized is that this level of exposure can be found and exploited by criminals.

We expect this trend will increase in 2020, as more cloud-based services and applications gain popularity with companies using a DevOps workflow. Teams are likely to misconfigure more cloud-based applications, unintentionally exposing corporate data to the internet – and to criminals.

Our prediction is that through 2025, more than 75% of successful attacks on cloud environments will be caused by missing or misconfigured security by cloud customers rather than cloud providers.

How to Protect Against Misconfiguration

Nearly all data breaches involving cloud services have been caused by misconfigurations. This is easily preventable with some basic cyber hygiene and regular monitoring of your configurations.

Your data and applications in the cloud are only as secure as you make them. There are enough tools available today to make your cloud environment – and the majority of your IT spend – at least as secure as your non-cloud legacy systems.

You can secure your cloud data and applications today, especially knowing that attackers are already cloud-aware and delivering vulnerabilities as a service. Here are a few best practices for securing your cloud environment:

  • Employ the principle of least privilege: access is only given to users who need it, rather than leaving permissions open to anyone (a minimal sketch follows this list).
  • Understand your part of the shared responsibility model: while cloud service providers have built-in security, the companies using their services are responsible for securing their data.
  • Monitor your cloud infrastructure for misconfigured and exposed systems: tools are available to identify misconfigurations and exposures in your cloud environments.
  • Educate your DevOps teams about security: security should be built into the DevOps process.
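
To illustrate the first practice, here is a hedged CloudFormation sketch of a least-privilege policy (the bucket name is a placeholder): it grants read-only access to a single bucket instead of leaving permissions open to anyone.

```yaml
# Hypothetical sketch: least-privilege policy granting read-only
# access to one bucket rather than broad, open permissions.
Resources:
  ReadOnlyReportsPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - s3:GetObject
              - s3:ListBucket
            Resource:
              - arn:aws:s3:::example-reports-bucket
              - arn:aws:s3:::example-reports-bucket/*
```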

To read the complete Trend Micro Research report, please visit: https://www.trendmicro.com/vinfo/us/security/news/virtualization-and-cloud/exploring-common-threats-to-cloud-security.

For additional information on Trend Micro’s approach to cloud security, click here: https://www.trendmicro.com/en_us/business/products/hybrid-cloud.html.

Cloud Native Application Development Enables New Levels of Security Visibility and Control

By Trend Micro

We are in unique times, and it’s important to support each other in unique ways. Snyk is making a difference with its AllTheTalks.online community effort, and Trend Micro is proud to sponsor this virtual fundraiser and tech conference.

In today’s threat landscape new cloud technologies can pose a significant risk. Applying traditional security techniques not designed for cloud platforms can restrict the high-volume release cycles of cloud-based applications and impact business and customer goals for digital transformation.

When organizations are moving to the cloud, security can be seen as an obstacle. Often, the focus is on replicating security controls used in existing environments, however, the cloud actually enables new levels of visibility and controls that weren’t possible before.

With today’s increased attention on cyber threats, cloud vulnerabilities provide an opportunistic climate for novice and expert hackers alike as a result of dependencies on modern application development tools, and lack of awareness of security gaps in build pipelines and deployment environments.

Public clouds are capable of auditing API calls to the cloud management layer. This gives in-depth visibility into every action taken in your account, making it easy to audit exactly what’s happening, investigate and search for known and unknown attacks and see who did what to identify unusual behavior.
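
On AWS, this audit trail is the role CloudTrail plays. As a hedged sketch (the log bucket name is a placeholder, and the bucket is assumed to already exist with a policy that allows CloudTrail to write to it), turning on account-wide API auditing can look like this:

```yaml
# Hypothetical sketch: capture every management API call in the
# account with a CloudTrail trail.
Resources:
  AccountAuditTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      IsLogging: true
      S3BucketName: audit-logs-bucket   # placeholder, must pre-exist
      IsMultiRegionTrail: true          # audit all regions
      IncludeGlobalServiceEvents: true  # IAM, STS, etc.
      EnableLogFileValidation: true     # tamper evidence for logs
```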

Join Mike Milner, Global Director of Application Security Technology at Trend Micro on Wednesday April 15, at 11:45am EST to learn how to Use Observability for Security and Audit. This is a short but important session where we will discuss the tools to help build your own application audit system for today’s digital transformation. We’ll look at ways of extending this level of visibility to your applications and APIs, such as using new capabilities offered by cloud providers for network mirroring, storage and massive data handling.

Register for a good cause and learn more at https://www.allthetalks.org/.

What do serverless compute platforms mean for security?

By Trend Micro

By Kyle Klassen, Product Manager – Cloud Native Application Security at Trend Micro

Containers provide many great benefits to organizations – they’re lightweight, flexible, add consistency across different environments and scale easily.

One of the characteristics of containers is that they run in dedicated namespaces with isolated resource requirements. General-purpose OSs deployed to run containers might be viewed as overkill, since many of their features and interfaces aren’t needed.

A key tenet of cybersecurity doctrine is to harden platforms by exposing the fewest interfaces and applying the tightest configurations required to run only the required operations.

Developers deploying containers to restricted platforms, or “serverless” containers to the likes of AWS Fargate, should think about security differently: by looking upward, looking left, and looking all around your cloud domain for opportunities to properly secure your cloud native applications. Oh, and don’t forget to look outside. Let me explain…

Looking Upward

As infrastructure, OS, container orchestration and runtimes become the domain of the cloud provider, the user’s primary responsibility becomes securing the containers and applications themselves. This is where Trend Micro Cloud One™, a security services platform for cloud builders, can help Dev and Ops teams better implement build pipeline and runtime security requirements.  Cloud One – Application Security embeds a security library within the application itself to provide defense against web application attacks and to detect malicious activity.

One of the greatest benefits of this technology is that once an application is secured in this manner, it can be deployed anywhere and the protection comes along for the ride. Users can be confident their applications are secure whether deployed in a container on traditional hosts, into EKS on AWS Bottlerocket, serverless on AWS Fargate, or even as an AWS Lambda function!

Looking Left

It’s great that cloud providers are taking security seriously and providing increasingly secure environments within which to deploy your containers. But you need to make sure your containers themselves are not introducing security risks. This can be accomplished with container image scanning to identify security issues before these images ever make it to the production environment.

Enter Deep Security Smart Check – Container Image Scanning, part of the Cloud One offering. Scans must be able to detect more than just vulnerabilities. Developer reliance on code re-use, public images, and third-party contributions means that malware injection into private images is a real concern. Sensitive objects like secrets, keys, and certificates must be found and removed, and assurance against regulatory requirements like PCI, HIPAA, or NIST should be verified before a container image is allowed to run.

Looking All-Around

Imagine taking the effort to ensure your applications, containers, and functions are built securely, comply with strict security regulations, and are deployed into container-optimized cloud environments, only to find out that you’ve still become a victim of an attack! How could this be? One common oversight is failing to recognize the importance of disciplined configuration and management of the cloud resources themselves: you can’t assume they’re secure just because they’re working.

But making sure your cloud services are secure can be a daunting task: your environment likely comprises dozens of cloud services, each with as many configuration options, and these environments are complex. Cloud One – Conformity is your cloud security companion and gives you assurance that hidden security issues in your cloud configurations are detected and prioritized. Disabled security options, weak keys, open permissions, encryption options, high-risk exposures, and many, many more best-practice security rules make it easy to conform to security best practices and get the most from your cloud provider services.

Look Outside

All done? Not quite. You also need to think about how the business workflows of your cloud applications ingest files (or malware?). Cloud storage services like S3 buckets are often used to accept files from external customers and partners. Blindly accepting uploads and pulling them into your workflows is an open door for attack.

Cloud One – File Storage Security incorporates Trend Micro’s best-in-class malware detection technology to identify and remove files infected with malware. As a cloud native application itself, the service deploys easily with deployment templates and runs as a ‘set and forget’ service – automatically scanning new files of any type, any size and automatically removing malware so you can be confident that all of your downstream workflows are protected.

It’s still about Shared Responsibility

Cloud providers will continue to offer security features for deploying cloud native applications – and you should embrace all of this capability.  However, you can’t assume your cloud environment is optimally secure without validating your configurations. And once you have a secure environment, you need to secure all of the components within your control – your functions, applications, containers and workflows. With this practical approach, Trend Micro Cloud One™ perfectly complements your cloud services with Network Security, Workload Security, Application Security, Container Security, File Storage Security and Conformity for cloud posture management, so you can be confident that you’ve got security covered no matter which way you look.

To learn more, visit Trendmicro.com/CloudOne and join our webinar on cloud native application threats: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Cloud-Native-Application-Threats.html

Shift Well-Architecture Left. By Extension, Security Will Follow

By Raphael Bottino, Solutions Architect

A story on how Infrastructure as Code can be your ally on Well-Architecting and securing your Cloud environment

By Raphael Bottino, Solutions Architect — first posted as a medium article
Using Infrastructure as Code (IaC for short) is the norm in the cloud. CloudFormation, CDK, Terraform, Serverless Framework, ARM… the options are endless! And there are so many options because IaC makes total sense: it allows architects and DevOps engineers to version the application infrastructure just as developers already version their code. Any bad change, whether to application code or infrastructure, can be easily inspected or, even better, rolled back.

For the rest of this article, let’s use CloudFormation as a reference. And, if you are new to IaC, check how to create a new S3 bucket on AWS as code:
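
(A minimal sketch of such a template; the bucket name is just an example.)

```yaml
# A minimal sketch: the simplest S3 bucket expressed as code.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket   # names are globally unique
```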

Pretty simple, right? And you can easily create as many buckets as you need using the above template (if you plan to do so, remove the BucketName line, since names are globally unique on S3!). For sure, way simpler and less prone to human error than clicking a bunch of buttons on AWS console or running commands on CLI.

Well, it’s not that simple…

Although this is a functional and useful CloudFormation template that correctly follows all of CloudFormation’s rules, it doesn’t follow the rules of something bigger and more important: the AWS Well-Architected Framework. This amazing tool is a set of whitepapers describing how to architect on top of AWS from five different views, called pillars: Security, Cost Optimization, Operational Excellence, Reliability, and Performance Efficiency. As the pillar names suggest, an architecture that follows them will be more secure, cheaper, easier to operate, more reliable, and better performing.

Among other issues, this template will generate an S3 bucket that doesn’t have encryption enabled, doesn’t enforce said encryption, and doesn’t log any kind of access to it – all recommended by the Well-Architected Framework. Even worse, these misconfigurations are really hard to catch in production and are not visibly alerted on by AWS. Even the great security tools they provide, such as Trusted Advisor or Security Hub, won’t give an easy-to-spot list of buckets with those misconfigurations. Not for nothing does Gartner state that 95% of cloud security failures will be the customer’s fault¹.

The DevOps movement brought to the masses a methodology of failing fast, which is not exactly compatible with the above scenario, where a failure is often discovered only when unencrypted data is leaked or an access log is required. The question, then, is how to improve? Spoiler alert: the answer lies in the IaC itself 🙂

Shifting Left

Even before making sure a CloudFormation template follows AWS’ own best practices, the first obvious requirement is to make sure the template is valid. A fantastic open-source tool called cfn-lint is made available by AWS on GitHub² and can be easily adopted in any CI/CD pipeline, failing the build if the template is not valid and saving precious time. To shorten the feedback loop even further and fail even faster, the same tool can be adopted in the developer’s IDE³ as an extension, so the template is validated as it is coded. Pretty cool, right? But it still doesn’t help us with the misconfiguration problem we created with that really simple template at the beginning of this post.
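
As one hypothetical wiring (the workflow below is an illustration, assuming your templates live under templates/), cfn-lint can fail a GitHub Actions build whenever a template is invalid:

```yaml
# Hypothetical CI sketch: fail the build on invalid templates.
name: lint-cloudformation
on: [push, pull_request]
jobs:
  cfn-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install cfn-lint
      - run: cfn-lint templates/*.yaml  # non-zero exit fails the build
```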

Conformity⁴ provides, among other capabilities, an API endpoint to scan CloudFormation templates against the Well-Architected Framework, and that’s exactly how I know the template above is not adhering to its best practices. This API can be called from your pipeline, just like cfn-lint. However, I wanted to move this check even further left, just like the cfn-lint extension I mentioned before.

The Cloud Conformity Template Scanner Extension

With that challenge in mind, and needing to scan my own templates for misconfigurations quickly, I came up with a Visual Studio Code extension that leverages Conformity’s API to let the developer scan a template as it is coded. The extension can be found here⁵ or by searching for “Conformity” in your IDE.

After installing it, scanning a template is as easy as running a command in VS Code. Below, it is running against our example template:

This tool allows anyone to shift misconfiguration and compliance checking as far left as possible – right into developers’ hands. To use the extension, you’ll need a Conformity API key. If you don’t have one and want to try it out, Conformity provides a 14-day free trial, no credit card required. If you like it but feel that this period is not enough for you, let me know and I’ll try to make it available to you.

But… What about my bucket template?

Oh, by the way, if you are wondering what an S3 bucket CloudFormation template looks like when it follows the best practices, take a look:
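
(A hedged reconstruction following the practices discussed above: default encryption, access logging, versioning, and public access blocked. Names are placeholders.)

```yaml
# A sketch of a bucket that follows the Well-Architected guidance
# discussed above: default encryption, access logging, versioning,
# and all public access blocked.
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256        # encrypt objects at rest
      LoggingConfiguration:
        DestinationBucketName: my-access-log-bucket   # placeholder
        LogFilePrefix: mybucket-access/
      VersioningConfiguration:
        Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```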

A Well-Architected bucket template

Not as simple, right? That’s exactly why this kind of tool is so powerful: it allows developers to learn as they code, and organizations to fail the deployment of any resource that goes against AWS recommendations.

References

[1] https://www.gartner.com/smarterwithgartner/why-cloud-security-is-everyones-business

[2] https://github.com/aws-cloudformation/cfn-python-lint

[3] https://marketplace.visualstudio.com/items?itemName=kddejong.vscode-cfn-lint

[4] https://www.cloudconformity.com/

[5] https://marketplace.visualstudio.com/items?itemName=raphaelbottino.cc-template-scanner

5 reasons to move your endpoint security to the cloud now

By Chris Taylor

As the world adopts work-from-home initiatives, we’ve seen many organizations accelerate their plans to move from on-premises endpoint security and detection and response (EDR/XDR) solutions to software-as-a-service versions. Several customers who switched to the SaaS version last year recently wrote to tell us how glad they were to have done so as they transitioned to remote work. Here are five reasons to consider moving to a cloud-managed solution:

  1. No internal infrastructure management = less risk

If you haven’t found the time to update your endpoint security software and are one or two versions behind, you are putting your organization at risk of attack. Older versions do not have the same level of protection against ransomware and file-less attacks. Just as the threats are always evolving, the same is true for the technology built to protect against them.

With Apex One as a Service, you always have the latest version. There are no software patches to apply or Apex One servers to manage; we take care of it for you. If you are working remotely, this is one less task to worry about and fewer servers in your environment that might need your attention.

  2. High availability, reliability

With redundant processes and continuous service monitoring, Apex One as a Service delivers the uptime you need with 99.9% availability. The operations team also proactively monitors for potential issues on your endpoints and, with your prior approval, can fix minor issues with an endpoint agent before they need your attention.

  3. Faster Detection and Response (EDR/XDR)

By transferring endpoint telemetry to a cloud data lake, detection and response activities like investigations and sweeping can be processed much faster. For example, creating a root cause analysis diagram in the cloud takes a fraction of the time, since the data is readily available and can be quickly processed with the compute power of the cloud.

  4. Increased MITRE mapping

The unmatched power of cloud computing also enables analytics across a high volume of events and telemetry to identify a suspicious series of activities. This allows for innovative detection methods as well as additional mapping of techniques and tactics to the MITRE framework. Building the equivalent compute power in an on-premises architecture would be cost prohibitive.

  5. XDR – Combined Endpoint + Email Detection and Response

According to Verizon, 94% of malware incidents start with email. When an endpoint incident occurs, chances are it came from an email message, and you want to know which other users have messages with the same email or attachment in their inboxes. You can ask your email admin to run these searches for you, but that takes time and coordination. As Forrester recognized in the recently published report The Forrester Wave™: Enterprise Detection and Response, Q1 2020:

“Trend Micro delivers XDR functionality that can be impactful today. Phishing may be the single most effective way for an adversary to deliver targeted payloads deep into an infrastructure. Trend Micro recognized this and made its first entrance into XDR by integrating Microsoft office 365 and Google G suite management capabilities into its EDR workflows.”

This XDR capability is available today by combining alerts, logs and activity data of Apex One as a Service and Trend Micro Cloud App Security. Endpoint data is linked with Office 365 or G Suite email information from Cloud App Security to quickly assess the email impact without having to use another tool or coordinate with other groups.

Moving endpoint protection and detection and response to the cloud saves customers enormous time while increasing their protection and capabilities. If you are licensed with our Smart Protection Suites, you already have access to Apex One as a Service, and our support team is ready to help you with your migration. If you are on an older suite, talk to your Trend Micro sales rep about moving to a license which includes SaaS.

 


Principles of a Cloud Migration – Security, The W5H

By Jason Dablow
cloud

Whosawhatsit?! –  WHO is responsible for this anyways?

For as long as cloud providers have been in business, we’ve been discussing the Shared Responsibility Model when it comes to customer operation teams. It defines the different aspects of control, and with that control, comes the need to secure, manage, and maintain.

While I often make an assumption that everyone is already familiar with this model, let’s highlight some of the requirements as well as go a bit deeper into your organization’s layout for responsibility.

During your cloud migration, you'll no doubt come across a variety of cloud services that fit into each of these configurations. From running cloud instances (IaaS) to cloud storage (SaaS), there's a need to apply operational oversight (including security) to each of these based on your level of control of the service. For example, with a cloud instance, since you're still responsible for the operating system and applications, you'll still need a patch management process in place, whereas with file object storage in the cloud, only oversight of permissions and data management is required. I think Mark Nunnikhoven does a great job going into greater detail on the model here: https://blog.trendmicro.com/the-shared-responsibility-model/.

shared responsibility model

I’d like to zero in on some of the other “WHO”s that should be involved in security of your cloud migration.

InfoSec – I think this is the obvious mention here. Responsible for all information security within an organization. Since your cloud migration is working with "information", InfoSec needs to be involved in how they get access to monitor the security and risk associated with the organization.

Cloud Architect – Another no-brainer in my eyes but worth a mention; if you’re not building a secure framework with a look beyond a “lift-and-shift” initial migration, you’ll be doomed with archaic principles leftover from the old way of doing things. An agile platform built for automating every operation including security should be the focus to achieving success.

IT / Cloud Ops – This may be the same or different teams. As more and more resources move to the cloud, an IT team will have fewer responsibilities for the physical infrastructure, since it's now operated by a cloud provider. They will need to go through a "migration" themselves to learn new skills to operate and secure a hybrid environment. This adaptation of new skills needs to be led by…

Leadership – Yes, leadership plays an important role in operations and security even if they aren't part of the CIO / CISO / COO branch. While I'm going to cringe while I type it, business transformation is a necessary step as you move along your cloud migration journey. The acceleration that the cloud provides cannot be stifled by legacy operation and security ideologies. Every piece of the business needs to be involved in accelerating the value you're delivering to your customer base by implementing agile processes, including automation, into the operations and security of your cloud.

With all of your key players focused on a successful cloud migration, regardless of what stage you’re in, you’ll reach the ultimate stage: the reinvention of your business where operational and security automation drives the acceleration of value delivered to your customers.

This blog is part of a multi-part series dealing with the principles of a successful cloud migration.  For more information, start at the first post here: https://blog.trendmicro.com/principles-of-a-cloud-migration-from-step-one-to-done/


Trend Micro Integrates with Amazon AppFlow

By Trend Micro

The acceleration of in-house development enabled by public cloud and Software-as-a-Service (SaaS) platform adoption in the last few years has given us new levels of visibility and access to data. Putting all of that data together to generate insights and action, however, can substitute one challenge for another.

Proprietary protocols and inconsistent fields and formatting, combined with interoperability and connectivity hurdles, can turn the process of answering simple questions into a major undertaking. When this undertaking is a recurring requirement, that effort can seem overwhelming.

Nowhere is this more evident than in security teams, where writing code to integrate technologies is rarely a core competency and almost never a core project, but when a compliance or security event requires explanation, finding and making sense of that data is necessary.

Amazon is changing that with the release of AppFlow. Trend Micro Cloud One is a launch partner with this new service, enabling simple data retrieval from your Cloud One dashboard to be fed into AWS services as needed.

Amazon AppFlow is an application integration service that enables you to securely transfer data between SaaS applications and AWS services in just a few clicks. With AppFlow, you can create data flows between supported SaaS applications, including Trend Micro, and AWS services like Amazon S3 and Amazon Redshift, and run flows on a schedule, in response to a business event, or on demand. Data transformation capabilities, such as data masking, validation, and filtering, empower you to enrich your data as part of the flow itself, without the need for post-transfer manipulation. AppFlow keeps data secure in transit and at rest, with the flexibility to bring your own encryption keys.
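
To make the mechanics concrete, here is a minimal sketch using the AWS SDK for Python (boto3), which exposes Amazon AppFlow as an appflow client. The connector profile name, source object, and bucket are placeholders rather than values from a real environment:

```python
import boto3

appflow = boto3.client("appflow")

# On-demand flow from a Trend Micro connector profile into an S3 bucket.
# The profile name, source object, and bucket are placeholders.
appflow.create_flow(
    flowName="cloud-one-audit-export",
    triggerConfig={"triggerType": "OnDemand"},
    sourceFlowConfig={
        "connectorType": "Trendmicro",
        "connectorProfileName": "my-cloud-one-profile",
        "sourceConnectorProperties": {"Trendmicro": {"object": "securityEvents"}},
    },
    destinationFlowConfigList=[
        {
            "connectorType": "S3",
            "destinationConnectorProperties": {
                "S3": {"bucketName": "example-audit-bucket"}
            },
        }
    ],
    # Copy every source field through unchanged.
    tasks=[{"sourceFields": [], "taskType": "Map_all", "taskProperties": {}}],
)

# Kick the flow off immediately rather than waiting for a schedule.
appflow.start_flow(flowName="cloud-one-audit-export")
```

A scheduled flow would swap the OnDemand trigger for a Scheduled one, which is how the recurring reports discussed below can run without user input.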

Audit automation

Any regularly scheduled export or query of Cloud One requires data manipulation before an audit can be performed.

You may be responsible for weekly or monthly reports on the state of your security agents. To create this report today, you’ve written a script to automate the data analysis process. However, any change to the input or output requires new code to be written for your script, and you have to find somewhere to actually run the script for it to work.

As part of a compliance team, this isn’t something you really have time for and may not be your area of expertise, so it takes significant effort to create the required audit report.

Using Amazon AppFlow, you can create a private data flow between Amazon Redshift, for example, and your Cloud One environment to automatically and regularly retrieve data describing security policies in an easy-to-digest format that can be stored for future review. Data flows can also be scheduled, so regular reports can be produced without recurring user input.

This process also improves integrity and reduces overall effort by having reports always available, rather than needing to develop them in response to a request.

This eliminates the need for custom code and the subsequent frustration from trying to automate this regularly occurring task.

Developer Enablement

Developers don’t typically have direct access to security management consoles or APIs for Cloud One or Deep Security as a Service. However, they may need to retrieve data from security agents or check the state of agents that need remediation. This requires someone from the security team to pull data for the developer each time this situation arises.

While we encourage and enable DevOps cultures working closely with security teams to automate and deploy securely, no one likes unnecessary steps in their workflow. And having to wait on the security team to export data is adding a roadblock to the development team.

Fortunately, Amazon AppFlow solves this issue as well. By setting up a flow between Deep Security as a Service and Amazon S3, the security team can enable developers to easily access the necessary information related to security agents on demand.

This provides direct access to the needed data without expanding access controls for critical security systems.

Security Remediation

Security teams focus on identifying and remediating security alerts across all their tools and multiple SaaS applications. This often leads to collaborating with other teams across the organization on application-specific issues that must be resolved. Each system and internal team has different requirements and they all take time and attention to ensure everything is running smoothly and securely.

At Trend Micro, we are security people too. We understand the need to quickly and reliably scale infrastructure without compromising its security integrity. We also know that this ideal state is often hindered by the disparate nature of the solutions on which we rely.

Integrating Amazon AppFlow with your Cloud One – Workload Security solution allows you to obtain the security status from each agent and deliver it to the relevant development or cloud team. Data from all machines and instances can be sent on demand to the Amazon S3 bucket you indicate. As an added bonus, Amazon S3 can trigger an AWS Lambda function to automate how the data is processed, so what lands in the storage bucket can be immediately useful. And all of this data is secured in transit and at rest by default, so you don't have to worry about maintaining an additional layer of security controls.
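
As a hedged sketch of that last step, this is roughly what the S3-triggered Lambda function could look like in Python; the agentStatus and hostName field names are hypothetical and depend entirely on how the flow is configured:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Process agent-status records exported to S3 by an AppFlow flow.

    Triggered by an s3:ObjectCreated event notification. The agentStatus
    and hostName fields are hypothetical; the real schema depends on how
    the flow and its field mappings are configured.
    """
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Assumes newline-delimited JSON, one record per agent.
        for line in body.decode("utf-8").splitlines():
            agent = json.loads(line)
            if agent.get("agentStatus") != "active":
                # Hand off to the right team, e.g. via SNS or a ticket queue.
                print(f"Agent needs attention: {agent.get('hostName')}")
```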

Easy and secure remediation that doesn’t slow anyone down is the goal we’re collectively working toward.

It is always our goal to help your business securely move to and operate in the cloud. Our solutions are designed to enable security teams to seamlessly integrate with a DevOps environment, removing the “roadblock” of security.

As always, we’re excited to be part of this new Amazon service, and we believe our customers can see immediate value by leveraging Amazon AppFlow with their existing Trend Micro cloud solutions.


Principles of a Cloud Migration – Security, The W5H – Episode WHAT?

By Jason Dablow
cloud

Teaching you to be a Natural Born Pillar!

Last week, we took you through the “WHO” of securing a cloud migration here, detailing each of the roles involved with implementing a successful security practice during a cloud migration. Read: everyone. This week, I will be touching on the “WHAT” of security; the key principles required before your first workload moves.  The Well-Architected Framework Security Pillar will be the baseline for this article since it thoroughly explains security concepts in a best practice cloud design.

If you are not familiar with the AWS Well-Architected Framework, go google it right now. I can wait. I’m sure telling readers to leave the article they’re currently reading is a cardinal sin in marketing, but it really is important to understand just how powerful this framework is. Wait, this blog is html ready – here’s the link: https://wa.aws.amazon.com/index.en.html. It consists of five pillars that include best practice information written by architects with vast experience in each area.

Since the topic here is Security, I’ll start by giving a look into this pillar. However, I plan on writing about each and as I do, each one of the graphics above will become a link. Internet Magic!

There are seven principles as a part of the security framework, as follows:

  • Implement a strong identity foundation
  • Enable traceability
  • Apply security at all layers
  • Automate security best practices
  • Protect data in transit and at rest
  • Keep people away from data
  • Prepare for security events

Now, a lot of these principles can be addressed by using native cloud services, and these are usually the easiest to implement. One thing the framework does not give you is suggestions on how to set up or configure these services. While it might reference turning on multi-factor authentication as a necessary step for your identity and access management policy, MFA is not on by default. The same goes for file object encryption: it is there for you to use, but not necessarily enabled on the buckets and objects you create.
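
To make both examples concrete, here is a small boto3 sketch that flips the switches yourself: it turns on default encryption for a placeholder bucket and lists IAM users with no MFA device enrolled. Treat it as an illustration, not a complete hardening script:

```python
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")

# Default encryption exists as a feature, but you have to turn it on for
# the buckets you create. The bucket name is a placeholder.
s3.put_bucket_encryption(
    Bucket="example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Likewise, MFA is only as universal as your enrollment: list IAM users
# who still have no MFA device registered.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        if not iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]:
            print(f"No MFA device: {user['UserName']}")
```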

Here is where I make a super cool (and free) recommendation on technology to accelerate your learning about these topics. We have a knowledge base with hundreds of cloud rules mapped to the Well-Architected Framework (and others!) to help accelerate your knowledge during and after your cloud migration. Let us take the use case above on multi-factor authentication. Our knowledge base article here details the four R’s: Risk, Reason, Rationale, and References on why MFA is a security best practice.

Starting with a Risk Level and detailing out why this presents a threat to your configurations is a great way to begin prioritizing findings. It also includes the different compliance mandates and the Well-Architected pillar (obviously Security in this case), as well as descriptive links to the different frameworks to get even more details.

The reason this knowledge base rule is in place is also included. This gives you and your teams context for the rule and helps further drive your posture during your cloud migration. A sample Reason for our MFA use case is as follows:

“As a security best practice, it is always recommended to supplement your IAM user names and passwords by requiring a one-time passcode during authentication. This method is known as AWS Multi-Factor Authentication and allows you to enable extra security for your privileged IAM users. Multi-Factor Authentication (MFA) is a simple and efficient method of verifying your IAM user identity by requiring an authentication code generated by a virtual or hardware device on top of your usual access credentials (i.e. user name and password). The MFA device signature adds an additional layer of protection on top of your existing user credentials making your AWS account virtually impossible to breach without the unique code generated by the device.”

If Reason is the "what" of the rule, Rationale is the "why," supplying you with the need for adoption. Again, perfect for confirming your cloud migration path and strategy along the way.

“Monitoring IAM access in real-time for vulnerability assessment is essential for keeping your AWS account safe. When an IAM user has administrator-level permissions (i.e. can modify or remove any resource, access any data in your AWS environment and can use any service or component – except the Billing and Cost Management service), just as with the AWS root account user, it is mandatory to secure the IAM user login with Multi-Factor Authentication.

Implementing MFA-based authentication for your IAM users represents the best way to protect your AWS resources and services against unauthorized users or attackers, as MFA adds extra security to the authentication process by forcing IAM users to enter a unique code generated by an approved authentication device.”

Finally, all the references for the risk, reason, and rationale are included at the bottom, which helps provide additional clarity. You'll also notice remediation steps, the fifth 'R,' when applicable, which show you how to actually correct the problem.

All of this data is included to the community as Trend Micro continues to be a valued security research firm helping the world be safe for exchanging digital information. Explore all the rules we have available in our public knowledge base: https://www.cloudconformity.com/knowledge-base/.

This blog is part of a multi-part series dealing with the principles of a successful cloud migration.  For more information, start at the first post here: https://blog.trendmicro.com/principles-of-a-cloud-migration-from-step-one-to-done/


Principles of a Cloud Migration – Security W5H – The When

By Jason Dablow
cloud

If you have to ask yourself when to implement security, you probably need a time machine!

Security is as important to your migration as the actual workload you are moving to the cloud. Read that again.

It is essential to plan and integrate security at every single layer of both architecture and implementation. What I mean is that if you're doing a disaster recovery migration, you need to make sure that security is ready for the infrastructure (your shiny new cloud space) as well as the operations supporting it. Will your current security tools be effective in the cloud? Will they still be able to do their job there? Do your teams have a method of gathering the same security data from the cloud? More importantly, if you're doing an application migration to the cloud, when you actually implement security also means a lot for your cost optimization.

NIST Planning Report 02-3

In this graph, it's easy to see that the earlier you can find and resolve security threats, the more you lessen the workload of infosec and the more you reduce your costs of resolution. This can be achieved through a combination of tools and processes that empower development to take on security tasks sooner. I've also witnessed, time and time again, friction between security and application teams, often resulting in Shadow IT projects and an overall lack of visibility and trust.

Start there. Start with bringing these teams together, uniting them under a common goal: Providing value to your customer base through agile secure development. Empower both teams to learn about each other’s processes while keeping the customer as your focus. This will ultimately bring more value to everyone involved.

At Trend Micro, we’ve curated a number of security resources designed for DevOps audiences through our Art of Cybersecurity campaign.  You can find it at https://www.trendmicro.com/devops/.

Also highlighted on this page is Mark Nunnikhoven’s #LetsTalkCloud series, which is a live stream series on LinkedIn and YouTube. Seasons 1 and 2 have some amazing content around security with a DevOps focus – stay tuned for Season 3 to start soon!

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html.

Also, feel free to give me a follow on LinkedIn for additional security content to use throughout your cloud journey!


Principles of a Cloud Migration – Security W5H – The WHERE

By Jason Dablow
cloud

“Wherever I go, there I am” -Security

I recently had a discussion with a large organization that had a few workloads in multiple clouds while assembling a cloud security focused team to build out their security policy moving forward. It's one of my favorite conversations to have, since I'm not just talking about Trend Micro solutions and how they can help organizations be successful, but more about how a business approaches the creation of its security policy to achieve a successful center of operational excellence. While I will talk more about the COE (center of operational excellence) in a future blog series, I want to dive into the core of the discussion – where do we add security in the cloud?

We started discussing how to secure these new cloud native services like hosted services, serverless, container infrastructures, etc., and how to add these security strategies into their ever-evolving security policy.

Quick note: If your cloud security policy is not ever-evolving, it’s out of date. More on that later.

A colleague and friend of mine, Bryan Webster, presented a concept that traditional security models have always been about three things: Best Practice Configuration for Access and Provisioning, Walls that Block Things, and Agents that Inspect Things. We have relied heavily on these principles since the first computer was connected to another. I present to you the handy graphic he used to illustrate the last two points.

But as we move to secure cloud native services, some of these are outside our walls, and some don't allow you to install an agent. So WHERE does security go now?

Actually, it's not all that different – what changes is how it's deployed and implemented. Start by removing the thinking that security controls are tied to specific implementations. You don't need an intrusion prevention wall that's a hardware appliance, much like you don't need an agent installed to do anti-malware. There will also be a big focus on your configuration, permissions, and other best practices. Use security benchmarks like the AWS Well-Architected Framework, CIS, and SANS to help build an adaptable security policy that can meet the needs of the business moving forward. You might also want to consider consolidating technologies into a cloud-centric service platform like Trend Micro Cloud One, which enables builders to protect their assets regardless of what's being built. Need IPS for your serverless functions or containers? Try Cloud One Application Security! Do you want to push security further left into your development pipeline? Take a look at Trend Micro Container Security for pre-runtime container scanning, or Cloud One Conformity for helping developers scan your Infrastructure as Code.

Keep in mind – wherever you implement security, there it is. Make sure that it’s in a place to achieve the goals of your security policy using a combination of people, process, and products, all working together to make your business successful!

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html.

Also, feel free to give me a follow on LinkedIn for additional security content to use throughout your cloud journey!


Is Cloud Computing Any Safer From Malicious Hackers?

By Rob Maynard

Cloud computing has revolutionized the IT world, making it easier for companies to deploy infrastructure and applications and deliver their services to the public. The idea of not spending millions of dollars on equipment and facilities to host an on-premises data center is a very attractive prospect to many. And certainly, moving resources to the cloud just has to be safer, right? The cloud provider is going to keep our data and applications safe for sure. Hackers won't stand a chance. Wrong. I hear this delusion from customers more often than I should. The truth of the matter is, without proper configuration, the right skillsets administering the cloud presence, and common-sense security practices, cloud services are just as (if not more) vulnerable.

The Shared Responsibility Model

Before going any further, we need to discuss the shared responsibility model of the cloud service provider and user.

When planning your migration to the cloud, one needs to be aware of which responsibilities belong to which entity. As the chart above shows, the cloud service provider is responsible for the security of the cloud infrastructure, including its physical security. By contrast, the customer is responsible for their own data, the security of their workloads (all the way to the OS layer), and the internal network within the company's VPCs.

One more important aspect that remains in the hands of the customer is access control. Who has access to what resources? This is really no different than it's been in the past, the exception being that the physical security of the data center is handled by the CSP rather than on-premises staff, but the company (specifically IT and IT security) is responsible for locking down those resources efficiently.

Many times, this shared responsibility model is overlooked, and poor assumptions are made about the security of a company's resources. Chaos ensues, and probably a firing or two.

So now that we have established the shared responsibility model and that the customer is responsible for their own resource and data security, let’s take a look at some of the more common security issues that can affect the cloud.

Amazon S3 

Amazon S3 is a truly great service from Amazon Web Services. Storing data, hosting static sites, and creating storage for applications are all widely used cases for this service. S3 buckets are also a prime target for malicious actors, since many times they end up misconfigured.

One such instance occurred in 2017 when Booz Allen Hamilton, a defense contractor for the United States, was pillaged of battlefield imagery as well as administrator credentials to sensitive systems.

Yet another instance occurred in 2017 when, due to an insecure Amazon S3 bucket, the records of 198 million American voters were exposed. If you're reading this, there's a good chance this breach affected you.

A more recent breach of an Amazon S3 bucket (and I use the word "breach" loosely; most of these instances were the result of poor configuration and public exposure, not a hacker breaking in using sophisticated techniques) had to do with the cloud storage provider Data Deposit Box. Utilizing Amazon S3 buckets for storage, a configuration issue caused the leak of more than 270,000 personal files as well as the personally identifiable information (PII) of its users.
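
Most of these incidents would have been caught by a routine configuration check. As a rough sketch of what that can look like with boto3, the following flags buckets that lack a full public access block; it is a starting point for review, not a complete audit:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Flag any bucket that lacks a full public access block, the kind of
# configuration gap behind the incidents described above.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise
        fully_blocked = False  # No block configured at all.
    if not fully_blocked:
        print(f"Review public access settings on bucket: {name}")
```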

One last thing to touch on regarding cloud file storage: many organizations use Amazon S3 to store data uploaded by customers, as a place to stage it for processing by other parts of the application. The problem here is, how do we know whether what's being uploaded is malicious? This question comes up again and again as I speak with customers and peers in the IT world.

API

APIs are great. They allow you to interact with programs and services in a programmatic and automated way. When it comes to the cloud, APIs allow administrators to interact with services, and in fact, they are a cornerstone of all cloud services, since they allow the different services to communicate. As with anything in this world, this also opens a world of danger.

Let’s start with the API gateway, a common construct in the cloud to allow communication to backend applications. The API gateway itself is a target, because it can allow a hacker to manipulate the gateway, and allow unwanted traffic through. API gateways were designed to be integrated into applications. They were not designed for security. This means untrusted connections can come into said gateway and perhaps retrieve data that individual shouldn’t see. Likewise, the API requests to the gateway can come with malicious payloads.

Another attack that can affect your API gateway, and likewise the application behind it, is a DDoS attack. The common answer to defend against this is a Web Application Firewall (WAF). The problem is that WAFs struggle to deal with low-and-slow DDoS attacks, because the steady stream of requests looks like normal traffic. A really effective way to deter DDoS attacks at the API gateway, however, is to limit the number of requests for each method.

A great way to prevent API attacks lies in the configuration. Denying anonymous access is huge. Likewise, rotating tokens, passwords, and keys limits the chance that valid credentials can be abused, as does disabling any type of clear-text authentication. Furthermore, enforcing SSL/TLS encryption and implementing multi-factor authentication are great deterrents.
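
For the request-limiting piece specifically, Amazon API Gateway usage plans can cap rates per stage and per method. A minimal boto3 sketch, with a placeholder API ID, stage, and limits:

```python
import boto3

apigw = boto3.client("apigateway")

# Cap steady-state and burst request rates so a low-and-slow flood
# cannot quietly exhaust the backend. IDs and limits are placeholders.
apigw.create_usage_plan(
    name="per-client-throttle-example",
    throttle={"rateLimit": 100.0, "burstLimit": 50},
    apiStages=[
        {
            "apiId": "a1b2c3d4e5",
            "stage": "prod",
            # Optional per-method override, keyed by resource path and verb.
            "throttle": {"/upload/POST": {"rateLimit": 10.0, "burstLimit": 5}},
        }
    ],
)
```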

Compute

No cloud service would be complete without compute resources: the virtual machines an organization builds out to host applications and services. This introduces yet another attack surface, and once again, it is not protected by the cloud service provider. It is purely the customer's responsibility.

Many times, in discussing my customers' migrations from an on-premises datacenter to the cloud, one of the common methods is the "lift-and-shift" approach: customers take the virtual machines they have running in their datacenter and simply migrate those machines to the cloud. Now, the question is, what kind of security assessment was done on those virtual machines prior to migrating? Were those machines patched? Were discovered security flaws fixed? In my personal experience, the answer is no. These organizations are simply taking their problems from one location to the next. The security holes still exist and could be exploited, especially if the server is public facing or network policies are improperly applied. A better way to look at this process is "correct-and-lift-and-shift."

Once organizations have established their cloud presence, they will eventually need to deploy new resources, and this can mean developing or building upon a machine image. The most important thing to remember here is that these are computers. They are still vulnerable to malware, so regardless of being in the cloud or not, the same security controls are required, including anti-malware, host IPS, integrity monitoring, and application control, to name a few.

Networking

Cloud services make it incredibly easy to deploy networks, divide them into subnets, and even allow cross-network communication. They also give you the ability to lock down the types of traffic that are allowed to traverse those networks to reach resources. This is where security groups come in. Security groups are configured by people, so there's always the chance that a port is open that shouldn't be, creating a potential vulnerability. It's incredibly important to have a firm grasp on what a compute resource is talking to and why, so the proper security measures can be applied.
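
That kind of review is easy to script. Here is a short boto3 sketch that prints every ingress rule open to the whole internet so each one can be confirmed as intentional:

```python
import boto3

ec2 = boto3.client("ec2")

# Print every ingress rule that is open to the entire internet so each
# one can be confirmed as intentional (e.g., a public web listener).
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = rule.get("FromPort", "all")
                print(f"{group['GroupId']} ({group['GroupName']}): "
                      f"port {port} open to 0.0.0.0/0")
```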

So is the cloud really safe from hackers? It is no safer than anything else unless organizations take security into their own hands and understand where their responsibility begins and the cloud service provider's ends. The arms race between hackers and security professionals is the same as it ever was; only the battleground has changed.


Principles of a Cloud Migration – Security W5H – The HOW

By Jason Dablow
cloud

“How about… ya!”

Security needs to be treated much like DevOps in evolving organizations; everyone in the company has a responsibility to make sure it is implemented. It is not just a part of operations, but a cultural shift toward doing things right the first time: security by default. Here are a few pointers to get you started:

1. Security should be a focus from the top on down

Executives should be thinking about security as a part of the cloud migration project, and not just as a step of the implementation. Security should be top of mind in planning, building, developing, and deploying applications as part of your cloud migration. This is why the Well-Architected Framework has an entire pillar dedicated to security. Use it as a framework to plan and integrate security at each and every phase of your migration.

2. A cloud security policy should be created and/or integrated into existing policy

Start with what you know: least privilege permission models, cloud native network security designs, etc. This will help you start creating a framework for these new cloud resources that will be in use in the future. Your cloud provider and security vendors, like Trend Micro, can help you with these discussions in terms of planning a thorough policy based on the initial migration services that will be used. Remember from my other articles, a migration does not just stop when the workload has been moved. You need to continue to invest in your operation teams and processes as you move to the next phase of cloud native application delivery.
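
As a small illustration of that least-privilege starting point, here is a boto3 sketch that creates a narrowly scoped read-only policy for a single, hypothetical bucket rather than a blanket s3:* grant:

```python
import json

import boto3

iam = boto3.client("iam")

# Least privilege in practice: read-only access to a single, hypothetical
# migration bucket instead of a broad s3:* grant.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-migration-bucket",
                "arn:aws:s3:::example-migration-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="migration-readonly-example",
    PolicyDocument=json.dumps(policy_document),
)
```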

3. Trend Micro’s Cloud One can check off a lot of boxes!

Using a collection of security services, like Trend Micro's Cloud One, can be a huge relief when it comes to applying runtime security controls to your new cloud migration project. Workload Security is already protecting thousands of customers and billions of workload hours within AWS, with security controls like host-based intrusion prevention and anti-malware, along with compliance controls like integrity monitoring and application control. Meanwhile, Network Security can handle all your traffic inspection needs by integrating directly with your cloud network infrastructure, a huge advantage in performance and design over Layer 4 virtual appliances, which require constant changes to route tables and waste money on infrastructure. As you migrate your workloads, continuously check your posture against the Well-Architected Framework using Conformity. You now have your new infrastructure secure and agile, allowing your teams to take full advantage of the newly migrated workloads and begin building the next iteration of your cloud native application design.

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html. To have a more personalized conversation, please add me to LinkedIn!


The Fear of Vendor Lock-in Leads to Cloud Failures

By Trend Micro

 

Vendor lock-in has been an often-quoted risk since the mid-1990s.

The fear is that by investing too much with one vendor, an organization reduces its options in the future.

Was this a valid concern? Is it still today?

 

The Risk

Organizations walk a fine line with their technology vendors. Ideally, you select a set of technologies that not only meet your current need but that align with your future vision as well.

This way, as the vendor’s tools mature, they continue to support your business.

The risk is that if you have all of your eggs in one basket, you lose all of the leverage in the relationship with your vendor.

If the vendor changes directions, significantly increases their prices, retires a critical offering, the quality of their product drops, or if any number of other scenarios happen, you are stuck.

Locking in to one vendor means that the cost of switching to another or changing technologies is prohibitively expensive.

All of these scenarios have happened and will happen again. So it’s natural that organizations are concerned about lock-in.

Cloud Maturity

When the cloud started to rise to prominence, the spectre of vendor lock-in reared its ugly head again. CIOs around the world thought that moving the majority of their infrastructure to AWS, Azure, or Google Cloud would lock them into that vendor for the foreseeable future.

Trying to mitigate this risk, organizations regularly adopt a "cloud neutral" approach, meaning they only use "generic" cloud services that can be found at every provider. Often hidden under the guise of a "multi-cloud" strategy, it's really a hedge so as not to lose position in the vendor/client relationship.

In isolation, that’s a smart move.

Taking a step back and looking at the bigger picture starts to show some of the issues with this approach.

Automation

The first issue is that the heavy use of automation in cloud deployments means vendor "lock-in" is not nearly as significant a risk as it was in past decades. The manual effort required to change the vendor for your storage network used to be monumental.

Now? It’s a couple of API calls and a consumption-based bill adjusted by the megabyte. This pattern is echoed across other resource types.

Automation greatly reduces the cost of switching providers, which reduces the risk of vendor lock-in.

Missing Out

When your organization sets the mandate to only use the basic services (server-based compute, databases, network, etc.) from a cloud service provider, you're missing out on one of the biggest advantages of moving to the cloud: doing less.

The goal of a cloud migration is to remove all of the undifferentiated heavy lifting from your teams.

You want your teams directly delivering business value as much of the time as possible. One of the most direct routes to this goal is to leverage more and more managed services.

Using AWS as an example, you don't want to run your own database servers in Amazon EC2, or even standard RDS, if you can help it. Amazon Aurora and DynamoDB generally offer lower operational impact, higher performance, and lower costs.

When organizations are worried about vendor lock-in, they typically miss out on the true value of cloud: a laser focus on delivering business value.

 

But Multi-cloud…

In this new light, a multi-cloud strategy takes on a different aim. Your teams should be trying to maximize business value (which includes cost, operational burden, development effort, and other aspects) wherever that leads them.

As organizations mature in their cloud usage and use of DevOps philosophies, they generally start to cherry pick managed services from cloud providers that best fit the business problem at hand.

They use automation to reduce the impact if they have to change providers at some point in the future.

This leads to a multi-cloud split that typically falls around 80% in one cloud and 10% in each of the other two. That can vary depending on the situation, but the premise is the same: organizations that thrive have a primary cloud and use other services when and where it makes sense.

 

Cloud Spanning Tools

There are some tools that are more effective when they work in all clouds the organization is using. These tools range from software products (like deployment and security tools) to metrics to operational playbooks.

Following the principles of focusing on delivering business value, you want to actively avoid duplicating a toolset unless it’s absolutely necessary.

The maturity of the tooling in cloud operations has reached the point where it can deliver support to multiple clouds without reducing its effectiveness.

This means automation playbooks can easily support multi-cloud (e.g., Terraform). Security tools can easily support multi-cloud (e.g., Trend Micro Cloud One™). Observability tools can easily support multi-cloud (e.g., Honeycomb.io).

The guiding principle for a multi-cloud strategy is to maximize the amount of business value the team is able to deliver. You accomplish this by becoming more efficient (using the right service and tool at the right time) and by removing work that doesn’t matter to that goal.

In the age of cloud, vendor lock-in should be far down on your list of concerns. Don't let a long-standing fear slow down your teams.


Knowing your shared security responsibility in Microsoft Azure and avoiding misconfigurations

By Trend Micro

 

Trend Micro is excited to launch new Trend Micro Cloud One™ – Conformity capabilities that will strengthen protection for Azure resources.

 

As with any launch, there is a lot of new information, so we decided to sit down with one of the founders of Conformity, Mike Rahmati. Mike is a technologist at heart, with a proven track record of success in the development of software systems that are resilient to failure and grow and scale dynamically through cloud, open-source, agile, and lean disciplines. In the interview, we picked Mike’s brain on how these new capabilities can help customers prevent or easily remediate misconfigurations on Azure. Let’s dive in.

 

What are the common business problems that customers encounter when building on or moving their applications to Azure or Amazon Web Services (AWS)?

The common problem is that there are a lot of tools and cloud services out there. Organizations are looking for tool consolidation and visibility into their cloud environment. Shadow IT and business units spinning up their own cloud accounts are a real challenge for IT organizations to keep on top of. Compliance, security, and governance controls are not necessarily top of mind for business units that are innovating at incredible speeds. That is why it is so powerful to have a tool that can provide visibility into your cloud environment and show where you are potentially vulnerable from a security and compliance perspective.

 

Common misconfigurations on AWS are an open Amazon Elastic Compute Cloud (EC2) instance or a misconfigured IAM policy. What is the equivalent for Microsoft?

The common misconfigurations are actually quite similar to what we've seen with AWS. During the product preview phase, we saw customers with many of the same kinds of misconfiguration issues as we've seen with AWS. For example, Microsoft Azure Blob Storage is the equivalent of Amazon S3, and it is a common source of misconfigurations. We have observed misconfiguration in two main areas: Firewall and Web Application Firewall (WAF), which is equivalent to AWS WAF. The Firewall is similar to networking configuration in AWS, providing inbound protection for non-HTTP protocols and network-related protection for all ports and protocols. It is important to note that this is based on the 100 best practices and 15 services we currently support for Azure (and growing), whereas for AWS we have over 600 best practices in total, with over 70 controls offering auto-remediation.

 

Can you tell me about the CIS Microsoft Azure Foundation Security Benchmark?

We are thrilled to support the CIS Microsoft Azure Foundations Benchmark. It includes automated checks and remediation recommendations for the following: Identity and Access Management, Security Center, Storage Accounts, Database Services, Logging and Monitoring, Networking, Virtual Machines, and App Service. There are over 100 best practices in this framework, and we have rules built to check for all of them to ensure cloud builders are avoiding risk in their Azure environments.
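
To give a feel for what one of these automated checks verifies, here is a hedged sketch using the Azure SDK for Python (azure-identity plus azure-mgmt-storage). The attribute names follow that SDK's storage account model, and the subscription ID is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Two checks in the spirit of the CIS storage account recommendations:
# enforce HTTPS-only transfer and disallow public blob access.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

for account in client.storage_accounts.list():
    if not account.enable_https_traffic_only:
        print(f"{account.name}: secure transfer (HTTPS) is not enforced")
    if account.allow_blob_public_access:
        print(f"{account.name}: public blob access is allowed")
```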

Can you tell me a little bit about the Microsoft Shared Responsibility Model?

In terms of the shared responsibility model, it's very similar to AWS. The security OF the cloud is Microsoft's responsibility, but security IN the cloud is the customer's responsibility. Microsoft's ecosystem is growing rapidly, and there are a lot of services that you need to understand in order to configure them properly. With Conformity, customers only need to know how to properly configure the core services, according to best practices, and then we can help you take it to the next level.

Can you give an example of how the shared responsibility model is used?

Yes. Imagine you have a Microsoft Azure Blob Storage that includes sensitive data. Then, by accident, someone makes it public. The customer might not be able to afford an hour, two hours, or even days to close that security gap.

In just a few minutes, Conformity will alert you to your risk status, provide remediation recommendations, and, for our AWS checks, give you the ability to set up auto-remediation. Auto-remediation can be very helpful, as it can close the gap in near-real time.

What are next steps for our readers?

I’d say that whether your cloud exploration is just taking shape, you’re midway through a migration, or you’re already running complex workloads in the cloud, we can help. You can gain full visibility of your infrastructure with continuous cloud security and compliance posture management. We can do the heavy lifting so you can focus on innovating and growing. Also, you can ask anyone from our team to set you up with a complimentary cloud health check. Our cloud engineers are happy to provide an AWS and/or Azure assessment to see if you are building a secure, compliant, and reliable cloud infrastructure. You can find out your risk level in just 10-minutes.

 

Get started today with a 60-day free trial >

Check out our knowledge base of Azure best practice rules>

Learn more >

 

Do you see value in building a security culture that is shifted left?

Yes, we have done this for our customers using AWS, and it has been very successful. The more we talk about shifting security left the better; I think that's where we help customers build a security culture. Every cloud customer is struggling with implementing security earlier in the development cycle, and they need tools. Conformity is a DevOps- and DevSecOps-friendly tool that helps customers build a security culture that is shifted left.

We help customers shift security left by integrating the Conformity API into their CI/CD pipeline. The product also has preventative controls, which our API and template scanners provide. The idea is we help customers shift security left to identify those misconfigurations early on, even before they’re actually deployed into their environments.

We also help them scan their infrastructure-as-code templates before being deployed into the cloud. Customers need a tool to bake into their CI/CD pipeline. Shifting left doesn’t simply mean having a reporting tool, but rather a tool that allows them to shift security left. That’s where our product, Conformity, can help.

 


8 Cloud Myths Debunked

By Trend Micro

Many businesses have misperceptions about cloud environments, providers, and how to secure it all. We want to help you separate fact from fiction when it comes to your cloud environment.

This list debunks 8 myths to help you confidently take the next steps in the cloud.


Principles of a Cloud Migration

By Jason Dablow
cloud

Development and application teams can be the initial entry point of a cloud migration as they start looking at faster ways to accelerate value delivery. One of the main things they might use during this phase is "Infrastructure as Code," where they create the cloud resources for running their applications using lines of code.

In the video below, recorded as part of a NADOG (North American DevOps Group) event, I describe some additional techniques for how your development staff can incorporate the Well-Architected Framework and other compliance scanning against their Infrastructure as Code before it is launched into your cloud environment.
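
As a taste of what that pre-launch scanning can look like, here is a deliberately minimal Python sketch that parses a CloudFormation template with PyYAML and flags S3 buckets that declare no encryption. It assumes the template avoids short-form intrinsics like !Ref, which plain PyYAML cannot parse, so treat it as illustrative only:

```python
import sys

import yaml  # PyYAML

def scan_template(path: str) -> None:
    """Flag S3 buckets in a CloudFormation template that declare no
    server-side encryption. A toy rule, in the spirit of scanners like
    Cloud One Conformity's template scanner."""
    with open(path) as handle:
        template = yaml.safe_load(handle)

    for name, resource in (template.get("Resources") or {}).items():
        if resource.get("Type") == "AWS::S3::Bucket":
            properties = resource.get("Properties") or {}
            if "BucketEncryption" not in properties:
                print(f"{name}: no BucketEncryption block declared")

if __name__ == "__main__":
    scan_template(sys.argv[1])
```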

If this content has sparked additional questions, please feel free to reach out to me on my LinkedIn. Always happy to share my knowledge of working with large customers on their cloud and transformation journeys!


Perspectives Summary – What You Said

By William "Bill" Malik (CISA VP Infrastructure Strategies)

 

On Thursday, June 25, Trend Micro hosted Perspectives, our two-hour virtual event. As the session progressed, we asked our attendees, drawn from more than 5,000 global registrants, two key questions. This blog analyzes their answers.

 

First, what is your current strategy for securing the cloud?

  • Rely completely on native cloud platform security capabilities (AWS, Azure, Google…) – 33%
  • Add on single-purpose security capabilities (workload protection, container security…) – 13%
  • Add on a security platform with multiple security capabilities for reduced complexity – 54%

 

This result affirms IDC analyst Frank Dickson's observation that most cloud customers will benefit from a suite offering a range of security capabilities covering multiple cloud environments. For the 15% to 20% of organizations that rely on one cloud provider, purchasing a security solution from that vendor may provide sufficient coverage. The quest for point products (which may be best-of-breed, as well) introduces additional complexity across multiple cloud platforms, which can obscure problems, confuse cybersecurity analysts and business users, increase costs, and reduce efficiency. The comprehensive suite strategy complements most organizations' hybrid, multi-cloud approach.

Second, and this is multiple choice, how are you enabling secure digital transformation in the cloud today?

 

This shows that cloud users are open to many available solutions for improving cloud security. The adoption pattern follows traditional on-premises security deployment models. The most commonly cited solution, Network Security/Cloud IPS, recognizes that communication with anything in the cloud requires a trustworthy network. This is a very familiar technique, dating back in the on-premises environment to the introduction of firewalls in the early 1990s from vendors like Check Point, and supported by academic research as found in Cheswick and Bellovin's Firewalls and Internet Security (Addison-Wesley, 1994).

 

The frequency of data exposure due to misconfigured cloud instances surely drives Cloud Security Posture Management adoption, certainly aided by the ease of deployment of tools like Cloud One Conformity.

 

The newness of containers in the production environment most likely explains the relatively lower deployment of container security today.

 

The good news is that organizations do not have to deploy and manage a multitude of point products addressing one problem on one environment. The suite approach simplifies today’s reality and positions the organization for tomorrow’s challenges.

 

Looking ahead, future growth in industrial IoT and increasing deployments of 5G-based public and non-public networks will drive further innovations, increasing the breadth of the suite approach to securing hybrid, multi-cloud environments.

 

What do you think? Let me know @WilliamMalikTM.

 


Risk Decisions in an Imperfect World

By Mark Nunnikhoven (Vice President, Cloud Research)

Risk decisions are the foundation of information security. Sadly, they are also one of the most often misunderstood parts of information security.

This is bad enough on its own but can sink any effort at education as an organization moves towards a DevOps philosophy.

To properly evaluate the risk of an event, two components are required:

  1. An assessment of the impact of the event
  2. The likelihood of the event

Unfortunately, teams—and humans in general—are reasonably good at the first part and unreasonably bad at the second.
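
A quick worked example shows how much the likelihood side matters. Using the classic annualized loss expectancy arithmetic, with entirely invented figures:

```python
# Annualized loss expectancy: impact per event times expected events per
# year. Both figures below are invented for illustration.
impact_per_event = 250_000  # dollars, from the impact assessment
events_per_year = 0.1       # likelihood estimate: once per ten years

print(f"Expected annual loss: ${impact_per_event * events_per_year:,.0f}")

# Misjudging likelihood dominates the answer: the same impact estimated
# at 0.5 events per year yields a risk figure five times larger.
```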

This is a problem.

It’s a problem that is amplified when security starts to integration with teams in a DevOps environment. Originally presented as part of AllTheTalks.online, this talk examines the ins and outs of risk decisions and how we can start to work on improving how our teams handle them.

 


Survey: Employee Security Training is Essential to Remote Working Success

By Trend Micro

Organisations have been forced to adapt rapidly over the past few months as government lockdowns confined most workers to their homes. For many, the changes they've made may even become permanent as more distributed working becomes the norm. This has major implications for cybersecurity. Employees are often described as the weakest link in the corporate security chain, so do they become an even greater liability when working from home?

Unfortunately, a major new study from Trend Micro finds that, although many have become more cyber-aware during lockdown, bad habits persist. CISOs looking to ramp up user awareness training may get a better return on investment if they try to personalize strategies according to specific user personas.

What we found

We polled 13,200 remote workers across 27 countries to compile the Head in the Clouds study. It reveals that 72% feel more conscious of their organisation’s cybersecurity policies since lockdown began, 85% claim they take IT instructions seriously, and 81% agree that cybersecurity is partly their responsibility. Nearly two-thirds (64%) even admit that using non-work apps on a corporate device is a risk.

Yet in spite of these lockdown learnings, many employees are more preoccupied by productivity. Over half (56%) admit using a non-work app on a corporate device, and 66% have uploaded corporate data to it; 39% of respondents “often” or “always” access corporate data from a personal device; and 29% feel they can get away with using a non-work app, as IT-backed solutions are “nonsense.”

This is a recipe for shadow IT and escalating levels of cyber-risk. It also illustrates that current approaches to user awareness training are falling short. In fact, many employees seem to be aware of what best practice looks like, they just choose not to follow it.

Four security personas

This is where the second part of the research comes in. Trend Micro commissioned Dr Linda Kaye, Cyberpsychology Academic at Edge Hill University, to profile four employee personas based on their cybersecurity behaviors: fearful, conscientious, ignorant and daredevil.

Fearful employees may benefit from training simulation tools like Trend Micro's Phish Insight, with real-time feedback from security controls and mentoring.

Conscientious staff require very little training but can be used as exemplars of good behavior, and to team up with “buddies” from the other groups.

Ignorant users need gamification techniques and simulation exercises to keep them engaged in training, and may also require additional interventions to truly understand the consequences of risky behavior.

Daredevil employees are perhaps the most challenging because their wrongdoing is the result not of ignorance but a perceived superiority to others. Organisations may need to use award schemes to promote compliance, and, in extreme circumstances, step up data loss prevention and security controls to mitigate their risky behavior.

By understanding that no two employees are the same, security leaders can tailor their approach in a more nuanced way. Splitting staff into four camps should ensure a more personalized approach than the one-size-fits-all training sessions most organisations run today.

Ultimately, remote working only works if there is a high degree of trust between managers and their teams. Once the pandemic recedes and staff are technically allowed back in the office, that trust will have to be re-earned if they are to continue benefiting from a Work From Home environment.


Beyond the Endpoint: Why Organizations are Choosing XDR for Holistic Detection and Response

By Trend Micro

The endpoint has long been a major focal point for attackers targeting enterprise IT environments. Yet increasingly, security bosses are being forced to protect data across the organization, whether it’s in the cloud, on IoT devices, in email, or on-premises servers. Attackers may jump from one environment to the next in multi-stage attacks and even hide between the layers. So, it pays to have holistic visibility, in order to detect and respond more effectively.

This is where XDR solutions offer a convincing alternative to EDR and point solutions. But unfortunately, not all providers are created equal. Trend Micro separates itself from the pack by providing mature security capabilities across all layers, industry-leading threat intelligence, and an AI-powered analytical approach that produces fewer, higher-fidelity alerts.

Under pressure

It’s no secret that IT security teams today are under extreme pressure. They’re faced with an enemy able to tap into a growing range of tools and techniques from the cybercrime underground. Ransomware, social engineering, fileless malware, vulnerability exploits, and drive-by-downloads, are just the tip of the iceberg. There are “several hundred thousand new malicious programs or unwanted apps registered every day,” according to a new Osterman Research report. It argues that, while endpoint protection must be a “key component” in corporate security strategy, “It can only be one strand” —complemented with protection in the cloud, on the network, and elsewhere.

There’s more. Best-of-breed approaches have saddled organizations with too many disparate tools over the years, creating extra cost, complexity, management headaches, and security gaps. This adds to the workload for overwhelmed security teams.

According to Gartner, “Two of the biggest challenges for all security organizations are hiring and retaining technically savvy security operations staff, and building a security operations capability that can confidently configure and maintain a defensive posture as well as provide a rapid detection and response capacity. Mainstream organizations are often overwhelmed by the intersectionality of these two problems.”

XDR appeals to organizations struggling with all of these challenges as well as those unable to gain value from, or who don’t have the resources to invest in, SIEM or SOAR solutions. So what does it involve?

What to look for

As reported by Gartner, all XDR solutions should fundamentally achieve the following:

  • Improve protection, detection, and response
  • Enhance overall productivity of operational security staff
  • Lower total cost of ownership (TCO) to create an effective detection and response capability

However, the analyst urges IT buyers to think carefully before choosing which provider to invest in. That’s because, in some cases, underlying threat intelligence may be underpowered, and vendors may have gaps in their product portfolios that could create dangerous IT blind spots. Efficacy will be a key metric. As Gartner says, “You will not only have to answer the question of does it find things, but also is it actually finding things that your existing tooling is not.”

A leader in XDR

This is where Trend Micro XDR excels. It has been designed to go beyond the endpoint, collecting and correlating data from across the organization, including email, endpoints, servers, cloud workloads, and networks. With this enhanced context, and the power of Trend Micro’s AI algorithms and expert security analytics, the platform is able to identify threats more easily and contain them more effectively.

Forrester recently recognized Trend Micro as a leader in enterprise detection and response, saying of XDR, “Trend Micro has a forward-thinking approach and is an excellent choice for organizations wanting to centralize reporting and detection with XDR but have less capacity for proactively performing threat hunting.”

According to Gartner, fewer than 5% of organizations currently employ XDR. This means there’s a huge need to improve enterprise-wide protection. At a time when corporate resources are being stretched to the limit, Trend Micro XDR offers global organizations an invaluable chance to minimize enterprise risk exposure whilst maximizing the productivity of security teams.


Automatic Visibility And Immediate Security with Trend Micro + AWS Control Tower

By Trend Micro

Things fail. It happens. A core principle of building well in the AWS Cloud is reliability. Dr. Vogels said it best, “How can you reduce the impact of failure on your customers?” He uses the term “blast radius” to describe this principle.

One of the key methods for reducing blast radius is the AWS account itself. Accounts are free and provide a strong barrier between resources, and thus, failures or other issues. This type of protection and peace of mind helps teams innovate by reducing the risk of running into another team’s work. The challenge is managing all of these accounts in a reasonable manner. You need to strike a balance between providing security guardrails for teams while also ensuring that each team gets access to the resources they need.

AWS Services & Features

There are a number of AWS services and features that help address this need. AWS Organizations, AWS Firewall Manager, IAM Roles, tagging, AWS Resource Access Manager, AWS Control Tower, and more all play a role in helping your team manage multiple accounts.

For this post, we’ll look at AWS Control Tower a little closer. AWS Control Tower was made generally available at AWS re:Inforce. The service provides an easy way to set up and govern AWS accounts in your environment. You can configure strong defaults for all new accounts, pre-populate IAM Roles, and more. Essentially, AWS Control Tower makes sure that any new account starts off on the right foot.

For more on the service, check out this excellent talk from the launch.

Partner Integrations

With almost a year under its belt, AWS Control Tower is now expanding to provide partner integrations. Now, in addition to setting up AWS services and features, you can pre-configure supported APN solutions as well. Trend Micro is among the first partners to support this integration, providing the ability to add Trend Micro Cloud One™ Workload Security and Trend Micro Cloud One™ Conformity to your AWS Control Tower account factory. Once configured, any new account that is created via the factory will automatically be configured in your Trend Micro Cloud One account.

Integration Advantage

This integration not only reduces the friction in getting these key security tools set up, it also provides immediate visibility into your environment. Workload Security will now be able to show you any Amazon EC2 instances or Amazon ECS hosts within your accounts. You’ll still need to install the Workload Security agent and apply a policy to protect these instances, but this initial visibility provides a map for your teams, reducing the time to protection. Conformity will start generating information within minutes, allowing your teams to get a quick handle on their security posture with fast and ongoing security and compliance checks.

Integrating this from the beginning of every new account will allow each team to track their progress against a huge set of recommended practices across all five pillars of the Well-Architected Framework.

What’s Next?

One of the biggest challenges in cloud security is integrating it early in the development process. We know that the earlier security is factored into your builds, the better the result. You can’t get much earlier than the initial creation of an account. That’s why this new integration with AWS Control Tower is so exciting. Having security in every account within your organization from day zero provides much-needed visibility and a fantastic head start.


Cloud Security Is Simple, Absolutely Simple.

By Mark Nunnikhoven (Vice President, Cloud Research)

“Cloud security is simple, absolutely simple. Stop over complicating it.”

This is how I kicked off a presentation I gave at the CyberRisk Alliance, Cloud Security Summit on Apr 17 of this year. And I truly believe that cloud security is simple, but that does not mean easy. You need the right strategy.

As I am often asked about strategies for the cloud, and the complexities that come with it, I decided to share my recent talk with you all. Depending on your preference, you can either watch the video below or read the transcript of my talk that’s posted just below the video. I hope you find it useful and will enjoy it. And, as always, I’d love to hear from you, find me @marknca.

For those of you who prefer to read rather than watch a video, here’s the transcript of my talk:

Cloud security is simple, absolutely simple. Stop over complicating it.

Now, I know you’re probably thinking, “Wait a minute, what is this guy talking about? He is just off his rocker.”

Remember, simple doesn’t mean easy. I think we make things way more complicated than they need to be when it comes to securing the cloud, and this makes our lives a lot harder than they need to be. There’s some massive advantages when it comes to security in the cloud. Primarily, I think we can simplify our security approach because of three major reasons.

The first is integrated identity and access management. All three major cloud providers, AWS, Google and Microsoft offer fantastic identity, and access management systems. These are things that security, and [inaudible 00:00:48] professionals have been clamouring for, for decades.

We finally have this ability, we need to take advantage of it.

The second main area is the shared responsibility model. We’ll cover that more in a minute, but it’s an absolutely wonderful tool to understand your mental model, to realize where you need to focus your security efforts, and the third area that simplifies security for us is the universal application of APIs or application programming interfaces.

These give us as security professionals the ability to orchestrate and automate a huge amount of the grunt work away. These three things add up to, uh, the ability for us to execute a very sophisticated, uh, or very difficult to pull off, uh, security practice, but one that ultimately is actually pretty simple in its approach.

It’s just all the details are hard and we’re going to use these three advantages to make those details simpler. So, let’s take a step back for a second and look at what our goal is.

What is the goal of cybersecurity? That’s not something you hear quite often as a question.

A lot of the time you’ll hear the definition of cybersecurity is, uh, about, uh, securing the confidentiality, integrity, and availability of information or data. The CIA triad, different CIA, but I like to phrase this in a different way. I think the goal is much clearer, and the goal’s much simpler.

It is to make sure that whatever you’re building works as intended and only as intended. Now, you’ll realize you can’t accomplish this goal just as a security team. You need to work with your, uh, developers, you need to work with operations, you need to work with the business units, with the end users of your application as well.

This is a wonderful way of phrasing our goal, and realizing that we’re all in this together to make sure whatever you’re building works as intended, and only as intended.

Now, if we move forward, and we look at who are we up against, who’s preventing our stuff from working, uh, well?

Normally, you think of, uh, who’s attacking our systems? Who are the risks? Is it nation states? Is it maybe insider threats? While these are valid threats, they’re really overblown. You… don’t have to worry about nation state attacks.

If you’re a nation state, worry about it. If you’re not a nation state, you don’t have to worry about it because frankly, there’s nothing you can do to stop them. You can slow them down a little bit, but by definition, they’re going to get through your resources.

As far as insider attacks, this is an HR problem. Treat your people well. Um, check in with them, and have a strong information management policy in place, and you’re going to reduce this threat naturally. If you go hunting for people, you’re going to create the very threats that you’re looking at.

So, it brings us to the next set. What about cyber criminals? You know, we do have to worry about cyber criminals.

Cyber criminals are targeting systems simply because these systems are online, these are profit motivated criminals who are organized, and have a good set of tools, so we absolutely need to worry about them, but there’s a more insidious or more commonplace, maybe a simpler threat that we need to worry about, and that’s one of mistakes.

The vast majority of issues that happen around data breaches, around security vulnerabilities in the cloud, are mistake driven. In fact, to the point where I would not even worry about cyber criminals, simply because all the work we’re going to do to focus on, uh, preventing mistakes.

And catching, and rectifying those mistakes really, really quickly is going to, uh, cover all the stuff that we would have done to block out cyber criminals as well, so mistakes are very common because people are using a lot more services in the cloud.

You have a lot more, um, moving parts and, uh, complexity in your deployment, um, and you’re going to make a mistake, which is why you need to put automated systems in place to make sure that those mistakes don’t happen, or if they do happen that they’re caught very, very quickly.

This applies to standard DevOps, the philosophies for building. It also applies to security very, very wonderfully, so this is the main thing we’re going to focus on.

So, if we sum that up together, we have our goal of making sure whatever we’re building works as intended, and only as intended, and our major issue here, the biggest risk to this, is simple mistakes and misconfigurations.

Okay, so we’re not starting from ground zero here. We can learn from others, and the first place we’re going to learn is the shared responsibility model. The shared responsibility applies to all cloud service providers.

If you look on the left hand side of the slide here, you’ll see the traditional on premise model. We roughly have six areas where something has to be done roughly daily, whether it’s patching, maintenance, uh, just operational visibility, monitoring, that kind of thing, and in a traditional on premise environment, you’re responsible for all of it, whether it’s your team, or a team underneath your organization.

Somewhere within your tree, people are on the hook for doing stuff daily. Here when we move into an infrastructure, so getting a virtual machine from a cloud provider right off the bat, half of the responsibilities are pushed away.

That’s a huge, huge win.

And, as we move further and further to the right to more managed services, or SaaS-level services, we have less and less daily responsibilities.

Now, of course, you always still have to verify that the cloud service provider’s doing what they, uh, say they’re doing, which is why certifications and compliance frameworks come into play, uh, but the bottom line is you’re doing less work, so you can focus on fewer areas.

Um, that is, or I should say not less work, but you’re doing, uh, a less broad set of work.

So you can have that deeper focus, and of course, you always have to worry about service configuration. You are given knobs and dials to turn to lock things down. You should use them like things like encrypting, uh, all your data at rest.

Most of the time it’s an easy check box, but it’s up to you to check it ‘cause it’s your responsibility.

We also have the idea of an adoption framework, and this applies for Azure, for AWS and for Google, uh, and what they do is they help you map out your business processes.

This is important to security, because it gives you the understanding of where your data is, what’s important to the business, where does it lie, who needs to touch it, and access it and process it.

That also gives us the idea, uh, or the ability to identify the stakeholders, so that we know, uh, you know, who’s concerned about this data, who is, has an investment in this data, and finally it helps to, to deliver an action plan.

The output of all of these frameworks is to deliver an action plan to help you migrate into the cloud and help you to continuously evolve. Well, it’s also a phenomenal map for your security efforts.

You want to prioritize security, this is how you do it. You get it through the adoption framework, understanding what’s important to the business, and that lets you identify critical systems and areas for your security.

Again, we want to keep things simple, right? And, the third, uh, the o- other thing we want to look at is the CIS foundations. They have them for AWS, Azure and GCP, um, and these provide prescriptive guidance.

They’re really, um, a strong baseline, and a checklist of tasks that you can accomplish, um, or take on, on your own, excuse me, uh, in order to, um, you know, basically cover off the real basics: is encryption at rest on, um, you know, do I make sure that I don’t have, uh, things needlessly exposed to the internet, that type of thing.

Really fantastic reference point and a starting point for your security practice.

Again, with this idea of keeping things as simple as possible, so when it comes to looking at our security policy, we’ve used the frameworks, um, and the baseline to kind of set up a strong, uh, start to understand, uh, where the business is concerned, and to prioritize.

And, the first question we need to ask ourselves as security practitioners, what happened? If we, if something happens, and we ask what happened?

Do we have the ability to answer this question? So, that starts us off with logging and auditing. This needs to be in place before something happens. Let me just say that again: before something happens, you need [laughs] to be able to have this information in place.

Now, uh, this is really, uh, to ask these key questions of what happened in my account, and who, or what made that thing happen?

So, this starts in the cloud with some basic services. Uh, for AWS it’s CloudTrail, for Azure it’s Azure Monitor, and for Google Cloud it used to be called Stackdriver, it is now the Google Cloud operations suite, so these need to be enabled at full volume.

Don’t worry, you can use some lifecycle rules on the data store to keep your costs low.

But, this gives you that layer, that basic auditing and logging layer, so that you can answer that question of what happened?
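To make that concrete, here’s a minimal sketch, in Python with boto3, of checking that AWS audit logging is actually on and turning it on if not. This is an illustration, not a hardened script: it assumes boto3 is installed, credentials are configured, and the trail and bucket names are placeholders (the bucket must already exist with a policy that lets CloudTrail write to it).

import boto3

cloudtrail = boto3.client("cloudtrail")

TRAIL_NAME = "org-audit-trail"           # placeholder name
LOG_BUCKET = "example-audit-log-bucket"  # placeholder bucket

# Is there already a trail by this name?
trails = cloudtrail.describe_trails()["trailList"]
if not any(t["Name"] == TRAIL_NAME for t in trails):
    # Capture management events in every region, "at full volume"
    cloudtrail.create_trail(
        Name=TRAIL_NAME,
        S3BucketName=LOG_BUCKET,
        IsMultiRegionTrail=True,
        IncludeGlobalServiceEvents=True,
    )
    cloudtrail.start_logging(Name=TRAIL_NAME)

status = cloudtrail.get_trail_status(Name=TRAIL_NAME)
print("Audit logging enabled:", status["IsLogging"])

The same idea applies to Azure Monitor and the Google Cloud operations suite; only the service calls change.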

So, the next question you want to ask yourself or have the ability to answer is who’s there, right? Who’s doing what in my account? And, that comes down to identity.

We’ve already mentioned this is one of the key pillars of keeping security simple, and getting that highly effective security in your cloud.

[00:09:00] So here you’re answering the questions of who are you, and what are you allowed to do? This is where we get a very simple privilege, uh, or principle in security, which is the principle of least privilege.

You want to give an identity, so whether that’s a user, or a role, or a service, uh, only the privileges they, uh, require that are essential to perform the task that, uh, they are intended to do.

Okay?

So, basically if I need to write a file into a storage, um, folder or a bucket, I should only have the ability to write that file. I don’t need to read it, I don’t need to delete it, I just need to write to it, so only give me that ability.

Remember, that comes back to the other pillar of simple security here of, of key cloud security, is integrated identity.

This is where it really takes off, is that we start to assign very granular access permissions, and don’t worry, we’re going to use the APIs to automate all this stuff, so that it’s not a management headache, but the principle of least privilege is absolutely critical here.

The services you’re going to be using, amazingly, all three cloud providers got in line, and named them the same thing. It’s IAM, identity and access management, whether that’s AWS, Azure or Google Cloud.
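Here’s what that write-only bucket example might look like as an AWS IAM policy, created with Python and boto3. A sketch only: the policy name, bucket, and prefix are placeholders.

import json
import boto3

iam = boto3.client("iam")

# Least privilege: this identity can write objects under one prefix,
# and nothing else -- no read, no delete, no list.
write_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/uploads/*",  # placeholder
    }],
}

iam.create_policy(
    PolicyName="write-only-uploads",  # placeholder name
    PolicyDocument=json.dumps(write_only_policy),
)

Attach that policy to a role or user and the write-a-file task works, while every other action on the bucket is denied by default.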

Now, the next question we’re going to a- ask ourselves are the areas where we’re going to be looking at is really where should I be focusing security controls? Where should I be putting stuff in place?

Because up until now we’ve really talked about leveraging what’s available from the cloud service providers, and you absolutely should, uh, maximize your usage of their, um, native and primitive, uh, structures, primitive as in base concepts, not as in, um, unrefined.

They’re very advanced controls, but there are times where you’re going to need to put in your own controls, and these are the areas you’re going to focus on, so you’re going to start with networking, right?

So, in your networking, you’re going to maximize the native structures that are available in the cloud that you’re in, so whether that’s a project structure in Google Cloud, whether that’s a service like transit gateway in AWS, um, and all of them have this idea of a VPC or virtual private cloud or virtual network that is a very strong boundary for you to use.

Remember, most of the time you’re not charged for the creation of those. You have limits in your accounts, but accounts are free, and you can keep adding more, uh, virtual networks. You may be saying, wait a minute, I’m trying to simplify things.

Actually, having multiple virtual networks or virtual private clouds ends up being far simpler because each of them has a task. You go, this application runs in this virtual private cloud, not a big shared one, in this specific VPC, and that gives you these wonderfully strong security boundaries, and a very simple way of looking at one VPC, one action, very much the Unix philosophy in play.
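As a rough sketch of that one-VPC-per-application idea, again in Python with boto3 (the application names and CIDR ranges are placeholders):

import boto3

ec2 = boto3.client("ec2")

def vpc_for_app(app_name: str, cidr: str) -> str:
    """Create a dedicated VPC for a single application and tag it."""
    vpc = ec2.create_vpc(CidrBlock=cidr)["Vpc"]
    ec2.create_tags(
        Resources=[vpc["VpcId"]],
        Tags=[
            {"Key": "Name", "Value": f"{app_name}-vpc"},
            {"Key": "application", "Value": app_name},
        ],
    )
    return vpc["VpcId"]

# One VPC, one application
print(vpc_for_app("billing", "10.10.0.0/16"))
print(vpc_for_app("reporting", "10.20.0.0/16"))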

Key here though is understanding what all of the security controls from your service provider, um, give you, whether it’s VPCs, routing tables, um, uh, access control lists, security groups, all the SDN features that they’ve got in place.

These really help you figure out whether service A or system A is allowed to talk to B, but they don’t tell you what they’re saying.

And, that’s where additional controls called an IPS, or intrusion prevention system come into play, and you may want to look at getting a third party control in to do that, because none of the th- big three cloud providers offer an IPS at this point.

[00:12:00] But that gives you the ability to not just say, “Hey, you’re allowed to talk to each other,” but to monitor that conversation, to ensure that there’s not malicious code being passed back and forth between systems, that nobody’s trying a denial of service attack.

A whole bunch of extra things in there, so that’s where IPS comes into play in your network defense. Now, we look at compute, right?

We can have compute in various forms, whether that’s in serverless functions, whether that’s in containers, managed containers, whether that’s in traditional virtual machines, but all the principles are the same.

You want to understand where the shared responsibility line is, how much is on your plate, how much is on the CSPs?

You want to understand that you need to harden the OS, or the service, or both in some cases, make sure that, that’s locked down, so have administrator passwords that are very, very complicated.

Don’t log into these systems, uh, you know, because you want to be fixing things upstream. You want to be fixing things in the build pipeline, not logging into these systems directly, and that’s a huge thing for, uh, systems people to get over, but it’s absolutely essential for security, and you know what?

It’s going to take a while, but there’s some tricks there you can follow with me. You can see, uh, on the slides, uh, @marknca, that is my social everywhere, uh, happy to walk you through the next steps.

This idea of this presentation’s really just the simple basics to start with, to give you that overview of where to focus your time, and to dispel that myth that cloud security is complicating things.

There is a huge path to simplicity, which is a massive win, uh, for security.

So, the last area you want to focus here is in data and storage. Whether this is databases, whether this is big blob storage, or, uh, buckets in AWS, it doesn’t really matter, the principles are, again, all the same.

You want to encrypt your data at rest using the native, uh, cloud service provider, uh, features and functionality, because most of the time it’s just give it a key address, and give it a checkbox, and you’re good to go.
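For Amazon S3, for example, that checkbox can also be flipped programmatically. A minimal sketch, assuming the bucket already exists (the name is a placeholder; you could point the rule at a KMS key instead of SSE-S3):

import boto3

s3 = boto3.client("s3")

# Encrypt every new object in this bucket at rest by default
s3.put_bucket_encryption(
    Bucket="example-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)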

It’s never been easier to encrypt things, and there is no excuse not to, and none of the providers charge extra for, uh, encryption, which is amazing, and you absolutely want to be taking advantage of that, and you want to be as granular as possible with your IAM, uh, and as reasonable, okay?

So, there’s a line here, and a lot of the data stores that are native to the cloud service providers, you can go right down to the data cell level and say, Mark has access, or Mark doesn’t have access to this cell.

That can be highly effective, and maybe right for your use case. It might be too much as well.

But, the nice thing is that you have that option. It’s integrated, it’s pretty straightforward to implement. And then, uh, when we look here, uh, sorry, and then, finally, you want to be looking at lifecycle strategies to keep your costs under control.

Um, data really spins out of control when you don’t have to worry about capacity. All of the cloud service providers have some fantastic automations in place.

Basically, just giving you, uh, very simple rules to say, “Okay, after 90 days, move this over to cheaper storage. After 180 days, you know, get rid of it completely, or put it in cold storage.”

Take advantage of those or your bill’s going to spiral out of control, and that relates to availability, uh, and reliability, ’cause the more you’re spending on that kind of stuff, the less you have to spend on other areas like security and operational efficiency.
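Those rules are only a few lines against the native S3 lifecycle API. A minimal sketch, with a placeholder bucket and durations you’d tune to your own retention needs:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # whole bucket
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},    # cheaper storage
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},  # cold storage
            ],
        }],
    },
)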

So, that brings us to our next big security question. Is this working?

[00:15:00] How do you know if any of this stuff is working? Well, you want to talk about the concept of traceability. Traceability is a, you know, somewhat formal definition, but for me it really comes down to where did this come from, who can access it, and when did they access it?

That ties very closely with the concept of observability. Basically, the ability to look at, uh, closed systems and to infer what’s going on inside based on what’s coming into that system, and what’s leaving that system, really what’s going on.

There’s some great tools here from the service providers. Again, you want to look at, uh, Amazon CloudWatch, uh, Azure Monitor and the Google Cloud operations, uh, suite. Um, and here this leads us to the key, okay?

This is the key to simplifying everything, and I know we’ve covered a ton in this presentation, but I really want you to take a good look at this slide, and again, hit me up, uh, @marknca, happy to answer any questions with, questions afterwards as well here, um, that this will really, really make this simple, and this will really take your security practice to the next level.

The idea is, something happens in your cloud system, right? In your deployment, there’s a trigger, and then it either generates an event or a log.

If you go to the bottom row here, you’ve got a log, which you can then react to in a function to deliver some sort of result. That’s the slow lane on the bottom.

We’re talking minutes here. You also have the top lane where your trigger fires off an event, and then, you react to that with a function, and then, you get a result in the fast lane.

These things happen in seconds, sub-second time. You start to build out your security practice based on this model.

You start automating more and more in these functions, whether it’s, uh, Lambda, whether it’s Cloud Functions, whether it’s Azure Functions, it doesn’t matter.

The CSPs all offer the same core functionality here. This is the critical, critical success metric: when you start reacting in the fast lane automatically to things, so if you see that a security event is triggered from, like, your anti-malware, uh, on your, uh, virtual machine, you can lock that off, and have a new one spin up automatically.
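As a sketch of that fast lane, here’s the rough shape of an AWS Lambda handler that quarantines an instance when a finding arrives. It assumes an EventBridge rule routing Amazon GuardDuty findings to the function, an execution role allowed to modify instances, and a placeholder quarantine security group with no inbound or outbound rules.

import boto3

ec2 = boto3.client("ec2")

QUARANTINE_SG = "sg-0123456789abcdef0"  # placeholder isolation group

def handler(event, context):
    # GuardDuty EC2 findings carry the instance ID in the resource details
    instance = (event.get("detail", {})
                     .get("resource", {})
                     .get("instanceDetails", {})
                     .get("instanceId"))
    if instance:
        # "Lock that off": swap the instance onto the isolated group
        ec2.modify_instance_attribute(InstanceId=instance,
                                      Groups=[QUARANTINE_SG])
        print(f"Quarantined {instance}; a replacement can now spin up.")
    return {"quarantined": instance}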

Um, if you’re looking for compliance stuff, the slow lane is the place to go, because it takes minutes.

Reactions happen up top; more, um, stately or more sedate things happen on the bottom, so somebody logging into a system is both up top and down low. So up top, if you logged into a VPC or into, um, an instance, or a virtual machine, you’d have a trigger fire off and maybe ask me immediately, “Mark, did you log into the system? Uh, ’cause you’re, you know, you’re not supposed to be.”

But then I’d respond and say, “Yeah, I, I did log in.” So, immediately you don’t have to respond. It’s not an incident response scenario, but on the bottom track, maybe you’re tracking how many times I’ve logged in.

And after the third or fourth time maybe someone comes by, and has a chat with me, and says, “Hey, do you keep logging into these systems? Can’t you fix it upstream in the deployment and build pipeline, ’cause that’s where we need to be moving?”

So, you’ll find this balance, and this concept, I just wanted to get into your heads right now of automating your security practice. If you have a checklist, it should be sitting in a model like this, because it’ll help you, uh, reduce your workload, right?

The idea is to get as much automated as possible, and keep things in very clear, and simple boundaries, and what’s more simple than having every security action listed as an automated function, uh, sitting in a code repository somewhere?

[00:18:00] Fantastic approach to modern security practice in the cloud. Very simple, very clear. Yes, difficult to implement. It can be, but it’s an awesome, simple mental model to keep in your head that everything gets automated as a function based on a trigger somewhere.

So, what are the keys to success? What are the keys to keeping this cloud security thing simple? And, hopefully you’ve realized the difference between a simple mental model, and the challenges, uh, in, uh, implementation.

It can be difficult. It’s not easy to implement, but the mental model needs to be kept simple, right? Keep things in their own VPCs, and their own accounts, automate everything. Very, very simple approach. Everything fits into this s- into this structure, so the keys here are remembering the goal.

Make sure that cybersecurity, uh, is making sure that whatever you build works as intended and only as intended. It’s understanding the shared responsibility model, and it’s really looking at, uh, having a plan through cloud adoption frameworks, how to build well, which is a, uh, a concept called the Well-Architected Framework.

It’s specific to AWS, but its principles are generic, um, and can be applied everywhere. We didn’t cover it here, but I’ll put the links, um, in the materials for you, uh, as well as remembering systems over people, right?

Adding the right controls at the right time, uh, and then, finally, observing and reacting. Be vigilant, practice. You’re not going to get this right out of the gates, uh, perfect.

You’re going to have to refine and iterate, and that’s extremely cloud friendly. That is the cloud model: get it out there, iterate quickly, but put the structures in place to make sure that you’re not doing that in an insecure manner.

Thank you very much, uh, here’s a couple of links that’ll help you out before we take some Q&A here, um, trendmicro.com/cloud will get you to the products to learn more. We’re also doing this really cool streaming.

Uh, I host a show called Let’s Talk Cloud. Um, we uh, interview experts, uh, and have a great conversation around, um, what they’re talking about, uh, in the cloud, what they’re working on, and not just around security, but just in building in general.

You can hit that up at trendtalks.fyi. Um, and again, hit me up on social @marknca.

So, we have a couple of questions to kick this off, and you can put more questions in the webinar here, and they will send them along, or answer them in kind if they can.

Um, and that’s really what these are about, the interaction, getting that, um, back and forth. So, the first question that I wanted to tackle is an interesting one, and it’s really that systems over people.

Um, you heard me mention it in the, uh, in the end and the question is really what does that mean systems over people? Isn’t security really about people’s expertise?

And, yes and no, so if you are a SOC analyst, if you are working in a security, uh, role right now, I am really confident saying that 80%, 90% of what you do right now could be delegated out to a system.

So, if you were looking at log lines, that’s stuff that should be done by systems and bubbled up just for you to investigate, to do what people are good at and systems are bad at. So systems mean, uh, you know, putting in, uh, container scanning in the build pipeline, so that you don’t have to manually scan stuff, right, to get rid of the basics. Is that a pen test? 100% no.

Um, but it gets rid of that, hey, you didn’t upgrade to, um, you know, this version of this library.

[00:21:00] That’s all automated, and the more systems you get in place, the more you as a security professional, or your security team, will be able to focus on where they can really deliver value and frankly, where it’s more interesting work. So that’s what systems over people means: basically, automate as much as you can to get people doing what people are really good at, and to make sure that the systems catch what we make as mistakes all the time.

If you accidentally try to push an old build out, you know that systems should stop that, if you push a build that hasn’t been checked by that container scanning or by, um, you know, it doesn’t have the appropriate security policy in place.

Systems should catch all that; humans shouldn’t have to worry about it at all. That’s systems over people. You saw that on the, uh, keys to success slide here. I’ll just pull it up. Um, you know, that’s absolutely key.

Another question that we had, uh, was what we didn’t get into here, which was around the Well-Architected Framework. Now, this is a document that was published by AWS, uh, a number of years back, and they’ve kept it going.

They’ve evolved it and essentially it has five pillars. Um, performance efficiency, uh, reliability, security, cost optimization, and operational excellence. Hey, I’ve got all five.

Um, and really [laughs] what that is, is it’s about how to take advantage of these cloud tools.

Now, AWS publishes it, but honestly it applies to Azure, it applies to Google Cloud as well. It’s not service specific. It teaches you how to build in the cloud, and obviously security is one of those big pillars, but it’s… so talking about teaching you how to make those trade offs, how to build an innovation flywheel, so that you have an idea, test it, uh, get the feedback from it, and move forward.

Um, and that’s really, really key. Again, you should be reading that even if you are an Azure or GCP customer, or, uh, that’s where you’re putting most of your stuff, because it’s really about the principles, and everything we do to encourage people to build well means that there’s fewer security issues, right?

Especially we know that the number one problem is mistakes.

That leads to the last question we have here, which is about that, how can I say that cyber criminals, you don’t need to worry about them.

You need to worry about mistakes? That’s a good question. It’s valid, and, um, Trend Micro does a huge amount of research around cyber criminals. I do a whole huge amount of research around cyber criminals.

Uh, by training, and by professional experience, I’m a forensic investigator. This is what I do: take down cyber crime. Um, but I think mistakes are the number one thing that we deal with in the cloud simply because of the underlying complexity.

I know it’s ironic to talk about simplicity, and to talk about complexity, but the idea is, um, that you look at all the major breaches, especially around S3 buckets, those are all m- based on mistakes.

There’ve been billions, and billions, and billions of records, and, uh, millions of dollars of damage exposed because of simple mistakes, and that is far more common, uh, than cyber criminals.

And yes, cyber crimes you have [inaudible 00:23:32] worry. You have to worry about them, but everything you’re going to do to fix mistakes, and to put systems in place to stop those mistakes from happening is also going to be for your pr- uh, protection up against cyber criminals, and honestly, if you’re the guy who runs around your organization’s screaming about cyber criminals all the time, you’re far less credible than if you’re saying, “Hey, I want to make sure that we build really, really well, and don’t make mistakes.”

Thank you for taking the time. My name’s Mark Nunnikhoven. I’m the vice president of cloud research at Trend Micro. I’m also an AWS community hero, and I love this stuff. Hit me up on social @marknca. Happy to chat more.


Are You Promoting Security Fluency in your Organization?

By Trend Micro

Migrating to the cloud is hard. The PowerPoint deck and pretty architectures are drawn up quickly, but the work required to make the move will take months and possibly years.

The early stages require significant effort by teams to learn new technologies (the cloud services themselves) and new ways of working (the shared responsibility model).

In the early days of your cloud efforts, the cloud center of excellence is a logical model to follow.

Center of Excellence

A cloud center of excellence is exactly what it sounds like. Your organization forms a new team—or an existing team grows into the role—that focuses on setting cloud standards and architectures.

They are often the “go-to” team for any cloud questions. From the simple (“What’s an Amazon S3 bucket?”), to the nuanced (“What are the advantages of Amazon Aurora over RDS?”), to the complex (“What’s the optimum index/sort keying for this DynamoDB table?”).

The cloud center of excellence is the one-stop shop for cloud in your organization. At the beginning, this organizational design choice can greatly accelerate the adoption of cloud technologies.

Too Central

The problem is that accelerated adoption doesn’t necessarily correlate with accelerated understanding and learning.

In fact, as the center of excellence continues to grow its success, there is an inverse failure in organizational learning, which creates a general lack of cloud fluency.

Cloud fluency is an idea introduced by Forrest Brazeal at A Cloud Guru that describes the general ability of all teams within the organization to discuss cloud technologies and solutions. Forrest’s blog post shines a light on this situation and sums it up nicely in a cartoon.

Our own Mark Nunnikhoven also spoke to Forrest on episode 2 of season 2 of #LetsTalkCloud.

Even though the cloud center of excellence team sets out to teach everyone and raise the bar, the work soon piles up and the team quickly shifts away from an educational mandate to a “fix everything” one.

What was once a cloud accelerator is now a place of burnout for your top, hard-to-replace cloud talent.

Security’s Past

If you’ve paid attention to how cybersecurity teams operate within organizations, you have probably spotted a number of very concerning similarities.

Cybersecurity teams are also considered a center of excellence and the central team within the organization for security knowledge.

Most requests for security architecture, advice, operations, and generally anything that includes the prefix “cyber”, the word “risk”, or hints of “hacking” get routed to this team.

This isn’t the security team’s fault. Over the years, systems have increased in complexity, more and more incidents occur, and security teams rarely get the opportunity to look ahead. They are too busy stuck in “firefighting mode” to take a step back and re-evaluate the organizational design structure they work within.

According to Gartner, for every 750 employees in an organization, one is dedicated to cybersecurity. Those are impossible odds that have led to the massive security skills gap.

Fluency Is The Way Forward

Security needs to follow the example of cloud fluency. We need “security fluency” in order to improve the security posture of the systems we build and to reduce the risk our organizations face.

This is the reason that security teams need to turn their efforts to educating development teams. DevSecOps is a term chock full of misconceptions, and it lacks the context to drive the needed changes, but it is handy for raising awareness of the lack of security fluency.

Successful adoption of a DevOps philosophy is all about removing barriers to customer success. Providing teams with the tools and autonomy they require is a critical factor in their success.

Security is just one aspect of the development team’s toolkit. It’s up to the current security team to help educate them on the principles driving modern cybersecurity and how to ensure that the systems they build work as intended…and only as intended.


Fixing cloud migration: What goes wrong and why?

By Trend Micro

The cloud space has been evolving for almost a decade. As a company we’re a major cloud user ourselves. That means we’ve built up a huge amount of in-house expertise over the years around cloud migration — including common challenges and perspectives on how organizations can best approach projects to improve success rates.

As part of our #LetsTalkCloud series, we’ve focused on sharing some of this expertise through conversations with our own experts and folks from the industry. To kick off the series, we discussed some of the security challenges solution architects and security engineers face with customers when discussing cloud migrations. Spoiler…these challenges may not be what you expect.

Drag and drop

This lack of strategy and planning from the start is symptomatic of a broader challenge in many organizations: There’s no big-picture thinking around cloud, only short-term tactical efforts. Sometimes we get the impression that a senior exec has just seen a ‘cool’ demo at a cloud vendor’s conference and now wants to migrate a host of apps onto that platform. There’s no consideration of how difficult or otherwise this would be, or even whether it’s necessary and desirable.

These issues are compounded by organizational siloes. The larger the customer, the larger and more established their individual teams are likely to be, which can make communication a major challenge. Even if you have a dedicated cloud team to work on a project, they may not be talking to other key stakeholders in DevOps or security, for example.

The result is that, in many cases, tools, applications, policies, and more are forklifted over from on-premises environments to the cloud. This ends up becoming incredibly expensive, as these organizations are not really changing anything. All they are doing is adding an extra middleman, without taking advantage of the benefits of cloud-native tools like microservices, containers, and serverless.

There’s often no visibility or control. Organizations don’t understand they need to lock down all their containers and sanitize APIs, for example. Plus, there’s no authority given to cloud teams around governance, cost management, and policy assignment, so things just run out of control. Often, shared responsibility isn’t well understood, especially in the new world of DevOps pipelines, so security isn’t applied to the right areas.

Getting it right

These aren’t easy problems to solve. From a security perspective, it seems we still have a job to do in educating the market about shared responsibility in the cloud, especially when it comes to newer technologies, like serverless and containers. Every time there’s a new way of deploying an app, it seems like people make the same mistakes all over again — presuming the vendors are in charge of security.

Automation is a key ingredient of successful migrations. Organizations should be automating everywhere, including policies and governance, to bring more consistency to projects and keep costs under control. In doing so, they must realize that this may require a redesign of apps, and a change in the tools they use to deploy and manage those apps.

Ultimately, you can migrate apps to the cloud in a couple of clicks. But the governance, policy, and management that must go along with this is often forgotten. That’s why you need clear strategic objectives and careful planning to secure more successful outcomes. It may not be very sexy, but it’s the best way forward.

To learn more about cloud migration, check out our blog series. And catch up on all of the latest trends in DevOps to learn more about securing your cloud environment.


Have You Considered Your Organization’s Technical Debt?

By Madeline Van Der Paelt

TL;DR Deal with your dirty laundry.

Have you ever skipped doing your laundry and watched as that pile of dirty clothes kept growing, just waiting for you to get around to it? You’re busy, you’re tired and you keep saying you’ll get to it tomorrow. Then suddenly, you realize that it’s been three weeks and now you’re running around frantically, late for work because you have no clean socks!

That is technical debt.

Those little things that you put off, which can grow from a minor inconvenience into a full-blown emergency when they’re ignored long enough.

Piling Up

How many times have you had an alarm go off, or a customer issue arise from something you already knew about and meant to fix, but “haven’t had the time”? How many times have you been working on something and thought, “wow, this would be so much easier if I just had the time to …”?

That is technical debt.

But back to you. In your rush to leave for work you manage to find two old mismatched socks. One of them has a hole in it. You don’t have time for this! You throw them on and run out the door, on your way to solve real problems. Throughout the day, that hole grows and your foot starts to hurt.

This is really not your day. In your panicked state this morning you actually managed to add more pain to your already stressed system, plus you still have to do your laundry when you get home! If only you’d taken the time a few days ago…

Coming Back to Bite You

In the tech world where one seemingly small hole – one tiny vulnerability – can bring down your whole system, managing technical debt is critical. Fixing issues before they become emergent situations is necessary in order to succeed.

If you’re always running at full speed to solve the latest issue in production, you’ll never get ahead of your competition and only fall further behind.

It’s very easy to get into a pattern of leaving the little things for another day. Build optimizations, that random unit test that’s missing, that playbook you meant to write up after the last incident – technical debt is a real problem too! By spending just a little time each day to tidy up a few things, you can make your system more stable and provide a better experience for both your customers and your fellow developers.

Cleaning Up

Picture your code as that mountain of dirty laundry. Each day that passes, you add just a little more to it. The more debt you add on, the more daunting your task seems. It becomes a thing of legend. You joke about how you haven’t dealt with it, but really you’re growing increasingly anxious and wary about actually tackling it, and what you’ll find when you do.

Maybe if you put it off just a little bit longer a hero will swoop in and clean up for you! (A woman can dream, right?) The more debt you add, the longer it will take to conquer, the harder it will be, and the higher the risk of introducing a new issue.

This added stress and complexity doesn’t sound too appealing, so why do we do it? It’s usually caused by things like having too much work in progress, conflicting priorities and (surprise!) neglected work.

Managing technical debt requires only one important thing – a cultural change.

As much as possible we need to stop creating technical debt, otherwise we will never be able to get it under control. To do that, we need to shift our mindset. We need to step back and take the time to see and make visible all of the technical debt we’re drowning in. Then we can start to chip away at it.

Culture Shift

My team took a page out of “The Unicorn Project” (Kim, 2019) and started by running “debt days” when we caught our breath between projects. Each person chose a pain point, something they were interested in fixing, and we started there. We dedicated two days to removing debt and came out the other side having completed tickets that were on the backlog for over a year.

We also added new metrics and dashboards for better incident response, and improved developer tools.

Now, with each new code change, we’re on the lookout. Does this change introduce any debt? Do we have the ability to fix that now? We encourage each other to fix issues as we find them whether it’s with the way our builds work, our communication processes or a bug in the code.

We need to give ourselves the time to breathe, in both our personal lives and our work days. Taking a pause between tasks not only allows us to mentally prepare for the next one, but it gives us time to learn and reflect. It’s in these pauses that we can see if we’ve created technical debt in any form and potentially go about fixing it right away.

What’s Next?

The improvement of daily work ultimately enables developers to focus on what’s really important, delivering value. It enables them to move faster and find more joy in their work.

So how do you stay on top of your never-ending laundry? Your family chooses to make a cultural change and decides to dedicate time to it. You declare Saturday as laundry day!

Make the time to deal with technical debt – your developers, security teams, and your customers will thank you for it.


ESG Findings on Trend Micro Cloud-Powered XDR Drives Monumental Business Value

By Trend Micro

This material was published by ESG Research Insights Report, Validating Trend Micro’s Approach and Enhancing GTM Intelligence, 2020.


This Week in Security News: Microsoft Patches 120 Vulnerabilities, Including Two Zero-Days and Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions

By Jon Clay (Global Threat Communications)

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, read about one of Microsoft’s largest Patch Tuesday updates ever, including fixes for 120 vulnerabilities and two zero-days. Also, learn about Trend Micro’s new integrations with Amazon Web Services (AWS).

Read on:

Microsoft Patches 120 Vulnerabilities, Two Zero-Days

This week Microsoft released fixes for 120 vulnerabilities, including two zero-days, in 13 products and services as part of its monthly Patch Tuesday rollout. The August release marks its third-largest Patch Tuesday update, bringing the total number of security fixes for 2020 to 862. “If they maintain this pace, it’s quite possible for them to ship more than 1,300 patches this year,” says Dustin Childs of Trend Micro’s Zero-Day Initiative (ZDI).

XCSSET Mac Malware: Infects Xcode Projects, Performs UXSS Attack on Safari, Other Browsers, Leverages Zero-day Exploits

Trend Micro has discovered an unusual infection related to Xcode developer projects. Upon further investigation, it was discovered that a developer’s Xcode project at large contained the source malware, which leads to a rabbit hole of malicious payloads. Most notable in our investigation is the discovery of two zero-day exploits: one is used to steal cookies via a flaw in the behavior of Data Vaults, another is used to abuse the development version of Safari.

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 1

We’re all now living in a post-COVID-19 world characterized by uncertainty, mass home working and remote learning. To help you adapt to these new conditions while protecting what matters most, Trend Micro has developed a two-part blog series on ‘the new normal’. Part one identifies the scope and specific cyber-threats of the new normal. 

Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions

Trend Micro enhances agility and automation in cloud security through integrations with Amazon Web Services (AWS). Through this collaboration, Trend Micro Cloud One offers the broadest platform support and API integration to protect AWS infrastructure whether building with Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Lambda, AWS Fargate, containers, Amazon Simple Storage Service (Amazon S3), or Amazon Virtual Private Cloud (Amazon VPC) networking.

Shedding Light on Security Considerations in Serverless Cloud Architectures

The big shift to serverless computing is imminent. According to a 2019 survey, 21% of enterprises have already adopted serverless technology, while 39% are considering it. Trend Micro’s new research on serverless computing aims to shed light on the security considerations in serverless environments and help adopters in keeping their serverless deployments as secure as possible.

In One Click: Amazon Alexa Could be Exploited for Theft of Voice History, PII, Skill Tampering

Amazon’s Alexa voice assistant could be exploited to hand over user data due to security vulnerabilities in the service’s subdomains. The smart assistant, which is found in devices such as the Amazon Echo and Echo Dot — with over 200 million shipments worldwide — was vulnerable to attackers seeking user personally identifiable information (PII) and voice recordings.

New Attack Lets Hackers Decrypt VoLTE Encryption to Spy on Phone Calls

A team of academic researchers presented a new attack called ‘ReVoLTE’ that could let remote attackers break the encryption used by VoLTE voice calls and spy on targeted phone calls. The attack doesn’t exploit any flaw in the Voice over LTE (VoLTE) protocol; instead, it leverages weak implementation of the LTE mobile network by most telecommunication providers in practice, allowing an attacker to eavesdrop on the encrypted phone calls made by targeted victims.

An Advanced Group Specializing in Corporate Espionage is on a Hacking Spree

A Russian-speaking hacking group specializing in corporate espionage has carried out 26 campaigns since 2018 in attempts to steal vast amounts of data from the private sector, according to new findings. The hacking group, dubbed RedCurl, stole confidential corporate documents including contracts, financial documents, employee records and legal records, according to research published this week by the security firm Group-IB.

Walgreens Discloses Data Breach Impacting Personal Health Information of More Than 72,000 Customers

The second-largest pharmacy chain in the U.S. recently disclosed a data breach that may have compromised the personal health information (PHI) of more than 72,000 individuals across the United States. According to Walgreens spokesman Jim Cohn, prescription information of customers was stolen during May protests, when around 180 of the company’s 9,277 locations were looted.

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 2

The past few months have seen radical changes to our work and home life under the Coronavirus threat, upending norms and confining millions of American families within just four walls. In this context, it’s not surprising that more of us are spending an increasing portion of our lives online. In the final blog of this two-part series, Trend Micro discusses what you can do to protect your family, your data, and access to your corporate accounts.

What are your thoughts on Trend Micro’s tips to make your home cybersecurity and privacy stronger in the COVID-19-impacted world? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.


Removing Open Source Visibility Challenges for Security Operations Teams

By Trend Micro

Identifying security threats early can be difficult, especially when you’re running multiple security tools across disparate business units and cloud projects. When it comes to protecting cloud-native applications, separating legitimate risks from noise and distractions is often a real challenge.

 

That’s why forward-thinking organizations look at things a little differently. They want to help their application developers and security operations (SecOps) teams implement unified strategies for optimal protection. This is where a newly expanded partnership from Trend Micro and Snyk can help.

 

Dependencies create risk

 

In today’s cloud-native development streams, the insatiable need for faster iterations and time-to-market can impact both downstream and upstream workflows. As a result, code reuse and dependence on third-party libraries has grown, and with it the potential security, compliance and reputational risk organizations are exposing themselves to.

 

Just how much risk is associated with open source software today? According to Snyk research (https://info.snyk.io/sooss-report-2020), vulnerabilities in open source software have increased 2.5x in the past three years. What’s more, a recent report claimed to have detected a 430% year-on-year increase in attacks targeting open source components, with the end goal of infecting the software supply chain. So while open source code is being used to accelerate time-to-market, security teams are often unaware of the scope and impact this can have on their environments.

 

Managing open source risk

 

This is why Trend Micro, a cloud security leader, and Snyk, a specialist in developer-first open source security, have extended their partnership with a new joint solution. It’s designed to help security teams manage the risk of open source vulnerabilities from the moment code is introduced, without interrupting the software delivery process.

 

This ambitious achievement helps improve security for your operations teams without changing the way your developer teams work. Trend Micro and Snyk are addressing open source risks by simplifying a bottom-up approach to risk mitigation that brings together developer and SecOps teams under one unified solution. It combines state-of-the-art security technology with collaborative features and processes to eliminate the security blind spots that can impact development lifecycles and business outcomes.

 

Available as part of Trend Micro Cloud One, the new solution currently being co-developed with Snyk will:

  • Scan all code repositories for vulnerabilities using Snyk’s world-class vulnerability scanning and database (see the sketch after this list)
  • Bridge the organizational gap between DevOps & SecOps, to help influence secure DevOps practices
  • Deliver continuous visibility of code vulnerabilities, from the earliest code to code running in production
  • Integrate seamlessly into the complete Trend Micro Cloud One security platform
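
The joint solution’s interfaces aren’t public here, so as a rough, hypothetical illustration of gating a build on open source scan results, the sketch below uses the standalone Snyk CLI; the severity threshold and failure policy are assumptions, not the co-developed product’s behavior.

    import json
    import subprocess
    import sys

    def gate_on_snyk(threshold: str = "high") -> None:
        """Run `snyk test --json` and fail the build on findings at or above threshold."""
        ranks = {"low": 0, "medium": 1, "high": 2, "critical": 3}
        result = subprocess.run(["snyk", "test", "--json"],
                                capture_output=True, text=True)
        report = json.loads(result.stdout)
        findings = [v for v in report.get("vulnerabilities", [])
                    if ranks.get(v.get("severity"), 0) >= ranks[threshold]]
        if findings:
            print(f"{len(findings)} vulnerabilities at or above '{threshold}' severity")
            sys.exit(1)  # non-zero exit fails the CI stage

    if __name__ == "__main__":
        gate_on_snyk()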

This unified solution closes the gap between security teams and developers, providing immediate visibility across modern cloud architectures. Trend Micro and Snyk continue to deliver world-class protection that fits the cloud-native development and security requirements of today’s application-focused organizations.

The post Removing Open Source Visibility Challenges for Security Operations Teams appeared first on .

The Life Cycle of a Compromised (Cloud) Server

By Bob McArdle

Trend Micro Research has developed a go-to resource for all things related to cybercriminal underground hosting and infrastructure. Today we released the second in this three-part series of reports which detail the what, how, and why of cybercriminal hosting (see the first part here).

As part of this report, we dive into the common life cycle of a compromised server from initial compromise to the different stages of monetization preferred by criminals. It’s also important to note that regardless of whether a company’s server is on-premise or cloud-based, criminals don’t care what kind of server they compromise.

To a criminal, any server that is exposed or vulnerable is fair game.

Cloud vs. On-Premise Servers

Cybercriminals don’t care where servers are located. They can leverage the storage space and computation resources, or steal data, no matter what type of server they access. Whatever is most exposed will most likely be abused.

As digital transformation continues, and potentially accelerates to support continued remote working, cloud servers are more likely to be exposed. Many enterprise IT teams, unfortunately, are not organized to provide the same protection for cloud servers as for on-premise ones.

As a side note, we want to emphasize that this scenario applies only to cloud instances replicating the storage or processing power of an on-premise server. Containers or serverless functions won’t fall victim to this same type of compromise. Additionally, if the attacker compromises the cloud account, as opposed to a single running instance, there is an entirely different attack life cycle, since they can spin up computing resources at will. Although possible, that scenario is not our focus here.

Attack Red Flags

Many IT and security teams might not look for the earlier stages of abuse. Before a company gets hit by ransomware, however, there are other red flags that could alert teams to the breach.

If a server is compromised and used for cryptocurrency mining (also known as cryptomining), this can be one of the biggest red flags for a security team. The discovery of cryptomining malware running on any server should result in the company taking immediate action and initiating an incident response to lock down that server.

This indicator of compromise (IOC) is significant because while cryptomining malware is often seen as less serious compared to other malware types, it is also used as a monetization tactic that can run in the background while server access is being sold for further malicious activity. For example, access could be sold for use as a server for underground hosting. Meanwhile, the data could be exfiltrated and sold as personally identifiable information (PII) or for industrial espionage, or it could be sold for a targeted ransomware attack. It’s possible to think of the presence of cryptomining malware as the proverbial canary in a coal mine: This is the case, at least, for several access-as-a-service (AaaS) criminals who use this as part of their business model.
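
A toy illustration of such a red-flag check is sketched below; the miner names are a tiny hypothetical watch list, and real detection relies on far richer signatures and telemetry than process names and CPU load.

    import psutil  # third-party: pip install psutil

    KNOWN_MINERS = {"xmrig", "minerd", "cpuminer"}  # hypothetical watch list

    def cryptomining_red_flags(cpu_threshold: float = 90.0) -> list:
        """Flag processes named like known miners or sustaining very high CPU."""
        flags = []
        for proc in psutil.process_iter(["name", "cpu_percent"]):
            name = (proc.info["name"] or "").lower()
            # First sample can read 0.0; poll twice in practice.
            cpu = proc.info["cpu_percent"] or 0.0
            if name in KNOWN_MINERS or cpu >= cpu_threshold:
                flags.append(f"pid={proc.pid} name={name} cpu={cpu:.0f}%")
        return flags

    for flag in cryptomining_red_flags():
        print(flag)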

Attack Life Cycle

Attacks on compromised servers follow a common path:

  1. Initial compromise: At this stage, whether on a cloud-based instance or an on-premise server, the criminal has taken over.
  2. Asset categorization: This is the inventory stage. Here the criminal makes an assessment based on questions such as: What data is on this server? Is there an opportunity for lateral movement to something more lucrative? Who is the victim?
  3. Sensitive data exfiltration: At this stage, the criminal steals corporate emails, client databases, and confidential documents, among other data. This stage can happen any time after asset categorization if the criminal managed to find something valuable.
  4. Cryptocurrency mining: While the attacker looks for a customer for the server space, a targeted attack, or other means of monetization, cryptomining is used to covertly make money.
  5. Resale or use for targeted attack or further monetization: Based on what the criminal finds during asset categorization, they might plan their own targeted ransomware attack, sell server access for industrial espionage, or sell the access for someone else to monetize further.

 

The monetization lifecycle of a compromised server

Often, targeted ransomware is the final stage. In most cases, asset categorization reveals data that is valuable to the business but not necessarily valuable for espionage.

A deep understanding of the servers and network allows the criminals behind a targeted ransomware attack to hit the company where it hurts the most. These criminals would know the datasets, where they live, whether there are backups of the data, and more. With such a detailed blueprint of the organization in their hands, cybercriminals can lock down critical systems and demand a higher ransom, as we saw in our 2020 midyear security roundup report.

In addition, while a ransomware attack would be the visible, urgent issue for the defender to solve in such an incident, the same attack could also indicate that something far more serious has likely already taken place: the theft of company data, which should be factored into the company’s response planning. More importantly, once a company finds an IOC for cryptomining, stopping the attacker right then and there could save it considerable time and money in the future.

Ultimately, no matter where a company’s data is stored, hybrid cloud security is critical to preventing this life cycle.

 

The post The Life Cycle of a Compromised (Cloud) Server appeared first on .

MVISION Cloud for Microsoft Teams

By Gopi Boyinapalli

McAfee MVISION Cloud for Microsoft Teams now offers secure guest-user collaboration features, allowing security admins not only to monitor sensitive content posted as messages and files within Teams, but also to monitor guest users as they join and remove any who are unauthorized.

Working from home has become a new reality for many, as more and more companies are requesting that their staff work remotely. Already, we are seeing how solutions that enable remote work and learning across chat, video, and file collaboration have become central to the way we work. Microsoft has seen an unprecedented spike in Teams usage, with more than 75 million daily users as of May 2020, a 70% increase in daily active users from the month of March.1

What’s New in MVISION Cloud for Microsoft Teams 

MVISION Cloud for Microsoft Teams now provides policy controls for security admins to monitor and remove unauthorized guest users based on their domains, the teams they are joining, and more. As organizations use Microsoft Teams to collaborate with trusted partners to exchange messages, participate in calls, and share files, it is critical to ensure that partners are joining teams designated for external communication and that only guest users from trusted partner domains are joining those teams.

 Organizations can configure policies in McAfee MVISION Cloud to:

  • Monitor guest users from untrusted domains and remove them automatically (see the sketch after this list). Security admins no longer have to ask a Microsoft Teams admin to remove untrusted guest users manually.
  • Define the list of teams designated for external communication and make sure that users from partner organizations join only those teams and not any internal ones. If partner users join any internal-only teams, McAfee MVISION Cloud removes them automatically.
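
MVISION Cloud enforces this through its own policy engine, but a minimal sketch of the underlying idea, listing a team’s members via the Microsoft Graph API and removing guests whose domain is not on an allow list, might look like the following; the trusted-domain list and token handling are assumptions, and result paging is ignored for brevity.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TRUSTED_DOMAINS = {"partner.example.com"}  # hypothetical allow list

    def remove_untrusted_guests(team_id: str, token: str) -> None:
        """Delete guest members of a team whose email domain is untrusted."""
        headers = {"Authorization": f"Bearer {token}"}
        url = f"{GRAPH}/groups/{team_id}/members?$select=id,mail,userType"
        members = requests.get(url, headers=headers).json().get("value", [])
        for member in members:
            if member.get("userType") != "Guest":
                continue
            domain = (member.get("mail") or "").rsplit("@", 1)[-1].lower()
            if domain not in TRUSTED_DOMAINS:
                # Remove the guest from the underlying Microsoft 365 group.
                requests.delete(
                    f"{GRAPH}/groups/{team_id}/members/{member['id']}/$ref",
                    headers=headers)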

With these new features, McAfee offers complete data protection and collaboration control capabilities, enabling organizations to safely collaborate with partners without having to worry about exposing confidential data to guest users.

Here is the comprehensive list of use cases organizations can enable by using MVISION Cloud for Microsoft Teams. 

  • Modern data security. IT can extend existing DLP policies to messages and files in all types of Teams channels, enforcing policies based on keywords, fingerprints, data identifiers, regular expressions and match highlighting for content and metadata. 
  • Collaboration control. Messages or files posted in channels can be restricted to specific users, including blocking the sharing of data to any external location. 
  • Guest user control. Guest users can be restricted to join only teams meant for external communication and unauthorized guest users from any domains other than trusted partner domains can be automatically removed.  
  • Comprehensive remediation. Enables auditing of regulated data uploaded to Microsoft Teams and remediates policy violations by coaching users, notifying administrators, quarantining, tombstoning, restoring and deleting user actions. End users can autonomously correct their actions, removing incidents from IT’s queue. 
  • Threat prevention. Empowers organizations to detect and prevent anomalous behavior indicative of insider threats and compromised accounts. McAfee captures a complete record of all user activity in Teams and leverages machine learning to analyze activity across multiple heuristics to accurately detect threats. 
  • Forensic investigations: With an auto-generated, detailed audit trail of all user activity, MVISION Cloud provides rich capabilities for forensics and investigations. 
  • On-the-go security, for on-the-go policies. Helps secure multiple access modes, including browsers and native apps, and applies controls based on contextual factors, including user, device, data and location. Personal devices lacking adequate control over data can be blocked from access. 

McAfee MVISION Cloud for Microsoft Teams is now in use with a substantial number of large enterprise customers to enable their security, governance and compliance capabilities. The solution fits all industry verticals due to the flexibility of policies and its ease of use. 

The post MVISION Cloud for Microsoft Teams appeared first on McAfee Blogs.

MITRE ATT&CK for Cloud: Adoption and Value Study by UC Berkeley CLTC

By Daniel Flaherty

Are you prepared to detect and defend against attacks that target your data in cloud services, or apps you’ve built that are hosted in the cloud? 

Background 

Nearly all enterprises and public sector customers we work with have enabled cloud use in their organization, with many seeing a 600%+ increase1 in use in the March-April timeframe of 2020, when the shift to remote work rapidly took shape. 

The first step to developing a strong cloud security posture is visibility over the often hundreds of services your employees use, what data is within these services, and then how they are being used collaboratively with third parties and other destinations outside of your control. 

With that visibility, you can establish full control over end-user activity and data in the cloud, applying your policy at every entry and exit point to the cloud.  

That covers your risk stemming from legitimate use by employees, external collaborators, and even API-connected marketplace apps, but what about your adversaries? If someone phished your CEO, stole their OneDrive credentials and exfiltrated data, would you know? What if your CEO used the same password across multiple accounts, and the adversary had access to apps like Smartsheet, Workday, or Salesforce? Are you set up to detect this kind of multi-cloud attack? 

Our Research to Uncover the Best Solution  

Most enterprise security operations centers (SOCs) use MITRE ATT&CK to map the events they see in their environment to a common language of adversary tactics and techniques. This helps to understand gaps in protection, model how attackers progress from access to exfiltration (or encryption/destruction), and to plan out security policy decisions.  
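
As a toy illustration of that mapping, the sketch below tags raw cloud audit events with ATT&CK technique IDs; the event names and the mapping itself are invented for illustration, though the technique IDs are real ATT&CK entries.

    # Hypothetical event names mapped to real ATT&CK technique IDs.
    ATTACK_MAP = {
        "ConsoleLoginWithoutMFA": ("T1078", "Valid Accounts"),
        "MassDownloadFromStorage": ("T1530", "Data from Cloud Storage Object"),
        "NewTrustedOAuthApp": ("T1098", "Account Manipulation"),
        "RepeatedFailedLogins": ("T1110", "Brute Force"),
    }

    def tag_event(event_name: str) -> str:
        """Translate a raw event into the SOC's common ATT&CK vocabulary."""
        technique = ATTACK_MAP.get(event_name)
        if technique is None:
            return f"{event_name}: unmapped -- route to analyst triage"
        technique_id, technique_name = technique
        return f"{event_name}: {technique_id} ({technique_name})"

    print(tag_event("MassDownloadFromStorage"))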

The original ATT&CK framework applied to Windows/Mac/Linux environments, with Android/iOS included as well. For cloud environments, the MITRE ATT&CK framework has a shorter history (released October 2019), but it is quickly gaining adoption as the model for cloud threat investigation.

In collaboration with the University of California Berkeley’s Center for Long-Term Cybersecurity (CLTC) and MITRE, we sought to uncover how enterprises investigate threats in the cloud, with a focus on MITRE ATT&CK. In this initiative, researchers from UC Berkeley CLTC conducted a survey of 325 enterprises in a wide range of industries, with 1K employees or above, split between the US, UK, and Australia. The Berkeley team also conducted 10 in-depth interviews with security leaders in various cybersecurity functions.  

Findings 

MITRE has done an excellent job identifying and categorizing the adversary tactics and techniques used in the cloud. On average, 81% of our survey respondents had experienced each of the tactics in the Cloud Matrix, and 58% had experienced the initial access phase of an attack at least monthly.

Given the frequency in which most enterprises experience these adversary tactics and techniques, we found widespread adoption of the ATT&CK Cloud Matrix, with 97% of our respondents either planning to or already using the Matrix. 

In the full report, we explore deeper implications of using MITRE ATT&CK for Cloud, including consensus on the value it brings to enterprise organizations, challenges with implementation, and many more interesting results from our investigation. Head to the full report here to dive in.  

One of the most promising benefits of MITRE ATT&CK is the unification of events derived from endpoints, network traffic, and the cloud together into a common language. Right now, only 39% of enterprises correlate events from these three environments in their threat investigation. Further adoption of MITRE ATT&CK over time will unlock the ability to efficiently investigate attacks that span multiple environments, such as a compromised endpoint accessing cloud data and exfiltrating to an adversary destination. 

This research demonstrates promising potential for MITRE ATT&CK in the enterprise SOC, with downstream benefits for the business. 87% of our respondents stated that adoption of MITRE ATT&CK will improve cloud security in their organization, and another 79% stated that it would also make them more comfortable with cloud adoption overall. A safer transition to cloud-based collaboration and app development can accelerate businesses, a subject we’ve investigated in the past.2 MITRE ATT&CK can play a key role in secure cloud adoption, and in the defense of the enterprise overall.

Dive into the full research report for more on these findings! 

White Paper

MITRE ATT&CK® as a Framework for Cloud Threat Investigation

81% of enterprise organizations told us they experience the adversary techniques identified in the MITRE ATT&CK for Cloud Matrix – but are they defending against them effectively?

Download Now

 

1https://www.mcafee.com/enterprise/en-us/forms/gated-form.html?docID=3804edf6-fe75-427e-a4fd-4eee7d189265&eid=LAVVPBCF  

2https://www.mcafee.com/enterprise/en-us/forms/gated-form.html?docID=75e3a9dc-793e-488a-8d8a-8dbf31aa5d62&eid=5PES9QHP 

The post MITRE ATT&CK for Cloud: Adoption and Value Study by UC Berkeley CLTC appeared first on McAfee Blogs.

Top 10 Microsoft Teams Security Threats

By Nigel Hawthorn

2020 has seen cloud adoption accelerate, with Microsoft Teams as one of the fastest growing collaboration apps: McAfee customers’ use of Teams increased by 300% between January and April 2020. When we looked into Teams use in more detail in June, we found these statistics, on average, across our customer base:

 

  • Teams created: 367
  • Members added to teams: 6,526
  • Teams meetings held: 106,000
  • 3rd-party apps added to Teams: 185
  • Guest users added to Teams: 2,906

This means that a typical enterprise has a new guest user added to its teams every few minutes. You wouldn’t allow unknown people to walk into an office, straight past security, and wander the building unescorted looking at papers sitting on people’s desks; at the same time, you do want to admit the guests you trust. Teams needs the same controls: allow in the guests you trust, but confirm their identity and make sure they don’t see confidential information.

Microsoft invests huge amounts of time and money in the security of its systems, but the security of the data in those systems, and of how users use them, is the responsibility of the enterprise.

The breadth of options, including inviting guest users and integrating with third-party applications, can be the Achilles heel of any collaboration technology. It takes just seconds to add an external third party into an internal discussion without realizing the potential for data loss, so the risk of misconfiguration, oversharing, or misuse can be large.

IT security teams need the ability to manage and control use to reduce risk of data loss or malware entering through Teams.

After working with hundreds of enterprises and over 40 million MVISION Cloud users worldwide and discussing with IT security, governance and risk teams how they address their Microsoft Teams security concerns, we have published a paper that outlines the top ten security threats and how to address them.

Microsoft Teams: Top 10 Security Threats

This collaboration potentially increases threats such as data loss and malware distribution. In this paper, McAfee discusses the top threats resulting from Teams use along with recommended actions.
Download Now

A few of the Top 10 Microsoft Teams security threats are below; read the paper for the full list.

  1. Microsoft Teams Guest Users: Guests can be added to see internal/sensitive content. By setting allow and/or block list domains, security can be implemented with the flexibility to allow employees to collaborate with authorized guests via Teams.
  2. Screen sharing that includes sensitive data. Screen sharing is very powerful, but can inadvertently share confidential data, especially if communication applications such as email are showing alerts on the screen.
  3. Access from Unmanaged Devices: Teams can be used on unmanaged devices, potentially resulting in data loss. The ability to set policies for unmanaged devices can safeguard Teams content.
  4. Malware Uploaded via Teams: File uploads from guests or from unmanaged devices may contain malware. IT administrators need the ability to either block all file uploads from unmanaged devices or to scan content when it is uploaded and remove it from the channel, informing IT management of any incidents.
  5. Data Loss Via Teams Chat and File Shares: File shares in Teams can leak confidential data. Data loss prevention technologies with strong sensitive-content identification and sharing-control capabilities should be implemented on Teams chat and file shares (see the sketch after this list).
  6. Data Loss Via Other Apps: Teams App integration can mean data may go to untrusted destinations. As some of these apps may transfer data via their services, IT administrators need a system to discover third-party apps in use, review their risk profile and provide a workflow to remediate, audit, allow, block or notify users on an app’s status and revoke access as needed.
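
Checks like the DLP item above ultimately come down to pattern matching plus validation. As a toy illustration (not McAfee’s detection engine), the sketch below flags card-like numbers in a Teams message and uses a Luhn checksum to cut false positives.

    import re

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # illustrative pattern only

    def luhn_ok(digits: str) -> bool:
        """Luhn checksum used by payment card numbers."""
        total, parity = 0, len(digits) % 2
        for index, char in enumerate(digits):
            digit = int(char)
            if index % 2 == parity:  # double every second digit from the right
                digit *= 2
                if digit > 9:
                    digit -= 9
            total += digit
        return total % 10 == 0

    def message_violates_dlp(message: str) -> bool:
        for match in CARD_PATTERN.finditer(message):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_ok(digits):
                return True
        return False

    print(message_violates_dlp("card: 4111 1111 1111 1111"))  # True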

McAfee has a wealth of experience helping customers secure their cloud computing systems, built around the MVISION Cloud CASB and other technologies. We can advise you on Microsoft Teams security and discuss the possible threats of taking no action. Contact us to let us help you.

Teams is just one of the many applications within the Microsoft 365 suite and it is important to deploy common security controls for all cloud apps. MVISION Cloud provides security for Microsoft 365 and other cloud-based applications such as Salesforce, Box, Workday, AWS, Azure, Google Cloud Platform and customers’ own internally developed applications.

 

The post Top 10 Microsoft Teams Security Threats appeared first on McAfee Blogs.

“Best of Breed” – CASB/DLP and Rights Management Come Together

By Nick Shelly

Securing documents before cloud

Before the cloud, organizations would collaborate and store documents on desktop/laptop computers, email, and file servers. Private cloud use cases, such as accessing and storing documents on intranet web servers and network attached storage (NAS), improved the end-user’s experience. The security model followed a layered approach, where keeping this data safe was just as important as not allowing unauthorized individuals into the building or data center. A directory service sign-in protected your personal computer, and permissions on files stored on file servers assured safe usage.

Enter the cloud

Most organizations now consider cloud services essential to their business. Services like Microsoft 365 (SharePoint, OneDrive, Teams), Box, and Slack are depended upon by all users. The same fundamental security concepts exist; however, many are covered by the cloud service itself. This is known as the “Shared Security Model”: the cloud service provider handles basic security functions (physical security, network security, operations security), but the end customer must correctly give access to data and is ultimately responsible for properly protecting it.

The big difference between the two is that in the first security model, the organization owned and controlled the entire process. In the second, cloud model, the customer owns the controls surrounding the data they choose to put in the cloud. This is the risk that collaborating and storing data in the cloud brings: once documents have been stored in M365, what happens if they are mishandled from that point forward? Who is handling these documents? What if my most sensitive information has left the safe confines of the cloud service; how can I protect it once it leaves? Fundamentally: how can I control data that lives hypothetically anywhere, including areas I do not control?

Adding the protection layers that are cloud-native

McAfee and Seclore have recently extended an integration to address these cloud-based use cases. This integration fundamentally answers the question: if I put sensitive data in a cloud I do not control, can I still protect the data regardless of where it lives?

The solution works like this: it puts guardrails around end-user cloud usage, while also adding significant compliance protections, security operations capabilities, and data visibility for the organization.

Data visibility, compliance & security operations

Once an unprotected sensitive file has been uploaded to a cloud service, McAfee MVISION Cloud Data Loss Prevention (DLP) detects the file upload. Customers can assign a DLP policy to find sensitive data such as credit card data (PCI), customer data, personally identifiable information (PII) or any other data they find to be sensitive.

Sample MVISION Cloud DLP Policy

If data is found to be in violation of policy, the data must be properly protected. For example, if the DLP engine finds PII, rather than let it sit unprotected in the cloud service, the McAfee policy the customer sets should enact some protection on the file. This action is known as a “Response”, and MVISION Cloud will properly show the detection, the violating data, and the actions taken in the incident data. In this case, McAfee will call Seclore to protect the file. These actions can be performed in near real time, or protection can be enacted on data that already exists in the cloud service (on-demand scan).
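
To make the flow concrete, here is a minimal, hypothetical sketch of the detect-then-protect sequence; the PII pattern is a toy, and seclore_protect is a placeholder, since the real integration is configured inside MVISION Cloud rather than called directly like this.

    import re

    PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy SSN-style identifier

    def seclore_protect(file_name: str, policy: str) -> None:
        # Placeholder for the rights-management call made on a DLP "Response".
        print(f"protecting {file_name} under policy {policy}")

    def dlp_response(file_name: str, content: str) -> dict:
        """Scan an uploaded file; if it violates policy, protect it and record an incident."""
        matches = PII_PATTERN.findall(content)
        if not matches:
            return {"file": file_name, "action": "none"}
        seclore_protect(file_name, policy="pii-default")
        return {
            "file": file_name,
            "action": "protected",
            "violations": len(matches),  # surfaced to SecOps as incident data
        }

    print(dlp_response("report.txt", "SSN 123-45-6789"))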

“Seclore-It” – Protection Beyond Encryption

Now that the file has been protected, downstream access to the file is managed by Seclore’s policy engine. Examples of policy-based access could be end-user location, data type, user group, time of day, or any other combination of policy choices. The key principle here is the file is protected regardless of where it goes and enforced by a Seclore policy that the organization sets. If a user accesses the file, an audit trail is recorded to assure that organizations have the confidence that data is properly protected. The audit logs show allows and denies, completing the data visibility requirements.

One last concern: if a file is “lost”, if access must be restricted to files no longer in direct control (such as when a user leaves the company), or if the organization simply wants to update policies on protected files, the policy on those files can be dynamically updated. This addresses a major data loss concern that companies have for cloud service providers and general data use by remote users. Ensuring files are always protected, regardless of scenario, is simple to achieve with Seclore by updating a policy. Once the policy has been updated, even files on a thumb drive stuffed in a drawer are re-protected from accidental or intentional disclosure.

Conclusion

This article addresses several notable concerns for customers doing business in a cloud model. Important/sensitive data can now be effortlessly protected as it migrates to and through cloud services to its ultimate destination. The organization can prove compliance to auditors that the data was protected and continues to be protected. Security operations can track incidents and follow the access history of files. Finally, the joint solution is easy to use and enables businesses to confidently conduct business in the cloud.

Next Steps

McAfee and Seclore partner both at the endpoint and in the cloud as an integrated solution. To find out more and see this solution running in your environment, send an inquiry to cloud@mcafee.com

 

The post “Best of Breed” – CASB/DLP and Rights Management Come Together appeared first on McAfee Blogs.

Data-Centric Security for the Cloud, Zero Trust or Advanced Adaptive Trust?

By Ned Miller

Over the last few months, Zero Trust Architecture (ZTA) conversations have been top-of-mind across the DoD. We have been hearing the chatter at industry events, where conflicting interpretations and varying definitions are shared. In a sense, there is uncertainty around how the security model can and should work. From the chatter, one thing is clear: we need more time. Time for mission owners to settle on a comprehensive, all-inclusive, and acceptable definition of Zero Trust Architecture.

Today, most entities utilize a multi-phased security approach. Most commonly, the foundation (or first step) in the approach is to implement secure access to confidential resources. With the shift to remote and distance work, the question arises: “Are my resources and data safe, and are they safe in the cloud?”

Thankfully, the DoD is in the process of developing a long-term strategy for ZTA. Industry partners, like McAfee, have been briefed along the way. It has been refreshing to see the DoD take the initial steps to clearly define what ZTA is, what security objectives it must meet, and the best approach for implementation in the real-world. A recent DoD briefing states “ZTA is a data-centric security model that eliminates the idea of trusted or untrusted networks, devices, personas, or processes and shifts to a multi-attribute based confidence levels that enable authentication and authorization policies under the concept of least privilege access”.
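
As a toy illustration of multi-attribute confidence levels driving least privilege access, the sketch below invents its own attributes, weights, and thresholds; an operational model would be far richer and continuously evaluated.

    def confidence(attrs: dict) -> float:
        """Combine session attributes into a 0..1 confidence score."""
        score = 0.0
        score += 0.30 if attrs.get("mfa_passed") else 0.0
        score += 0.25 if attrs.get("managed_device") else 0.0
        score += 0.25 if attrs.get("expected_location") else 0.0
        score += 0.20 if attrs.get("normal_behavior") else 0.0
        return score

    def authorize(attrs: dict, sensitivity: str) -> str:
        """Least privilege: more sensitive data demands more confidence."""
        required = {"public": 0.2, "internal": 0.5, "secret": 0.8}[sensitivity]
        return "allow" if confidence(attrs) >= required else "deny"

    # MFA on a managed device is not, by itself, enough for the most sensitive data.
    print(authorize({"mfa_passed": True, "managed_device": True}, "secret"))  # deny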

What stands out to me is the data-centric approach to ZTA. Let us explore this concept a bit further. Conditional access to resources (such as network and data) is a well-recognized challenge. In fact, there are several approaches to solving it, whether the end goal is to limit access or simply segment it. The tougher questions we need to ask (and ultimately answer) are: How do we limit contextual access to cloud assets? What data security models should we consider when our traditional security tools and methods do not provide adequate monitoring? And is securing data, or at least watching user behavior, enough when the data stays within multiple cloud infrastructures or transfers from one cloud environment to another?

Increased usage of collaboration tools like Microsoft 365 and Teams, Slack, and WebEx offers easily relatable examples of data moving from one cloud environment to another. The challenge with this type of data exchange is that the data flows stay within the cloud, following an east-west traffic model. Similarly, would you know if sensitive information created directly in Office 365 were uploaded to a different cloud service? Collaboration tools by design encourage sharing data in real time between trusted internal users and, more recently with telework, even external or guest users. Take, for example, a supply chain partner collaborating with an end user. Trust and conditional access potentially create risk for both parties, inside and outside of their respective organizational boundaries. A data breach, whether intentional or not, can easily occur because of the pre-established trust and access. Few, if any, default protection capabilities prevent this situation without intentional design. Data loss protection, activity monitoring, and rights management all come into question. Clearly, new data governance models, tools, and policy enforcement capabilities for even this simple collaboration example are required to meet the full objectives of ZTA.

So, as the communities of interest continue to refine the definitions of Zero Trust Architecture based upon deployment, usage, and experience, I believe we will find ourselves shifting from a Zero Trust model to an Advanced Adaptive Trust model. Our experience with multi-attribute-based confidence levels will evolve and so will our thinking around trust and data-centric security models in the cloud.

The post Data-Centric Security for the Cloud, Zero Trust or Advanced Adaptive Trust? appeared first on McAfee Blogs.

With No Power Comes More Responsibility

By Rich Vorwaller

You’ve more than likely heard the phrase “with great power comes great responsibility.” Alternatively called the “Peter Parker Principle,” this phrase became well known in popular culture mostly due to the Spider-Man comics and movies, in which Peter Parker is the protagonist. The phrase is so well known today that it has its own article on Wikipedia. The gist of the phrase is that if you’ve been empowered to make a change for the better, you have a moral obligation to do so.

However, what I’ve noticed as I talk to customers about cloud security, especially security for Infrastructure as a Service (IaaS), is a phenomenon I’m dubbing the “John McClane Principle” – the name has been changed to protect the innocent 🙂

The John McClane Principle happens when someone has been given responsibility for fixing something but has not been empowered to make the necessary changes. On the surface this scenario may sound absurd, but I bet many InfoSec teams can sympathize with the problem. The conversation goes something like this:

  • CEO to InfoSec: You need to make sure we’re secure in the cloud. I don’t want to be the next [insert latest breach here].
  • InfoSec to CEO: Yeah, so I’ve looked at how we’re using the cloud and the vast majority of our problems are from a lack of processes and knowledge. We have a ton of teams that are doing their own thing in the cloud, and I don’t have complete visibility into what they’re doing.
  • CEO to InfoSec: Great, go fix it.
  • InfoSec to CEO: Well, the problem is I don’t have any say over those teams. They can do whatever they want. To fix the problem, they’re going to have to change how they use the cloud. We need to get buy-in from managers, but those managers have told me they’re not interested in changing anything because it’ll slow things down.
  • CEO to InfoSec: I’m sure you’ll figure it out. Good luck, and we better not have a breach.

That’s when “with no power comes more responsibility” rings true.

And why is that? The reason is that IaaS has fundamentally changed how we consume IT, and along with it how we scale security. No longer do we submit purchase requests and go through a long, drawn-out process to spin up infrastructure resources. Now anyone with a credit card can spin up the equivalent of a data center within minutes, across the globe.

That agility, however, introduced some unintended changes for InfoSec: in order to scale, cloud security cannot be the sole responsibility of one team. Rather, cloud security must be embedded in process and depends on collaboration among development, architects, and operations. These teams now have a more significant role to play in cloud security, and in many cases are the only ones who can implement the changes needed to enhance it. InfoSec now acts as a Sherpa instead of a gatekeeper, making sure every team is marching at the same, secure pace.

However, as John McClane can tell you, the fact that more teams look after cloud security doesn’t necessarily mean you have a better solution. In fact, having to coordinate across multiple teams with different priorities can make security even more complex and slow you down. Hence the need for a streamlined security solution that facilitates collaboration between developers, architects, and InfoSec, but at the same time provides guardrails so nothing slips through the cracks.

With that, I’m excited to announce our new cloud security service built especially for customers moving and developing applications in the cloud. We call it MVISION Cloud Native Application Protection Platform – or just CNAPP because every service deserves an acronym.

What is CNAPP? CNAPP is a new security service we’ve just announced today that combines solutions from Cloud Security Posture Management (CSPM), Cloud Workload Protection Platform (CWPP), Data Loss Prevention (DLP), and Application Protection into a single solution. Now in beta with a target launch date of Q1 2021, CNAPP is built to provide InfoSec teams broad visibility into their cloud-native applications. For us, the goal wasn’t to slow things down to make sure everything is secure; rather, it was to give InfoSec teams the visibility and context they need for cloud security while allowing dev teams to move fast.

Let me briefly describe what features CNAPP has and list some features that are customer favorites.

CSPM

The vast majority of breaches in IaaS today are due to service misconfigurations. Gartner famously said in 2016 that “95% of cloud security failures will be the customer’s fault.” Just last year, Gartner updated that quote to say “99% of cloud security failures will be the customers’ fault.” I’m waiting for the day when Gartner says “105% will be the customer’s fault.”

Why is the percentage so high? There are multiple reasons, but we hear a lot from our customers that there is a huge lack of knowledge on how to secure new services. Each cloud provider is releasing new services and capabilities at a dizzying pace, with no blockers to adoption. Unfortunately, the industry hasn’t kept pace in building a workforce that knows and understands how best to configure these new services and capabilities. CNAPP provides customers with the ability to immediately audit all cloud services and benchmark those services against best security practices and industry standards like CIS Foundations, PCI, HIPAA, and NIST.
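
For a flavor of what a single CSPM-style check involves, here is a minimal sketch (not CNAPP code) that flags S3 buckets lacking a full public-access block, the kind of item CIS-style benchmarks call for; credentials and region are assumed to come from the environment.

    import boto3
    from botocore.exceptions import ClientError

    def audit_s3_public_access() -> list:
        """Return buckets whose public-access block is missing or incomplete."""
        s3 = boto3.client("s3")
        findings = []
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            try:
                config = s3.get_public_access_block(Bucket=name)[
                    "PublicAccessBlockConfiguration"]
                if not all(config.values()):
                    findings.append(name)
            except ClientError:
                findings.append(name)  # no configuration at all: flag it
        return findings

    for bucket_name in audit_s3_public_access():
        print(f"finding: {bucket_name} is not fully blocking public access")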

Within that audit (we call it a security incident), CNAPP provides detailed information on how to reconfigure services to improve security, but the service also provides the ability to assign the security incident to dev teams with SLAs so there’s no ambiguity on who owns what and what needs to change. All of these workflows can be automated so multiple teams are empowered in near real-time to find and fix problems.

Additionally, CNAPP has a custom policy feature where customers can create policies for identifying risky misconfigurations unique to their environments as well as integrations with developer tools like Jenkins, Bitbucket, and GitHub that provide feedback on deployments that don’t meet security standards.

CWPP

IaaS platforms have become catalysts for open source software (OSS) like Linux (OS), Docker (containers), and Kubernetes (orchestration). The challenge with using these tools is the inherent risk of Common Vulnerabilities and Exposures (CVEs) found in software libraries and of misconfigurations in deploying new services. Another famous Gartner quote is that “70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.” But how does the InfoSec team quickly spot those vulnerabilities and misconfigurations, especially in ephemeral environments with multiple developer teams pushing frequent releases into CI/CD pipelines?

Based on our acquisition of NanoSec last year, CNAPP provides full workload protection by identifying all compute instances, containers, and container services running in IaaS, surfacing critical CVEs and misconfigurations in both repository and production container services, and introducing some new protection features. These features include application allow-listing, OS hardening, and file integrity monitoring, with plans to introduce nano-segmentation and on-prem support soon.

Customer Favorites

We’ve had a great time working jointly with our customers to release CNAPP. I’d like to highlight some of the use cases that have proven to be game changers for our customers.

  • In-tenant DLP scans: many of our customers have legitimate use cases for publicly exposed cloud storage services (sometimes referred to as buckets), but at the same time need to ensure those buckets don’t have sensitive data. The challenge with using DLP for these services is many solutions available in the market copy the data into the vendor’s own environment. This increases customer costs with egress charges and also introduces security challenges with data transit. CNAPP allows customers to perform in-tenant DLP scans where the data never leaves the IaaS environment, making the process more secure and less expensive.
  • MITRE ATT&CK Framework for Cloud: the language of Security Operation Centers (SOC) is MITRE, but there is a lot of nuance in how cloud security incidents fit into this framework. With CNAPP we built an end-to-end process that maps all CSPM and CWPP security incidents to MITRE. Now InfoSec and developer teams can work more effectively together by automatically categorizing every cloud incident to MITRE, facilitating faster responses and better collaboration.
  • Unified Application Security: CNAPP is built on the same platform as our MVISION Cloud service, a Gartner Magic Quadrant Leader for Cloud Access Security Broker (CASB). Customers are now able to get detailed visibility and security control over their SaaS applications along with applications they are building in IaaS with the same solution. Our customers love having one console that provides a holistic picture of application risk across all teams – SaaS for consumers and IaaS for builders.

There are a lot more features I’d love to highlight, but instead I invite you to check out the solution for yourself. Visit https://mcafee.com/CNAPP for more information on our release or request a demo at https://mcafee.com/demo. We’d love to get your feedback and hear how MVISION CNAPP can help you become more empowered and responsible in the cloud.

This post contains information on products, services and/or processes in development. All information provided here is subject to change without notice at McAfee’s sole discretion. Contact your McAfee representative to obtain the latest forecast, schedule, specifications, and roadmaps.

The post With No Power Comes More Responsibility appeared first on McAfee Blogs.

Catch the Most Sophisticated Attacks Without Slowing Down Your Users

By Michael Schneider

Most businesses cannot survive without being connected to the internet or the cloud. Websites and cloud services enable employees to communicate, collaborate, research, organize, archive, create, and be productive.

Yet, the digital connection is also a threat. External attacks on cloud accounts increased by an astounding 630% in 2019. Ransomware and phishing remain major headaches for IT security teams, and as users and resources have migrated outside of the traditional network security perimeter, it’s become increasingly difficult to protect users from clicking on a link or opening a malicious file.

This challenge has increased the tension between two IT mandates: allowing unfettered access to necessary services, while preventing attacks and blocking access to malicious sites. Automation helps significantly, with modern security pipelines blocking about 99.5% of malicious and suspicious activity by filtering known bad files and sites, as well as using sophisticated anti-malware scanning and behavioral analytics.

Security is a lot of work

However, the remaining half of one percent still represents a significant number of sites and potential threats that require time for a team of security analysts to triage. IT managers are therefore faced with the challenge of devising balanced security policies. Many companies default to blocking unknown traffic, but over-blocking of web sites and content can hinder user productivity while creating a surge in help-desk tickets as users attempt to reach legitimate sites that have not yet been classified. On the flip side, web policies that allow access too freely greatly increase the likelihood of serious, business-threatening security incidents.

With a focus on digital transformation, accelerated by the change in work habits and locations during the pandemic, companies need flexible, transparent security controls that enable safe user access to critical web and cloud resources without overwhelming security teams with constant help desk calls, policy changes, and manual triaging. Remote Browser Isolation – if implemented properly – can help achieve this.

While security solutions leveraging URL categorization, domain reputation, antivirus, and sandboxes can stop 99.5% of threats, remote browser isolation (RBI) can handle the remaining unknown events, rather than the common strategy of choosing to rigidly block or allow everything. RBI allows web content to be delivered and viewed in a safe environment, while analysis is conducted in the background. Using RBI, any request to an unknown site or URL that remains suspicious after traversing the web protection defense-in-depth pipeline will be rendered remotely, preventing any impact to a user’s system in the event the content is malicious.
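
Conceptually, the triage decision reduces to something like the toy sketch below (not the product’s logic): the defense-in-depth pipeline disposes of the known-good and known-bad requests, and only the unresolved remainder is rendered in isolation.

    def route_request(verdict: str) -> str:
        """Dispose of known traffic; isolate only what the pipeline can't resolve."""
        if verdict == "known_bad":    # reputation, anti-malware, or sandbox flagged it
            return "block"
        if verdict == "known_good":   # categorized and clean
            return "allow"
        return "isolate"              # unknown: render remotely via RBI

    for verdict in ("known_good", "known_bad", "unknown"):
        print(verdict, "->", route_request(verdict))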

Relying on RBI

Remote browser isolation blocks malicious code from running on an employee’s system just because they clicked a link. The technology will also prevent pages from using unprotected cookies to try and gain access to protected web services and sites. Such protections are particularly important in the age of ransomware, when an inadvertent click on a malicious link can lead to significant damage to a company’s digital assets.

Given the benefits of remote browser isolation, some companies have deployed the technology to render every site. While this can very effectively mitigate security risk, isolating all web and cloud traffic demands considerable computing resources and is prohibitively expensive from a license cost point of view.

By integrating remote browser isolation (RBI) technology directly into our MVISION Unified Cloud Edge (UCE) solution, McAfee integrates RBI with the existing triage pipeline. This means that the rest of the threat protection stack – including global threat intelligence, anti-malware, reputation analysis, and emulation sandboxing – can filter out the majority of threats while only one out of every 200 requests needs to be handled using the RBI. This dramatically reduces overhead. McAfee’s UCE makes this approach dead simple: rather than positioning remote browser isolation as a costly and complicated add-on service, it is included with every MVISION UCE license.

Full Protection for High-Risk Individuals

However, there are specific people inside a company, such as the CEO or the finance department, with whom you cannot take chances. For those privileged users, full isolation from potential internet threats is also available. This approach ensures full virtual segmentation of the user’s system from the internet and shields it against any potential danger, enabling them to use the web and cloud freely and productively.

McAfee’s approach greatly reduces the risk of users being compromised by phishing campaigns or inadvertently getting infected by ransomware; such attacks can incur substantial costs and impact an organization’s ability to operate. At the same time, organizations benefit from a workforce that is freely able to access the web and cloud resources they need to be productive, while IT staff are freed from the burden of rigid web policies and constantly addressing help-desk tickets.

Want to know more? Check out our RBI demonstration.

The post Catch the Most Sophisticated Attacks Without Slowing Down Your Users appeared first on McAfee Blogs.

McAfee Named a Leader in the 2020 Gartner Magic Quadrant for CASB

By McAfee

McAfee MVISION Cloud was the first to market with a CASB solution to address the need to secure corporate data in the cloud. Since then, Gartner has published several reports dedicated to the CASB market, which is a testament to the critical role CASBs play in enabling enterprise cloud adoption. Today, Gartner named McAfee a Leader in the 2020 Gartner Magic Quadrant for Cloud Access Security Brokers (CASB), the fourth time Gartner has evaluated CASB vendors.

Cloud access security brokers have become an essential element of any cloud security strategy, helping organizations govern the use of cloud and protect sensitive data in the cloud. Security and risk management leaders concerned about their organizations’ cloud use should investigate CASBs.

In its fourth Magic Quadrant for Cloud Access Security Brokers, Gartner evaluated eight vendors that met its inclusion criteria. MVISION Cloud, part of the MVISION family of products at McAfee, is recognized as a Leader in the report for the fourth year in a row. To learn more about how Gartner assessed the market and MVISION Cloud, download your copy of the report here.

This year, Gartner followed a highly rigorous process to compile its Gartner Magic Quadrant for Cloud Access Security Brokers (CASB) report, relying on numerous inputs, including these materials from vendors to understand their product offerings:

  • Questionnaire – A 300+ point questionnaire resulting in hundreds of pages of responses
  • Financials – Detailed company financial data covering CASB revenue
  • Documentation – Access to all product documentation
  • Customer Peer Reviews – Gartner encourages customers to submit anonymized reviews via their Peer Insights program. You can read them here.
  • Demo – Covering over 50 Gartner-defined use cases to validate product capabilities

In 2020, McAfee made several updates and additions to its solutions, strengthening its position as an industry expert.

McAfee also received recognition as the only vendor to be named the January 2020 Gartner Peer Insights Customers’ Choice for Cloud Access Security Brokers based on customer feedback and ratings for McAfee MVISION Cloud.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The Gartner Peer Insights Logo is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved

Gartner Peer Insights ‘Voice of the Customer’: Cloud Access Security Brokers, Peer Contributors, 13 March 2020. Gartner Peer Insights reviews constitute the subjective opinions of individual end users based on their own experiences and do not represent the views of Gartner or its affiliates. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.

The post McAfee Named a Leader in the 2020 Gartner Magic Quadrant for CASB appeared first on McAfee Blogs.

SPOTLIGHT: Women in Cybersecurity

By McAfee

There are new and expanding opportunities for women’s participation in cybersecurity globally as women take on leadership roles in greater numbers. In recent years, the international community has recognized the important contributions of women to cybersecurity; however, equal representation of women is nowhere near a reality, especially at senior levels.

The RSA Conference USA 2019 held in San Francisco — which is the world’s largest cybersecurity event with more than 40,000 people and 740 speakers — is a decent measuring stick for representation of women in this field. “At this year’s Conference 46 percent of all keynote speakers were women,” according to Sandra Toms, VP and curator, RSA Conference, in a blog she posted on the last day of this year’s event. “While RSAC keynotes saw near gender parity this year, women made up 32 percent of our overall speakers,” noted Toms.

Forrester also predicts that the number of women CISOs at Fortune 500 companies will rise to 20 percent in 2019, compared with 13 percent in 2017. This is consistent with new research from Boardroom Insiders which states that 20 percent of Fortune 500 global chief information officers (CIOs) are now women — the largest percentage ever.

Research from Cybersecurity Ventures, which first appeared in the media early last year, predicts that women will represent more than 20 percent of the global cybersecurity workforce by the end of 2019. This is based on in-depth discussions with numerous industry experts in cybersecurity and analyzing and synthesizing third-party reports, surveys, and media sources.

Either way, the 20 percent figure is still way too low, and our industry needs to continue pushing for more women in cyber. Heightened awareness on the topic — led by numerous women in cyber forums and initiatives — has helped move the needle in a positive direction.

Live Panel

Women in Cloud and Security – A Panel with McAfee, AWS, and Our Customers

Thursday, November 5, 2020
10am PT | 12pm CT | 1pm ET

Register Now

 

Join McAfee in our Women in Cloud and Security Panel

Please join McAfee, AWS, and our customers to discuss the impact women are having on information security in the cloud. These remarkable women represent multiple roles in cloud and security, from technical leadership through executive management. Can’t make it? This same panel will reconvene later in the year during AWS re:Invent.

 

Meet the speakers:

Alexandra Heckler
Chief Information Security Officer
Collins Aerospace

Alexandra Heckler is Chief Information Security Officer at Collins Aerospace, where she leads a diverse team of cyber strategy and defense experts to protect against cyber threats and ensure regulatory compliance. Prior to joining Collins, Alexandra led Booz Allen’s Commercial Aerospace practice, building and overseeing multi-disciplinary teams to advise C-level clients on cybersecurity and digital transformation initiatives. Her work centered on helping aerospace manufacturers manage the convergence of cyber risk across their increasingly complex business ecosystem, including IT, OT and connected products. Alexandra also helped build and led the firm’s automotive practice, working with OEMs, suppliers and the Auto-ISAC to drive industry-leading vehicle cyber security capabilities. During her first few years at Booz Allen, she supported technology, innovation and risk analysis initiatives across U.S. government clients. Throughout her tenure, she engaged in Booz Allen’s Women in Cyber—a company-wide initiative to attract, develop and retain female cyber talent—and supported the firm’s partnership with the Executive Women’s Forum. She also served as Finance and Audit Chair on the Executive Committee of the newly-founded Space-ISAC. Alexandra holds a B.S. in Foreign Service with an Honors Certificate in International Business Diplomacy, and a M.A. in Communication, Culture and Technology from Georgetown University.

Diane Brown
Sr. Director/CISO of IT Risk Management
Ulta Beauty

Diane Brown is the Sr. Director/CISO of IT Risk Management at Ulta Beauty located in Bolingbrook, IL. In this role, Diane is accountable for the security of the retail stores, cyber-security, infrastructure, security/network engineering, data protection, third-party risk assessments, Directory Services, SOX & PCI compliance, application security, security awareness and Identity Management. Diane has more than three decades of IT experience in the retail environment and has honed her expertise in information technology leadership with a focus on risk management for the past 15 years. She values her strategic alliances with the business focusing on delivery of secure means to deploy new technologies, motivating people and managing an expanding technology portfolio. She holds a Bachelor’s degree in Information Security and CISSP/ISSAP certifications and is a member of the Executive Security Council for NRF and one of the original members of the RH-ISAC.

Elizabeth Moon
Director, Industry Solutions Americas Solutions Architecture & Customer Success
Amazon Web Services

Elizabeth has been with AWS for 5-1/2 years and leads Industry Solutions within the Americas Solutions Architecture and Customer Success organization. Elizabeth’s team of Specialist Solutions Architects provide industry specific depth for customers in the following segments: Games, Private Equity, Media & Entertainment, Manufacturing/Supply Chain, Healthcare Life Sciences, Financial Services, and Retail. They focus on accelerating cloud migration and building customer confidence and capability on the AWS platform through expert, prescriptive guidance on Foundations (Security, Identity, and Networking), Cost Optimization, Developer Experience, Cloud Migrations and Modernization.

Prior to her role at AWS, Elizabeth led the pre-sales Oracle Enterprise Architecture team within Oracle’s North America Public Sector Consulting organization. She helped customers maximize their investment in Oracle technologies, align business initiatives with the right IT solutions, and mitigate risk of implementations, focused on Oracle Engineered Systems, Database, and Infrastructure solutions.

Elizabeth got her start in technology with Metropolitan Regional Information Systems (MRIS), the nation’s largest Multiple Listing Service (MLS) and real estate information provider. She spent 15 years at this small company across multiple functions: DBA, data architect, system administrator, technical program lead, and operations leader. Most notably, she led design, deployment and growth of the patented database behind the Cornerstone Universal Data Exchange.

She earned a bachelor’s degree in International Business from Eckerd College in St. Petersburg, Florida.

Deana Elizondo
Director of Cyber Risk & Security Services
American Electric Power

Deana Elizondo is the Director of Cyber Risk & Security Services at American Electric Power. She has been with AEP for 16 years and has spent the last 11 years in Cybersecurity. Deana’s organization includes Security Ambassadors, Security Education & Regional Support, Data Protection & Privacy, Enterprise Content Management, and Strategy, Risk & Policies. Deana’s passion is growing and developing her leaders and team members, as well as educating the entire AEP workforce on the value and benefits of reducing Security risk.

Aderonke (Addie) Adeniji
Director Information Assurance Office of Cybersecurity
House of Representatives

Addie Adeniji is a seasoned cybersecurity professional with expertise in Federal IT security governance, risk and compliance (GRC). Currently, she serves as the Director of Information Assurance, within the Office of Cybersecurity, for the U.S. House of Representatives. In this role, she oversees Information Assurance standard and process development and directs risk management and audit compliance efforts across the House. Ms. Adeniji works with House staff to identify, evaluate and report risks to ensure the House maintains a strengthened security risk posture. Her past experience includes security consulting within the Federal health (i.e., FDA, NIH, and HHS headquarters) and energy domains.

Brooke Noelke (Moderator)
Senior Enterprise Cloud Security Strategist/Architect
McAfee

Brooke joins McAfee’s Customer Cloud Security Architecture team after leading McAfee IT’s cloud technical architects and business-facing cloud service management efforts, driving McAfee’s cloud transformation and migration of 70% of our applications to the cloud. She’s spent most of her career in technical leadership roles in cloud strategy, architecture and engineering, spanning professional services strategy through IT delivery leadership. She believes cloud services have already rewritten our IT universe, and we’re all just catching up… but that the cloud “easy buttons” we’re handing developers and business functions aren’t as risk-free as commonly assumed. Her mission is to make the secure path the easy path for deploying new products, solutions and intelligence in the cloud, through enablement of organizational change, agile automation and well-designed, reusable cloud security reference architectures.

Source: https://cybersecurityventures.com/women-in-cybersecurity/

The post SPOTLIGHT: Women in Cybersecurity appeared first on McAfee Blogs.

How CASB and EDR Protect Federal Agencies in the Age of Work from Home

By John Amorosi

Malicious actors are increasingly taking advantage of the burgeoning at-home workforce and expanding use of cloud services to deliver malware and gain access to sensitive data. According to an Analysis Report (AR20-268A) from the Cybersecurity and Infrastructure Security Agency (CISA), this new normal work environment has put federal agencies at risk of falling victim to cyber-attacks that exploit their use of Microsoft Office 365 (O365) and misuse their VPN remote access services.

McAfee’s global network of over a billion threat sensors affords its threat researchers the unique advantage of being able to thoroughly analyze dozens of cyber-attacks of this kind. Based on this analysis, McAfee supports CISA’s recommendations to help prevent adversaries from successfully establishing persistence in agencies’ networks, executing malware, and exfiltrating data. However, McAfee also asserts that the nature of this environment demands that additional countermeasures be implemented to quickly detect, block and respond to exploits originating from authorized cloud services.

Read on to learn from McAfee’s analysis of these attacks and understand how federal agencies can use cloud access security broker (CASB) and endpoint threat detection and response (EDR) solutions to detect and mitigate such attacks before they have a chance to inflict serious damage upon their organizations.

The Anatomy of a Cloud Services Attack

McAfee’s analysis supports CISA’s findings that adversaries frequently attempt to gain access to organizations’ networks by obtaining valid access credentials for multiple users’ O365 accounts and domain administrator accounts, often via vulnerabilities in unpatched VPN servers. The threat actor then uses the credentials to log into a user’s O365 account from an anomalous IP address, browses pages on SharePoint sites, and attempts to download content. Next, the threat actor connects multiple times from a different IP address to the agency’s Virtual Private Network (VPN) server until a connection succeeds.

Once inside the network, the attacker could:

  • Begin performing discovery and enumerating the network
  • Establish persistence in the network
  • Execute local command line processes and multi-stage malware on a file server
  • Exfiltrate data

Basic SOC Best Practices

McAfee’s comprehensive analysis of these attacks supports CISA’s proposed best practices to prevent or mitigate such cyber-attacks. These recommendations include:

  • Hardening account credentials with multi-factor authentication,
  • Implementing the principle of “least privilege” for data access,
  • Monitoring network traffic for unusual activity,
  • Patching early and often.

While these recommendations provide a solid foundation for a strong cybersecurity program, these controls by themselves may not go far enough to prevent more sophisticated adversaries from exploiting and weaponizing cloud services to gain a foothold within an enterprise.

Why Best Practices Should Include CASB and EDR

Organizations will gain a running start in identifying and thwarting the attacks in question by implementing a full-featured CASB, such as McAfee MVISION Cloud, and an advanced EDR solution, such as McAfee MVISION Endpoint Threat Detection and Response.

Deploying MVISION Cloud for Office 365 enables agencies’ SOC analysts to assert greater control over their data and user activity in Office 365—control that can hasten identification of compromised accounts and resolution of threats. MVISION Cloud takes note of all user and administrative activity occurring within cloud services and compares it to a threshold based either on the user’s specific behavior or the norm for the entire organization. If an activity exceeds the threshold, it generates an anomaly notification. For instance, using geo-location analytics to visualize global access patterns, MVISION Cloud can immediately alert agency analysts to anomalies such as instances of Office 365 access originating from IP addresses located in atypical geographic areas.
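
To make the threshold mechanic concrete, here is a minimal Python sketch of baseline-versus-threshold anomaly detection on login geography. It illustrates the general technique only; the function names and training data are invented, and MVISION Cloud’s actual analytics are far richer than a per-user country set.

```python
# Illustrative sketch of threshold-based anomaly detection, not MVISION
# Cloud's actual implementation: flag logins whose source country falls
# outside the locations previously observed for that user.
from collections import defaultdict

baseline = defaultdict(set)  # user -> countries seen during a baseline period

def train(events):
    """events: iterable of (user, country) tuples from a known-good period."""
    for user, country in events:
        baseline[user].add(country)

def is_anomalous(user, country):
    # A login from a country never seen for this user exceeds the behavioral
    # baseline and would raise an anomaly notification.
    return country not in baseline[user]

train([("alice", "US"), ("alice", "US"), ("bob", "DE")])
print(is_anomalous("alice", "US"))  # False: matches alice's baseline
print(is_anomalous("alice", "RU"))  # True: anomalous geography for alice
```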

When specific anomalies appear concurrently—e.g., a Brute Force anomaly and an unusual Data Access event—MVISION Cloud automatically generates a Threat. In the attacks McAfee analyzed, Threats would have been generated early on since the CASB’s user behavior analytics would have identified the cyber actor’s various activities as suspicious. Using MVISION Cloud’s activity monitoring dashboard and built-in audit trail of all user and administrator activities, SOC analysts can detect and analyze anomalous behaviors across multiple dimensions to more rapidly understand what exactly is occurring when and to what systems—and whether an incident concerns a compromised account, insider threat, privileged user threat, and/or malware—to shrink the gap to remediation.

In addition, with MVISION Cloud, an agency security analyst can clearly see how each cloud security incident maps to MITRE ATT&CK tactics and techniques, which not only accelerates the entire forensics process but also allows security managers to defend against similar attacks with greater precision in the future.
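
As an illustration of that mapping, the sketch below annotates incident types with ATT&CK technique IDs. The technique IDs cited are real ATT&CK entries, but the incident names and the mapping table are hypothetical, not MVISION Cloud’s internal taxonomy.

```python
# Illustrative mapping of cloud incident types to MITRE ATT&CK techniques.
# The IDs are real ATT&CK entries; the mapping itself is a simplified example.
ATTACK_MAP = {
    "brute_force_login":    ("T1110", "Brute Force"),
    "anomalous_login":      ("T1078", "Valid Accounts"),
    "bulk_sharepoint_pull": ("T1530", "Data from Cloud Storage"),
}

def annotate(incident_type):
    technique = ATTACK_MAP.get(incident_type)
    if technique is None:
        return f"{incident_type} -> unmapped"
    return f"{incident_type} -> {technique[0]} ({technique[1]})"

for inc in ("brute_force_login", "bulk_sharepoint_pull", "unknown_event"):
    print(annotate(inc))
```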

Figure 1. Executed Threat View within McAfee MVISION Cloud

Figure 2. Gap Analysis & Investigations – McAfee MVISION Cloud Policy Recommendations

Furthermore, using MVISION Cloud for Office 365, agencies can create and enforce policies that prevent the uploading of sensitive data to Office 365 or downloading of sensitive data to unmanaged devices. With such policies in place, an attacker’s attempt to exfiltrate sensitive data will be mitigated.

In addition to deploying a CASB, implementing an EDR solution like McAfee MVISION EDR to monitor endpoints centrally and continuously—including remote devices—helps organizations defend themselves from such attacks. With MVISION EDR, agency SOC analysts have at their fingertips advanced analytics and visualizations that broaden detection of unusual behavior and anomalies on the endpoint. They are also able to grasp the implications of alerts more quickly since the information is presented in a format that reduces noise and simplifies investigation—so much so that even novice analysts can analyze at a higher level. AI-guided investigations within the solution can also provide further insights into attacks.

Figure 3. MITRE ATT&CK Alignment for Detection within McAfee MVISION EDR

With a threat landscape that is constantly evolving and attack surfaces that continue to expand with increased use of the cloud, it is now more important than ever to embrace CASB and EDR solutions. They have become critical tools to actively defend today’s government agencies and other large enterprises.

Learn more about the cloud-native, unified McAfee MVISION product family. Get your questions answered by tweeting @McAfee.

The post How CASB and EDR Protect Federal Agencies in the Age of Work from Home appeared first on McAfee Blogs.

Think Beyond the Edge: Why SASE is Incomplete Without Endpoint DLP

By Shlomi Zrahia

The move to a distributed workforce came suddenly and swiftly. In February 2020, less than 40% of companies allowed most of their employees to work from home one day a week. By April, 77% of companies had most of their employees working exclusively from home.

Organizations have been in the midst of digital transformation projects for years, but this development represented a massive test. Most organizations were pleasantly surprised to see that their employees could remain productive while working from home, thanks to successful cloud migration projects and the adoption of various mobility and remote access technologies. At the same time, companies have become more worried that they have far less visibility into data on employees’ systems when they are working remotely. Traditional network DLP can protect data while it is traversing the network up to the corporate edge, but it has little visibility into data once it is outside the corporate network, and its effectiveness is further limited when the workforce is distributed.

Figure 1: Data protection gaps resulting from direct-to-cloud access.

More than three-quarters of CIOs are concerned with the impact that this increased data sprawl is having on security. Despite the fact that roughly half of all corporate data was stored in the cloud last year, only 36% of companies could enforce data protection policies there. Many organizations therefore forced home-based users to hairpin all traffic back to the corporate data center via VPN so that they could be protected by the network data loss prevention (DLP) system. This maintained security, but it came at the cost of poor performance and reduced worker productivity.

Cloud-native security is part of the solution

Organizations that employed cloud-based security technologies like a Cloud Access Security Broker (CASB), DLP, or Secure Web Gateway (SWG) could enable their users to perform their jobs with fast and secure direct-to-cloud access. However, this still leads to headaches: IT organizations have to manage multiple disparate solutions, while users face latency as their traffic bounces between multiple siloed technologies before they can access their data.

The Secure Access Service Edge (SASE) presents a solution to this dilemma by providing a framework for organizations to bring all of these technologies together into a single integrated cloud service. End users enjoy low-latency access to the cloud, while IT management and costs are simplified. So everyone wins, right? Not entirely.

Many SASE proponents posit that the best way to architect a distributed Work From Home environment would be to have all security functionality in the cloud at the “service edge”, while end user devices have only a small agent to redirect traffic to that service edge. However, this model poses a data protection dilemma. While a cloud-delivered service can extend data protection to data centers, cloud applications, and web traffic, there are a number of blind spots:

  • Every remote worker’s home is now a remote office with a range of unmanaged, unsecured devices like printers, storage drives, and peripherals that can be compromised or be used to exfiltrate data.
  • Attached devices like USB keys can be used to get data off of a corporate device and beyond the reach of data protection controls.
  • Cloud applications like Webex, Dropbox, and Zoom all have desktop companion apps that enable actions like file syncing or screen/file sharing; these websocket apps run locally on the user’s system and are not subject to cloud-based data protection policies.

These blind spots can only be addressed by endpoint-based data loss prevention (DLP) that enforces data protection policy on the user’s device. This is not dissimilar to how SASE frameworks rely on SD-WAN customer premises equipment (CPE) that performs essential network flow functionality at branch office locations. Therefore, it’s imperative to look for SASE solutions that include endpoint DLP coverage.
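
As a rough illustration of what an endpoint DLP agent enforces, the sketch below models a pre-transfer policy check. The hook, tag names, and destination types are hypothetical, not McAfee APIs; real agents intercept transfers at a much lower level in the OS.

```python
# Minimal sketch of an endpoint-side device-control check, assuming a
# hypothetical agent hook that fires before a file is written to an attached
# device. Tag and destination names are invented for illustration.
BLOCKED_DESTINATIONS = {"usb_storage", "unmanaged_printer"}
SENSITIVE_TAGS = {"confidential", "pci", "phi"}

def on_file_transfer(file_tags, destination_type):
    """Return 'block' when classified data would leave via a local device."""
    if destination_type in BLOCKED_DESTINATIONS and file_tags & SENSITIVE_TAGS:
        return "block"
    return "allow"

print(on_file_transfer({"confidential"}, "usb_storage"))  # block
print(on_file_transfer({"public"}, "usb_storage"))        # allow
```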

Figure 2: How endpoint DLP uniquely addresses home office security gaps.

Bringing it all together is the key

In principle, addressing the challenges of cloud transformation and the remote workforce means that existing network DLP solutions – with their dedicated management interfaces, data classifications, and policy workflows – would need to be accompanied by similar capabilities in the cloud, and then again on the endpoint. In practice, that’s completely impractical when IT organizations are already struggling to deal with the status quo on finite budgets and limited skilled personnel. Not only is it impractical, but it undermines the consolidation, simplification, and cost reduction promised both by digital transformation and by the SASE framework.

The answer to this dilemma is a comprehensive data protection solution that encompasses networks, devices, and the cloud, something that is uniquely delivered by McAfee MVISION Unified Cloud Edge (UCE). MVISION UCE is a cloud-native solution that seamlessly converges core security technologies such as Data Loss Prevention (DLP), cloud access security broker (CASB), and next-gen secure web gateway (SWG) to help accelerate SASE adoption. MVISION UCE offers multi-vector data protection featuring unified data classification and incident management across the network, sanctioned and unsanctioned (Shadow IT) cloud applications, web traffic, and, equally important, endpoint DLP. This provides corporate information security teams with the necessary visibility, control, and management capability to secure home-based and mobile workers as they access data anywhere.

Figure 3: Unified Multi-Vector Data Protection

To manage data security of a distributed workforce, linking device security to corporate policy becomes extremely important. With a managed DLP agent on the device, IT security can know where sensitive data exists, block untrusted services and removable media, protect data moving to cloud services and desktop apps, and educate employees about potential dangers.

Historically, data protection has focused on a central point like the network or the cloud because implementing it on the device has been difficult. However, with McAfee’s Unified Cloud Edge (UCE), DLP becomes an easy-to-deliver feature.

Centrally managed by McAfee MVISION ePO, McAfee DLP can be easily deployed to endpoints. With its unique device-to-cloud DLP features, on-prem DLP policies can be extended to the cloud with a single click, in under a minute. Shared data classification tags ensure consistent multi-environment protection for your most sensitive data across endpoints, network, and cloud.

Incorporating security into the cloud and the edge, and delivering data protection at the endpoint, is the only way to really deliver on what SASE promises and unlock your remote workforce. Looking to the future, a widely distributed workforce is here to stay. Companies need to take steps to secure devices and data wherever they are.

To find out more, please visit www.mcafee.com/unifiedcloud.

The post Think Beyond the Edge: Why SASE is Incomplete Without Endpoint DLP appeared first on McAfee Blogs.

Securing Containers with NIST 800-190 and MVISION CNAPP

By Sunny Suneja

Government and Private Sector organizations are transforming their businesses by embracing DevOps principles, microservice design patterns, and container technologies across on-premises, cloud, and hybrid environments. Container adoption is becoming mainstream to drive digital transformation and business growth and to accelerate product and feature velocity. Companies have moved quickly to embrace cloud native applications and infrastructure to take advantage of cloud provider systems and to align their design decisions with cloud properties of scalability, resilience, and security first architectures. The declarative nature of these systems enables numerous advantages in application development and deployment, like faster development and deployment cycles, quicker bug fixes and patches, and consistent build and monitoring workflows. These streamlined and well controlled design principles in automation pipelines lead to faster feature delivery and drive competitive differentiation.

As more enterprises adapt to cloud-native architectures and embark on multi-cloud strategies, these demands are changing usage patterns, processes, and organizational structures. However, the unique methods by which application containers are created, deployed, networked, and operated present new challenges when designing, implementing, and operating security systems for these environments. Containers are ephemeral, often too numerous to count, talk to each other across nodes and clusters more than they communicate with outside endpoints, and are typically part of fast-moving continuous integration/continuous deployment (CI/CD) pipelines. Additionally, development toolchains and operations ecosystems continue to present new ways to develop and package code, secrets, and environment variables. Unfortunately, this also compounds supply chain risks and presents an ever-increasing attack surface.

Lacking a comprehensive container security strategy, or simply not knowing where to start, makes it difficult to effectively address the risks these unique ecosystems present. While teams have recognized the need to evolve their security toolchains and processes to embrace automation, it is imperative for them to integrate specific security and compliance checks early into their respective DevOps processes. Legitimate concerns persist about misconfigurations and runtime risks in cloud-native applications, and still too few organizations have a robust security plan in place.

These complex problems led to the development of a special publication from the National Institute of Standards and Technology (NIST): NIST SP 800-190, the Application Container Security Guide. It provides guidelines for securing container applications and infrastructure components, including a sectional review of container fundamentals, the key risks presented by core components of application container technologies, countermeasures, example threat scenarios, and actionable information for planning, implementing, operating, and maintaining container technologies.

MVISION Cloud Native Application Protection Platform (CNAPP) is a comprehensive device-to-cloud security platform for visibility and control across SaaS, PaaS, and IaaS environments. It provides deep coverage of cloud-native security controls that can be implemented throughout the entire application lifecycle. By mapping all the applicable risk elements and countermeasures from Sections 3 and 4 of NIST SP 800-190 to capabilities within the platform, we want to provide an architectural point of reference to help customers and industry partners automate compliance and implement security best practices for containerized application workloads. This mapping and a detailed review of platform capabilities aligned with key countermeasures can be referenced here.

As outlined in one of the supporting charts in the whitepaper, CNAPP has capabilities that effectively address all the risk elements described in the NIST special publication guidance.

While the breadth of coverage is critical, it is worth noting that the most effective way to secure containerized applications is to embed security controls into each phase of the container lifecycle. If we take the Department of Defense’s Enterprise DevSecOps Reference Design guidance as a point of reference, it describes the DevSecOps lifecycle in terms of nine transition stages: plan, develop, build, test, release, deliver, deploy, operate, and monitor.

DevSecOps Software Lifecycle: Referenced in DoD Enterprise DevSecOps Reference Design v1.0 Guidance

The foundational principle of DevSecOps implementations is that the software development lifecycle is not a monolithic linear process. The “big bang” style delivery of the Waterfall SDLC is replaced with small but more frequent deliveries, so that it is easier to change course as necessary. Each small delivery is accomplished through a fully automated or semi-automated process with minimal human intervention to accelerate continuous integration and delivery. The DevSecOps lifecycle is adaptable and has many feedback loops for continuous improvement.

Specific to containerized applications and workloads, a more abstract view of a container’s lifecycle spans three high-level phases: Build, Deploy, and Runtime.

Build

The “Build” phase centers on what ends up inside container images in terms of the components and layers that make up an application. Images are usually created by developers, and security efforts in this phase typically focus on reducing business risk later in the container lifecycle by applying best practices and by identifying and eliminating known vulnerabilities early. These assessments can be conducted iteratively in an “inner” loop as developers perform incremental builds and add security linting and automated tests, or can be driven via an “outer” feedback loop informed by operational security reviews and penetration testing efforts.
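
A build-phase gate might look like the following sketch, which fails a CI stage when an image scan report contains critical findings. The report format is an assumption made for illustration; real scanners emit their own schemas.

```python
# Hedged sketch of a build-phase security gate: reject the build when the
# image scan report (format assumed here) contains CRITICAL vulnerabilities.
import json
import sys

MAX_SEVERITY_ALLOWED = "HIGH"  # anything ranked above this fails the build
SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def gate(report_path):
    with open(report_path) as f:
        findings = json.load(f)  # assumed: list of {"cve": ..., "severity": ...}
    worst = max((SEVERITY_RANK[x["severity"]] for x in findings), default=0)
    if worst > SEVERITY_RANK[MAX_SEVERITY_ALLOWED]:
        sys.exit("build rejected: critical vulnerabilities present")
    print("image passed vulnerability gate")

# gate("scan-report.json")  # wired into the CI "build" stage
```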

Deploy

In the “Deploy” phase, developers configure containerized applications for deployment into production. Context grows beyond information about images to include details about configuration options available for orchestrated services. Security efforts in this phase often center on complying with operational best practices, applying least-privilege principles, and identifying misconfigurations to reduce the likelihood and impact of potential compromises.
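
The sketch below conveys the flavor of a deploy-phase posture check, flagging two common least-privilege violations in a parsed Kubernetes pod manifest. The two checks are deliberately minimal examples, not a complete policy set.

```python
# Sketch of a deploy-phase posture check over a Kubernetes pod manifest
# (already parsed into a dict, e.g. from YAML).
def check_pod(manifest):
    issues = []
    for c in manifest.get("spec", {}).get("containers", []):
        sec = c.get("securityContext", {})
        if sec.get("privileged"):
            issues.append(f"{c['name']}: privileged container")
        if sec.get("runAsUser", 1000) == 0:
            issues.append(f"{c['name']}: runs as root")
    return issues

pod = {"spec": {"containers": [
    {"name": "web", "securityContext": {"privileged": True, "runAsUser": 0}}]}}
print(check_pod(pod))  # ['web: privileged container', 'web: runs as root']
```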

Runtime

“Runtime” is broadly classified as a separate phase wherein containers go into production with live data, live users, and exposure to networks that could be internal or external in nature. The primary purpose of implementing security during the runtime phase is to protect running applications as well as the underlying container infrastructure by finding and stopping malicious actors in real time.

Docker containerized application life cycle. 

By applying this understanding of container lifecycle stages to the respective countermeasures that can be implemented and audited within MVISION Cloud, CNAPP customers can establish an optimal security posture and achieve the combined benefits of shift-left and runtime security models. Security assessments are critically important early in planning and design, where important decisions are made about architecture approach, development tooling, and technology platforms, and where mistakes or misunderstandings can be dangerous and expensive. As DevOps teams move their workloads into the cloud, security teams will need to implement best practices that apply operations, monitoring, and runtime security controls across public, private, and hybrid cloud consumption models.

CNAPP first discovers all the cloud-native components mapped to an application, including hosts, IaaS/PaaS services, containers, and the orchestration context that a container operates within. With the use of native tagging and network flow log analysis, customers can visualize cloud infrastructure interactions across compute, network, and storage components. Additionally, the platform scans cloud-native object and file stores to assess the presence of any sensitive data or malware. Depending on the configuration compliance of the underlying resources and data sensitivity, an aggregate risk score is computed per application, which provides detailed context for an application owner to understand risks and prioritize mitigation efforts.

As a cloud security posture management platform, CNAPP provides a set of capabilities that ensure that assets comply with industry regulations, best practices, and security policies. This includes proactive scanning for vulnerabilities in container images and VMs, and ensuring secure container runtime configurations to prevent non-compliant builds from being pushed to production. The same principles apply to orchestrator configurations to help secure how containers get deployed using CI/CD tools. These baseline checks can be augmented with other policy types to ensure file integrity monitoring and configuration hardening of hosts (e.g., no insecure ports or unnecessary services), which helps apply defense-in-depth by minimizing the overall attack surface.

Finally, the platform enforces policy-based immutability on running container instances (and hosts) to help define process-, service-, and application-level whitelists. By leveraging the declarative nature of containerized workloads, threats can be detected during the runtime phase, including any exposure created as a result of misconfigurations, application package vulnerabilities, and runtime anomalies such as execution of a reverse shell or other remote access tools. While segmentation of workloads can be achieved in the build and deploy phases using posture checks for constructs like namespaces, network policies, and container runtime configurations that limit system calls, the same should also be enforced in the runtime phase to detect and respond to malicious activity in an automated and scalable way. The platform defines baselines and behavioral models that can be especially effective for investigating attempts at network reconnaissance, remote code execution due to zero-day application library and package vulnerabilities, and malware callbacks. Additionally, by mapping these threats and incidents to MITRE ATT&CK tactics and techniques, it provides a common taxonomy to cloud security teams regardless of the underlying cloud application or individual component. This helps them extend their processes and security incident runbooks to the cloud, including their ability to remediate security misconfigurations and preemptively address all the container risk categories outlined in NIST 800-190.
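
The whitelist idea can be illustrated in a few lines of Python. The process names and alert handling are invented for illustration; CNAPP derives such baselines from the declared workload rather than from a hand-written set.

```python
# Illustrative runtime immutability check: compare processes observed in a
# running container against the allowlist derived for its image.
ALLOWED = {"nginx", "nginx-worker"}

def audit(observed_processes):
    unexpected = set(observed_processes) - ALLOWED
    for proc in unexpected:
        # e.g. a reverse shell spawned via a package vulnerability
        print(f"ALERT: unexpected process '{proc}' in immutable workload")
    return not unexpected

audit(["nginx", "nginx-worker", "bash"])  # flags 'bash'
```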

The post Securing Containers with NIST 800-190 and MVISION CNAPP appeared first on McAfee Blogs.

3 Reasons Why Connected Apps are Critical to Enterprise Security

By McAfee

Every day, new apps are developed to solve problems and create efficiency in individuals’ lives. Employees are continually experimenting with new apps to enhance productivity and simplify complex matters. When in a pinch, using Dropbox to share large files or an online PDF editor for quick modifications is common among employees. However, these apps, although useful, may not be sanctioned or observable by an IT department. The rapid adoption of this process, while bringing the benefit of increased productivity and agility, also raises the “shadow IT” problem, where IT has little to no visibility into the cloud services that employees are using or the risk associated with those services. Without visibility, it becomes very difficult for IT to manage both cost expenditure and risk in the cloud. Per the McAfee Cloud Adoption and Risk report, the average enterprise today uses 1,950 cloud services, of which less than 10% are enterprise ready. To avert a data breach (the average cost of a data breach in the US being $7.9 million), enterprises must exercise governance and control over their unsanctioned cloud usage. Does this sound all too familiar? These are many of the issues we faced with shadow IT, and we are facing a similar security risk today with connected apps.

What are connected apps? Collaboration platforms such as Office 365 enable teams and end users to install and connect third-party apps or create their own custom apps to help solve new and existing business problems. For example, Microsoft hosts the Microsoft Store, where end users can browse through thousands of apps and install them into their company’s Office 365 environment. These apps augment native Microsoft Office capabilities and help increase end-user productivity. Examples include WebEx to set up meetings from Outlook or the Survey Monkey add-in to initiate surveys from Microsoft Teams. When these apps are added, they will often ask the end user to authorize access to their cloud app resources. This could be data stored in the app, as in SharePoint, or calendar information or email content. Authorizing access to third-party apps creates concerns for many organizations.

Reason 1: Risky Data Exfiltrated to 3rd Party Apps 

What if the app itself is risky? For example, PDF converter apps ask for access to all data so they can generate PDF versions for sharing; corporate data moves out of the corporate cloud app into these risky applications. Even if the app is not risky, it may be accessing cloud resources such as mail, drive, or calendar, which contain data the company considers highly sensitive. For example, the Evernote app for Outlook can be used for saving email data. The app itself is not risky, but the company may not have approved it for employees to use. If that is the case, introducing apps in this manner amounts to exfiltration of corporate data.
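
To illustrate the kind of review this calls for, the sketch below flags connected apps that request over-broad access. The scope strings follow Microsoft Graph naming conventions, but the app inventory format and the risk rules are invented for illustration, not MVISION Cloud’s actual logic.

```python
# Hedged sketch of reviewing connected-app grants for over-broad access.
BROAD_SCOPES = {"Files.ReadWrite.All", "Mail.ReadWrite", "Sites.FullControl.All"}

connected_apps = [
    {"name": "PDF Converter", "scopes": {"Files.ReadWrite.All"}},
    {"name": "Meeting Scheduler", "scopes": {"Calendars.Read"}},
]

for app in connected_apps:
    risky = app["scopes"] & BROAD_SCOPES
    verdict = f"REVIEW (broad scopes: {', '.join(sorted(risky))})" if risky else "ok"
    print(f"{app['name']}: {verdict}")
```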

Reason 2: No Coverage with Existing Controls 

Connected apps establish a cloud-to-cloud connection with your sanctioned cloud services that is not visible to existing network policies and controls. So, even if a company has put controls in place on the web gateway or firewall to block unauthorized file sharing services, it is still possible for employees to add a connected app from the marketplace and bypass those controls. Even API-based DLP policies do not apply to data moving into connected apps. All of this means that organizations need to exercise more oversight and control over their employees’ usage of connected apps.

Reason 3: Shared Responsibility 

The shared responsibility model applies to connected apps as well. Cloud services like Google and Microsoft provide a marketplace for customers to add apps, but they expect companies to take responsibility for their data and users and to ensure that the usage of these connected apps is in line with security and compliance policies.

MVISION Cloud provides comprehensive security through visibility, control, and the ability to troubleshoot third-party applications connected to sanctioned cloud services, such as these marketplace apps. With a database of over 30,000 cloud services, MVISION Cloud provides comprehensive and up-to-date information on connected apps plugged into corporate cloud services such as Microsoft 365 and G Suite. Customers can use this visibility to apply controls to block, allow, or selectively allow apps for some users. As large organizations deploy connected apps to hundreds of thousands of users, MVISION Cloud also provides troubleshooting tools to track activities and add notes, allowing for quick diagnosis and resolution of support issues. To learn more, see the brief video below, which provides a deeper look into securing connected apps with MVISION Cloud.

The post 3 Reasons Why Connected Apps are Critical to Enterprise Security appeared first on McAfee Blogs.

Finally, True Unified Multi-Vector Data Protection in a Cloud World

By Suhaas Kodagali

This week, we announced the latest release of MVISION Unified Cloud Edge, which included a number of great data protection enhancements. With working patterns and data workflows dramatically changed in 2020, this release couldn’t be more timely.

According to a report by Gartner earlier in 2020, 88% of organizations have encouraged or required employees to work from home. And a report from PwC found that corporations have, by and large, termed the 2020 remote work effort a success. Many executives are reconfiguring office layouts to cut capacity by half or more, indicating that remote work is here to stay as a part of work life even after we come out of the restrictions placed on us by the pandemic.

Security teams, scrambling to keep pace with the work from home changes, are grappling with multiple challenges, a key one being how to protect corporate data from exfiltration and maintain compliance in this new work from home paradigm. Employees are working in less secure environments and using multiple applications and communication tools that may not have been permitted within the corporate environment. What if they upload sensitive corporate data to a less than secure cloud service? What if employees use their personal devices to download company email content or Salesforce contacts?

McAfee’s Unified Cloud Edge provides enterprises with comprehensive data and threat protection by bringing together its flagship secure web gateway, CASB, and endpoint DLP offerings into a single integrated Secure Access Service Edge (SASE) solution. The unified security solution offered by UCE features unified data classification and incident management across the network, sanctioned and unsanctioned (Shadow IT) cloud applications, web traffic, and endpoints, thereby covering multiple key exfiltration vectors.

UCE Protects Against Multiple Data Exfiltration Vectors

1. Exfiltration to High Risk Cloud Services

According to a recent McAfee report, 91% of cloud services do not encrypt data at rest and 87% of cloud services do not delete data upon account termination, allowing the cloud service to own customer data in perpetuity. McAfee UCE detects the usage of risky cloud services using over 75 security attributes and enforces policies, such as blocking all services with a risk score over 7, which helps prevent exfiltration of data into high-risk cloud services.
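
The policy described above reduces to a simple threshold check, sketched below. The service records and scores are invented; in practice, the risk score comes from McAfee’s registry of security attributes.

```python
# Minimal sketch of the risk-score policy described above.
RISK_THRESHOLD = 7

def enforce(service):
    return "block" if service["risk_score"] > RISK_THRESHOLD else "allow"

for svc in ({"name": "known-collab-suite", "risk_score": 3},
            {"name": "no-name-file-share", "risk_score": 9}):
    print(svc["name"], "->", enforce(svc))
```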

2. Exfiltration to permitted cloud services

Some cloud services, especially the high-risk ones, can be blocked. But there are others that may not be fully sanctioned by IT but fulfill a business need or improve productivity, and thus may have to be allowed. To protect data while enabling these services, security teams can enforce partial controls, such as allowing users to download data from these services but blocking uploads. This way, employees remain productive while company data remains protected.

3. Exfiltration from sanctioned cloud services

Digital transformation and cloud-first initiatives have led to significant amounts of data moving to cloud data stores such as Office 365 and G Suite. So, companies are comfortable with sensitive corporate data living in these data stores but are worried about it being exfiltrated to unauthorized users. For example, a file in OneDrive can be shared with an unauthorized external user, or a user can download data from a corporate SharePoint account and then upload it to a personal OneDrive account. MVISION Cloud customers commonly apply collaboration controls to block unauthorized third party sharing and use inline controls like Tenant Restrictions to ensure employees always login with their corporate accounts and not with their personal accounts.

4. Exfiltration from endpoint devices

An important consideration for all security teams, especially given most employees are now working from home, is the plethora of unmanaged devices such as storage drives, printers, and peripherals that data can be exfiltrated into. In addition, services that enable remote working, like Zoom, WebEx, and Dropbox, have desktop apps that enable file sharing and syncing actions that cannot be controlled by network policies because of web socket or certificate pinning considerations. The ability to enforce data protection policies on endpoint devices becomes crucial to protect against data leakage to unauthorized devices and maintain compliance in a WFH world.

5. Exfiltration via email

Outbound email is one of the critical vectors for data loss. The ability to extend and enforce DLP policies on email is an important consideration for security teams. Many enterprises choose to apply inline email controls, while some choose the out-of-band method, which surfaces policy violations in a monitoring mode only.

UCE provides a Unified and Comprehensive Data Protection Offering

Using point security solutions for data protection raises multiple challenges. Managing policy workflows in multiple consoles, rewriting policies, and aligning incident information in multiple security products result in operational overhead and coordination challenges that slow down the teams involved and hurt the company’s ability to respond to a security incident. UCE brings web, CASB, and endpoint DLP into a converged offering for data protection. By providing a unified experience, UCE increases consistency and efficiencies for security teams in multiple ways.

1. Reusable classifications

A single set of classifications can be reused across different McAfee platforms, including ePO, MVISION Cloud, and Unified Cloud Edge. For example, if a classification is implemented to identify Brazilian driver’s license information for DLP policies on endpoint devices, the same classification can be applied to collaboration policies in Office 365 or to outgoing emails in Exchange Online. If the endpoint and cloud were instead secured by two separate products, it would require creating disparate classifications and policies on both platforms and then ensuring the two policies have the same underlying regex rules to keep policy violations consistent. This increases operational complexity and overhead for security teams.
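
Conceptually, a reusable classification is a single definition evaluated against content from every vector, as in this sketch. The driver’s-license pattern is deliberately simplified to eleven digits for illustration; a production classification would add validation logic and context keywords.

```python
# Sketch of one classification definition reused across vectors. The same
# rule evaluates content from an endpoint file, a OneDrive document, or an
# outbound Exchange Online message body.
import re

CLASSIFICATIONS = {"brazil_drivers_license": re.compile(r"\b\d{11}\b")}

def classify(text):
    return [name for name, rx in CLASSIFICATIONS.items() if rx.search(text)]

print(classify("CNH: 12345678901"))   # ['brazil_drivers_license']
print(classify("no sensitive data"))  # []
```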

2. Converged incident infrastructure

Customers using MVISION Cloud have a unified view of cloud, web, and endpoint DLP incidents in a single console. This can be extremely helpful in scenarios where a single exfiltration act by an employee is spread across multiple vectors. For example, an employee attempts to share a company document with his personal email address, and then tries to upload it to a shadow service like WeTransfer. When both of these attempts fail, he uses a USB drive to copy the document from his office laptop. Each of these attempts fires an incident, but a view consolidated around the file gives your admins a fuller perspective, and possibly a different remediation action, than trying to parse these incidents from separate solutions.
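
The consolidation idea can be sketched as grouping incidents by file hash, so the three attempts above surface as one case rather than three unrelated alerts. The incident records below are invented for illustration.

```python
# Illustrative consolidation of DLP incidents by file hash, so one
# exfiltration attempt spread across vectors appears as a single case.
from collections import defaultdict

incidents = [
    {"file_sha256": "ab12...", "vector": "email",         "action": "blocked"},
    {"file_sha256": "ab12...", "vector": "web/shadow-IT", "action": "blocked"},
    {"file_sha256": "ab12...", "vector": "endpoint/USB",  "action": "blocked"},
]

cases = defaultdict(list)
for inc in incidents:
    cases[inc["file_sha256"]].append(inc["vector"])

for file_hash, vectors in cases.items():
    print(f"file {file_hash}: {len(vectors)} related incidents via {vectors}")
```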

3. Consistent experience

McAfee data protection platforms provide customers with a consistent experience in creating a DLP policy, whether it is securing sanctioned cloud services, protecting against malware, or preventing data exfiltration to shadow cloud services. Having a familiar workflow makes it easy for multiple teams to create and manage policies and remediate incidents.

As the report from PwC states, the work from home paradigm is likely not going away anytime soon. As enterprises prepare for the new normal, a solution like Unified Cloud Edge enables the security transformation they need to gain success in a remote world.

The post Finally, True Unified Multi-Vector Data Protection in a Cloud World appeared first on McAfee Blogs.

Domain Age as an Internet Filter Criteria

By Jeff Ebeling

Use of “domain age” is a feature being promoted by various firewall and web security vendors as a method to protect users and systems from accessing malicious internet destinations. The concept is to use domain age as a generic traffic-filtering parameter, on the theory that hosts associated with newly registered domains should be either completely blocked, isolated, or treated with high suspicion. This blog will describe what domain age is, how domains are created and registered, what value domain age provides, and how domain age can be used most effectively as a complement to other web security tools.

Domain Age Feature Definition

The sites and domains of the internet are constantly changing and evolving. In the first quarter of 2020, an average of over 40,000 domains were registered per day. If the domain of a target host is known, that domain has a registration date available for lookup from various sources. Domain age is a simple calculation of the time between initial domain registration and the current date.
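
That calculation is straightforward to reproduce, as in this sketch using the third-party python-whois package. Registries differ, so the creation date may come back as a list or be missing entirely.

```python
# Sketch of the domain-age calculation, assuming the third-party
# python-whois package (pip install python-whois).
from datetime import datetime
import whois

def domain_age_days(domain):
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = min(created)
    if created is None:
        return None                # age unknown; treat with suspicion
    return (datetime.now() - created).days

print(domain_age_days("mcafee.com"))  # ~12,000+ days for a 1992 registration
```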

A domain age feature is designed for use in policy control, where an administrator can set a minimum domain age that should be necessary to allow access to a given internet destination. The idea is that since domains are so easy and cheap to establish, new domains should be treated with great care, if not blocked outright. Unfortunately, with most protocols and implementations, domain age policy selection is a binary decision to allow or block. This is not very useful when the ultimate destinations are hosts, subdomains, and destination addresses that can be rapidly activated, changed, and deactivated without ever changing the domain age. As a result, binary security decisions based solely on domain name or domain age will naturally result in both false positives and false negatives that are detrimental to security, user experience, and productivity.

Domain Registration

IANA (Internet Assigned Numbers Authority) is the department of ICANN (Internet Corporation for Assigned Names and Numbers) responsible for managing the registries of protocol parameters, domain names, IP addresses, and Autonomous System Numbers.

IANA manages the DNS root zone and TLDs (Top Level Domains like .com, .org, .edu, etc.) and registrars are responsible for working with the Internet Registry and IANA to register individual subdomains within the top-level domains.

Details of the registration process and definitions can be found on the IANA site (iana.org). Additional details can be found here: https://whois.icann.org/en/domain-name-registration-process. That page includes the following statement:

“In some cases, a person or organization who does not wish to have their information listed in WHOIS may contract with a proxy service provider to register domain names on their behalf. In this case, the service provider is the domain name registrant, not the end customer.”

This means that service providers and end customers are free to register a domain once and reuse, reassign, or sell that domain without changing the registration date or any other registration information. Registrars can and do auction addresses, creating a vast market for domain “squatters and trolls.” An attacker can cheaply purchase an established domain of a defunct business or register a completely new, legitimate-sounding domain and leave it unused for weeks, months, or years. For example, as of this writing airnigeria.com is up for sale on godaddy.com for just $65 USD. The domain airnigeria.com was originally registered in 2003. IANA and the registrars have no responsibility or control over the usage of domains.

Determining Domain Age

Domain age is determined from the domain record in the Internet Registry managed by the registry operator for a TLD (Top Level Domain). Ultimately the registrar is responsible for the establishment of a domain registration and updating related data. The record in the registry will have an original creation date but that date doesn’t change unless the registration for a specific domain expires and the domain name is re-registered. Because of this, domain age is an extremely inaccurate measure of when an individual destination became active.

And what if only the destination IP address is known at the time of the filtering decision? This could be the case for filtering the first packet sent to a specific destination (TCP SYN or first UDP packet of some other network or transport level protocol). One way to get the domain for the destination would be a reverse DNS lookup, but the domain for the host may not match the domain that was originally submitted for resolution, so what value is domain age there?

For example, www.mcafee.com can currently resolve to 172.224.15.98, which reverse resolves to a172-224-15-98.deploy.static.akamaitechnologies.com. While the mcafee.com domain was registered on 1992-08-05, akamaitechnologies.com was registered on 1998-08-18. Both are long-established domains, but just because this destination, in the well-established mcafee.com domain, is hosted on the well-established akamaitechnologies.com domain, this doesn’t provide any indication of when the www.mcafee.com or 172.224.15.98 destination became active, or of the risk of communicating with that IP address. Domain age becomes even less useful when we consider destinations hosted in the public cloud (IaaS and SaaS) using the providers’ domains.
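
The mismatch is easy to demonstrate with the Python standard library alone; the actual addresses and PTR names returned will vary with CDN routing.

```python
# Demonstration of the reverse-lookup mismatch described above.
import socket

addr = socket.gethostbyname("www.mcafee.com")
try:
    rdns_name = socket.gethostbyaddr(addr)[0]
except socket.herror:
    rdns_name = "(no PTR record)"

# The PTR name typically belongs to the CDN provider's domain, not
# mcafee.com, so a domain-age lookup keyed on it measures the wrong domain.
print(addr, "->", rdns_name)
```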

Obtaining the wrong domain and therefore wrong domain age from reverse lookup could be somewhat mitigated by tracking the DNS queries of the client and attempting to map those domains back to the requested destination IP. However, doing this would also be dependent on having full visibility into all DNS requests from the client, and assumes that the destination IP address was determined using standard DNS or by the system providing the domain age filtering.

Challenges with Using Domain Age as a Generic Filter Criteria

Even if the correct domain for the transmission can be established, and the domain age can be accurately retrieved, there are still issues that should be considered.

Registrars are free to maintain, change, and reassign established domains to any customer, and resellers can do the same. This greatly diminishes the usefulness of domain age as a stand-alone filtering parameter because a malicious actor can easily acquire an existing well-established domain with a neutral or even positive reputation. A malicious actor can also register a new domain long before it is put into use as a command and control or attack domain.

Legitimate and perfectly safe sites are constantly being registered and established in many cases within days or even hours of being put into use. When using domain age as filter criteria there will always be a tradeoff between false positive and false negative rates.

It should also be noted that domain age provides little value relative to when an individual hostname record was created within a domain. Well-established domains can have an infinite number of subdomains and individual hosts within those domains, and there is no way to accurately determine hostname age or even when the name was associated with an active IP. All that could possibly be determined is that the destination hostname is part of a domain that was registered at some earlier date.

The bottom line is that domain age is not nearly granular or substantive enough to make a useful filtering decision on its own. However, domain age could provide some limited security value in the complete absence of more specific criteria, provided the false positive and false negative rates associated with the selected recency threshold can be tolerated. Domain age can provide supplemental value when combined with other, more definitive filter criteria, for example protocol, content type, host category, host reputation, host first seen, frequency of host access, and web service attributes.
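
One way to treat domain age as a strictly supplemental signal is a weighted score across several criteria, as in the sketch below. The weights and categories are invented for illustration and are not a recommended policy.

```python
# Hedged sketch of domain age as one weighted signal among several filter
# criteria, rather than a binary allow/block decision on its own.
def risk_score(reputation, category_known, domain_age_days):
    score = 0
    score += 50 if reputation == "malicious" else 0
    score += 20 if not category_known else 0  # unverified destination
    if domain_age_days is not None and domain_age_days < 30:
        score += 15                           # supplemental signal only
    return score

print(risk_score("unknown", category_known=False, domain_age_days=5))    # 35
print(risk_score("unknown", category_known=True, domain_age_days=4000))  # 0
```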

Domain Age in the Context of HTTP/S and Proxy Based Filtering

More specific criteria are always available when the HTTP protocol is in use. HTTP and HTTPS filtering is most effectively handled via explicit or transparent proxy. If the protocol is followed (enforced by the device or service), information cannot be transferred, and a compromise or attack cannot be initiated, until after TCP connection establishment.

Given that the traffic is being proxied, and HTTPS can be decrypted, accurate Fully Qualified Domain Names (FQDNs) for the host, URL path, and URL parameters can be identified and verified by the proxy for use in filtering decisions. The ability to look up information on the FQDN, full URL path, and URL parameters provides much more valuable information about the history, risk level, and usage of the specific site, destination, and service, independent of the domain or the domain’s date of registration. Such contextual data can be further enhanced when the proxy associates the request with a specific service and its data security attributes (such as type of service, intellectual property ownership, breach history, etc.).

Industry-leading web proxy vendors maintain extensive and comprehensive databases of the most frequently used sites, domains, applications, services, and URLs. The McAfee Global Threat Intelligence and Cloud Registry databases associate sites, domains, and URLs with geolocation, category, service, service attributes, applications, data risk reputations, threat reputations, and more. As a side benefit, lack of an entry in the databases for a specific host, domain, service, or URL is an extremely strong, and much more accurate, indication that the site is newly established or little used and therefore should not be inherently trusted. Such sites should be treated with caution and blocked, coached, or isolated (the latter two options are uniquely available with proxied HTTP/S) based on that criterion alone, regardless of domain age.

McAfee’s Unified Cloud Edge provides all of the above functionality and includes remote browser isolation (RBI) for uncategorized, unverified, and otherwise risky sites. This virtually eliminates the risks of browsers or other applications accessing uncategorized sites, without adding the complications of false positives and false negatives from a domain age filter.

When using HTTP/S, hostname age, or even first and/or last hostname seen date could provide additional value, but domain age is pretty much useless when the FQDN and more specific site or service related information is available. Best practice is to block, isolate, or at a minimum, coach unverified sites and services without regard to domain age. Allowing unverified sites or services based on domain age adds significant risk of false negatives (risky sites and services being allowed simply because the domain was not recently registered). Generically blocking sites and services based on domain age alone would lead to over-blocking sites that have established good reputations and should not be blocked.

Conclusion

Domain age can be somewhat useful for supplementing filter decisions in situations where no other more accurate and specific information is available about the destination of a network packet. When considering use of domain age for HTTP/S filtering, it is an extremely poor substitute for a more comprehensive threat intelligence and service database. If the decision is made to deviate from best practice and allow HTTP/S connections to unverified sites, without isolation, then domain age can provide limited supplemental value by blocking unverified sites that are in newly registered domains. This comes at the expense of a false sense of security and much greater risk of false negatives when compared to the best practice of using comprehensive web threat intelligence, performing thorough request and response analysis, and simply blocking, isolating, or coaching unverified sites.

The post Domain Age as an Internet Filter Criteria appeared first on McAfee Blogs.

The Fastest Route to SASE

By Robert Arandjelovic

Shortcuts aren’t always the fastest or safest route from Point A to Point B. Providing faster “direct to cloud” access for your users to critical applications and cloud services can certainly improve productivity and reduce costs, but cutting corners on security can come with huge consequences. The Secure Access Service Edge (SASE) framework shows how to achieve digital transformation without compromising security, but organizations still face a number of difficult choices in how they go about it. Now, McAfee can help your organization take the shortest, fastest, and most secure path to SASE with its MVISION Unified Cloud Edge solution delivered alongside SD-WAN.

Decision makers seek a faster, more efficient high road to cloud and network transformation without compromising security. The need for speed and scalability is crucial, but corners cannot be cut when it comes to maintaining data and threat protection. Safety and security cannot be left behind in a cloud of transformation dust. This blog will look at the major trends driving SASE adoption, and will then discuss how a complete SASE deployment can deliver improved performance, superior threat & data security, lower complexity, and cost savings. We’ll then explain why fast AND secure cloud transformation requires an intelligent, hyperscale platform to accelerate SASE adoption.

Dangerous Detours, Potholes, and Roadblocks

While digital transformation promises substantial gains in productivity and efficiencies, the journey is littered with security and efficiency challenges that can detour your organization from its desired upgrades and safe destination.

Digital transformation challenges that must be addressed include:

  • The Big Shift – Shifting your organization’s applications and data out of corporate data centers and into the cloud.
  • Going More Mobile – The proliferation of mobile devices leaves your corporate resources more vulnerable as they are being accessed by a growing number of devices many of which are personally owned and unmanaged.
  • Work from Anywhere – The seemingly permanent shift towards “Work from Home” creates increased demand for more efficient distributed access to cloud-based corporate resources, while preserving visibility and control amidst the eroding traditional network perimeter.
  • Costly Infrastructure – MPLS connections, VPN concentrators, and huge centralized network security infrastructure represent major investments with significant operational expense. The fact that multiple security solutions typically operate in distinct siloes compounds management effort and costs.
  • Slow Performance, High Latency, and Low Productivity – Dedicated MPLS and VPN lines are also slow and architecturally inefficient, requiring all traffic to go to the data center for security and then all the way back out to internet resources – NOT a straight line.
  • Data Vulnerability – Data resides and moves completely outside the scope of perimeter security through collaboration from the cloud to third parties, between cloud services, and access by unmanaged devices, leaving it prone to incidents without security teams knowing.
  • Evolving Threats and Techniques – Staying ahead of the latest malware remains a priority, but many modern attacks are emerging that use techniques like social engineering to exploit the features of cloud providers and mimic user behavior with legitimate credentials. Detecting these seemingly legitimate behaviors is extremely difficult for traditional security tools.
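
To make the backhaul penalty concrete, here is a back-of-envelope latency model in Python. Every number is an illustrative assumption, not a measurement: a branch user reaching a SaaS application through a distant data-center security stack versus through a nearby cloud security POP.

```python
# Illustrative round-trip latency model (all values are assumed, in ms).
BRANCH_TO_DATACENTER = 35   # MPLS/VPN leg to the corporate security stack
DATACENTER_TO_SAAS   = 40   # data center out to the SaaS provider
BRANCH_TO_POP        = 5    # nearby cloud security POP (peered connectivity)
POP_TO_SAAS          = 18   # POP's optimized path to the SaaS provider

backhauled      = 2 * (BRANCH_TO_DATACENTER + DATACENTER_TO_SAAS)  # 150 ms RTT
direct_to_cloud = 2 * (BRANCH_TO_POP + POP_TO_SAAS)                # 46 ms RTT

print(f"Backhauled RTT:       {backhauled} ms")
print(f"Direct-to-cloud RTT:  {direct_to_cloud} ms")
print(f"Saved per round trip: {backhauled - direct_to_cloud} ms")
```

Multiply that saving across every request a chatty web application makes, and the productivity case for direct-to-cloud access becomes obvious.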

Feel the Need for Safe, But Less Costly Speed

The increasingly difficult challenge of providing a fast and safe cloud environment to an increasingly distributed workforce has become a major detour in the drive to transform from traditional enterprise networks and local data centers. Companies have had to meet the challenge to “adapt or die” in connecting their employees and devices to corporate resources, but many have generally needed to choose between two unsatisfactory compromises: secure but slow and expensive, or fast and affordable but not secure. Adopting a SASE framework is the way to achieve all of the benefits of cloud transformation without compromise:

  • Reduction in Cost and Complexity – A great benefit for your SOC and IT teams, SASE promotes a network transformation that simplifies your technology stack, reducing costs and complexity.
  • Increased Speed and Productivity – Fast, uninterrupted access to applications and data boosts the user experience and improves productivity. SASE provides ubiquitous, low-latency connectivity for your workforce – even remote workers – via a fast and ubiquitous cloud service, and uses a streamlined “single pass” inspection model that ensures they aren’t bogged down by security.
  • Multi-Vector Data Protection – SASE mandates the protection of data traveling through the internet, within the cloud, and moving cloud to cloud, enabling Zero Trust policy decisions at every control point.
  • Comprehensive Threat Defense – A SASE framework fortifies an organization’s threat defense capabilities for detecting both cloud-native and advanced malware attacks within the cloud and from any web destination.

Selecting the Best Path to Transformation

When network and security decision makers come to the proverbial fork in the road to network transformation, what is the best path that enables fast and affordable access without leading to unacceptable security risk? A recent blog by McAfee detailed four architectural approaches based on the willingness to embrace new technologies and bring them together. After examining the pros and cons of these four paths, the ideal solution to achieve fast, secure, and cost-effective access to web and cloud resources is a SASE model that brings together a ubiquitous, tightly integrated security stack with a robust, direct-to-cloud SD-WAN integrated networking solution. This combination provides a secure network express lane to the cloud, cruising around the latency challenges of slow, expensive MPLS links for connectivity to your applications and resources.

MVISION Unified Cloud Edge (UCE) + SD-WAN: Fast, Furious and Secure

A fast network. Data protection. Threat protection. Speed, security, and safety: turbocharged connectivity throughout a hyperscale cloud network, without compromise.

MVISION UCE is the best framework for implementing a SASE architecture to accelerate digital transformation with cloud services, enabling cloud and internet access from any device while empowering ultimate workforce productivity. MVISION UCE brings SASE’s most important security technologies – Cloud Access Security Broker (CASB), Next-gen Secure Web Gateway (SWG), Data Loss Prevention (DLP), and Remote Browser Isolation (RBI) – together in a single cloud-native hyperscale service edge that delivers single-pass security inspection with ultra-low latency and 99.999% availability.

With MVISION Unified Cloud Edge and our SD-WAN integration partners, you can lead a network transformation that reduces costs and speeds up the user experience by using fast, affordable broadband connections instead of expensive MPLS.

MVISION UCE and SD-WAN transform your network architecture by enabling users to access cloud resources directly, without backhauling through the corporate network over an MPLS or VPN connection. Now users can reach cloud resources directly, and the McAfee cloud infrastructure is so well optimized that they can often access resources even FASTER than if there were no intervening security stack! Read how Peering POPs make negative latency possible in this McAfee White Paper.

Because of the way we’ve delivered our product, MVISION UCE + SD-WAN unleashes SASE’s benefits, with data and threat protection that other vendors can’t match.

Reduction in Cost and Complexity, Increased Speed and Agility

  • The resulting converged cloud service is substantially more efficient than building your own SASE by manually integrating separate cloud-based technologies
  • Minimize inefficient traffic backhauling with intelligent, efficient, and secure direct-to-cloud access
  • Protect remote sites via SD-WAN using industry-standard dynamic IPsec and GRE tunnels, getting office sites to cloud resources faster and more directly than ever before (see the tunnel sketch after this list)
  • Enjoy low latency and unlimited scalability with a global cloud footprint and cloud-native architecture that includes global Peering POPs (Points of Presence) reducing delays
  • With a cloud service delivering 99.999% uptime (maintained service availability) and internet speeds faster than a direct connection, you improve the productivity of your workforce while reducing the cost of your network infrastructure.
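
To illustrate the kind of plumbing those tunnels involve, the sketch below brings up a GRE tunnel from a branch Linux router toward a cloud security POP using the pyroute2 library, then steers internet-bound traffic into it. All addresses are documentation placeholders, and a real SD-WAN appliance automates this (typically preferring IPsec for confidentiality) – treat it as a conceptual sketch, not vendor configuration.

```python
# Conceptual branch-to-POP GRE tunnel (requires Linux, root, pip install pyroute2).
from pyroute2 import IPRoute

BRANCH_WAN_IP = "198.51.100.10"  # placeholder branch router address
POP_IP        = "203.0.113.20"   # placeholder cloud security POP address

ip = IPRoute()
# Create the GRE tunnel interface toward the POP.
ip.link("add", ifname="gre-pop", kind="gre",
        gre_local=BRANCH_WAN_IP, gre_remote=POP_IP, gre_ttl=64)
idx = ip.link_lookup(ifname="gre-pop")[0]
ip.addr("add", index=idx, address="10.255.0.2", prefixlen=30)
ip.link("set", index=idx, state="up")
# Send internet-bound traffic into the tunnel so the POP inspects it first.
ip.route("add", dst="0.0.0.0/0", oif=idx, priority=100)
ip.close()
```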

Multi-Vector Data Protection

  • The McAfee approach to data protection is unified, meaning each control point works as part of a whole solution.
  • All access points are covered using the same data loss prevention (DLP) engine, giving you an easily traceable path from device to cloud
  • Your data classifications can be set once, and applied in policies that protect the endpoint, web traffic and any cloud interaction
  • All incidents are centralized in one management console for a single view of your data protection practice, giving you a streamlined incident management experience

Comprehensive Threat Defense

  • Intelligence-driven unified protection – CASB, Next-gen SWG, DLP – against the most sophisticated cyberattacks and data loss
  • Remote Browser Isolation (RBI) protection from web-based threats and malware through the remote execution and containment of all browsing activity on a server hosted in the cloud
  • The industry’s most effective in-line emulation sandbox, capable of removing zero-day malware at line speed
  • User and entity behavior analytics (UEBA) monitoring all cloud activity for anomalies and threats to your data

If you are looking for improved productivity and lower costs of cloud transformation without cutting corners, McAfee MVISION UCE offers the fastest route to SASE — without compromising your data and threat security.

 

The post The Fastest Route to SASE appeared first on McAfee Blogs.

Lessons We Can Learn From Airport Security

By Nigel Hawthorn
Remote Learning

Most of us don’t have responsibility for airports, but thinking about airport security can teach us lessons about how we consider, design, and execute IT security in our enterprises. Airports have to stay constantly vigilant against a multitude of threats – terrorists, criminals, rogue employees – and their security defenses need to combat major attacks, individual threats, stowaways, and smuggling, all while looking after the safety of passengers. And none of this can be allowed to stop the smooth flow of travelers, because every delay has knock-on business effects. Whew! And this is just the start.

Airport operators are a lesson in supply-chain and third-party communication. They cooperate with airlines, retailers, and government agencies, and the threats they face can be catastrophic. They also have to handle mundane problems: how to move large numbers of people around quickly, what to do when someone leaves a bag to go shopping, and how to balance risk reduction with traveler comfort. Many needs have to be considered and planned for, and when a risk is identified, execution must be immediate. And all this before thinking about IT-related issues, theft from retailers, employee assessments and training, building safety, people tracking and … the list seems almost endless.

Our business IT security needs might not seem so complex; however, every enterprise has its external and internal attackers: hackers, ransomware, DDoS attacks to take down your systems, and rogue employees – or inadvertent actions by good employees who don’t realize what link they’re clicking on or what data they’re over-sharing. At the same time, the business needs to be able to adopt the newest and most effective apps and systems, and employees hate anything that appears to get in their way.

So, let’s see what airports can teach us about thinking about possible threats and appropriate safeguards to deploy a layered approach that protects your data, users and infrastructure.

Take just one threat – terrorism. US airports apply more than 20 layers of security against it – a mixture of human and technological measures.

There’s no silver bullet – no single piece of security awareness or technology will solve every problem – but when integrated, the layers build on each other to draw a picture of a possible threat. Our defenses shouldn’t rely on just one technology either; when multiple capabilities work together, we can evaluate, identify, and address our security needs.

Here’s my table of some of the needs of an airport and equivalent areas in general IT security. Just as in an airport, individual pieces are of limited benefit unless they are brought together. Even though each item improves overall security, a single management console that can correlate all these pieces of knowledge and suggest or make policy decisions is crucial to ensure you get maximum benefit.

Airport | Enterprise IT
Check ticket against passport | Global SSO and multi-factor authentication for every app (including cloud)
X-ray baggage | Scan attachments for malware
Security gates and hand-baggage check | DLP for confidential data loss control
Facial recognition comparing security gate and plane gate with ticket | Zero trust – keep checking at all times
Baggage weight check | Review email attachments – treat previously unseen executables as suspect
CCTV as passengers move around airport | User behavior analytics for risky behavior
Database of travellers, prior travel, destination information | Logging / analytics
Temperature tests for COVID | Block surfing to high-risk web sites
Visa requirements | Access control to sensitive areas or sensitive data
Check expiry date on passport | Reconfirm credentials after a period
History of prior travel | User behavior analytics to understand “normal traffic” for each individual user and alert on unusual patterns
Open Skies Initiative – sharing data with destination, allowing arrest on landing | Insights to check and implement defences before attacks, based on other organizations’ threats
Landing card (where staying, reason etc.) | Employee justification for actions – feedback loops when challenged
Fingerprints on landing – check against previous travel history | Insights
Security guards, customs agents, check-in staff, people monitoring CCTV | The personal touch – the SOC team investigating threats and defining and implementing policies
Different security lines for additional checks | Remote Browser Isolation
Overall SOC center to correlate all inputs | Global management

 

What have we learned?

Firstly, the job of securing an airport is complex and involves a lot of planning, cooperation with 3rd parties and a vast mixture of people and technology-based security.

Secondly, we cannot rely on one defense, just like airports.

Thirdly, concepts like zero trust, the MITRE ATT&CK framework, and the Cyber Kill Chain all aim to look at threats in the round – we need to examine threats from every angle we can and implement the best technology we can.

The best solutions will be integrated: you need to be able to collate activity patterns to evaluate risks and define defenses. McAfee’s Device to Cloud Suites are designed to bring multiple systems together under one umbrella, letting you accelerate cloud adoption, improve productivity, and manage more than ten different security technologies through McAfee ePO.

 

Device to Cloud Suites

Easy, comprehensive protection that spans endpoints, web, and cloud

Learn more

 

The post Lessons We Can Learn From Airport Security appeared first on McAfee Blogs.

You Don’t Have to Give Up Your Crown Jewels in Hopes of Better Cloud Security

By Rich Vorwaller

If you’re like me, you love a good heist film. Movies like The Italian Job, Inception, and Ocean’s 11 are riveting, but outside of cinema these types of heists don’t really happen anymore, right? Think again. In 2019, the Green Vault Museum in Dresden, Germany reported a jewel burglary worthy of its own film.

On November 25, 2019 at 4am, the Berlin Clan Network started a fire that destroyed the museum’s power box, disabling some of the alarm systems. The clan then cut through iron bars and broke into the vault. Security camera footage published online shows two suspects entering the room with flashlights, crossing a black-and-white-tiled floor. After grabbing 37 sets of jewelry in a couple of minutes, the thieves exited through the window they had entered, replacing the bars to delay detection. They then fled in a car that was later found torched.[1]

Since then, there have been numerous police raids and a couple of arrests, but an international manhunt is still underway and none of the stolen jewels have been recovered. What’s worse, the museum didn’t insure the jewelry, resulting in a $1.2 billion loss. Again, this is a story ripe for Hollywood.

Although we may not read about jewelry heists like this one every day, we do see daily headlines about security breaches resulting in companies losing their own crown jewels – customer data. In fact, the concept of protecting crown jewels is so well known in the cybersecurity industry, that MITRE has created a process called Crown Jewels Analysis (CJA), which helps organizations identify the most important cyber assets and create mitigation processes for protecting those assets.[2] Today exposed sensitive data has become synonymous with cloud storage breaches and there is no shortage of victims.

To be fair, all of these breaches have a common factor: the humans in charge of managing cloud storage misconfigured settings or didn’t enable the correct ones. At the same time, though, we can’t always blame people when security fails. If robbers can so easily access the crown jewels again and again, you can’t keep blaming the security guards. Something is wrong with the system.

Some of the most well-versed cloud native companies like Netflix, Twilio, and Uber have suffered security breaches with sensitive data stored in cloud storage.[3] This has gotten to the point that in 2020, the Verizon Data Breach Report listed Errors as the second highest cause for data breaches due “in large part, associated with internet-exposed storage.”[4]

So why is securing cloud storage services so hard? Why do so many different companies struggle with it? When we talk to our customers and ask what makes protecting sensitive data in the cloud so challenging, many simply don’t know whether they have sensitive data in the cloud, or they struggle with the countless permissions and available overrides for each service.[5] Most have taken the position that someone – an internal employee, a third-party contractor, or a technology partner – will eventually set the wrong permissions on their data, and that they need a solution that continuously checks for sensitive data and prevents it from being accessed, regardless of location or service-level permissions.

Enter the Cloud Native Application Protection Platform (CNAPP). Last month, our new CNAPP service dedicated to securing hybrid cloud infrastructure and cloud-native applications became generally available. One of the core pillars behind CNAPP is Apps & Data – meaning that along with Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) capabilities, CNAPP provides a cohesive Data Loss Prevention (DLP) service.

Figure 1: CNAPP Pillars

Typically, the way security vendors perform DLP scans for cloud storage is by copying down customer data to their platform. They do this because in order to scan for sensitive data, the vendor needs access to your data from a platform that can run their DLP engine. However, this solution presents some challenges:

  • Costs – copying down storage objects means customers incur charges for every bit of data that goes across the wire, including but not limited to request charges, egress charges, and data-transfer charges. For some customers these charges are significant enough that they must pick and choose which objects to scan instead of protecting their entire data store in the cloud.
  • Operational burden – customers who aren’t comfortable sending data over the public internet have to create tunnels or direct connections to vendor solutions. This means additional overhead, architectural changes, and sometimes backhauling large amounts of data across those connections.
  • Defeats the purpose of DLP – this was a lesson learned from our MVISION Cloud DLP scanning: for some customers, performing DLP scans over network connections was convenient, but for others it was a huge security risk. Essentially, these solutions require customers to hand over their crown jewels just to determine whether the data contains crown jewels. Ultimately, we arrived at the conclusion that data should stay local, but DLP policies should be global.

This is where we came up with the concept of in-tenant DLP scanning. It works by launching a small software stack inside the customer’s AWS, Azure, or GCP account. The stack is a headless microservice (called a Micro Point of Presence, or Micro PoP) that pushes workload protection policies out to compute and storage services. The Micro PoP connects to the CNAPP console for management purposes but lets customers perform local DLP scans within each virtual network segment using direct access. No customer data ever leaves the customer’s tenant.
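
Conceptually, an in-tenant scan boils down to reading each object from inside the account and matching it against centrally defined DLP policies, so only verdicts – never content – leave the tenant. The sketch below illustrates that idea with boto3 and a toy regex policy; it is not the Micro PoP implementation, and the bucket name and pattern are placeholders.

```python
# Conceptual in-tenant DLP scan: runs inside the customer's own account,
# so object contents never cross the tenant boundary. Illustrative only.
import re
import boto3

BUCKET = "example-sensitive-bucket"                   # placeholder bucket
SSN_PATTERN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")   # toy "US SSN" policy

s3 = boto3.client("s3")
findings = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        if SSN_PATTERN.search(body):
            # Only the verdict and metadata would be reported to the console.
            findings.append({"key": obj["Key"], "policy": "US SSN"})

print(f"{len(findings)} objects matched the DLP policy")
```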

Figure 2: In-tenant DLP Scanning

Customers can also choose to connect multiple virtual network segments to a single Micro PoP, using services like AWS PrivateLink, if they want to consolidate DLP scans for multiple S3 buckets. There is no capacity limit or license limitation on how many Micro PoPs customers can deploy. CNAPP supports in-tenant DLP scanning for Amazon S3, Azure Blob, and GCP storage today, with on-prem storage coming soon. Lastly, customers don’t have to choose only one deployment model – they can use our traditional DLP scans (called API scans) over network connections, or select in-tenant DLP scans for more sensitive workloads.

In-tenant DLP scanning is just one of the many innovative features we’ve launched with CNAPP. I invite you to check out the solution for yourself. Visit https://mcafee.com/CNAPP for more information or request a demo at https://mcafee.com/demo. We’d love to get your feedback and see how MVISION CNAPP can help your company stay out of the headlines and make sure your crown jewels are right where they should be.

 

Disclaimer: this blog post contains information on products, services and/or processes in development. All information provided here is subject to change without notice at McAfee’s sole discretion. Contact your McAfee representative to obtain the latest forecast, schedule, specifications, and roadmaps.

[1] https://www.dw.com/en/germanys-heist-that-shocked-the-museum-world-the-green-vault-theft/a-55702898

[2] https://www.mitre.org/publications/systems-engineering-guide/enterprise-engineering/systems-engineering-for-mission-assurance/crown-jewels-analysis

[3] https://www.darkreading.com/cloud/twilio-security-incident-shows-danger-of-misconfigured-s3-buckets/d/d-id/1338447

[4] https://enterprise.verizon.com/resources/reports/dbir/

[5] https://www.upguard.com/blog/s3-security-is-flawed-by-design

The post You Don’t Have to Give Up Your Crown Jewels in Hopes of Better Cloud Security appeared first on McAfee Blogs.

New Security Approach to Cloud-Native Applications

By Boubker Elmouttahid

With on-premises infrastructure, securing server workloads and applications involves putting security controls between an organization’s network and the outside world. As organizations migrated workloads to the cloud (“lift and shift”), the same approach was often used. In contrast to lift and shift, many enterprises have realized that to use the cloud efficiently they need to redesign their apps to be cloud-native. Cloud native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model, incorporating the concepts of DevOps, continuous delivery, microservices, and containers.

IDC predicts that “by 2025, nearly two-thirds of enterprises will be prolific software producers with code deployed daily, over 90% of new apps cloud native, 80% of code externally sourced, and 1.6 times more developers.”

Monolithic Apps vs Cloud Native Apps                         

So, how do you ensure the security of your cloud-native applications?

Successfully protecting cloud-native applications requires a combination of security controls working together, managed from one security platform. First, the cloud infrastructure where the cloud-native application runs (containers, serverless functions, and virtual machines) should be assessed for security misconfigurations (security posture), compliance, and known vulnerabilities. Second, securing the workloads themselves needs a different approach. Workloads are becoming more granular, with shorter life spans, as development organizations adopt DevOps-style development patterns. DevOps delivers faster software releases – in some cases several per day. The best way to secure these rapidly changing, short-lived cloud-native workloads is to protect them proactively and build security into every part of the DevOps lifecycle.

Cloud Security Posture Management (CSPM):

The biggest cloud breaches are caused by customer misconfiguration, mismanagement, and mistakes. CSPM is a class of security tools that enables compliance monitoring, DevOps integration, incident response, risk assessment, and risk visualization. It is imperative for security and risk-management leaders to adopt cloud security posture management processes to proactively identify and address data risks.
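
To ground what a posture check looks like, here is a minimal sketch of one classic CSPM control – verifying that every S3 bucket in an account fully blocks public access – written with boto3. A real CSPM service runs hundreds of such checks continuously across clouds; the output format here is illustrative only.

```python
# Minimal CSPM-style check: flag S3 buckets without a full public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        compliant = all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:
        compliant = False  # no configuration at all counts as a finding
    if not compliant:
        print(f"FINDING: bucket {name} does not fully block public access")
```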

Cloud Workload Protection Platforms (CWPP):

CWPP is an agent-based workload-protection technology. It addresses the unique requirements of server workload protection in modern hybrid data-center architectures spanning on-premises physical and virtual machines (VMs) and multiple public cloud infrastructures, including support for container-based application architectures.

 

What is MVISION CNAPP

MVISION CNAPP is the industry’s first platform to bring application and risk context together, converging Cloud Security Posture Management (CSPM) for multiple public cloud infrastructures with Cloud Workload Protection (CWPP) for hybrid, multi-cloud workloads including VMs, containers, and serverless functions. McAfee MVISION CNAPP extends MVISION Cloud’s data protection – both Data Loss Prevention and malware detection – threat prevention, and governance and compliance to comprehensively address the needs of this new cloud-native application world, improving security capabilities while reducing the total cost of ownership of cloud security.

7 Key elements of MVISION CNAPP:

1. Single hybrid, multi-cloud security platform: McAfee MVISION Cloud simplifies multi-cloud complexity by using a single, cloud-native enforcement point. It is a comprehensive cloud security solution that protects enterprise and customer data, assets, and applications from advanced security threats and cyberattacks across multiple cloud infrastructures and environments.

2. Cloud Security Posture Management: McAfee MVISION Cloud provides continuous monitoring of multi-cloud IaaS/PaaS environments to identify gaps between an organization’s stated security policy and its actual security posture. At the heart of CSPM is the detection of cloud misconfiguration vulnerabilities that can lead to compliance violations and data breaches.

3. Deep discovery and risk-based prioritization: You can’t protect what you can’t see. MVISION CNAPP uniquely provides deep discovery of all workloads, data, and infrastructure across endpoints, networks, and clouds, and prioritizes resources based on risk. If you can quickly understand those risks relative to each other, you can prioritize remediation and reduce overall risk as quickly as possible.

4. Shift-left posture and vulnerability management: Moving security into the CI/CD pipeline – making it easy for developers to incorporate into their normal application development processes and ensuring that applications are secure before they are ever published – reduces the chance of introducing new vulnerabilities and minimizes threats to the organization (see the pipeline sketch after this list).

5. Zero Trust policy control: McAfee’s CNAPP solution, supported by CWPP, focuses on Zero Trust network and workload policies. This approach not only gives you analytics about who is accessing your environment and how – an important component of your SOC strategy – but also ensures that people and services have only the permissions appropriate to perform necessary tasks.

6. Unified threat protection: CWPP unifies threat protection across workloads in the cloud and on-premises, including OS hardening, configuration and vulnerability management, application control/allow-listing, and file integrity control. It also brings workload protections and account permissions into the same motion. Finally, by connecting cloud-native application protection to XDR, you gain full visibility, risk management, and remediation across your on-premises and cloud infrastructures.

7. Governance and compliance: The ideal solution for protecting cloud-native applications includes the ability to manage privileged access and address threat protection for both workloads and sensitive data, regardless of where they reside.
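
To illustrate the shift-left idea from item 4, the snippet below sketches a CI gate that fails the build when a container image carries high or critical vulnerabilities. It assumes the open-source Trivy scanner is installed on the build agent; the image name is a placeholder, and this is not MVISION CNAPP’s actual integration.

```python
# Hypothetical CI step: block the pipeline if the image has HIGH/CRITICAL CVEs.
# Assumes the open-source Trivy scanner is available on the build agent's PATH.
import subprocess
import sys

IMAGE = "registry.example.com/payments-api:latest"  # placeholder image

result = subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE]
)
if result.returncode != 0:
    print("Vulnerabilities found -- failing the build before deployment")
    sys.exit(1)
print("Image passed the vulnerability gate")
```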

Business value:

  • One Cloud Security Platform for all your CSPs
  • Scan workloads and configurations in development and protect workloads and configurations at runtime.
  • Better security by enabling standardization and deeper layered defenses.
  • The convergence of CSPM and CWPP

 

IDC FutureScape: Worldwide IT Industry 2020 Predictions

https://www.idc.com/research/viewtoc.jsp?containerId=US45599219

The post New Security Approach to Cloud-Native Applications appeared first on McAfee Blogs.

McAfee Recognised in 2021 Gartner Solution Scorecard Report

By Nigel Hawthorn

Industry analysts perform a huge service in evaluating markets, technology, vendors and sharing their insights with customers via one-on-one discussions and regular publications and events. Gartner publishes Magic Quadrant reports that review a particular market and evaluate vendors for their Completeness of Vision and Ability to Execute.

Gartner also has a separate team of analysts that evaluates single products in greater depth. Their reports review each product or product family across hundreds of criteria and produce a scorecard, key findings and customer recommendations.

We are proud to read the new Solution Scorecard for McAfee MVISION Cloud by Gartner, in which we scored “94 out of 100 against Gartner’s 480-point Solution Criteria for Cloud Access Security Brokers”. MVISION Cloud was the only CASB product to score 94 out of 100 in the 2021 scorecards.

We have licensed it for anyone to read.

We believe, for this review, they evaluated 480 sets of criteria across eleven areas, from architecture and management to functions such as data security, threat protection, and Cloud Security Posture Management. Once each attribute had been reviewed and weighted, MVISION Cloud came out with a blended total score of 94 out of 100.

The framework that they used splits each of the criteria into one of three categories – Required, Preferred and Optional. We are pleased to see that they consider MVISION Cloud provides 97% of the Required functionality.

We have also licensed the Magic Quadrant for Cloud Access Security Brokers report from October 2020 – available here.

 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from McAfee. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner, Solution Scorecard for McAfee MVISION Cloud, 5 April 2021, Sushil Aryal, Dennis Xu, Patrick Hevesi

 

The post McAfee Recognised in 2021 Gartner Solution Scorecard Report appeared first on McAfee Blogs.

Cloud Native Security Approach Comparisons

By Vishwas Manral

Vinay Khanna, Ashwin Prabhu & Sriranga Seetharamaiah also contributed to this article. 

In the cloud, security responsibilities are shared between the Cloud Service Provider (CSP) and enterprise security teams. To enable security teams to provide compliance, visibility, and control across the application stack, CSPs and security vendors have introduced various innovative approaches at different layers. In this blog we compare those approaches and provide a framework for enterprises to think about them.

Overview

Cloud Service Providers are launching new services at a breakneck pace to let enterprise application developers bring new business value to market faster. For each of these services, the CSPs take on more and more of the security responsibility, letting enterprise security teams focus on the application. To provide visibility and security, and to extend existing tools across such diverse and fast-changing environments, CSPs expose logs, APIs, native agents, and other technologies that enterprise security teams can use.

Comparison

There are many different approaches to security, each with its own tradeoffs in the depth of visibility and security provided, ease of deployment, permissions required, cost, and the scale at which it works.

APIs and logs are the best approach for getting started with discovering your cloud accounts and finding anomalous activity of interest to security teams in those accounts. It is easy to access data from various accounts using these mechanisms; security teams need little more than cross-account access to the numerous accounts in the organization. The approach provides great visibility but needs to be complemented with protection approaches.
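
As a minimal illustration of the log/API approach, the sketch below uses boto3 to look up recent CloudTrail events for one sensitive operation. A real tool would assume cross-account roles, page through many accounts, and feed events into anomaly detection; the event name and time window are illustrative.

```python
# Minimal log/API visibility sketch: query recent risky CloudTrail events.
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "DeleteBucket"}],  # illustrative event
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```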

Image and snapshot analysis is a good approach for getting deeper data about workloads, both before the application starts and as it runs. In this method, the image or disk snapshot of the running system is analyzed to detect anomalies, vulnerabilities, configuration incidents, and so on. Snapshots provide deep data about workloads but may not detect memory-resident issues like fileless malware. Also, as we move to ephemeral workloads, periodic snapshot analysis has limited usefulness, and the mechanism cannot work for cloud services from which disk snapshots cannot be obtained. The approach provides deep data but needs to be complemented with protection approaches to be useful.
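
The snapshot step itself is simple; the sketch below uses boto3 to take a point-in-time snapshot of a workload’s volume so it can be mounted and inspected out of band in a security account. The volume ID is a placeholder, and the actual scanning is out of scope here.

```python
# Snapshot a workload volume for out-of-band analysis (placeholder IDs).
import boto3

ec2 = boto3.client("ec2")
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume
    Description="security-scan snapshot",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "purpose", "Value": "offline-malware-and-cve-scan"}],
    }],
)
print("Created", snapshot["SnapshotId"],
      "- share it with the scanning account and mount it to inspect the disk")
```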

Native agents and scripts are a good approach for enabling deeper visibility and control, providing an easy way to extend cloud-native agents such as AWS SSM on a machine. Depending on the functionality, agents can have high resource usage, and native-agent support is limited by the capabilities the CSP provides, such as the operating systems and features supported. In many cases the native agents run commands that log the needed information, which means the logging approach must be working in parallel.

DaemonSet and sidecar containers are an approach for deploying agents easily in container and serverless environments. Sidecars run one container per pod, which provides deep data, but resource usage and cost are high because multiple sidecars end up running on a single server. Sidecars do work in serverless container models, where DaemonSet containers do not. Because the functionality of a sidecar or DaemonSet is similar to that of an agent, many of the agent limitations apply here too.
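
As an illustration of the DaemonSet pattern, this sketch uses the official Kubernetes Python client to run a hypothetical security-agent image on every node in a cluster. The image, namespace, and resource limits are placeholders, and the namespace is assumed to exist.

```python
# Deploy a node-level security agent on every cluster node via a DaemonSet.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

manifest = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "security-agent", "namespace": "security"},
    "spec": {
        "selector": {"matchLabels": {"app": "security-agent"}},
        "template": {
            "metadata": {"labels": {"app": "security-agent"}},
            "spec": {"containers": [{
                "name": "agent",
                "image": "registry.example.com/security-agent:1.0",  # placeholder
                "resources": {"limits": {"cpu": "100m", "memory": "128Mi"}},
            }]},
        },
    },
}
client.AppsV1Api().create_namespaced_daemon_set(namespace="security", body=manifest)
```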

The agent approach provides the deepest visibility into, and the best control over, the environment in which an application runs, by running code co-resident with the application. This approach is harder, however, because security teams need deep discovery capabilities beforehand to know where to deploy the agents. There is also friction in adding agents: one must run on every machine, and security teams rarely have the rights to run software on every machine, especially in the cloud. The resource usage and cost of a solution can be high depending on the use cases supported. Newer technologies like the extended Berkeley Packet Filter (eBPF) reduce agents’ resource usage, making them more palatable for broader use.
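
To show why eBPF makes lightweight agents attractive, here is a tiny sketch using the BCC toolkit: a few lines of Python trace every execve() on a Linux host, with the heavy lifting done safely inside the kernel. It assumes root privileges and the bcc package, and it only prints events – a real agent would correlate them against policy.

```python
# Minimal eBPF visibility sketch (requires Linux, root, and the bcc package).
from bcc import BPF

prog = r"""
int on_exec(void *ctx) {
    bpf_trace_printk("process exec observed\n");
    return 0;
}
"""
b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="on_exec")
print("Tracing execve() calls... Ctrl-C to stop")
b.trace_print()  # stream kernel trace output with negligible overhead
```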

The built-into-image / built-into-code approach builds security into the application image that is deployed. This allows security functionality to ship without deploying an agent alongside each workload, provides deep visibility into the application, and works even for serverless workloads. Compiling security into code, however, adds immense friction: code must be added to the build process, and code libraries need to be available for every application language.

MVISION CNAPP

MVISION Cloud takes a multi-pronged approach to securing applications, enabling security teams to gain control of their cloud environments.

  1. Security teams often lack visibility into their ephemeral cloud infrastructures; MVISION Cloud provides a seamless way in, using cross-account IAM access and then APIs and logs to deliver visibility into cloud environments.
  2. Using the same access, MVISION Cloud can not only audit the configuration of the customer environment but also run image scans to identify vulnerabilities in the components of the workload.
  3. MVISION Cloud can then help identify risk against resources, so security teams can focus on securing the right ones – all without having to deploy an agent.
  4. Then, using approaches like sidecars, DaemonSet containers, and agents, MVISION CNAPP provides deep visibility and protects applications against the most sophisticated attacks with File Integrity Monitoring (FIM), Application Allow Listing (AAL), anti-malware, runtime vulnerability analysis, and hardening checks.
  5. Using the data from all of these sources, MVISION CNAPP assigns a risk score to incidents to help security teams prioritize them and focus on the biggest risks.

Conclusion

The various approaches to security each have their own unique tradeoffs, and no one approach can satisfy all the requirements of the various teams across the diverse set of platforms they support.

At any point in time, different cloud services will be at different levels of adoption maturity. Security teams need to take an incremental approach: start by adopting solutions that are easy to insert and provide a basic guardrail of security and visibility at the start of the service adoption cycle. As applications on a service mature and more high-value apps come online, an approach to security that provides deeper discovery and control will be needed to complement the existing approaches.

No one approach can satisfy every customer use case, and at any time different sets of security solutions will be active. We are headed toward a world of even more diverse security approaches, all of which have to work seamlessly together to help secure the enterprise.

 

The post Cloud Native Security Approach Comparisons appeared first on McAfee Blogs.

Adding Security to Smartsheet with McAfee CASB Connect

By Nick Shelly

The Smartsheet enterprise platform has become an essential part of many organizations, transforming the way customers conduct business and collaborate, with numerous services available to increase productivity and innovation. Within the McAfee customer base, customers had expressed their commitment to Smartsheet but wanted to inject McAfee’s security pedigree to make their Smartsheet environments even stronger.

In June 2021, McAfee MVISION Cloud released support for Smartsheet – providing cornerstone CASB services to Smartsheet through the CASB Connect framework, which makes it possible to provide API-based security controls to cloud services, such as:

  • Data Loss Prevention (find and remediate sensitive data)
  • Activity Monitoring & Behavior Analytics (set baselines for user behavior)
  • Threat Detection (insider, compromised accounts, malicious/anomalous activities)
  • Collaboration Policies (assure sensitive data gets shared properly)
  • Device Access Policies (only authorized devices connect)

How does it work?

Utilizing the CASB Connect framework, McAfee MVISION Cloud becomes an authorized third party to a customer’s Smartsheet Event Reporting service. This is an API-based method for McAfee to ingest event/audit logs from Smartsheet.

These logs contain information about the activities occurring in Smartsheet, and that information has value: McAfee sees user logon activity, sheet creation, user creation, sheet updates, deletions, and so on. Overall, more than 120 unique items are stored in the activity warehouse, where intelligence is inferred from them. When an inference is made (for example, an insider threat), the platform can show all the forensic data that led to that conclusion. This provides value to the Smartsheet customer by surfacing potential threats that could lead to data loss, whether unintended by a well-meaning end user or not.
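
For a feel of what that ingestion looks like, here is a minimal polling sketch against Smartsheet’s Event Reporting endpoint using the requests library. The token is a placeholder, the field names reflect Smartsheet’s published Events API as we understand it, and a production collector would persist the stream position and handle retries.

```python
# Minimal event-ingestion sketch against Smartsheet's Event Reporting API.
import requests

TOKEN = "REPLACE_WITH_SMARTSHEET_ACCESS_TOKEN"  # placeholder credential
URL = "https://api.smartsheet.com/2.0/events"
headers = {"Authorization": f"Bearer {TOKEN}"}
params = {"since": "2021-06-01T00:00:00Z", "maxCount": 100}

while True:
    resp = requests.get(URL, headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    page = resp.json()
    for event in page.get("data", []):
        print(event.get("eventTimestamp"), event.get("objectType"),
              event.get("action"))
    if not page.get("moreAvailable"):
        break
    # Later pages are fetched by stream position rather than by timestamp.
    params = {"streamPosition": page["nextStreamPosition"], "maxCount": 100}
```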

Policies for content detection are another important use case. Most McAfee customers utilize Data Loss Prevention (DLP) across their endpoint devices as well as in the cloud, using policies that matter to them. Examples of DLP policies include uncovering credit card numbers, health records, customer lists, specific intellectual property, price lists, and more. Every customer has some kind of data that is critical to their business, and a DLP policy can be crafted to find it.
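
As a toy example of what sits inside such a policy, the sketch below pairs a card-number regex with the Luhn checksum to weed out false positives – a common first layer in credit card DLP classifiers. Real DLP engines add proximity keywords, issuer-range validation, and data fingerprinting; this is only the core idea.

```python
# Toy credit-card detector: regex candidates validated by the Luhn checksum.
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list:
    return [m.group() for m in CANDIDATE.finditer(text) if luhn_ok(m.group())]

print(find_card_numbers("order ref 4111 1111 1111 1111 shipped"))  # test number
```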

In Smartsheet, when an event captured from the Event Reporting service relates to DLP – a field is updated, a file is uploaded, or a sheet is shared – the DLP service in MVISION Cloud inspects the event. Should the content or sharing violate a policy, an incident is raised with forensic details describing which user performed the action and why the violation was flagged. This is important for customers because it operationalizes security in Smartsheet and the other cloud applications MVISION Cloud protects: the same DLP policies can be applied across all of their critical cloud services, including Smartsheet.

Lastly, MVISION Cloud integrates with the most popular Identity Providers (IdPs). Through standards-based authentication, MVISION Cloud can enforce location and device policies that assure only authorized users and devices connect to Smartsheet; for regulated industries this can be important to ensure no compliance requirements are violated as they conduct business.

Summary

Smartsheet enterprise customers benefit significantly from MVISION Cloud’s support. Visibility into user activity, threats, and sensitive data gives organizations the confidence to further entrench their business processes in a cloud app they want to use. Adding security tooling to an enterprise platform like Smartsheet reduces overall risk and lets organizations depend more deeply on their critical cloud services.

Next Steps:

Trying out Smartsheet and McAfee MVISION Cloud is easy. Contact McAfee directly at cloud@mcafee.com or visit resources related to this blog post:

 

 

The post Adding Security to Smartsheet with McAfee CASB Connect appeared first on McAfee Blogs.
