
The Life Cycle of a Compromised (Cloud) Server

By Bob McArdle

Trend Micro Research has developed a go-to resource for all things related to cybercriminal underground hosting and infrastructure. Today we released the second in this three-part series of reports which detail the what, how, and why of cybercriminal hosting (see the first part here).

As part of this report, we dive into the common life cycle of a compromised server, from initial compromise to the different stages of monetization preferred by criminals. It’s also important to note that criminals don’t care whether the server they compromise is on-premise or cloud-based.

To a criminal, any server that is exposed or vulnerable is fair game.

Cloud vs. On-Premise Servers

Cybercriminals don’t care where servers are located. They can leverage the storage space and computation resources, or steal data, no matter what type of server they access. Whatever is most exposed will most likely be abused.

As digital transformation continues, and potentially accelerates to support ongoing remote work, cloud servers are more likely to be exposed. Unfortunately, many enterprise IT teams are not organized to provide the same level of protection for cloud servers as for on-premise ones.

As a side note, we want to emphasize that this scenario applies only to cloud instances replicating the storage or processing power of an on-premise server. Containers or serverless functions won’t fall victim to this same type of compromise. Additionally, if the attacker compromises the cloud account, as opposed to a single running instance, there is an entirely different attack life cycle, since they can spin up computing resources at will. Although this is possible, it is not our focus here.

Attack Red Flags

Many IT and security teams might not look for the earlier stages of abuse. Before a company gets hit by ransomware, however, there are other red flags that could alert teams to the breach.

If a server is compromised and used for cryptocurrency mining (also known as cryptomining), this can be one of the biggest red flags for a security team. The discovery of cryptomining malware running on any server should result in the company taking immediate action and initiating an incident response to lock down that server.

This indicator of compromise (IOC) is significant because while cryptomining malware is often seen as less serious than other malware types, it is also used as a monetization tactic that can run in the background while server access is being sold for further malicious activity. For example, access could be sold for use as a server for underground hosting. Meanwhile, the data could be exfiltrated and sold as personally identifiable information (PII) or for industrial espionage, or it could be sold for a targeted ransomware attack. It’s possible to think of the presence of cryptomining malware as the proverbial canary in a coal mine: several access-as-a-service (AaaS) criminals use it in exactly this way as part of their business model.
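To make the canary-in-the-coal-mine idea concrete, here is a minimal host-level sketch of the kind of check a security team might script for this IOC. It assumes the psutil library is available, and the process names and pool ports it looks for are illustrative examples only, not a complete detection list.

```python
# Minimal sketch: flag processes and connections that resemble cryptomining activity.
# The process names and ports below are illustrative examples, not a complete IOC list.
import psutil

SUSPECT_NAMES = {"xmrig", "minerd", "cpuminer"}   # common miner binaries (illustrative)
SUSPECT_PORTS = {3333, 4444, 5555, 7777}          # ports often used by mining pools (illustrative)

def find_mining_indicators():
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if any(s in name for s in SUSPECT_NAMES):
            hits.append(f"suspicious process {name} (pid {proc.info['pid']})")
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.raddr.port in SUSPECT_PORTS:
            hits.append(f"outbound connection to {conn.raddr.ip}:{conn.raddr.port}")
    return hits

if __name__ == "__main__":
    for hit in find_mining_indicators():
        print("Possible cryptomining IOC:", hit)
```

A hit from a script like this is not proof of compromise on its own, but it is exactly the kind of early red flag worth escalating to incident response.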

Attack Life Cycle

Attacks on compromised servers follow a common path:

  1. Initial compromise: At this stage, the criminal has taken over the server, whether it is a cloud-based instance or an on-premise machine.
  2. Asset categorization: This is the inventory stage. Here a criminal makes their assessment based on questions such as, what data is on that server? Is there an opportunity for lateral movement to something more lucrative? Who is the victim?
  3. Sensitive data exfiltration: At this stage, the criminal steals corporate emails, client databases, and confidential documents, among other data. This stage can happen any time after asset categorization if the criminal finds something valuable.
  4. Cryptocurrency mining: While the attacker looks for a customer for the server space, a targeted attack, or other means of monetization, cryptomining is used to covertly make money.
  5. Resale or use for targeted attack or further monetization: Based on what the criminal finds during asset categorization, they might plan their own targeted ransomware attack, sell server access for industrial espionage, or sell the access for someone else to monetize further.

 

Figure: The monetization life cycle of a compromised server

Often, targeted ransomware is the final stage. In most cases, asset categorization reveals data that is valuable to the business but not necessarily valuable for espionage.

A deep understanding of the servers and network allows the criminals behind a targeted ransomware attack to hit the company where it hurts the most. These criminals would know the data sets, where they live, whether there are backups of the data, and more. With such a detailed blueprint of the organization in their hands, cybercriminals can lock down critical systems and demand a higher ransom, as we saw in our 2020 midyear security roundup report.

In addition, while a ransomware attack would be the visible, urgent issue for the defender to solve in such an incident, the same attack could also indicate that something far more serious has likely already taken place: the theft of company data, which should be factored into the company’s response planning. More importantly, it should be noted that once a company finds an IOC for cryptocurrency mining, stopping the attacker right then and there could save it considerable time and money in the future.

Ultimately, no matter where a company’s data is stored, hybrid cloud security is critical to preventing this life cycle.

 


Removing Open Source Visibility Challenges for Security Operations Teams

By Trend Micro

 

Identifying security threats early can be difficult, especially when you’re running multiple security tools across disparate business units and cloud projects. When it comes to protecting cloud-native applications, separating legitimate risks from noise and distractions is often a real challenge.

 

That’s why forward-thinking organizations look at things a little differently. They want to help their application developers and security operations (SecOps) teams implement unified strategies for optimal protection. This is where a newly expanded partnership between Trend Micro and Snyk can help.

 

Dependencies create risk

 

In today’s cloud-native development streams, the insatiable need for faster iterations and time-to-market can impact both downstream and upstream workflows. As a result, code reuse and dependence on third-party libraries have grown, and with them the potential security, compliance, and reputational risks organizations are exposing themselves to.

 

Just how much risk is associated with open source software today? According to Snyk research (https://info.snyk.io/sooss-report-2020), vulnerabilities in open source software have increased 2.5x in the past three years. What’s more, a recent report claimed to have detected a 430% year-on-year increase in attacks targeting open source components, with the end goal of infecting the software supply chain. While open source code is therefore being used to accelerate time-to-market, security teams are often unaware of the scope and impact this can have on their environments.

 

Managing open source risk

 

This is why cloud security leader Trend Micro and Snyk, a specialist in developer-first open source security, have extended their partnership with a new joint solution. It’s designed to help security teams manage the risk of open source vulnerabilities from the moment code is introduced, without interrupting the software delivery process.

 

This ambitious achievement helps improve security for your operations teams without changing the way your developer teams work. Trend Micro and Snyk are addressing open source risks by simplifying a bottom-up approach to risk mitigation that brings together developer and SecOps teams under one unified solution. It combines state-of-the-art security technology with collaborative features and processes to eliminate the security blind spots that can impact development lifecycles and business outcomes.

 

Available as part of Trend Micro Cloud One, the new solution currently being co-developed with Snyk will:

  • Scan all code repositories for vulnerabilities using Snyk’s world-class vulnerability scanning and database
  • Bridge the organizational gap between DevOps & SecOps, to help influence secure DevOps practices
  • Deliver continuous visibility of code vulnerabilities, from the earliest code to code running in production
  • Integrate seamlessly into the complete Trend Micro Cloud One security platform
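As a rough illustration of what gating a pipeline on scan results can look like (this is a generic sketch, not the Trend Micro and Snyk product itself), the snippet below fails a build when a previously exported JSON report contains high-severity findings. The report path and the "vulnerabilities" and "severity" fields are assumptions about the report schema, which can differ by scanner and version.

```python
# Minimal sketch of gating a CI build on scan results: fail if any finding is high severity.
# Assumes a scan report was exported to JSON with a "vulnerabilities" list whose entries
# carry a "severity" field; the exact schema depends on the scanner and version used.
import json
import sys

def high_severity_findings(report_path):
    with open(report_path) as fh:
        report = json.load(fh)
    return [v for v in report.get("vulnerabilities", []) if v.get("severity") == "high"]

if __name__ == "__main__":
    findings = high_severity_findings(sys.argv[1])
    if findings:
        print(f"{len(findings)} high-severity vulnerabilities found; failing the build")
        sys.exit(1)
    print("No high-severity vulnerabilities found")
```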

Figure: Trend Micro Cloud One

This unified solution closes the gap between security teams and developers, providing immediate visibility across modern cloud architectures. Trend Micro and Snyk continue to deliver world class protection that fits the cloud-native development and security requirements of today’s application-focused organizations.

This Week in Security News: Microsoft Patches 120 Vulnerabilities, Including Two Zero-Days and Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions

By Jon Clay (Global Threat Communications)

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, read about one of Microsoft’s largest Patch Tuesday updates ever, including fixes for 120 vulnerabilities and two zero-days. Also, learn about Trend Micro’s new integrations with Amazon Web Services (AWS).

 

Read on:

 

Microsoft Patches 120 Vulnerabilities, Two Zero-Days

This week Microsoft released fixes for 120 vulnerabilities, including two zero-days, in 13 products and services as part of its monthly Patch Tuesday rollout. The August release marks its third-largest Patch Tuesday update, bringing the total number of security fixes for 2020 to 862. “If they maintain this pace, it’s quite possible for them to ship more than 1,300 patches this year,” says Dustin Childs of Trend Micro’s Zero-Day Initiative (ZDI).

 

XCSSET Mac Malware: Infects Xcode Projects, Performs UXSS Attack on Safari, Other Browsers, Leverages Zero-day Exploits

Trend Micro has discovered an unusual infection related to Xcode developer projects. Upon further investigation, it was discovered that a developer’s Xcode project at large contained the source malware, which led to a rabbit hole of malicious payloads. Most notable in our investigation is the discovery of two zero-day exploits: one is used to steal cookies via a flaw in the behavior of Data Vaults, and the other is used to abuse the development version of Safari.

 

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 1

We’re all now living in a post-COVID-19 world characterized by uncertainty, mass home working and remote learning. To help you adapt to these new conditions while protecting what matters most, Trend Micro has developed a two-part blog series on ‘the new normal’. Part one identifies the scope and specific cyber-threats of the new normal. 

 

Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions

Trend Micro enhances agility and automation in cloud security through integrations with Amazon Web Services (AWS). Through this collaboration, Trend Micro Cloud One offers the broadest platform support and API integration to protect AWS infrastructure whether building with Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Lambda, AWS Fargate, containers, Amazon Simple Storage Service (Amazon S3), or Amazon Virtual Private Cloud (Amazon VPC) networking.

 

Shedding Light on Security Considerations in Serverless Cloud Architectures

The big shift to serverless computing is imminent. According to a 2019 survey, 21% of enterprises have already adopted serverless technology, while 39% are considering it. Trend Micro’s new research on serverless computing aims to shed light on the security considerations in serverless environments and help adopters in keeping their serverless deployments as secure as possible.

 

In One Click: Amazon Alexa Could be Exploited for Theft of Voice History, PII, Skill Tampering

Amazon’s Alexa voice assistant could be exploited to hand over user data due to security vulnerabilities in the service’s subdomains. The smart assistant, which is found in devices such as the Amazon Echo and Echo Dot — with over 200 million shipments worldwide — was vulnerable to attackers seeking user personally identifiable information (PII) and voice recordings.

 

New Attack Lets Hackers Decrypt VoLTE Encryption to Spy on Phone Calls

A team of academic researchers presented a new attack called ‘ReVoLTE’ that could let remote attackers break the encryption used by VoLTE voice calls and spy on targeted phone calls. The attack doesn’t exploit any flaw in the Voice over LTE (VoLTE) protocol; instead, it leverages weak implementations of the LTE mobile network by most telecommunication providers in practice, allowing an attacker to eavesdrop on the encrypted phone calls made by targeted victims.

 

An Advanced Group Specializing in Corporate Espionage is on a Hacking Spree

A Russian-speaking hacking group specializing in corporate espionage has carried out 26 campaigns since 2018 in attempts to steal vast amounts of data from the private sector, according to new findings. The hacking group, dubbed RedCurl, stole confidential corporate documents including contracts, financial documents, employee records and legal records, according to research published this week by the security firm Group-IB.

 

Walgreens Discloses Data Breach Impacting Personal Health Information of More Than 72,000 Customers

The second-largest pharmacy chain in the U.S. recently disclosed a data breach that may have compromised the personal health information (PHI) of more than 72,000 individuals across the United States. According to Walgreens spokesman Jim Cohn, prescription information of customers was stolen during May protests, when around 180 of the company’s 9,277 locations were looted.

 

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 2

The past few months have seen radical changes to our work and home life under the Coronavirus threat, upending norms and confining millions of American families within just four walls. In this context, it’s not surprising that more of us are spending an increasing portion of our lives online. In the final blog of this two-part series, Trend Micro discusses what you can do to protect your family, your data, and access to your corporate accounts.

 

What are your thoughts on Trend Micro’s tips to make your home cybersecurity and privacy stronger in the COVID-19-impacted world? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.


ESG Findings on Trend Micro Cloud-Powered XDR Drives Monumental Business Value

By Trend Micro

This material was published in the ESG Research Insights Report, Validating Trend Micro’s Approach and Enhancing GTM Intelligence, 2020.

Have You Considered Your Organization’s Technical Debt?

By Madeline Van Der Paelt

TL;DR Deal with your dirty laundry.

Have you ever skipped doing your laundry and watched as that pile of dirty clothes kept growing, just waiting for you to get around to it? You’re busy, you’re tired and you keep saying you’ll get to it tomorrow. Then suddenly, you realize that it’s been three weeks and now you’re running around frantically, late for work because you have no clean socks!

That is technical debt.

Those little things that you put off, which can grow from a minor inconvenience into a full-blown emergency when they’re ignored long enough.

Piling Up

How many times have you had an alarm go off, or a customer issue arise from something you already knew about and meant to fix, but “haven’t had the time”? How many times have you been working on something and thought, “wow, this would be so much easier if I just had the time to …”?

That is technical debt.

But back to you. In your craze to leave for work you manage to find two old mismatched socks. One of them has a hole in it. You don’t have time for this! You throw them on and run out the door, on your way to solve real problems. Throughout the day, that hole grows and your foot starts to hurt.

This is really not your day. In your panicked state this morning you actually managed to add more pain to your already stressed system, plus you still have to do your laundry when you get home! If only you’d taken the time a few days ago…

Coming Back to Bite You

In the tech world where one seemingly small hole – one tiny vulnerability – can bring down your whole system, managing technical debt is critical. Fixing issues before they become emergent situations is necessary in order to succeed.

If you’re always running at full speed to solve the latest issue in production, you’ll never get ahead of your competition and only fall further behind.

It’s very easy to get into a pattern of leaving the little things for another day. Build optimizations, that random unit test that’s missing, that playbook you meant to write up after the last incident – these are all technical debt too! By spending just a little time each day to tidy up a few things, you can make your system more stable and provide a better experience for both your customers and your fellow developers.

Cleaning Up

Picture your code as that mountain of dirty laundry. Each day that passes, you add just a little more to it. The more debt you add on, the more daunting your task seems. It becomes a thing of legend. You joke about how you haven’t dealt with it, but really you’re growing increasingly anxious and wary about actually tackling it, and what you’ll find when you do.

Maybe if you put it off just a little bit longer a hero will swoop in and clean up for you! (A woman can dream, right?) The more debt you add, the longer it will take to conquer, the harder it will be, and the higher the risk of introducing a new issue.

This added stress and complexity doesn’t sound too appealing, so why do we do it? It’s usually caused by things like having too much work in progress, conflicting priorities and (surprise!) neglected work.

Managing technical debt requires only one important thing – a cultural change.

As much as possible we need to stop creating technical debt, otherwise we will never be able to get it under control. To do that, we need to shift our mindset. We need to step back and take the time to see and make visible all of the technical debt we’re drowning in. Then we can start to chip away at it.

Culture Shift

My team took a page out of “The Unicorn Project” (Kim, 2019) and started by running “debt days” when we caught our breath between projects. Each person chose a pain point, something they were interested in fixing, and we started there. We dedicated two days to removing debt and came out the other side having completed tickets that were on the backlog for over a year.

We also added new metrics and dashboards for better incident response, and improved developer tools.

Now, with each new code change, we’re on the lookout. Does this change introduce any debt? Do we have the ability to fix that now? We encourage each other to fix issues as we find them whether it’s with the way our builds work, our communication processes or a bug in the code.

We need to give ourselves the time to breathe, in both our personal lives and our work days. Taking a pause between tasks not only allows us to mentally prepare for the next one, but it gives us time to learn and reflect. It’s in these pauses that we can see if we’ve created technical debt in any form and potentially go about fixing it right away.

What’s Next?

The improvement of daily work ultimately enables developers to focus on what’s really important, delivering value. It enables them to move faster and find more joy in their work.

So how do you stay on top of your never-ending laundry? Your family chooses to make a cultural change and dedicates time to it. You declare Saturday as laundry day!

Make the time to deal with technical debt – your developers, security teams, and your customers will thank you for it.

 


Fixing cloud migration: What goes wrong and why?

By Trend Micro

 

The cloud space has been evolving for almost a decade. As a company we’re a major cloud user ourselves. That means we’ve built up a huge amount of in-house expertise over the years around cloud migration — including common challenges and perspectives on how organizations can best approach projects to improve success rates.

As part of our #LetsTalkCloud series, we’ve focused on sharing some of this expertise through conversations with our own experts and folks from the industry. To kick off the series, we discussed some of the security challenges solution architects and security engineers face with customers when discussing cloud migrations. Spoiler…these challenges may not be what you expect.

 

Drag and drop

 

This lack of strategy and planning from the start is symptomatic of a broader challenge in many organizations: There’s no big-picture thinking around cloud, only short-term tactical efforts. Sometimes we get the impression that a senior exec has just seen a ‘cool’ demo at a cloud vendor’s conference and now wants to migrate a host of apps onto that platform. There’s no consideration of how difficult or otherwise this would be, or even whether it’s necessary and desirable.

 

These issues are compounded by organizational siloes. The larger the customer, the larger and more established their individual teams are likely to be, which can make communication a major challenge. Even if you have a dedicated cloud team to work on a project, they may not be talking to other key stakeholders in DevOps or security, for example.

 

The result is that, in many cases, tools, applications, policies, and more are forklifted over from on-premises environments to the cloud. This ends up becoming incredibly expensive, as these organizations are not really changing anything. All they are doing is adding an extra middleman, without taking advantage of the benefits of cloud-native tools like microservices, containers, and serverless.

 

There’s often no visibility or control. Organizations don’t understand that they need to lock down all their containers and sanitize APIs, for example. Plus, there’s no authority given to cloud teams around governance, cost management, and policy assignment, so things just run out of control. Often, shared responsibility isn’t well understood, especially in the new world of DevOps pipelines, so security isn’t applied to the right areas.

 

Getting it right

 

These aren’t easy problems to solve. From a security perspective, it seems we still have a job to do in educating the market about shared responsibility in the cloud, especially when it comes to newer technologies, like serverless and containers. Every time there’s a new way of deploying an app, it seems like people make the same mistakes all over again — presuming the vendors are in charge of security.

 

Automation is a key ingredient of successful migrations. Organizations should be automating everywhere, including policies and governance, to bring more consistency to projects and keep costs under control. In doing so, they must realize that this may require a redesign of apps, and a change in the tools they use to deploy and manage those apps.
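As one small example of what automating policies and governance can mean in practice, the sketch below uses boto3 to flag S3 buckets that have no default encryption configured. It assumes AWS credentials are already set up for the account being checked, and it only reports; the remediation step is left as a policy decision.

```python
# Minimal policy-as-code sketch: report S3 buckets without default encryption.
# Assumes AWS credentials are already configured for boto3.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_encryption():
    s3 = boto3.client("s3")
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                missing.append(name)
            else:
                raise
    return missing

if __name__ == "__main__":
    for name in buckets_missing_encryption():
        print(f"Bucket {name} has no default encryption configured")
```

Checks like this, run on a schedule or in a pipeline, are what turn a governance policy from a document into something consistently enforced.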

 

Ultimately, you can migrate apps to the cloud in a couple of clicks. But the governance, policy, and management that must go along with this are often forgotten. That’s why you need clear strategic objectives and careful planning to secure more successful outcomes. It may not be very sexy, but it’s the best way forward.

 

To learn more about cloud migration, check out our blog series. And catch up on all of the latest trends in DevOps to learn more about securing your cloud environment.


Are You Promoting Security Fluency in your Organization?

By Trend Micro

 

Migrating to the cloud is hard. The PowerPoint deck and pretty architectures are drawn up quickly but the work required to make the move will take months and possibly years.

 

The early stages require significant effort by teams to learn new technologies (the cloud services themselves) and new ways of working (the shared responsibility model).

 

In the early days of your cloud efforts, the cloud center of excellence is a logical model to follow.

 

Center of Excellence

 

A cloud center of excellence is exactly what it sounds like. Your organization forms a new team—or an existing team grows into the role—that focuses on setting cloud standards and architectures.

 

They are often the “go-to” team for any cloud questions. From the simple (“What’s an Amazon S3 bucket?”), to the nuanced (“What are the advantages of Amazon Aurora over RDS?”), to the complex (“What’s the optimum index/sort keying for this DynamoDB table?”).

 

The cloud center of excellence is the one-stop shop for cloud in your organization. At the beginning, this organizational design choice can greatly accelerate the adoption of cloud technologies.

 

Too Central

 

The problem is that accelerated adoption doesn’t necessarily correlate with accelerated understanding and learning.

 

In fact, as the center of excellence continues to grow its success, there is an inverse failure in organizational learning, which creates a general lack of cloud fluency.

 

Cloud fluency is an idea introduced by Forrest Brazeal at A Cloud Guru that describes the general ability of all teams within the organization to discuss cloud technologies and solutions. Forrest’s blog post shines a light on this situation and sums it up nicely in a cartoon.

 

Our own Mark Nunnikhoven also spoke to Forrest on episode 2 of season 2 for #LetsTalkCloud.

 

Even though the cloud center of excellence team sets out to teach everyone and raise the bar, the work soon piles up and the team quickly shifts away from an educational mandate to a “fix everything” one.

 

What was once a cloud accelerator is now a place of burnout for your top, hard-to-replace cloud talent.

 

Security’s Past

 

If you’ve paid attention to how cybersecurity teams operate within organizations, you have probably spotted a number of very concerning similarities.

 

Cybersecurity teams are also considered a center of excellence and the central team within the organization for security knowledge.

 

Most requests for security architecture, advice, operations, and generally anything that includes the prefix “cyber”, word “risk”, or hints of “hacking” get routed to this team.

 

This isn’t the security team’s fault. Over the years, systems have increased in complexity, more and more incidents occur, and security teams rarely get the opportunity to look ahead. They are too busy stuck in “firefighting mode” to take a step back and re-evaluate the organizational design structure they work within.

 

According to Gartner, for every 750 employees in an organization, only one is dedicated to cybersecurity. Those are impossible odds that have led to the massive security skills gap.

 

Fluency Is The Way Forward

 

Security needs to follow the example of cloud fluency. We need “security fluency” in order to improve the security posture of the systems we build and to reduce the risk our organizations face.

 

This is the reason that security teams need to turn their efforts to educating development teams. DevSecOps is a term chock full of misconceptions, and it lacks the context to drive the needed changes, but it is handy for raising awareness of the lack of security fluency.

 

Successful adoption of a DevOps philosophy is all about removing barriers to customer success. Providing teams with the tools and autonomy they require is a critical factor in their success.

 

Security is just one aspect of the development team’s toolkit. It’s up to the current security team to help educate them on the principles driving modern cybersecurity and how to ensure that the systems they build work as intended…and only as intended.


Cloud Security Is Simple, Absolutely Simple.

By Mark Nunnikhoven (Vice President, Cloud Research)

“Cloud security is simple, absolutely simple. Stop over complicating it.”

This is how I kicked off a presentation I gave at the CyberRisk Alliance, Cloud Security Summit on Apr 17 of this year. And I truly believe that cloud security is simple, but that does not mean easy. You need the right strategy.

As I am often asked about strategies for the cloud, and the complexities that come with it, I decided to share my recent talk with you all. Depending on your preference, you can either watch the video below or read the transcript of my talk that’s posted just below the video. I hope you find it useful and will enjoy it. And, as always, I’d love to hear from you, find me @marknca.

For those of you who prefer to read rather than watch a video, here’s the transcript of my talk:

Cloud security is simple, absolutely simple. Stop over complicating it.

Now, I know you’re probably thinking, “Wait a minute, what is this guy talking about? He is just off his rocker.”

Remember, simple doesn’t mean easy. I think we make things way more complicated than they need to be when it comes to securing the cloud, and this makes our lives a lot harder than they need to be. There’s some massive advantages when it comes to security in the cloud. Primarily, I think we can simplify our security approach because of three major reasons.

The first is integrated identity and access management. All three major cloud providers, AWS, Google and Microsoft offer fantastic identity, and access management systems. These are things that security, and [inaudible 00:00:48] professionals have been clamouring for, for decades.

We finally have this ability, we need to take advantage of it.

The second main area is the shared responsibility model. We’ll cover that more in a minute, but it’s an absolutely wonderful tool to understand your mental model, to realize where you need to focus your security efforts, and the third area that simplifies security for us is the universal application of APIs or application programming interfaces.

These give us as security professionals the ability to orchestrate and automate a huge amount of the grunt work away. These three things add up to, uh, the ability for us to execute a very sophisticated, uh, or very difficult to pull off, uh, security practice, but one that ultimately is actually pretty simple in its approach.

It’s just all the details are hard and we’re going to use these three advantages to make those details simpler. So, let’s take a step back for a second and look at what our goal is.

What is the goal of cybersecurity? That’s not something you hear quite often as a question.

A lot of the time you’ll hear the definition of cybersecurity is, uh, about, uh, securing the confidentiality, integrity, and availability of information or data. The CIA triad, different CIA, but I like to phrase this in a different way. I think the goal is much clearer, and the goal’s much simpler.

It is to make sure that whatever you’re building works as intended and only as intended. Now, you’ll realize you can’t accomplish this goal just as a security team. You need to work with your, uh, developers, you need to work with operations, you need to work with the business units, with the end users of your application as well.

This is a wonderful way of phrasing our goal, and realizing that we’re all in this together to make sure whatever you’re building works as intended, and only as intended.

Now, if we move forward, and we look at who are we up against, who’s preventing our stuff from working, uh, well?

Normally, you think of, uh, who’s attacking our systems? Who are the risks? Is it nation states? Is it maybe insider threats? While these are valid threats, they’re really overblown. You don’t have to worry about nation state attacks.

If you’re a nation state, worry about it. If you’re not a nation state, you don’t have to worry about it because frankly, there’s nothing you can do to stop them. You can slow them down a little bit, but by definition, they’re going to get through your resources.

As far as insider attacks, this is an HR problem. Treat your people well. Um, check in with them, and have a strong information management policy in place, and you’re going to reduce this threat naturally. If you go hunting for people, you’re going to create the very threats that you’re looking at.

So, it brings us to the next set. What about cyber criminals? You know, we do have to worry about cyber criminals.

Cyber criminals are targeting systems simply because these systems are online, these are profit motivated criminals who are organized, and have a good set of tools, so we absolutely need to worry about them, but there’s a more insidious or more commonplace, maybe a simpler threat that we need to worry about, and that’s one of mistakes.

The vast majority of issues that happen around data breaches and security vulnerabilities in the cloud are mistake-driven. In fact, to the point where I would not even worry about cyber criminals, simply because all the work we’re going to do to focus on, uh, preventing mistakes.

And catching, and rectifying those mistakes really, really quickly is going to, uh, cover all the stuff that we would have done to block out cyber criminals as well, so mistakes are very common because people are using a lot more services in the cloud.

You have a lot more, um, moving parts and, uh, complexity in your deployment, um, and you’re going to make a mistake, which is why you need to put automated systems in place to make sure that those mistakes don’t happen, or if they do happen that they’re caught very, very quickly.

This applies to standard DevOps, the philosophies for building. It also applies to security very, very wonderfully, so this is the main thing we’re going to focus on.

So, if we sum that up together, we have our goal of making sure whatever we’re building works as intended, and only as intended, and our major issue here, the biggest risk to this, is simple mistakes and misconfigurations.

Okay, so we’re not starting from ground zero here. We can learn from others, and the first place we’re going to learn from is the shared responsibility model. The shared responsibility model applies to all cloud service providers.

If you look on the left hand side of the slide here, you’ll see the traditional on premise model. We roughly have six areas where something has to be done roughly daily, whether it’s patching, maintenance, uh, just operational visibility, monitoring, that kind of thing, and in a traditional on premise environment, you’re responsible for all of it, whether it’s your team, or a team underneath your organization.

Somewhere within your tree, people are on the hook for doing stuff daily. Here when we move into an infrastructure, so getting a virtual machine from a cloud provider right off the bat, half of the responsibilities are pushed away.

That’s a huge, huge win.

And, as we move further and further to the right to more managed service, or staff level services, we have less and less daily responsibilities.

Now, of course, you always still have to verify that the cloud service provider’s doing what they, uh, say they’re doing, which is why certifications and compliance frameworks come into play, uh, but the bottom line is you’re doing less work, so you can focus on fewer areas.

Um, that is, or I should say not less work, but you’re doing, uh, less broad of a work.

So you can have that deeper focus, and of course, you always have to worry about service configuration. You are given knobs and dials to turn to lock things down. You should use them like things like encrypting, uh, all your data at rest.

Most of the time it’s an easy check box, but it’s up to you to check it ‘cause it’s your responsibility.

We also have the idea of an adoption framework, and this applies for Azure, for AWS and for Google, uh, and what they do is they help you map out your business processes.

This is important to security, because it gives you the understanding of where your data is, what’s important to the business, where does it lie, who needs to touch it, and access it and process it.

That also gives us the idea, uh, or the ability to identify the stakeholders, so that we know, uh, you know, who’s concerned about this data, who is, has an investment in this data, and finally it helps to, to deliver an action plan.

The output of all of these frameworks is to deliver an action plan to help you migrate into the cloud and help you to continuously evolve. Well, it’s also a phenomenal map for your security efforts.

You want to prioritize security, this is how you do it. You get it through the adoption framework, understanding what’s important to the business, and that lets you identify critical systems and areas for your security.

Again, we want to keep things simple, right? And, the third, uh, the o- other things we want to look at is the CIS foundations. They have them for AWS, Azure and GCP, um, and these provide a prescriptive guidance.

They’re really, um, a strong baseline, and a checklist of tasks that you can accomplish, um, or take on, on your, uh, take on, on your own, excuse me, uh, in order to, um, you know, basically cover off the really basics is encryption at rest on, um, you know, do I make sure that I don’t have, uh, things needlessly exposed to the internet, that type of thing.

Really fantastic reference point and a starting point for your security practice.

Again, with this idea of keeping things as simple as possible, so when it comes to looking at our security policy, we’ve used the frameworks, um, and the baseline to kind of set up a strong, uh, start to understand, uh, where the business is concerned, and to prioritize.

And, the first question we need to ask ourselves as security practitioners, what happened? If we, if something happens, and we ask what happened?

Do we have the ability to answer this question? So, that starts us off with logging and auditing. This needs to be in place before something happened. Let me just say that again, before something happened, you need [laughs] to be able to have this information in place.

Now, uh, this is really, uh, to ask these key questions of what happened in my account, and who, or what made that thing happen?

So, this starts in the cloud with some basic services. Uh, for AWS it’s CloudTrail, for Azure, it’s Azure Monitor, and for Google Cloud it used to be called Stackdriver, it is now the Google Cloud operations suite, so these need to be enabled at full volume.

Don’t worry, you can use some lifecycle rules on the data source to keep your costs low.

But, this gives you that layer, that basic auditing and logging layer, so that you can answer that question of what happened?
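For example, a minimal boto3 sketch like the one below can confirm that CloudTrail is actually turned on and logging. It assumes AWS credentials are already configured for the account being checked.

```python
# Minimal sketch: verify that CloudTrail trails exist and are actively logging.
import boto3

def check_cloudtrail_logging():
    ct = boto3.client("cloudtrail")
    trails = ct.describe_trails()["trailList"]
    if not trails:
        print("No CloudTrail trails are configured in this account")
        return
    for trail in trails:
        status = ct.get_trail_status(Name=trail["TrailARN"])
        state = "logging" if status["IsLogging"] else "NOT logging"
        print(f"{trail['Name']}: {state}")

if __name__ == "__main__":
    check_cloudtrail_logging()
```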

So, the next question you want to ask yourself or have the ability to answer is who’s there, right? Who’s doing what in my account? And, that comes down to identity.

We’ve already mentioned this is one of the key pillars of keeping security simple, and getting that highly effective security in your cloud.

[00:09:00] So here you’re answering the questions of who are you, and what are you allowed to do? This is where we get a very simple privilege, uh, or principle in security, which is the principle of least privilege.

You want to give an identity, so whether that’s a user, or a role, or a service, uh, only the privileges they, uh, require that are essential to perform the task that, uh, they are intended to do.

Okay?

So, basically if I need to write a file into a storage, um, folder or a bucket, I should only have the ability to write that file. I don’t need to read it, I don’t need to delete it, I just need to write to it, so only give me that ability.

Remember, that comes back to the other pillar of simple security here, of key cloud security, which is integrated identity.

This is where it really takes off, is that we start to assign very granular access permissions, and don’t worry, we’re going to use the APIs to automate all this stuff, so that it’s not a management headache, but the principle of least privilege is absolutely critical here.
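To make that write-only example concrete, here is a minimal boto3 sketch that attaches such a policy to a role. The role name and bucket name are hypothetical placeholders, not anything from the talk.

```python
# Minimal least-privilege sketch: a role that can only write objects into one specific bucket.
# The role name and bucket name below are hypothetical placeholders.
import json
import boto3

POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",                       # write only: no read, no delete
        "Resource": "arn:aws:s3:::example-upload-bucket/*",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="example-uploader-role",                   # hypothetical role name
    PolicyName="write-only-uploads",
    PolicyDocument=json.dumps(POLICY),
)
```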

The services you’re going to be using, amazingly, all three cloud providers got in line, and named them the same thing. It’s IAM, identity and access management, whether that’s AWS, Azure or Google Cloud.

Now, the next question we’re going to ask ourselves, or the area we’re going to be looking at, is really: where should I be focusing security controls? Where should I be putting stuff in place?

Because up until now we’ve really talked about leveraging what’s available from the cloud service providers, and you absolutely should, uh, maximize your usage of their, um, native and primitive, uh, structures, primitive as far as base concepts, not as in, um, unrefined.

They’re very advanced controls, but there are times where you’re going to need to put in your own controls, and these are the areas you’re going to focus on, so you’re going to start with networking, right?

So, in your networking, you’re going to maximize the native structures that are available in the cloud that you’re in, so whether that’s a project structure in Google Cloud, whether that’s a service like transit gateway in AWS, um, and all of them have this idea of a VPC or virtual private cloud or virtual network that is a very strong boundary for you to use.

Remember, most of the time you’re not charged for the creation of those. You have limits in your accounts, but accounts are free, and you can keep adding more, uh, virtual networks. You may be saying, wait a minute, I’m trying to simplify things.

Actually, having multiple virtual networks or virtual private clouds ends up being far simpler because each of them has a task. You go, this application runs in this virtual private cloud, not a big shared one, in this specific VPC, and that gives you these wonderfully strong security boundaries, and a very simple way of looking at one VPC, one action, very much the Unix philosophy in play.

Key here though is understanding that while all of the security controls in place for your service provider, um, give you, so, you know, whether it’s VPCs, routing tables, um, uh, access control lists, security groups, all the SDN features that they’ve got in place.

These really help you figure out whether service A or system A is allowed to talk to B, but they don’t tell you what they’re saying.

And, that’s where additional controls called an IPS, or intrusion prevention system come into play, and you may want to look at getting a third party control in to do that, because none of the th- big three cloud providers offer an IPS at this point.

[00:12:00] But that gives you the ability to not just say, “Hey, you’re allowed to talk to each other.” But, to monitor that conversation, to ensure that there’s no malicious code being passed back and forth between systems and that nobody’s trying a denial of service attack.

A whole bunch of extra things come in there, so that’s where IPS comes into play in your network defense. Now, we look at compute, right?

We can have compute in various forms, whether that’s in serverless functions, whether that’s in containers, manage containers, whether that’s in traditional virtual machines, but all the principles are the same.

You want to understand where the shared responsibility line is, how much is on your plate, how much is on the CSPs?

You want to understand that you need to harden the OS, or the service, or both in some cases, make sure that that’s locked down, so have administrator passwords that are very, very complicated.

Don’t log into these systems, uh, you know, because you want to be fixing things upstream. You want to be fixing things in the build pipeline, not logging into these systems directly, and that’s a huge thing for, uh, systems people to get over, but it’s absolutely essential for security, and you know what?

It’s going to take a while, but there’s some tricks there you can follow with me. You can see, uh, on the slides, uh, @marknca, that is my social everywhere, uh, happy to walk you through the next steps.

This idea of this presentation’s really just the simple basics to start with, to give you that overview of where to focus your time, and, dispel that myth that cloud security is complicating things.

It is a huge path to simplicity, which is a massive win for security.

So, the last area you want to focus here is in data and storage. Whether this is databases, whether this is big blob storage, or, uh, buckets in AWS, it doesn’t really matter the principles, again, all the same.

You want to encrypt your data at rest using the native cloud provided, uh, cloud service provider, uh, features and functionality, because most of the time it’s just give it a key address, and give it a checkbox, and you’re good to go.

It’s never been easier to encrypt things, and there is no excuse not to, and none of the providers charge extra for, uh, encryption, which is amazing, and you absolutely want to be taking advantage of that, and you want to be as granular as possible with your IAM, uh, and as reasonable, okay?

So, there’s a line here, and a lot of the data stores that are native to the cloud service providers, you can go right down to the data cell level and say, Mark has access, or Mark doesn’t have access to this cell.

That can be highly effective, and maybe right for your use case. It might be too much as well.

But, the nice thing is that you have that option. It’s integrated, it’s pretty straightforward to implement, and then, uh, when we look here, uh, sorry, and then, finally you want to be looking at lifecycle strategies to keep your costs under control.

Um, data really spins out of control when you don’t have to worry about capacity. All of the cloud service providers have some fantastic automations in place.

Basically, just giving you, uh, very simple rules to say, “Okay, after 90 days, move this over to cheaper storage. After 180 days, you know, get rid of it completely, or put it in cold storage.”

Take advantage of those or your bill’s going to spiral out of control, and that relates to availability, uh, and reliability, ‘cause the more you’re spending on that kind of stuff, the less you have to spend on other areas like security and operational efficiency.
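As a rough sketch of the kind of rule being described, the boto3 call below transitions objects to cheaper storage after 90 days and expires them after 180. The bucket name is a hypothetical placeholder, and the day counts and storage class are just examples.

```python
# Minimal lifecycle sketch: move objects to Glacier after 90 days, expire them after 180.
# The bucket name is a hypothetical placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-old-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                            # apply to every object
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 180},
        }]
    },
)
```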

So, that brings us to our next big security question. Is this working?

[00:15:00] How do you know if any of this stuff is working? Well, you want to talk about the concept of traceability. Traceability has a, you know, somewhat formal definition, but for me it really comes down to where did this come from, who can access it, and when did they access it?

That ties very closely with the concept of observability. Basically, the ability to look at, uh, closed systems and to infer what’s going on inside based on what’s coming into that system, and what’s leaving that system, really what’s going on.

There’s some great tools here from the service providers. Again, you want to look at, uh, Amazon CloudWatch, uh, Azure Monitor and the Google Cloud operations, uh, suite. Um, and here this leads us to the key, okay?

This is the key to simplifying everything, and I know we’ve covered a ton in this presentation, but I really want you to take a good look at this slide, and again, hit me up, uh, @marknca, happy to answer any questions with, questions afterwards as well here, um, that this will really, really make this simple, and this will really take your security practice to the next level.

The idea is, something happened in your, uh, cloud system, right? In your deployment, there’s a trigger, and then, it either generates an event or a log.

If you go the bottom row here, you’ve got a log, which you can then react to in a function to deliver some sort of result. That’s the slow-lane on the bottom.

We’re talking minutes here. You also have the top lane where your trigger fires off an event, and then, you react to that with a function, and then, you get a result in the fast lane.

These things happen in seconds, sub-second time. You start to build out your security practice based on this model.

You start automating more and more in these functions, whether it’s, uh, Lambda, whether it’s Cloud Functions, whether it’s Azure Functions, it doesn’t matter.

The CSPs all offer the same core functionality here. This is the critical, critical success metric, is that when you start reacting in the fast lane automatically to things, so if you see that a security event is triggered from, like, your anti-malware, uh, on your, uh, virtual machine, you can lock that off, and have a new one spin up automatically.

Um, if you’re looking for compliance stuff, the slow lane is the place to go, because it takes minutes.

Reactions happen up top, more, um, stately or more sedate things, so somebody logging into a system is both up top and down low, so up top, if you logged into a VPC or into, um, an instance, or a virtual machine, you’d have a trigger fire off and maybe ask me immediately, “Mark, did you log into the system? Uh, ‘cause you’re, you know, you’re not supposed to be.”

But then I’d respond and say, “Yeah, I, I did log in.” So, immediately you don’t have to respond. It’s not an incident response scenario, but on the bottom track, maybe you’re tracking how many times I’ve logged in.

And after the three or fourth time maybe someone comes by, and has a chat with me, and says, “Hey, do you keep logging into these systems? Can’t you fix it upstream in the deployment, uh, and build a pipeline ‘cause that’s where we need to be moving?”

So, you’ll find this balance, and this concept, I just wanted to get into your heads right now of automating your security practice. If you have a checklist, it should be sitting in a model like this, because it’ll help you, uh, reduce your workload, right?

The idea is to get as much automated as possible, and keep things in very clear, and simple boundaries, and what’s more simple than having every security action listed as an automated function, uh, sitting in a code repository somewhere?

[00:18:00] Fantastic approach to modern security practice in the cloud. Very simple, very clear. Yes, difficult to implement. It can be, but it’s an awesome, simple mental model to keep in your head that everything gets automated as a function based on a trigger somewhere.
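As a minimal sketch of one of those fast-lane functions, the Lambda handler below isolates a virtual machine by swapping its security groups for an empty quarantine group. The event shape and the group ID are assumptions for illustration, not something from the talk.

```python
# Minimal fast-lane sketch: an AWS Lambda handler that isolates an EC2 instance by swapping
# its security groups for a "quarantine" group. The event shape and group ID are hypothetical.
import boto3

QUARANTINE_SG = "sg-0123456789abcdef0"   # hypothetical security group with no open rules

def handler(event, context):
    # Assume the triggering event carries the instance ID of the affected virtual machine.
    instance_id = event["detail"]["instance-id"]
    ec2 = boto3.client("ec2")
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
    return {"isolated": instance_id}
```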

So, what are the keys to success? What are the keys to keeping this cloud security thing simple? And, hopefully you’ve realized the difference between a simple mental model, and the challenges, uh, in, uh, implementation.

It can be difficult. It’s not easy to implement, but the mental model needs to be kept simple, right? Keep things in their own VPCs, and their own accounts, automate everything. Very, very simple approach. Everything fits into this s- into this structure, so the keys here are remembering the goal.

Make sure that cybersecurity, uh, is making sure that whatever you build works as intended and only as intended. It’s understanding the shared responsibility model, and it’s really looking at, uh, having a plan through cloud adoption frameworks, how to build well, which is a, uh, a concept called the Well-Architected Framework.

It’s specific to AWS, but its principles are generic, um, and it can be applied everywhere. We didn’t cover it here, but I’ll put the links, um, in the materials for you, uh, as well as remembering systems over people, right?

Adding the right controls at the right time, uh, and then, finally, observing and reacting. Be vigilant, practice. You’re not going to get this right out of the gates, uh, perfect.

You’re going to have to refine, iterate, and then it’s extremely cloud friendly. That is the cloud model: get it out there, iterate quickly, but by putting the structures in place, you’re going to make sure that you’re not doing that in an insecure manner.

Thank you very much, uh, here’s a couple of links that’ll help you out before we take some Q&A here, um, trendmicro.com/cloud will get you to the products to learn more. We’re also doing this really cool streaming.

Uh, I host a show called Let’s Talk Cloud. Um, we uh, interview experts, uh, and have a great conversation around, um, what they’re talking about, uh, in the cloud, what they’re working on, and not just around security, but just in building in general.

You can hit that up at trendtalks.fyi. Um, and again, hit me up on social @marknca.

So, we have a couple of questions to kick this off, and you can put more questions in the webinar here, and they will send them along, or answer them in kind if they can.

Um, and that’s really what these are about, is the interaction is getting that, um, to and from. So, the first question that I wanted to tackle is an interesting one, and it’s really that systems over people.

Um, you heard me mention it in the, uh, in the end and the question is really what does that mean systems over people? Isn’t security really about people’s expertise?

And, yes and no, so if you are a SOC analyst, if you are working in a security, uh, role right now, I am really confident saying that 80%, 90% of what you do right now could be delegated out to a system.

So, if you were looking at log lines, that’s stuff that should be done by systems and bubbled up, with just the goal for you to investigate, to do what people are good at and systems are bad at. So systems mean, uh, you know, putting in, uh, the build pipeline, putting in container scanning in the build pipeline, so that you don’t have to manually scan stuff, right, to get rid of the basics. Is that a pen test? 100% no.

Um, but it gets rid of that, hey, you didn’t upgrade to, um, you know, this version of this library.

[00:21:00] That’s all automated, and those, the more systems you get in place, the more you as a security professional, or your security team will be able to focus on where they can really deliver value and frankly, where it’s more interesting work, so that’s what systems over people mean, is basically automate as much as you can to get people doing what people are really good at, and to make sure that the systems catch what we make as mistakes all the time.

If you accidentally try to push an old build out, you know that systems should stop that, if you push a build that hasn’t been checked by that container scanning or by, um, you know, it doesn’t have the appropriate security policy in place.

Systems should catch all that; humans shouldn’t have to worry about it at all. That’s systems over people. You saw that on the, uh, keys to success slide here. I’ll just pull it up. Um, you know, that’s absolutely key.

Another question that we had, uh, was what we didn’t get into here, which was around the Well-Architected Framework. Now, this is a document that was published by AWS, uh, a number of years back, and they’ve kept it going.

They’ve evolved it and essentially it has five pillars. Um, performance efficiency, uh, reliability, security, cost optimization, and operational excellence. Hey, I’ve got all five.

Um, and really [laughs] what that is, is it’s about how to take advantage of these cloud tools.

Now, AWS publishes it, but honestly it applies to Azure, it applies to Google Cloud as well. It’s not service specific. It teaches you how to build in the cloud, and obviously security is one of those big pillars, but it’s… so talking about teaching you how to make those trade offs, how to build an innovation flywheel, so that you have an idea, test it, uh, get the feedback from it, and move forward.

And that's really, really key. You should be reading it even if you are an Azure or GCP customer, or that's where you're putting most of your stuff, because it's really about the principles. Everything we do to encourage people to build well means there are fewer security issues, right?

Especially since we know that the number one problem is mistakes.

That leads to the last question we have here: how can I say that you don't need to worry about cybercriminals, you need to worry about mistakes?

That's a good question, and it's valid. Trend Micro does a huge amount of research around cybercriminals. I do a huge amount of research around cybercriminals myself.

By training and by professional experience, I'm a forensic investigator; taking down cybercriminals is what I do. But I think mistakes are the number one thing we deal with in the cloud, simply because of the underlying complexity.

I know it's ironic to talk about complexity in a talk about simplicity, but look at all the major breaches, especially around Amazon S3 buckets: those are all based on mistakes.

Billions and billions of records have been exposed and millions of dollars of damage done because of simple mistakes, and that is far more common than cybercriminal attacks.

And yes, cybercriminals are something you have [inaudible 00:23:32] worry about. But everything you do to fix mistakes, and to put systems in place to stop those mistakes from happening, also protects you against cybercriminals. And honestly, if you're the person who runs around your organization screaming about cybercriminals all the time, you're far less credible than if you're saying, "Hey, I want to make sure that we build really, really well, and don't make mistakes."

Thank you for taking the time. My name’s Mark Nunnikhoven. I’m the vice president of cloud research at Trend Micro. I’m also an AWS community hero, and I love this stuff. Hit me up on social @marknca. Happy to chat more.


Automatic Visibility And Immediate Security with Trend Micro + AWS Control Tower

By Trend Micro

Things fail. It happens. A core principle of building well in the AWS Cloud is reliability. Dr. Vogels said it best, “How can you reduce the impact of failure on your customers?” He uses the term “blast radius” to describe this principle.

One of the key methods for reducing blast radius is the AWS account itself. Accounts are free and provide a strong barrier between resources, and thus, failures or other issues. This type of protection and peace of mind helps teams innovate by reducing the risk of running into another team’s work. The challenge is managing all of these accounts in a reasonable manner. You need to strike a balance between providing security guardrails for teams while also ensuring that each team gets access to the resources they need.

AWS Services & Features

A number of AWS services and features help address this need: AWS Organizations, AWS Firewall Manager, IAM roles, tagging, AWS Resource Access Manager, AWS Control Tower, and more all play a role in helping your team manage multiple accounts.

For this post, we'll look a little closer at AWS Control Tower. The service, made generally available at AWS re:Inforce, provides an easy way to set up and govern AWS accounts in your environment. You can configure strong defaults for all new accounts, pre-populate IAM roles, and more. Essentially, AWS Control Tower makes sure that any new account starts off on the right foot.

For more on the service, check out this excellent talk from the launch.

Partner Integrations

With almost a year under its belt, AWS Control Tower is now expanding to provide partner integrations. In addition to setting up AWS services and features, you can now pre-configure supported APN solutions as well. Trend Micro is among the first partners to support this integration, providing the ability to add Trend Micro Cloud One™ Workload Security and Trend Micro Cloud One™ Conformity to your Control Tower account factory. Once configured, any new account created via the factory will automatically be configured in your Trend Micro Cloud One account.
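To make the account factory idea concrete, here is a rough sketch of how such a hook can work. This is an illustration only, not Trend Micro's integration code: it assumes a Lambda function subscribed to Control Tower lifecycle events through Amazon EventBridge, and the Cloud One registration URL, API key, and payload shape are hypothetical placeholders.

    import json
    import os
    import urllib.request

    # Hypothetical Cloud One registration endpoint and API key (placeholders).
    CLOUD_ONE_URL = os.environ.get("CLOUD_ONE_REGISTRATION_URL", "https://example.invalid/register")
    CLOUD_ONE_API_KEY = os.environ.get("CLOUD_ONE_API_KEY", "")

    def handler(event, context):
        """Triggered by a Control Tower 'CreateManagedAccount' lifecycle event."""
        detail = event.get("detail", {})
        if detail.get("eventName") != "CreateManagedAccount":
            return  # Ignore other lifecycle events.

        status = detail.get("serviceEventDetails", {}).get("createManagedAccountStatus", {})
        account_id = status.get("account", {}).get("accountId")

        # Register the new account with the (hypothetical) Cloud One API.
        payload = json.dumps({"awsAccountId": account_id}).encode("utf-8")
        request = urllib.request.Request(
            CLOUD_ONE_URL,
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"ApiKey {CLOUD_ONE_API_KEY}",
            },
        )
        with urllib.request.urlopen(request) as response:
            return {"accountId": account_id, "status": response.status}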

Integration Advantage

This integration not only reduces the friction in getting these key security tools set up, it also provides immediate visibility into your environment. Workload Security will now be able to show you any Amazon EC2 instances or Amazon ECS hosts within your accounts. You'll still need to install the Workload Security agent and apply a policy to protect these instances, but this initial visibility provides a map for your teams, reducing the time to protection. Conformity will start generating information within minutes, allowing your teams to get a quick handle on their security posture with fast and ongoing security and compliance checks.

Integrating this from the beginning of every new account will allow each team to track their progress against a huge set of recommended practices across all five pillars of the Well-Architected Framework.

What’s Next?

One of the biggest challenges in cloud security is integrating it early in the development process. We know that the earlier security is factored into your builds, the better the result. You can't get much earlier than the initial creation of an account. That's why this new integration with AWS Control Tower is so exciting. Having security in every account within your organization from day zero provides much-needed visibility and a fantastic head start.


Beyond the Endpoint: Why Organizations are Choosing XDR for Holistic Detection and Response

By Trend Micro

The endpoint has long been a major focal point for attackers targeting enterprise IT environments. Yet increasingly, security bosses are being forced to protect data across the organization, whether it's in the cloud, on IoT devices, in email, or on on-premises servers. Attackers may jump from one environment to the next in multi-stage attacks and even hide between the layers. So, it pays to have holistic visibility, in order to detect and respond more effectively.

This is where XDR solutions offer a convincing alternative to EDR and point solutions. But unfortunately, not all providers are created equal. Trend Micro separates itself from the pack by providing mature security capabilities across all layers, industry-leading threat intelligence, and an AI-powered analytical approach that produces fewer, higher-fidelity alerts.

Under pressure

It's no secret that IT security teams today are under extreme pressure. They're faced with an enemy able to tap into a growing range of tools and techniques from the cybercrime underground. Ransomware, social engineering, fileless malware, vulnerability exploits, and drive-by downloads are just the tip of the iceberg. There are "several hundred thousand new malicious programs or unwanted apps registered every day," according to a new Osterman Research report. It argues that, while endpoint protection must be a "key component" in corporate security strategy, "It can only be one strand," complemented with protection in the cloud, on the network, and elsewhere.

There’s more. Best-of-breed approaches have saddled organizations with too many disparate tools over the years, creating extra cost, complexity, management headaches, and security gaps. This adds to the workload for overwhelmed security teams.

According to Gartner, “Two of the biggest challenges for all security organizations are hiring and retaining technically savvy security operations staff, and building a security operations capability that can confidently configure and maintain a defensive posture as well as provide a rapid detection and response capacity. Mainstream organizations are often overwhelmed by the intersectionality of these two problems.”

XDR appeals to organizations struggling with all of these challenges as well as those unable to gain value from, or who don’t have the resources to invest in, SIEM or SOAR solutions. So what does it involve?

What to look for

As reported by Gartner, all XDR solutions should fundamentally achieve the following:

  • Improve protection, detection, and response
  • Enhance overall productivity of operational security staff
  • Lower total cost of ownership (TCO) to create an effective detection and response capability

However, the analyst urges IT buyers to think carefully before choosing which provider to invest in. That’s because, in some cases, underlying threat intelligence may be underpowered, and vendors have gaps in their product portfolio which could create dangerous IT blind spots. Efficacy will be a key metric. As Gartner says, “You will not only have to answer the question of does it find things, but also is it actually finding things that your existing tooling is not.”

A leader in XDR

This is where Trend Micro XDR excels. It has been designed to go beyond the endpoint, collecting and correlating data from across the organization, including email, endpoint, servers, cloud workloads, and networks. With this enhanced context, and the power of Trend Micro's AI algorithms and expert security analytics, the platform is able to identify threats more easily and contain them more effectively.

Forrester recently recognized Trend Micro as a leader in enterprise detection and response, saying of XDR, “Trend Micro has a forward-thinking approach and is an excellent choice for organizations wanting to centralize reporting and detection with XDR but have less capacity for proactively performing threat hunting.”

According to Gartner, fewer than 5% of organizations currently employ XDR. This means there’s a huge need to improve enterprise-wide protection. At a time when corporate resources are being stretched to the limit, Trend Micro XDR offers global organizations an invaluable chance to minimize enterprise risk exposure whilst maximizing the productivity of security teams.


Survey: Employee Security Training is Essential to Remote Working Success

By Trend Micro

Organisations have been forced to adapt rapidly over the past few months as government lockdowns confined most workers to their homes. For many, the changes they've made may even become permanent as more distributed working becomes the norm. This has major implications for cybersecurity. Employees are often described as the weakest link in the corporate security chain, so do they become an even greater liability when working from home?

Unfortunately, a major new study from Trend Micro finds that, although many have become more cyber-aware during lockdown, bad habits persist. CISOs looking to ramp up user awareness training may get a better return on investment if they try to personalize strategies according to specific user personas.

What we found

We polled 13,200 remote workers across 27 countries to compile the Head in the Clouds study. It reveals that 72% feel more conscious of their organisation’s cybersecurity policies since lockdown began, 85% claim they take IT instructions seriously, and 81% agree that cybersecurity is partly their responsibility. Nearly two-thirds (64%) even admit that using non-work apps on a corporate device is a risk.

Yet in spite of these lockdown learnings, many employees are more preoccupied by productivity. Over half (56%) admit using a non-work app on a corporate device, and 66% have uploaded corporate data to it; 39% of respondents “often” or “always” access corporate data from a personal device; and 29% feel they can get away with using a non-work app, as IT-backed solutions are “nonsense.”

This is a recipe for shadow IT and escalating levels of cyber-risk. It also illustrates that current approaches to user awareness training are falling short. In fact, many employees seem to be aware of what best practice looks like, they just choose not to follow it.

Four security personas

This is where the second part of the research comes in. Trend Micro commissioned Dr Linda Kaye, Cyberpsychology Academic at Edge Hill University, to profile four employee personas based on their cybersecurity behaviors: fearful, conscientious, ignorant and daredevil.

Each persona benefits from a different approach. Fearful employees may benefit from training simulation tools like Trend Micro's Phish Insight, along with real-time feedback from security controls and mentoring.

Conscientious staff require very little training but can be used as exemplars of good behavior, and to team up with “buddies” from the other groups.

Ignorant users need gamification techniques and simulation exercises to keep them engaged in training, and may also require additional interventions to truly understand the consequences of risky behavior.

Daredevil employees are perhaps the most challenging because their wrongdoing is the result not of ignorance but a perceived superiority to others. Organisations may need to use award schemes to promote compliance, and, in extreme circumstances, step up data loss prevention and security controls to mitigate their risky behavior.

By understanding that no two employees are the same, security leaders can tailor their approach in a more nuanced way. Splitting staff into four camps should ensure a more personalized approach than the one-size-fits-all training sessions most organisations run today.

Ultimately, remote working only works if there is a high degree of trust between managers and their teams. Once the pandemic recedes and staff are technically allowed back in the office, that trust will have to be re-earned if they are to continue benefiting from a Work From Home environment.


Risk Decisions in an Imperfect World

By Mark Nunnikhoven (Vice President, Cloud Research)

Risk decisions are the foundation of information security. Sadly, they are also one of the most often misunderstood parts of information security.

This is bad enough on its own but can sink any effort at education as an organization moves towards a DevOps philosophy.

To properly evaluate the risk of an event, two components are required:

  1. An assessment of the impact of the event
  2. The likelihood of the event

Unfortunately, teams—and humans in general—are reasonably good at the first part and unreasonably bad at the second.
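As a toy illustration of why both components matter (my own example, not one from the talk), risk can be treated as impact multiplied by likelihood:

    # Toy example: risk as expected annual loss (impact x likelihood).
    events = [
        # (name, estimated impact in dollars, estimated events per year)
        ("Public S3 bucket exposes customer data", 500_000, 0.10),
        ("Phishing leads to account takeover", 50_000, 2.0),
        ("Lost corporate laptop", 5_000, 4.0),
    ]

    for name, impact, likelihood in events:
        expected_loss = impact * likelihood
        print(f"{name}: ${expected_loss:,.0f} expected per year")

    # Teams tend to estimate impact reasonably well, but routinely misjudge
    # likelihood, which skews the whole ranking.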

This is a problem.

It's a problem that is amplified when security starts to integrate with teams in a DevOps environment. Originally presented as part of AllTheTalks.online, this talk examines the ins and outs of risk decisions and how we can start to work on improving how our teams handle them.

 


Perspectives Summary – What You Said

By William "Bill" Malik (CISA VP Infrastructure Strategies)

 

On Thursday, June 25, Trend Micro hosted our two-hour Perspectives virtual event. As the session progressed, we asked our attendees, drawn from more than 5,000 global registrants, two key questions. This blog analyzes those answers.

 

First, what is your current strategy for securing the cloud?

  • Rely completely on native cloud platform security capabilities (AWS, Azure, Google…) – 33%
  • Add on single-purpose security capabilities (workload protection, container security…) – 13%
  • Add on a security platform with multiple security capabilities for reduced complexity – 54%

 

This result affirms IDC analyst Frank Dickson's observation that most cloud customers will benefit from a suite offering a range of security capabilities covering multiple cloud environments. For the 15% to 20% of organizations that rely on one cloud provider, purchasing a security solution from that vendor may provide sufficient coverage. The quest for point products (which may be best-of-breed, as well) introduces additional complexity across multiple cloud platforms, which can obscure problems, confuse cybersecurity analysts and business users, increase costs, and reduce efficiency. The comprehensive suite strategy complements most organizations' hybrid, multi-cloud approach.

Second, and this is multiple choice, how are you enabling secure digital transformation in the cloud today?

 

This shows that cloud users are open to many available solutions for improving cloud security. The adoption pattern follows traditional on-premise security deployment models. The most commonly cited solution, Network Security/Cloud IPS, recognizes that communication with anything in the cloud requires a trustworthy network. This is a very familiar technique, dating back in the on-premise environment to the introduction of firewalls in the early 1990s from vendors like CheckPoint and supported by academic research as found in Cheswick and Bellovin’s Firewalls and Internet Security (Addison Wesley, 1994).

 

The frequency of data exposure due to misconfigured cloud instances surely drives Cloud Security Posture Management adoption, certainly aided by the ease of deployment of tools like Cloud One Conformity.

 

The newness of containers in the production environment most likely explains the relatively lower deployment of container security today.

 

The good news is that organizations do not have to deploy and manage a multitude of point products addressing one problem on one environment. The suite approach simplifies today’s reality and positions the organization for tomorrow’s challenges.

 

Looking ahead, future growth in industrial IoT and increasing deployments of 5G-based public and non-public networks will drive further innovations, increasing the breadth of the suite approach to securing hybrid, multi-cloud environments.

 

What do you think? Let me know @WilliamMalikTM.

 


Principles of a Cloud Migration

By Jason Dablow
cloud

Development and application teams can be the initial entry point of a cloud migration as they start looking at faster ways to accelerate value delivery. One of the main things they might use during this phase is "Infrastructure as Code," where they create the cloud resources that run their applications using lines of code.

In the video below, recorded as part of a NADOG (North American DevOps Group) event, I describe some additional techniques for how your development staff can incorporate the Well-Architected Framework and other compliance scanning against their Infrastructure as Code before it is launched into your cloud environment.
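As a simple illustration of the idea, and not the specific tooling shown in the video, the sketch below runs a basic pre-deployment syntax check on a CloudFormation template with boto3. The template path is a placeholder, and a real pipeline would layer policy and compliance rules on top of a check like this.

    import boto3
    from botocore.exceptions import ClientError

    def validate_cfn_template(path: str) -> bool:
        """Fail fast if a CloudFormation template is not even syntactically valid."""
        with open(path) as f:
            body = f.read()

        cfn = boto3.client("cloudformation")
        try:
            cfn.validate_template(TemplateBody=body)
        except ClientError as err:
            print(f"Template rejected: {err}")
            return False
        return True

    if __name__ == "__main__":
        # Placeholder path; point this at your own Infrastructure as Code.
        if not validate_cfn_template("template.yaml"):
            raise SystemExit(1)  # Non-zero exit fails the CI job.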

If this content has sparked additional questions, please feel free to reach out to me on my LinkedIn. Always happy to share my knowledge of working with large customers on their cloud and transformation journeys!


8 Cloud Myths Debunked

By Trend Micro

Many businesses have misperceptions about cloud environments, providers, and how to secure it all. We want to help you separate fact from fiction when it comes to your cloud environment.

This list debunks 8 myths to help you confidently take the next steps in the cloud.


Knowing your shared security responsibility in Microsoft Azure and avoiding misconfigurations

By Trend Micro

 

Trend Micro is excited to launch new Trend Micro Cloud One™ – Conformity capabilities that will strengthen protection for Azure resources.

 

As with any launch, there is a lot of new information, so we decided to sit down with one of the founders of Conformity, Mike Rahmati. Mike is a technologist at heart, with a proven track record of success in the development of software systems that are resilient to failure and grow and scale dynamically through cloud, open-source, agile, and lean disciplines. In the interview, we picked Mike’s brain on how these new capabilities can help customers prevent or easily remediate misconfigurations on Azure. Let’s dive in.

 

What are the common business problems that customers encounter when building on or moving their applications to Azure or Amazon Web Services (AWS)?

The common problem is there are a lot of tools and cloud services out there. Organizations are looking for tool consolidation and visibility into their cloud environment. Shadow IT and business units spinning up their own cloud accounts are a real challenge for IT organizations to keep on top of. Compliance, security, and governance controls are not necessarily top of mind for business units that are innovating at incredible speeds. That is why it is so powerful to have a tool that can provide visibility into your cloud environment and show where you are potentially vulnerable from a security and compliance perspective.

 

Common misconfigurations on AWS include an exposed Amazon Elastic Compute Cloud (EC2) instance or a misconfigured IAM policy. What is the equivalent for Microsoft Azure?

The common misconfigurations are actually quite similar to what we've seen with AWS. During the product preview phase, we saw customers with many of the same kinds of misconfiguration issues as we've seen with AWS. For example, Microsoft Azure Blob Storage is the equivalent of Amazon S3, and it is a common source of misconfigurations. We have observed misconfiguration in two main areas: Firewall and Web Application Firewall (WAF), which is equivalent to AWS WAF. The Firewall is similar to networking configuration in AWS, providing inbound protection for non-HTTP protocols and network-related protection for all ports and protocols. It is important to note that this is based on the 100 best practices and 15 services we currently support for Azure (and growing), whereas for AWS we have over 600 best practices in total, with over 70 controls offering auto-remediation.

 

Can you tell me about the CIS Microsoft Azure Foundation Security Benchmark?

We are thrilled to support the CIS Microsoft Azure Foundation Security Benchmark. The CIS Microsoft Azure Foundations Benchmark includes automated checks and remediation recommendations for the following: Identity and Access Management, Security Center, Storage Accounts, Database Services, Logging and Monitoring, Networking, Virtual Machines, and App Service. There are over 100 best practices in this framework and we have rules built to check for all of those best practices to ensure cloud builders are avoiding risk in their Azure environments.

Can you tell me a little bit about the Microsoft Shared Responsibility Model?

In terms of the shared responsibility model, it is very similar to AWS. The security OF the cloud is Microsoft's responsibility, but security IN the cloud is the customer's responsibility. Microsoft's ecosystem is growing rapidly, and there are a lot of services that you need to know in order to configure them properly. With Conformity, customers only need to know how to properly configure the core services, according to best practices, and then we can help you take it to the next level.

Can you give an example of how the shared responsibility model is used?

Yes. Imagine you have a Microsoft Azure Blob Storage that includes sensitive data. Then, by accident, someone makes it public. The customer might not be able to afford an hour, two hours, or even days to close that security gap.

In just a few minutes, Conformity will alert you to your risk status and provide remediation recommendations, and for our AWS checks it gives you the ability to set up auto-remediation. Auto-remediation can be very helpful, as it can close the gap in near-real time for customers.
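To make that concrete, here is a minimal sketch of the kind of fix an auto-remediation workflow applies on the AWS side. It is my own illustration rather than Conformity's remediation code, and the bucket name is a placeholder.

    import boto3

    def block_public_access(bucket_name: str) -> None:
        """Re-apply the S3 public access block after a bucket was made public by mistake."""
        s3 = boto3.client("s3")
        s3.put_public_access_block(
            Bucket=bucket_name,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
        print(f"Public access blocked on {bucket_name}")

    if __name__ == "__main__":
        block_public_access("example-sensitive-data-bucket")  # Placeholder bucket name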

What are next steps for our readers?

I'd say that whether your cloud exploration is just taking shape, you're midway through a migration, or you're already running complex workloads in the cloud, we can help. You can gain full visibility of your infrastructure with continuous cloud security and compliance posture management. We can do the heavy lifting so you can focus on innovating and growing. Also, you can ask anyone from our team to set you up with a complimentary cloud health check. Our cloud engineers are happy to provide an AWS and/or Azure assessment to see if you are building a secure, compliant, and reliable cloud infrastructure. You can find out your risk level in just 10 minutes.

 

Get started today with a 60-day free trial >

Check out our knowledge base of Azure best practice rules>

Learn more >

 

Do you see value in building a security culture that is shifted left?

Yes, we have done this for our customers using AWS and it has been very successful. The more we talk about shifting security left the better, and I think that's where we help customers build a security culture. Every cloud customer is struggling with implementing security earlier in the development cycle, and they need tools. Conformity is a DevOps- and DevSecOps-friendly tool that helps customers build a security culture that is shifted left.

We help customers shift security left by integrating the Conformity API into their CI/CD pipeline. The product also has preventative controls, which our API and template scanners provide. The idea is we help customers shift security left to identify those misconfigurations early on, even before they’re actually deployed into their environments.

We also help them scan their infrastructure-as-code templates before they are deployed into the cloud. Customers need a tool they can bake into their CI/CD pipeline. Shifting left doesn't simply mean having a reporting tool; it means having a tool that lets them act earlier. That's where our product, Conformity, can help.
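As a rough sketch of what such a CI/CD gate can look like, a pipeline step might post the template to a scanning service and fail the build when high-risk findings come back. This is an illustration only: the endpoint, authentication header, and response fields below are hypothetical placeholders, not the actual Conformity API.

    import json
    import sys
    import urllib.request

    # Hypothetical scanning endpoint and API key (placeholders, not the real Conformity API).
    SCANNER_URL = "https://scanner.example.invalid/template-scan"
    API_KEY = "REPLACE_ME"

    def scan_template(path: str) -> int:
        with open(path) as f:
            template = f.read()

        request = urllib.request.Request(
            SCANNER_URL,
            data=json.dumps({"template": template}).encode("utf-8"),
            headers={"Content-Type": "application/json", "Authorization": f"ApiKey {API_KEY}"},
        )
        with urllib.request.urlopen(request) as response:
            findings = json.load(response).get("findings", [])

        # Fail the pipeline on any high-risk finding so issues never reach the cloud.
        high_risk = [f for f in findings if f.get("risk") == "HIGH"]
        for finding in high_risk:
            print(f"HIGH: {finding.get('rule')} - {finding.get('message')}")
        return 1 if high_risk else 0

    if __name__ == "__main__":
        sys.exit(scan_template("template.yaml"))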

 


The Fear of Vendor Lock-in Leads to Cloud Failures

By Trend Micro

 

Vendor lock-in has been an often-quoted risk since the mid-1990s.

The fear is that by investing too much with one vendor, an organization reduces its options in the future.

Was this a valid concern? Is it still today?

 

The Risk

Organizations walk a fine line with their technology vendors. Ideally, you select a set of technologies that not only meet your current need but that align with your future vision as well.

This way, as the vendor’s tools mature, they continue to support your business.

The risk is that if you have all of your eggs in one basket, you lose all of the leverage in the relationship with your vendor.

If the vendor changes directions, significantly increases their prices, retires a critical offering, the quality of their product drops, or if any number of other scenarios happen, you are stuck.

Locking in to one vendor means that the cost of switching to another or changing technologies is prohibitively expensive.

All of these scenarios have happened and will happen again. So it’s natural that organizations are concerned about lock-in.

Cloud Maturity

When the cloud started to rise to prominence, the spectre of vendor lock-in reared its ugly head again. CIOs around the world thought that moving the majority of their infrastructure to AWS, Azure, or Google Cloud would lock them into that vendor for the foreseeable future.

Trying to mitigate this risk, organizations regularly adopt a "cloud neutral" approach. This means they only use "generic" cloud services that can be found across all of the major providers. Often hidden under the guise of a "multi-cloud" strategy, it's really a hedge so as not to lose position in the vendor/client relationship.

In isolation, that’s a smart move.

Taking a step back and looking at the bigger picture starts to show some of the issues with this approach.

Automation

The first issue is that the heavy use of automation in cloud deployments means vendor "lock-in" is not nearly as significant a risk as it was in past decades. The manual effort required to change the vendor for your storage network used to be monumental.

Now? It’s a couple of API calls and a consumption-based bill adjusted by the megabyte. This pattern is echoed across other resource types.

Automation greatly reduces the cost of switching providers, which reduces the risk of vendor lock-in.
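As an illustration of how small that switching effort can be for object storage, moving data between providers comes down to a few SDK calls. This is a sketch only: the bucket names are placeholders, and a real migration would also handle pagination, retries, and large objects.

    import boto3
    from google.cloud import storage  # pip install google-cloud-storage

    def copy_s3_to_gcs(s3_bucket: str, gcs_bucket: str) -> None:
        """Copy objects from an S3 bucket to a Google Cloud Storage bucket."""
        s3 = boto3.client("s3")
        gcs = storage.Client()
        destination = gcs.bucket(gcs_bucket)

        response = s3.list_objects_v2(Bucket=s3_bucket)
        for obj in response.get("Contents", []):
            key = obj["Key"]
            body = s3.get_object(Bucket=s3_bucket, Key=key)["Body"].read()
            destination.blob(key).upload_from_string(body)
            print(f"Copied {key}")

    if __name__ == "__main__":
        copy_s3_to_gcs("legacy-provider-bucket", "new-provider-bucket")  # Placeholder names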

Missing Out

When your organization sets the mandate to only use the basic services (server-based compute, databases, network, etc.) from a cloud service provider, you're missing out on one of the biggest advantages of moving to the cloud: doing less.

The goal of a cloud migration is to remove all of the undifferentiated heavy lifting from your teams.

You want your teams directly delivering business value as much of the time as possible. One of the most direct routes to this goal is to leverage more and more managed services.

Using AWS as an example, you don't want to run your own database servers in Amazon EC2, or even standard RDS, if you can help it. Amazon Aurora and DynamoDB generally offer lower operational impact, higher performance, and lower costs.

When organizations are worried about vendor lock-in, they typically miss out on the true value of cloud: a laser focus on delivering business value.

 

But Multi-cloud…

In this new light, a multi-cloud strategy takes on a different aim. Your teams should be trying to maximize business value (which includes cost, operational burden, development effort, and other aspects) wherever that leads them.

As organizations mature in their cloud usage and use of DevOps philosophies, they generally start to cherry pick managed services from cloud providers that best fit the business problem at hand.

They use automation to reduce the impact if they have to change providers at some point in the future.

This leads to a multi-cloud split that typically falls around 80% in one cloud and 10% in each of the other two. That can vary depending on the situation, but the premise is the same: organizations that thrive have a primary cloud and use other services when and where it makes sense.

 

Cloud Spanning Tools

There are some tools that are more effective when they work in all clouds the organization is using. These tools range from software products (like deployment and security tools) to metrics to operational playbooks.

Following the principles of focusing on delivering business value, you want to actively avoid duplicating a toolset unless it’s absolutely necessary.

The maturity of the tooling in cloud operations has reached the point where it can deliver support to multiple clouds without reducing its effectiveness.

This means automation playbooks can easily support multi-cloud (e.g.,  Terraform). Security tools can easily support multi-cloud (e.g., Trend Micro Cloud One™).  Observability tools can easily support multi-cloud (e.g., Honeycomb.io).

The guiding principle for a multi-cloud strategy is to maximize the amount of business value the team is able to deliver. You accomplish this by becoming more efficient (using the right service and tool at the right time) and by removing work that doesn’t matter to that goal.

In the age of cloud, vendor lock-in should be far down on your list of concerns. Don't let a long-standing fear slow down your teams.


Principles of a Cloud Migration – Security W5H – The HOW

By Jason Dablow
cloud

“How about… ya!”

Security needs to be treated much like DevOps in evolving organizations: everyone in the company has a responsibility to make sure it is implemented. It is not just a part of operations, but a cultural shift toward doing things right the first time – security by default. Here are a few pointers to get you started:

1. Security should be a focus from the top on down

Executives should be thinking about security as a part of the cloud migration project, and not just as a step of the implementation. Security should be top of mind in planning, building, developing, and deploying applications as part of your cloud migration. This is why the Well Architected Framework has an entire pillar dedicated to security. Use it as a framework to plan and integrate security at each and every phase of your migration.

2. A cloud security policy should be created and/or integrated into existing policy

Start with what you know: least privilege permission models, cloud native network security designs, etc. This will help you start creating a framework for these new cloud resources that will be in use in the future. Your cloud provider and security vendors, like Trend Micro, can help you with these discussions in terms of planning a thorough policy based on the initial migration services that will be used. Remember from my other articles, a migration does not just stop when the workload has been moved. You need to continue to invest in your operation teams and processes as you move to the next phase of cloud native application delivery.

3. Trend Micro’s Cloud One can check off a lot of boxes!

Using a collection of security services, like Trend Micro’s Cloud One, can be a huge relief when it comes to implementing runtime security controls to your new cloud migration project. Workload Security is already protecting thousands of customers and billions of workload hours within AWS with security controls like host-based Intrusion Prevention and Anti-Malware, along with compliance controls like Integrity Monitoring and Application Control. Meanwhile, Network Security can handle all your traffic inspection needs by integrating directly with your cloud network infrastructure, a huge advantage in performance and design over Layer 4 virtual appliances requiring constant changes to route tables and money wasted on infrastructure. As you migrate your workloads, continuously check your posture against the Well Architected Framework using Conformity. You now have your new infrastructure secure and agile, allowing your teams to take full advantage of the newly migrated workloads and begin building the next iteration of your cloud native application design.

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html. To have a more personalized conversation, please add me to LinkedIn!


Is Cloud Computing Any Safer From Malicious Hackers?

By Rob Maynard

Cloud computing has revolutionized the IT world, making it easier for companies to deploy infrastructure and applications and deliver their services to the public. The idea of not spending millions of dollars on equipment and facilities to host an on-premises data center is a very attractive prospect to many. And certainly, moving resources to the cloud just has to be safer, right? The cloud provider is going to keep our data and applications safe for sure. Hackers won't stand a chance. Wrong. More often than I should, I hear this delusion from customers. The truth of the matter is, without proper configuration, the right skillsets administering the cloud presence, and common-sense security practices, cloud services are just as (if not more) vulnerable.

The Shared Responsibility Model

Before going any further, we need to discuss the shared responsibility model of the cloud service provider and user.

When planning your migration to the cloud, you need to be aware of which responsibilities belong to which entity. As the shared responsibility chart shows, the cloud service provider is responsible for the security of the cloud infrastructure and its physical security. By contrast, the customer is responsible for their own data, the security of their workloads (all the way to the OS layer), and the internal network within the company's VPCs.

One more important aspect that remains in the hands of the customer is access control: who has access to what resources? This is really no different than it has been in the past, the exception being that the physical security of the data center is handled by the CSP instead of in-house, but the company (specifically IT and IT security) is responsible for locking down those resources efficiently.

Many times, this shared responsibility model is overlooked, and poor assumptions are made about the security of a company's resources. Chaos ensues, and probably a firing or two.

So now that we have established the shared responsibility model and that the customer is responsible for their own resource and data security, let’s take a look at some of the more common security issues that can affect the cloud.

Amazon S3 

Amazon S3 is a truly great service from Amazon Web Services. Storing data, hosting static sites, and providing storage for applications are widely used cases for this service. S3 buckets are also a prime target for malicious actors, since many times they end up misconfigured.

One such instance occurred in 2017 when Booz Allen Hamilton, a defense contractor for the United States, was pillaged of battlefield imagery as well as administrator credentials to sensitive systems.

Yet another instance occurred in 2017, when an insecure Amazon S3 bucket exposed the records of 198 million American voters. If you're reading this, there's a good chance this breach affected you.

A more recent breach of an Amazon S3 bucket (and I use the word "breach," although most of these instances were the result of poor configuration and public exposure, not a hacker breaking in using sophisticated techniques) had to do with the cloud storage provider Data Deposit Box. The company used Amazon S3 buckets for storage, and a configuration issue caused the leak of more than 270,000 personal files as well as personally identifiable information (PII) of its users.

One last thing to touch on regarding cloud file storage: many organizations use Amazon S3 to store data uploaded by customers before it is processed by other parts of the application. The problem is, how do we know whether what's being uploaded is malicious? This question comes up more and more as I speak with customers and peers in the IT world.

API

APIs are great. They allow you to interact with programs and services in a programmatic and automated way. When it comes to the cloud, APIs allow administrators to interact with services, and in fact they are a cornerstone of all cloud services, allowing the different services to communicate with one another. As with anything in this world, this also opens up a world of danger.

Let's start with the API gateway, a common construct in the cloud that allows communication to backend applications. The API gateway itself is a target, because a hacker who can manipulate the gateway can let unwanted traffic through. API gateways were designed to be integrated into applications; they were not designed for security. This means untrusted connections can come into the gateway and perhaps retrieve data the caller shouldn't see. Likewise, API requests to the gateway can carry malicious payloads.

Another attack that can affect your API gateway, and likewise the application behind it, is a DDoS attack. The common answer is a Web Application Firewall (WAF). The problem is that WAFs struggle with low-and-slow DDoS attacks, because the steady stream of requests looks like normal traffic. A great way to deter DDoS attacks at the API gateway, however, is to limit the number of requests for each method.
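As a sketch of what request limiting can look like on AWS, the snippet below attaches an API Gateway usage plan with throttling and a daily quota. The API ID, stage name, and limits are placeholders, and method-level throttling can be tuned further in the stage settings.

    import boto3

    def add_throttled_usage_plan(api_id: str, stage: str) -> str:
        """Attach a usage plan that rate-limits requests to an API Gateway stage."""
        apigw = boto3.client("apigateway")
        plan = apigw.create_usage_plan(
            name="basic-throttling",
            description="Limit request rates to blunt low-and-slow floods",
            apiStages=[{"apiId": api_id, "stage": stage}],
            throttle={"rateLimit": 50.0, "burstLimit": 100},  # requests/second and burst
            quota={"limit": 100_000, "period": "DAY"},        # hard daily cap
        )
        return plan["id"]

    if __name__ == "__main__":
        print(add_throttled_usage_plan("abc123def4", "prod"))  # Placeholder API ID and stage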

A great way to prevent API attacks lies in the configuration. Denying anonymous access is huge. Likewise, rotating tokens, passwords, and keys limits the chance that stolen credentials can be used, as does disabling any type of clear-text authentication. Enforcing SSL/TLS encryption and implementing multi-factor authentication are also great deterrents.

Compute

No cloud service would be complete without compute resources. This is where an organization builds out virtual machines to host applications and services. It also introduces yet another attack surface, and once again, this is not protected by the cloud service provider. It is purely the customer's responsibility.

Many times, in discussing my customers' migration from an on-premises datacenter to the cloud, one of the common methods is the "lift-and-shift" approach. This means customers take the virtual machines they have running in their datacenter and simply migrate those machines to the cloud. Now, the question is, what kind of security assessment was done on those virtual machines prior to migrating? Were those machines patched? Were discovered security flaws fixed? In my personal experience the answer is no. Therefore, these organizations are simply taking their problems from one location to the next. The security holes still exist and could potentially be exploited, especially if the server is public facing or network policies are improperly applied. For this type of process, I think a better way to look at it is "correct-and-lift-and-shift."

Once organizations have established their cloud presence, they will eventually need to deploy new resources, and this can mean developing or building upon a machine image. The most important thing to remember here is that these are computers. They are still vulnerable to malware, so regardless of whether they are in the cloud or not, the same security controls are required, including anti-malware, host-based IPS, integrity monitoring, and application control, just to name a few.

Networking

Cloud services make it incredibly easy to deploy networks, divide them into subnets, and even allow cross-network communication. They also give you the ability to lock down the types of traffic that are allowed to traverse those networks to reach resources. This is where security groups come in. These security groups are configured by people, so there's always a chance that a port that shouldn't be open is, creating a potential vulnerability. It's incredibly important from this perspective to really have a grasp on what a compute resource is talking to and why, so the proper security measures can be applied.
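As a small sketch of how you might spot the kind of unintended opening described above (an illustration only; pagination is omitted and which ports count as risky depends on your own policy), the snippet below flags security groups that allow SSH from anywhere:

    import boto3

    RISKY_PORT = 22  # Example: SSH open to the world; adjust to your own policy.

    def find_open_ssh_groups() -> None:
        ec2 = boto3.client("ec2")
        for group in ec2.describe_security_groups()["SecurityGroups"]:
            for rule in group.get("IpPermissions", []):
                from_port = rule.get("FromPort")
                to_port = rule.get("ToPort")
                if from_port is None or not (from_port <= RISKY_PORT <= to_port):
                    continue
                for ip_range in rule.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        print(f"{group['GroupId']} ({group['GroupName']}) allows SSH from anywhere")

    if __name__ == "__main__":
        find_open_ssh_groups()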

So is the cloud really safe from hackers? It is no safer than anything else unless organizations take security into their own hands and understand where their responsibility begins and the cloud service provider's ends. The arms race between hackers and security professionals is the same as it ever was; only the battleground has changed.


Principles of a Cloud Migration – Security W5H – The WHERE

By Jason Dablow
cloud

“Wherever I go, there I am” -Security

I recently had a discussion with a large organization that had a few workloads in multiple clouds and was assembling a cloud-security-focused team to build out its security policy moving forward. It's one of my favorite conversations to have, since I'm not just talking about Trend Micro solutions and how they can help organizations be successful, but more about how a business approaches the creation of its security policy to achieve a successful center of operational excellence. While I will talk more about the COE (center of operational excellence) in a future blog series, I want to dive into the core of the discussion – where do we add security in the cloud?

We started discussing how to secure these new cloud native services like hosted services, serverless, container infrastructures, etc., and how to add these security strategies into their ever-evolving security policy.

Quick note: If your cloud security policy is not ever-evolving, it’s out of date. More on that later.

A colleague and friend of mine, Bryan Webster, presented a concept that traditional security models have always been about three things: Best Practice Configuration for Access and Provisioning, Walls that Block Things, and Agents that Inspect Things. We have relied heavily on these principles since the first computer was connected to another. I present to you the handy graphic he used to illustrate the last two points.

But as we move to secure cloud native services, some of these are outside our walls, and some don’t allow the ability to install an agent.  So WHERE does security go now?

Actually, it’s not all that different – just how it’s deployed and implemented. Start by removing the thinking that security controls are tied to specific implementations. You don’t need an intrusion prevention wall that’s a hardware appliance much like you don’t need an agent installed to do anti-malware. There will also be a big focus on your configuration, permissions, and other best practices.  Use security benchmarks like the AWS Well-Architected, CIS, and SANS to help build an adaptable security policy that can meet the needs of the business moving forward.  You might also want to consider consolidating technologies into a cloud-centric service platform like Trend Micro Cloud One, which enables builders to protect their assets regardless of what’s being built.  Need IPS for your serverless functions or containers?  Try Cloud One Application Security!  Do you want to push security further left into your development pipeline? Take a look at Trend Micro Container Security for Pre-Runtime Container Scanning or Cloud One Conformity for helping developers scan your Infrastructure as Code.

Keep in mind – wherever you implement security, there it is. Make sure that it’s in a place to achieve the goals of your security policy using a combination of people, process, and products, all working together to make your business successful!

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html.

Also, feel free to give me a follow on LinkedIn for additional security content to use throughout your cloud journey!


Principles of a Cloud Migration – Security W5H – The When

By Jason Dablow
cloud

If you have to ask yourself when to implement security, you probably need a time machine!

Security is as important to your migration as the actual workload you are moving to the cloud. Read that again.

It is essential to be planning and integrating security at every single layer of both architecture and implementation. What I mean by that, is if you’re doing a disaster recovery migration, you need to make sure that security is ready for the infrastructure, your shiny new cloud space, as well as the operations supporting it. Will your current security tools be effective in the cloud? Will they still be able to do their task in the cloud? Do your teams have a method of gathering the same security data from the cloud? More importantly, if you’re doing an application migration to the cloud, when you actually implement security means a lot for your cost optimization as well.

NIST Planning Report 02-3

In this graph, it's easy to see that the earlier you can find and resolve security threats, the more you not only lessen the workload of infosec but also significantly reduce your costs of resolution. This can be achieved through a combination of tools and processes that empower development to take on security tasks sooner. I've also witnessed time and time again that there's friction between security and application teams, often resulting in shadow IT projects and an overall lack of visibility and trust.

Start there. Start with bringing these teams together, uniting them under a common goal: Providing value to your customer base through agile secure development. Empower both teams to learn about each other’s processes while keeping the customer as your focus. This will ultimately bring more value to everyone involved.

At Trend Micro, we’ve curated a number of security resources designed for DevOps audiences through our Art of Cybersecurity campaign.  You can find it at https://www.trendmicro.com/devops/.

Also highlighted on this page is Mark Nunnikhoven’s #LetsTalkCloud series, which is a live stream series on LinkedIn and YouTube. Seasons 1 and 2 have some amazing content around security with a DevOps focus – stay tuned for Season 3 to start soon!

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html.

Also, feel free to give me a follow on LinkedIn for additional security content to use throughout your cloud journey!


Principles of a Cloud Migration – Security, The W5H – Episode WHAT?

By Jason Dablow
cloud

Teaching you to be a Natural Born Pillar!

Last week, we took you through the "WHO" of securing a cloud migration here, detailing each of the roles involved with implementing a successful security practice during a cloud migration. Read: everyone. This week, I will be touching on the "WHAT" of security: the key principles required before your first workload moves. The Well-Architected Framework Security Pillar will be the baseline for this article, since it thoroughly explains security concepts in a best-practice cloud design.

If you are not familiar with the AWS Well-Architected Framework, go google it right now. I can wait. I’m sure telling readers to leave the article they’re currently reading is a cardinal sin in marketing, but it really is important to understand just how powerful this framework is. Wait, this blog is html ready – here’s the link: https://wa.aws.amazon.com/index.en.html. It consists of five pillars that include best practice information written by architects with vast experience in each area.

Since the topic here is Security, I’ll start by giving a look into this pillar. However, I plan on writing about each and as I do, each one of the graphics above will become a link. Internet Magic!

There are seven design principles in the security pillar, as follows:

  • Implement a strong identity foundation
  • Enable traceability
  • Apply security at all layers
  • Automate security best practices
  • Protect data in transit and at rest
  • Keep people away from data
  • Prepare for security events

Now, a lot of these principles can be addressed by using native cloud services, and these are usually the easiest to implement. One thing the framework does not give you is guidance on how to set up or configure those services. While it might reference turning on multi-factor authentication as a necessary step for your identity and access management policy, MFA is not on by default. The same goes for file object encryption: it is there for you to use, but not necessarily enabled on the objects you create.

Here is where I make a super cool (and free) recommendation on technology to accelerate your learning about these topics. We have a knowledge base with hundreds of cloud rules mapped to the Well-Architected Framework (and others!) to help accelerate your knowledge during and after your cloud migration. Let us take the use case above on multi-factor authentication. Our knowledge base article here details the four R’s: Risk, Reason, Rationale, and References on why MFA is a security best practice.

Starting with a Risk Level and detailing why this presents a threat to your configurations is a great way to begin prioritizing findings. The article also includes the relevant compliance mandates and Well-Architected pillar (obviously Security in this case), as well as descriptive links to the different frameworks for even more detail.

The reason this knowledge base rule is in place is also included. This gives you and your teams context to the rule and helps further drive your posture during your cloud migration. Sample reason is as follows for our MFA Use Case:

“As a security best practice, it is always recommended to supplement your IAM user names and passwords by requiring a one-time passcode during authentication. This method is known as AWS Multi-Factor Authentication and allows you to enable extra security for your privileged IAM users. Multi-Factor Authentication (MFA) is a simple and efficient method of verifying your IAM user identity by requiring an authentication code generated by a virtual or hardware device on top of your usual access credentials (i.e. user name and password). The MFA device signature adds an additional layer of protection on top of your existing user credentials making your AWS account virtually impossible to breach without the unique code generated by the device.”

If Reason is the "what" of the rule, Rationale is the "why," supplying you with the case for adoption. Again, this is perfect for confirming your cloud migration path and strategy along the way.

“Monitoring IAM access in real-time for vulnerability assessment is essential for keeping your AWS account safe. When an IAM user has administrator-level permissions (i.e. can modify or remove any resource, access any data in your AWS environment and can use any service or component – except the Billing and Cost Management service), just as with the AWS root account user, it is mandatory to secure the IAM user login with Multi-Factor Authentication.

Implementing MFA-based authentication for your IAM users represents the best way to protect your AWS resources and services against unauthorized users or attackers, as MFA adds extra security to the authentication process by forcing IAM users to enter a unique code generated by an approved authentication device.”

Finally, all the references for the risk, reason, and rationale are included at the bottom, which helps provide additional clarity. You'll also notice remediation steps, the fifth 'R,' when applicable, which show you how to actually correct the problem.
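To tie the MFA use case back to code, here is a minimal sketch of the kind of check such a rule performs. It is my own illustration rather than Conformity's implementation, and pagination is omitted for brevity.

    import boto3

    def users_without_mfa() -> list[str]:
        """List IAM users who have a console password but no MFA device."""
        iam = boto3.client("iam")
        flagged = []
        for user in iam.list_users()["Users"]:
            name = user["UserName"]
            try:
                iam.get_login_profile(UserName=name)  # Raises if the user has no console password
            except iam.exceptions.NoSuchEntityException:
                continue  # No console access, so MFA is less critical here
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                flagged.append(name)
        return flagged

    if __name__ == "__main__":
        for name in users_without_mfa():
            print(f"IAM user without MFA: {name}")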

All of this data is included to the community as Trend Micro continues to be a valued security research firm helping the world be safe for exchanging digital information. Explore all the rules we have available in our public knowledge base: https://www.cloudconformity.com/knowledge-base/.

This blog is part of a multi-part series dealing with the principles of a successful cloud migration.  For more information, start at the first post here: https://blog.trendmicro.com/principles-of-a-cloud-migration-from-step-one-to-done/


Trend Micro Integrates with Amazon AppFlow

By Trend Micro

The acceleration of in-house development enabled by public cloud and Software-as-a-Service (SaaS) platform adoption in the last few years has given us new levels of visibility and access to data. Putting all of that data together to generate insights and action, however, can substitute one challenge for another.

Proprietary protocols and inconsistent fields and formatting, combined with interoperability and connectivity hurdles, can turn the process of answering simple questions into a major undertaking. When that undertaking is a recurring requirement, the effort can seem overwhelming.

Nowhere is this more evident than in security teams, where writing code to integrate technologies is rarely a core competency and almost never a core project, but when a compliance or security event requires explanation, finding and making sense of that data is necessary.

Amazon is changing that with the release of AppFlow. Trend Micro Cloud One is a launch partner with this new service, enabling simple data retrieval from your Cloud One dashboard to be fed into AWS services as needed.

Amazon AppFlow is an application integration service that enables you to securely transfer data between SaaS applications and AWS services in just a few clicks. With AppFlow, you can create data flows between supported SaaS applications, including Trend Micro, and AWS services like Amazon S3 and Amazon Redshift, and run those flows on a schedule, in response to a business event, or on demand. Data transformation capabilities, such as data masking, validation, and filtering, let you enrich your data as part of the flow itself without the need for post-transfer manipulation. AppFlow keeps data secure in transit and at rest, with the flexibility to bring your own encryption keys.

Audit automation

Any regularly scheduled export or query of Cloud One requires data manipulation before an audit can be performed.

You may be responsible for weekly or monthly reports on the state of your security agents. To create this report today, you’ve written a script to automate the data analysis process. However, any change to the input or output requires new code to be written for your script, and you have to find somewhere to actually run the script for it to work.

As part of a compliance team, this isn’t something you really have time for and may not be your area of expertise, so it takes significant effort to create the required audit report.

Using Amazon AppFlow, you can create a private data flow between Amazon Redshift, for example, and your Cloud One environment to automatically and regularly retrieve data describing security policies in an easy-to-digest format that can be stored for future review. Data flows can also be scheduled, so regular reports can be produced without recurring user input.

This process also improves integrity and reduces overall effort by having reports always available, rather than needing to develop them in response to a request.

This eliminates the need for custom code and the subsequent frustration from trying to automate this regularly occurring task.

Developer Enablement

Developers don’t typically have direct access to security management consoles or APIs for Cloud One or Deep Security as a Service. However, they may need to retrieve data from security agents or check the state of agents that need remediation. This requires someone from the security team to pull data for the developer each time this situation arises.

While we encourage and enable DevOps cultures working closely with security teams to automate and deploy securely, no one likes unnecessary steps in their workflow. And having to wait on the security team to export data is adding a roadblock to the development team.

Fortunately, Amazon AppFlow solves this issue as well. By setting up a flow between Deep Security as a Service and Amazon S3, the security team can enable developers to easily access the necessary information related to security agents on demand.

This provides direct access to the needed data without expanding access controls for critical security systems.

Security Remediation

Security teams focus on identifying and remediating security alerts across all their tools and multiple SaaS applications. This often leads to collaborating with other teams across the organization on application-specific issues that must be resolved. Each system and internal team has different requirements and they all take time and attention to ensure everything is running smoothly and securely.

At Trend Micro, we are security people too. We understand the need to quickly and reliably scale infrastructure without compromising its security integrity. We also know that this ideal state is often hindered by the disparate nature of the solutions on which we rely.

Integrating Amazon AppFlow with your Cloud One – Workload Security solution allows you to obtain the security status from each agent and deliver them to the relevant development or cloud team. Data from all machines and instances can be sent on demand to the Amazon S3 bucket you indicate. As an added bonus, Amazon S3 can trigger a Lambda to automate how the data is processed, so what is in the storage bucket can be immediately useful. And all of this data is secured in transit and at rest by default, so you don’t have to worry about an additional layer of security controls to maintain.
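
As a rough illustration of that last step, the sketch below wires an existing Lambda function (referenced by ARN through a parameter; its definition is assumed to live elsewhere) to the destination bucket, so every new object delivered by a flow is processed automatically. The parameter names and bucket name are placeholders, not part of the actual integration.

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ProcessorFunctionArn:
    Type: String                            # ARN of an existing Lambda that post-processes the exported data
  FindingsBucketName:
    Type: String                            # name of the bucket that receives Workload Security data
Resources:
  NotifyPermission:                         # allow S3 to invoke the processing function
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref ProcessorFunctionArn
      Principal: s3.amazonaws.com
      SourceArn: !Sub 'arn:aws:s3:::${FindingsBucketName}'
  FindingsBucket:
    Type: AWS::S3::Bucket
    DependsOn: NotifyPermission             # the permission must exist before the notification is configured
    Properties:
      BucketName: !Ref FindingsBucketName
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'     # trigger on every object delivered by the flow
            Function: !Ref ProcessorFunctionArn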

Easy and secure remediation that doesn’t slow anyone down is the goal we’re collectively working toward.

It is always our goal to help your business securely move to and operate in the cloud. Our solutions are designed to enable security teams to seamlessly integrate with a DevOps environment, removing the “roadblock” of security.

As always, we’re excited to be part of this new Amazon service, and we believe our customers can see immediate value by leveraging Amazon AppFlow with their existing Trend Micro cloud solutions.

The post Trend Micro Integrates with Amazon AppFlow appeared first on .

Principles of a Cloud Migration – Security, The W5H

By Jason Dablow

Whosawhatsit?! –  WHO is responsible for this anyways?

For as long as cloud providers have been in business, we’ve been discussing the Shared Responsibility Model when it comes to customer operation teams. It defines the different aspects of control, and with that control, comes the need to secure, manage, and maintain.

While I often make an assumption that everyone is already familiar with this model, let’s highlight some of the requirements as well as go a bit deeper into your organization’s layout for responsibility.

During your cloud migration, you’ll no doubt come across a variety of cloud services that fit into each of these configurations. From running cloud instances (IaaS) to cloud storage (SaaS), there’s a need to apply operational oversight (including security) to each of these based on your level of control of the service. For example, with a cloud instance you’re still responsible for the operating system and applications, so you’ll still need a patch management process in place, whereas with file object storage in the cloud, only oversight of permissions and data management is required. I think Mark Nunnikhoven does a great job of going into greater detail on the model here: https://blog.trendmicro.com/the-shared-responsibility-model/.

shared responsibility model

I’d like to zero in on some of the other “WHO”s that should be involved in security of your cloud migration.

InfoSec – I think this is the obvious mention here. InfoSec is responsible for all information security within an organization. Since your cloud migration is working with “information”, InfoSec needs to be involved in how they get access to monitor the security and risk associated with the organization.

Cloud Architect – Another no-brainer in my eyes, but worth a mention: if you’re not building a secure framework that looks beyond a “lift-and-shift” initial migration, you’ll be doomed to archaic principles left over from the old way of doing things. An agile platform built for automating every operation, including security, should be the focus for achieving success.

IT / Cloud Ops – This may be the same or different teams. As more and more resources move to the cloud, an IT team will have fewer responsibilities for the physical infrastructure since it’s now operated by a cloud provider. They will need to go through a “migration” themselves to learn new skills to operate and secure a hybrid environment. This adoption of new skills needs to be led by…

Leadership – Yes, leadership plays an important role in operations and security even if they aren’t part of the CIO / CISO / COO branch. While I’m going to cringe while I type it, business transformation is a necessary step as you move along your cloud migration journey. The acceleration that the cloud provides cannot be stifled by legacy operation and security ideologies. Every piece of the business needs to be involved in accelerating the value you’re delivering to your customer base by implementing agile processes, including automation, into the operations and security of your cloud.

With all of your key players focused on a successful cloud migration, regardless of what stage you’re in, you’ll reach the ultimate stage: the reinvention of your business where operational and security automation drives the acceleration of value delivered to your customers.

This blog is part of a multi-part series dealing with the principles of a successful cloud migration.  For more information, start at the first post here: https://blog.trendmicro.com/principles-of-a-cloud-migration-from-step-one-to-done/

The post Principles of a Cloud Migration – Security, The W5H appeared first on .

5 reasons to move your endpoint security to the cloud now

By Chris Taylor

As the world adopts work-from-home initiatives, we’ve seen many organizations accelerate their plans to move from on-premises endpoint security and endpoint detection and response (EDR/XDR) solutions to Software-as-a-Service (SaaS) versions. And several customers who switched to the SaaS version last year recently wrote to tell us how glad they were to have done so as they transitioned to remote work. Here are five reasons to consider moving to a cloud-managed solution:

  1. No internal infrastructure management = less risk

If you haven’t found the time to update your endpoint security software and are one or two versions behind, you are putting your organization at risk of attack. Older versions do not have the same level of protection against ransomware and file-less attacks. Just as the threats are always evolving, the same is true for the technology built to protect against them.

With Apex One as a Service, you always have the latest version. There are no software patches to apply or Apex One servers to manage – we take care of it for you. If you are working remotely, this is one less task to worry about and fewer servers in your environment that might need your attention.

  2. High availability, reliability

With redundant processes and continuous service monitoring, Apex One as a Service delivers the uptime you need with 99.9% availability. The operations team also proactively monitors for potential issues on your endpoints and, with your prior approval, can fix minor issues with an endpoint agent before they need your attention.

  3. Faster Detection and Response (EDR/XDR)

By transferring endpoint telemetry to a cloud data lake, detection and response activities like investigations and sweeping can be processed much faster. For example, creating a root cause analysis diagram in the cloud takes a fraction of the time, since the data is readily available and can be quickly processed with the compute power of the cloud.

  4. Increased MITRE mapping

The unmatched power of cloud computing also enables analytics across a high volume of events and telemetry to identify a suspicious series of activities. This allows not only for innovative detection methods but also for additional mapping of techniques and tactics to the MITRE framework. Building the equivalent compute power in an on-premises architecture would be cost prohibitive.

  5. XDR – Combined Endpoint + Email Detection and Response

According to Verizon, 94% of malware incidents start with email. When an endpoint incident occurs, chances are it came from an email message, and you’ll want to know which other users have the same email or attachment in their inbox. You can ask your email admin to run these searches for you, but that takes time and coordination. As Forrester recognized in the recently published report, The Forrester Wave™: Enterprise Detection and Response, Q1 2020:

“Trend Micro delivers XDR functionality that can be impactful today. Phishing may be the single most effective way for an adversary to deliver targeted payloads deep into an infrastructure. Trend Micro recognized this and made its first entrance into XDR by integrating Microsoft office 365 and Google G suite management capabilities into its EDR workflows.”

This XDR capability is available today by combining alerts, logs and activity data of Apex One as a Service and Trend Micro Cloud App Security. Endpoint data is linked with Office 365 or G Suite email information from Cloud App Security to quickly assess the email impact without having to use another tool or coordinate with other groups.

Moving endpoint protection and detection and response to the cloud saves customers enormous time while increasing their protection and capabilities. If you are licensed with our Smart Protection Suites, you already have access to Apex One as a Service, and our support team is ready to help you with your migration. If you are on an older suite, talk to your Trend Micro sales rep about moving to a license that includes SaaS.

The post 5 reasons to move your endpoint security to the cloud now appeared first on .

Shift Well-Architecture Left. By Extension, Security Will Follow

By Raphael Bottino, Solutions Architect

A story on how Infrastructure as Code can be your ally on Well-Architecting and securing your Cloud environment

First posted as a Medium article.

Using Infrastructure as Code (IaC for short) is the norm in the cloud. CloudFormation, CDK, Terraform, Serverless Framework, ARM… the options are endless! And there are so many because IaC makes total sense! It allows architects and DevOps engineers to version the application infrastructure just as developers already version their code. So any bad change, whether to the application code or the infrastructure, can be easily inspected or, even better, rolled back.

For the rest of this article, let’s use CloudFormation as reference. And, if you are new to IaC, check how to create a new S3 bucket on AWS as code:
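
The original post embeds the template as an image; a minimal equivalent looks roughly like this (the bucket name below is just an example):

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-name    # example only - S3 bucket names are globally unique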

Pretty simple, right? And you can easily create as many buckets as you need using the above template (if you plan to do so, remove the BucketName line, since names are globally unique on S3!). For sure, it’s way simpler and less prone to human error than clicking a bunch of buttons in the AWS console or running commands on the CLI.

Well, it’s not that simple…

Although this is a functional and useful CloudFormation template, following correctly all its rules, it doesn’t follow the rules of something bigger and more important: The AWS Well-Architected Framework. This amazing tool is a set of whitepapers describing how to architect on top of AWS, from 5 different views, called Pillars: Security, Cost Optimization, Operational Excellence, Reliability and Performance Efficiency. As you can see from the pillar names, an architecture that follows it will be more secure, cheaper, easier to operate, more reliable and with better performance.

Among others, this template will generate an S3 bucket that doesn’t have encryption enabled, doesn’t enforce said encryption, and doesn’t log any kind of access to it – all recommended by the Well-Architected Framework. Even worse, these misconfigurations are really hard to catch in production and are not visibly alerted on by AWS. Even the great security tools provided by AWS, such as Trusted Advisor or Security Hub, won’t give an easy-to-spot list of buckets with those misconfigurations. Not for nothing does Gartner state that 95% of cloud security failures will be the customer’s fault¹.

The DevOps movement brought to the masses a methodology of failing fast, which is not exactly compatible with the above scenario, where a failure is often discovered only when unencrypted data is leaked or the access log is needed. The question is, then, how to improve it? Spoiler alert: the answer lies in the IaC itself 🙂

Shifting Left

Even before making sure a CloudFormation template follows AWS’ own best practices, the first obvious requirement is to make sure that the template is valid. A fantastic open-source tool called cfn-lint is made available by AWS on GitHub² and can be easily adopted in any CI/CD pipeline, failing the build if the template is not valid and saving precious time. To shorten the feedback loop even further and fail even faster, the same tool can be adopted in the developer’s IDE³ as an extension so the template can be validated as it is coded. Pretty cool, right? But it still doesn’t help us with the misconfiguration problem that we created with that really simple template at the beginning of this post.

Conformity⁴ provides, among other capabilities, an API endpoint to scan CloudFormation templates against the Well-Architected Framework, and that’s exactly how I know that template is not adhering to its best practices. This API can be integrated into your pipeline, just like cfn-lint. However, I wanted to move this check further left, just like the cfn-lint extension I mentioned before.

The Cloud Conformity Template Scanner Extension

With that challenge in mind, but also with my own need to scan templates for misconfigurations quickly, I came up with a Visual Studio Code extension that, leveraging Conformity’s API, allows the developer to scan the template as it is coded. The extension can be found here⁵ or by searching for “Conformity” in your IDE.

After installing it, scanning a template is as easy as running a command in VS Code. Below, it is running against our example template:

This tool allows anyone to shift misconfiguration and compliance checking as far left as possible, right into developers’ hands. To use the extension, you’ll need a Conformity API key. If you don’t have one and want to try it out, Conformity provides a 14-day free trial, no credit card required. If you like it but feel that this period is not enough for you, let me know and I’ll try to make it available to you.

But… What about my bucket template?

Oh, by the way, if you are wondering what an S3 bucket CloudFormation template looks like when it follows these best practices, take a look:
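
The original post again shows the template as an image, so the sketch below is only an approximation of what a hardened bucket definition can look like: default encryption at rest, server access logging to a second bucket, and a bucket policy that enforces TLS and encrypted uploads. Exact settings will vary with your own requirements.

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AccessLogBucket:                          # destination for server access logs
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: LogDeliveryWrite       # lets the S3 log delivery group write here
      OwnershipControls:
        Rules:
          - ObjectOwnership: ObjectWriter   # needed for the ACL above in accounts where ACLs are disabled by default
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
  SecureBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:                     # encryption at rest, on by default
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      LoggingConfiguration:                 # server access logging
        DestinationBucketName: !Ref AccessLogBucket
        LogFilePrefix: secure-bucket-logs/
  SecureBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref SecureBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: DenyInsecureTransport      # enforce TLS in transit
            Effect: Deny
            Principal: '*'
            Action: 's3:*'
            Resource:
              - !GetAtt SecureBucket.Arn
              - !Sub '${SecureBucket.Arn}/*'
            Condition:
              Bool:
                aws:SecureTransport: 'false'
          - Sid: DenyUnencryptedUploads     # enforce the encryption header on uploads
            Effect: Deny
            Principal: '*'
            Action: 's3:PutObject'
            Resource: !Sub '${SecureBucket.Arn}/*'
            Condition:
              StringNotEquals:
                s3:x-amz-server-side-encryption: AES256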

A Well-Architected bucket template

Not as simple, right? That’s exactly why this kind of tool is really powerful, allowing developers to learn as they code and organizations to fail the deployment of any resource that goes against the AWS recommendations.

References

[1] https://www.gartner.com/smarterwithgartner/why-cloud-security-is-everyones-business

[2] https://github.com/aws-cloudformation/cfn-python-lint

[3] https://marketplace.visualstudio.com/items?itemName=kddejong.vscode-cfn-lint

[4] https://www.cloudconformity.com/

[5] https://marketplace.visualstudio.com/items?itemName=raphaelbottino.cc-template-scanner

The post Shift Well-Architecture Left. By Extension, Security Will Follow appeared first on .

What do serverless compute platforms mean for security?

By Trend Micro

By Kyle Klassen, Product Manager – Cloud Native Application Security at Trend Micro

Containers provide many great benefits to organizations – they’re lightweight, flexible, add consistency across different environments and scale easily.

One of the characteristics of containers is that they run in dedicated namespaces with isolated resource requirements. General purpose OS’s deployed to run containers might be viewed as overkill since many of their features and interfaces aren’t needed.

A key tenet of cybersecurity doctrine is to harden platforms by exposing the fewest possible interfaces and applying the tightest configurations required to run only the necessary operations.

Developers deploying containers to restricted platforms or “serverless” containers to the likes of AWS Fargate, for example, should think about security differently – by looking upward, looking left, and also looking all around your cloud domain for opportunities to properly secure your cloud native applications. Oh, and don’t forget to look outside. Let me explain…

Looking Upward

As infrastructure, OS, container orchestration and runtimes become the domain of the cloud provider, the user’s primary responsibility becomes securing the containers and applications themselves. This is where Trend Micro Cloud One™, a security services platform for cloud builders, can help Dev and Ops teams better implement build pipeline and runtime security requirements.  Cloud One – Application Security embeds a security library within the application itself to provide defense against web application attacks and to detect malicious activity.

One of the greatest benefits of this technology is that once an application is secured in this manner, it can be deployed anywhere and the protection comes along for the ride. Users can be confident their applications are secure whether deployed in a container on traditional hosts, into EKS on AWS Bottlerocket, serverless on AWS Fargate, or even as an AWS Lambda function!

Looking Left

It’s great that cloud providers are taking security seriously and providing increasingly secure environments within which to deploy your containers. But you need to make sure your containers themselves are not introducing security risks. This can be accomplished with container image scanning to identify security issues before these images ever make it to the production environment.

Enter Deep Security Smart Check – Container Image Scanning, part of the Cloud One offering. Scans must be able to detect more than just vulnerabilities. Developer reliance on code re-use, public images, and third-party contributions means that malware injection into private images is a real concern. Sensitive objects like secrets, keys, and certificates must be found and removed, and assurance against regulatory requirements like PCI, HIPAA, or NIST should be a requirement before a container image is allowed to run.

Looking All-Around

Imagine taking the effort to ensure your applications, containers, and functions are built securely, comply with strict security regulations, and are deployed into container-optimized cloud environments, only to find out that you’ve still become a victim of an attack! How could this be? Well, one common oversight is failing to recognize the importance of disciplined configuration and management of the cloud resources themselves – you can’t assume they’re secure just because they’re working.

But making sure your cloud services are secure can be a daunting task – an environment is likely comprised of dozens of cloud services, each with as many configuration options, and that makes these environments complex. Cloud One – Conformity is your cloud security companion and gives you assurance that any hidden security issues with your cloud configurations are detected and prioritized. Disabled security options, weak keys, open permissions, encryption options, high-risk exposures, and many, many more best practice security rules make it easy to conform to security best practices and get the most from your cloud provider services.

Look Outside

All done? Not quite. You also need to think about how the business workflows of your cloud applications ingest files (or malware?). Cloud storage services like Amazon S3 buckets are often used to accept files from external customers and partners. Blindly accepting uploads and pulling them into your workflows is an open door for attack.

Cloud One – File Storage Security incorporates Trend Micro’s best-in-class malware detection technology to identify and remove files infected with malware. As a cloud native application itself, the service deploys easily with deployment templates and runs as a ‘set and forget’ service – automatically scanning new files of any type, any size and automatically removing malware so you can be confident that all of your downstream workflows are protected.

It’s still about Shared Responsibility

Cloud providers will continue to offer security features for deploying cloud native applications – and you should embrace all of this capability.  However, you can’t assume your cloud environment is optimally secure without validating your configurations. And once you have a secure environment, you need to secure all of the components within your control – your functions, applications, containers and workflows. With this practical approach, Trend Micro Cloud One™ perfectly complements your cloud services with Network Security, Workload Security, Application Security, Container Security, File Storage Security and Conformity for cloud posture management, so you can be confident that you’ve got security covered no matter which way you look.

To learn more visit Trendmicro.com/CloudOne and join our webinar on cloud native application threats https://resources.trendmicro.com/Cloud-One-Webinar-Series-Cloud-Native-Application-Threats.html

The post What do serverless compute platforms mean for security? appeared first on .

Cloud Native Application Development Enables New Levels of Security Visibility and Control

By Trend Micro

We are in unique times and it’s important to support each other through unique ways. Snyk is providing a community effort to make a difference through AllTheTalks.online, and Trend Micro is proud to be a sponsor of their virtual fundraiser and tech conference.

In today’s threat landscape new cloud technologies can pose a significant risk. Applying traditional security techniques not designed for cloud platforms can restrict the high-volume release cycles of cloud-based applications and impact business and customer goals for digital transformation.

When organizations are moving to the cloud, security can be seen as an obstacle. Often, the focus is on replicating security controls used in existing environments, however, the cloud actually enables new levels of visibility and controls that weren’t possible before.

With today’s increased attention on cyber threats, cloud vulnerabilities provide an opportunistic climate for novice and expert hackers alike as a result of dependencies on modern application development tools, and lack of awareness of security gaps in build pipelines and deployment environments.

Public clouds are capable of auditing API calls to the cloud management layer. This gives in-depth visibility into every action taken in your account, making it easy to audit exactly what’s happening, investigate and search for known and unknown attacks and see who did what to identify unusual behavior.
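
On AWS, that management-layer audit trail is what AWS CloudTrail captures. As a minimal, hedged sketch (resource names are left to CloudFormation defaults), turning it on for all regions of an account can look like this:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  TrailBucket:                              # stores the API activity logs
    Type: AWS::S3::Bucket
  TrailBucketPolicy:                        # CloudTrail needs explicit permission to write to the bucket
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref TrailBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AWSCloudTrailAclCheck
            Effect: Allow
            Principal:
              Service: cloudtrail.amazonaws.com
            Action: s3:GetBucketAcl
            Resource: !GetAtt TrailBucket.Arn
          - Sid: AWSCloudTrailWrite
            Effect: Allow
            Principal:
              Service: cloudtrail.amazonaws.com
            Action: s3:PutObject
            Resource: !Sub '${TrailBucket.Arn}/AWSLogs/${AWS::AccountId}/*'
            Condition:
              StringEquals:
                s3:x-amz-acl: bucket-owner-full-control
  ManagementTrail:
    Type: AWS::CloudTrail::Trail
    DependsOn: TrailBucketPolicy            # the bucket policy must exist before the trail is created
    Properties:
      IsLogging: true
      IsMultiRegionTrail: true
      EnableLogFileValidation: true
      S3BucketName: !Ref TrailBucket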

Join Mike Milner, Global Director of Application Security Technology at Trend Micro on Wednesday April 15, at 11:45am EST to learn how to Use Observability for Security and Audit. This is a short but important session where we will discuss the tools to help build your own application audit system for today’s digital transformation. We’ll look at ways of extending this level of visibility to your applications and APIs, such as using new capabilities offered by cloud providers for network mirroring, storage and massive data handling.

Register for a good cause and learn more at https://www.allthetalks.org/.

The post Cloud Native Application Development Enables New Levels of Security Visibility and Control appeared first on .

Cloud Transformation Is The Biggest Opportunity To Fix Security

By Greg Young (Vice President for Cybersecurity)

This overview builds on the recent report from Trend Micro Research on cloud-specific security gaps, which can be found here.

Don’t be cloud-weary. Hear us out.

Recently, a major tipping point was reached in the IT world when more than half of new IT spending went to cloud over non-cloud. So rather than being the exception, cloud-based operations have become the rule.

However, too many security solutions and vendors still treat the cloud like an exception – or at least not as a primary use case. The approach remains “and cloud” rather than “cloud and.”

Attackers have made this transition. Criminals know that business security is generally behind the curve with its approach to the cloud and take advantage of the lack of security experience surrounding new cloud environments. This leads to ransomware, cryptocurrency mining and data exfiltration attacks targeting cloud environments, to name a few.

Why Cloud?

There are many reasons why companies transition to the cloud. Lower costs, improved efficiencies and faster time to market are some of the primary benefits touted by cloud providers.

These benefits come with common misconceptions. While efficiency and time to market can be greatly improved by transitioning to the cloud, this is not done overnight. It can take years to move complete data centers and operational applications to the cloud. The benefits won’t be fully realized until the majority of functional data has been transitioned.

Misconfiguration at the User Level is the Biggest Security Risk in the Cloud

Cloud providers have built in security measures that leave many system administrators, IT directors and CTOs feeling content with the security of their data. We’ve heard it many times – “My cloud provider takes care of security, why would I need to do anything additional?”

This way of thinking ignores the shared responsibility model for security in the cloud. While cloud providers secure the platform as a whole, companies are responsible for the security of their data hosted in those platforms.

Misunderstanding the shared responsibility model leads to the No. 1 security risk associated with the cloud: Misconfiguration.

You may be thinking, “But what about ransomware and cryptomining and exploits?” Other attack types are primarily possible when one of the three misconfiguration categories below is present.

You can forget about all the worst-case, overly complex attacks: Misconfigurations are the greatest risk and should be the No. 1 concern. These misconfigurations are in 3 categories:

  1. Misconfiguration of the native cloud environment
  2. Not securing equally across multi-cloud environments (i.e. different brands of cloud service providers)
  3. Not securing equally to your on-premises (non-cloud) data centers

How Big is The Misconfiguration Problem?

Trend Micro Cloud One™ – Conformity identifies an average of 230 million misconfigurations per day.

To further understand the state of cloud misconfigurations, Trend Micro Research recently investigated cloud-specific cyber attacks. The report found a large number of websites partially hosted in world-writable cloud-based storage systems. Despite these environments being secure by default, settings can be manually changed to allow more access than actually needed.

These misconfigurations are typically put in place without knowing the potential consequences. But once in place, it is simple to scan the internet to find this type of misconfiguration, and criminals are exploiting them for profit.
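
As one concrete, hedged example of closing off this particular class of misconfiguration on AWS, S3’s Block Public Access settings can be declared alongside the bucket itself, so the “world-writable” state can’t quietly be introduced by a later manual change:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LockedDownBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:       # rejects public ACLs and public bucket policies
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true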

Why Do Misconfigurations Happen?

The risk of misconfigurations may seem obvious in theory, but in practice, overloaded IT teams are often simply trying to streamline workflows to make internal processes easier. So, settings are changed to give read and/or write access to anyone in the organization with the necessary credentials. What is not realized is that this level of exposure can be found and exploited by criminals.

We expect this trend will increase in 2020, as more cloud-based services and applications gain popularity with companies using a DevOps workflow. Teams are likely to misconfigure more cloud-based applications, unintentionally exposing corporate data to the internet – and to criminals.

Our prediction is that through 2025, more than 75% of successful attacks on cloud environments will be caused by missing or misconfigured security by cloud customers rather than cloud providers.

How to Protect Against Misconfiguration

Nearly all data breaches involving cloud services have been caused by misconfigurations. This is easily preventable with some basic cyber hygiene and regular monitoring of your configurations.

Your data and applications in the cloud are only as secure as you make them. There are enough tools available today to make your cloud environment – and the majority of your IT spend – at least as secure as your non-cloud legacy systems.

You can secure your cloud data and applications today, especially knowing that attackers are already cloud-aware and delivering vulnerabilities as a service. Here are a few best practices for securing your cloud environment:

  • Employ the principle of least privilege: Access is only given to users who need it, rather than leaving permissions open to anyone (see the sketch after this list).
  • Understand your part of the Shared Responsibility Model: While cloud service providers have built in security, the companies using their services are responsible for securing their data.
  • Monitor your cloud infrastructure for misconfigured and exposed systems: Tools are available to identify misconfigurations and exposures in your cloud environments.
  • Educate your DevOps teams about security: Security should be built in to the DevOps process.
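
To illustrate the first of these points, a least-privilege policy grants only the actions and resources a role actually needs and nothing more. A minimal sketch, with a hypothetical reporting bucket:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ReportReaderPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: Read-only access to a single reporting bucket - nothing more
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - s3:ListBucket
              - s3:GetObject
            Resource:
              - arn:aws:s3:::example-reporting-bucket       # hypothetical bucket
              - arn:aws:s3:::example-reporting-bucket/*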

To read the complete Trend Micro Research report, please visit: https://www.trendmicro.com/vinfo/us/security/news/virtualization-and-cloud/exploring-common-threats-to-cloud-security.

For additional information on Trend Micro’s approach to cloud security, click here: https://www.trendmicro.com/en_us/business/products/hybrid-cloud.html.

The post Cloud Transformation Is The Biggest Opportunity To Fix Security appeared first on .

Principles of a Cloud Migration – From Step One to Done

By Jason Dablow

Boiling the ocean with the subject, sous-vide deliciousness with the content.

Cloud Migrations are happening every day.  Analysts predict over 75% of mid-large enterprises will migrate a workload to the cloud by 2021 – but how can you make sure your workload is successful? There are not just factors with IT teams, operations, and security, but also with business leaders, finance, and many other organizations of your business. In this multi-part series, I’ll explore best practices, forward thinking, and use cases around creating a successful cloud migration from multiple perspectives.  Whether you’re a builder in the cloud or an executive overseeing the transformation, you’ll learn from my firsthand experience and knowledge on how to bring value into your cloud migration project.

Here are just a few advantages of a cloud migration:

  • Technology benefits like scalability, high availability, simplified infrastructure maintenance, and an environment compliant with many industry certifications
  • The ability to switch from a CapEx to an OpEx model
  • Leaving the cost of a data center behind

While there can certainly be several perils associated with your move, with careful planning and a company focus, you can make your first step into cloud a successful one.  And the focus of a company is an important step to understand. The business needs to adopt the same agility that the cloud provides by continuing to learn, grow, and adapt to this new environment. The Phoenix Project and the Unicorn Project are excellent examples that show the need and the steps for a successful business transformation.

To start us off, let’s take a look at some security concepts that will help you secure your journey into this new world. My webinar on Principles to Make Your Cloud Migration Journey Secure is a great place to start: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html

The post Principles of a Cloud Migration – From Step One to Done appeared first on .

Cloud-First but Not Cloud-Only: Why Organizations Need to Simplify Cybersecurity

By Wendy Moore

The global public cloud services market is on track to grow 17% this year, topping $266 billion. These are impressive figures, and whatever Covid-19 may do short-term to the macro-economy, they’re a sign of where the world is heading. But while many organizations may describe themselves as “cloud-first”, they’re certainly not “cloud-only.” That is, hybrid cloud is the name of the game today: a blend of multiple cloud providers and multiple datacenters.

Whilst helping to drive agility, differentiation and growth, this new reality also creates cyber risk. As IT leaders try to chart a course for success, they’re crying out for a more holistic, simpler way to manage hybrid cloud security.

Cloud for everyone

Organizations are understandably keen to embrace cloud platforms. Who wouldn’t want to empower employees to be more productive and DevOps to deliver agile, customer-centric services? But digital transformation comes with its own set of challenges. Migration often happens at different rates throughout an organization. That makes it hard to gain unified visibility across the enterprise and manage security policies in a consistent manner — especially when different business units and departments are making siloed decisions. An estimated 85% of organizations are now using multiple clouds, and 76% are using between two and 15 hybrid clouds.

To help manage this complexity, organisations are embracing containers and serverless architectures to develop new applications more efficiently. However, the DevOps teams using these technologies are focused primarily on time-to-market, sometimes at the expense of security. Their use of third-party code is a classic example: potentially exposing the organization to buggy or even malware-laden code.

A shared responsibility

The question is, how to mitigate these risks in a way that respects the Shared Responsibility model of cloud security, but in a consistent manner across the organization? It’s a problem exacerbated by two further concerns.

First, security needs to be embedded in the DevOps process to ensure that the applications delivered are secure, but not in a way that threatens the productivity of teams. They need to be able to use the tools and platforms they want to, but in a way that doesn’t expose the organization to unnecessary extra risk. Second, cloud complexity can often lead to human error: misconfigurations of cloud services that threaten to expose highly regulated customer and corporate data to possible attacks. The Capital One data breach, which affected an estimated 100 million consumers, was caused partly by a misconfigured Web Application Firewall.

Simplifying security

Fortunately, organizations are becoming more mature in their cloud security efforts. We see customers that started off tackling cyber risk with multiple security tools across the enterprise, but in time developed an operational excellence model. By launching what amount to cloud centers of excellence, they’re showing that security policies and processes can be standardized and rolled out in a repeatable way across the organization to good effect.

But what of the tools security teams are using to achieve this? Unfortunately, in too many cases they’re relying on fragmented, point products which add cost, further complexity and dangerous security gaps to the mix. It doesn’t have to be like this.

Cloud One from Trend Micro brings together workload security, container security, application security, network security, file storage security and cloud security posture management (CSPM). The latter, Cloud One – Conformity offers a simple, automated way to spot and fix misconfigurations and enhance security compliance and governance in the cloud.

Whatever stage of maturity you are at with your cloud journey, Cloud One offers simple, automated protection from a single console. It’s simply the way cloud security needs to be.

The post Cloud-First but Not Cloud-Only: Why Organizations Need to Simplify Cybersecurity appeared first on .

The AWS Service to Focus On – Amazon EC2

By Trend Micro

If we ran a contest for Mr. Popular of Amazon Web Services (AWS), without a doubt Amazon Simple Storage Service (Amazon S3) would have ‘winner’ written all over it. However, what’s popular is not always what is critical for your business to focus on. There is popularity, and then there is dependability. Let’s acknowledge how reliant we are on Amazon Elastic Compute Cloud (Amazon EC2) as AWS infrastructure-led organizations.

We reflected upon our in-house findings for the AWS ‘Security’ pillar in our last blog, Four Reasons Your Cloud Security is Keeping You Up at Night, explicitly leaving out over caffeination and excessive screen time!

Drilling further down to the most affected AWS services, Amazon EC2-related issues topped the list with 32% of all issues, whereas Mr. Popular – Amazon S3 – contributed 12% of all issues. While cloud providers like AWS offer a secure infrastructure and best practices, many customers are unaware of their role in the shared responsibility model. The number of issues impacting Amazon EC2 customers demonstrates the security gap that can occur when the customer’s part of the shared responsibility model is not well understood.

While these AWS services and infrastructure are secure, customers also have a responsibility to secure their data and to configure environments according to AWS best practices. So how do we ensure that we keep our focus on this crucial service and ensure the flexibility, scalability, and security of a growing infrastructure?

Introducing Rules

If you thought you were done with rules after passing high school and moving out of your parent’s house, you would have soon realized that you were living a dream. Rules seem to be everywhere! Rules are important, they keep us safe and secure. While some may still say ‘rules are made to be broken’, you will go into a slump if your cloud infrastructure breaks the rules of the industry and gets exposed to security vulnerabilities.

It is great if you are already following the Best Practices for Amazon EC2, but if not, how do you monitor the performance of your services day in and day out to ensure their adherence to these best practices? How can you track if all your services and resources are running as per the recommended standards?

We’re here to help with that. Trend Micro Cloud One – Conformity ‘Rules’ provide you with that visibility for some of the most critical services like Amazon EC2.

What is the Rule?

A ‘Rule’ is the definition of a best practice used as the basis for an assessment that Conformity runs against a particular piece of your cloud infrastructure. When a rule is run against the infrastructure (resources) associated with your AWS account, the result of the scan is referred to as a Check. For example, an Amazon EC2 instance may have 60 Rules (Checks) scanning for various risks and vulnerabilities. Checks are either a SUCCESS or a FAILURE.

Conformity has about 540 Rules, and 60 of them monitor your Amazon EC2 services against best practices. The Conformity Bot scans your cloud accounts for these Rules and presents you with the Checks so you can prioritize and remediate issues, keeping your services healthy and preventing security breaches.

Amazon EC2 Best Practices and Rules

Here are just a few examples of how Conformity Rules have got you covered for some of the most critical Amazon EC2 best practices:

  1. To ensure security, make sure IAM users and roles are used and that management policies are established for access.
  2. For managing storage, keep EBS volumes separate for operating systems and data, and check that Amazon EC2 instances provisioned outside of AWS Auto Scaling Groups (ASGs) have the Termination Protection safety feature enabled to protect your instances from being accidentally terminated (see the sketch after this list).
  3. For efficient Resource Management, utilize custom tags to track and identify resources, and keep on top of your stated Amazon EC2 limits.
  4. For confident backup and recovery, regularly test the process of recovering instances and EBS volumes should they fail, and create and use approved AMIs for easier and more consistent future instance deployment.
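
To make a couple of these practices concrete, here is a hedged CloudFormation sketch of a standalone instance with termination protection enabled, separate root and data volumes, and identifying tags. The AMI ID, device names, and tag values are placeholders only.

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0        # placeholder - use an approved AMI
      InstanceType: t3.micro
      DisableApiTermination: true           # termination protection for an instance outside an ASG
      BlockDeviceMappings:
        - DeviceName: /dev/xvda             # root/operating-system volume (device name depends on the AMI)
          Ebs:
            VolumeSize: 20
            VolumeType: gp3
        - DeviceName: /dev/xvdf             # separate data volume
          Ebs:
            VolumeSize: 100
            VolumeType: gp3
      Tags:                                 # custom tags to track and identify the resource
        - Key: Owner
          Value: platform-team
        - Key: Environment
          Value: production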

See how Trend Micro can support your part of the shared responsibility model for cloud security: https://www.trendmicro.com/cloudconformity.

Stay Safe!

The post The AWS Service to Focus On – Amazon EC2 appeared first on .

Smart Check Validated for New Bottlerocket OS

By Trend Micro

Containers provide a list of benefits to organizations that use them. They’re light, flexible, add consistency across the environment and operate in isolation.

However, security concerns prevent some organizations from employing containers. This is despite containers having an extra layer of security built in – they don’t run directly on the host OS.

To make containers even easier to manage, AWS released an open-source Linux-based operating system meant for hosting containers. While Bottlerocket AMIs are provided at no cost, standard Amazon EC2 and AWS charges apply for running Amazon EC2 instances and other services.

Bottlerocket is purpose-built to run containers. By including only the software essential to running containers, it improves resource utilization and reduces the attack surface compared to general-purpose OSs.

At Trend Micro, we’re always focused on the security of our customers’ cloud environments. We’re proud to be a launch partner for AWS Bottlerocket, with our Smart Check component validated for the OS prior to launch.

Why use additional security in cloud environments

While an OS specifically for containers that includes native security measures is a huge plus, there seems to be a larger question of why third-party security solutions are even needed in cloud environments. We often hear a misconception with cloud deployment that, since the cloud service provider has built in security, users don’t have to think about the security of their data.

That’s simply not accurate and leaves a false sense of security. (Pun intended.)

Yes – cloud providers like AWS build in security measures and have addressed common problems by adding built in security controls. BUT cloud environments operate with a shared responsibility model for security – meaning the provider secures the environment, and users are responsible for their instances and data hosted therein.

That’s for all cloud-based hosting, whether in containers, serverless or otherwise.

Why Smart Check in Bottlerocket matters

Smooth execution without security roadblocks

DevOps teams leverage containerized applications to deploy fast and don’t have time for separate security roadblocks. Smart Check is built for the DevOps community with real-time image scanning at any point in the pipeline to ensure insecure images aren’t deployed.

Vulnerability scanning before runtime

We have the largest vulnerability data set of any security vendor, which is used to scan images for known software flaws before they can be exploited at runtime. This not only includes known vendor vulnerabilities from the Zero Day Initiative (ZDI), but also vulnerability intelligence for bugs patched outside the ZDI program and open source vulnerability intelligence built in through our partnership with Snyk.

Flexible enough to fit with your pipeline

Container security needs to be as flexible as containers themselves. Smart Check has a simple admin process to implement role-based access rules and multiple concurrent scanning scenarios to fit your specific pipeline needs.

Through our partnership with AWS, Trend Micro is excited to help ensure customers can continue to execute on their portion of the shared responsibility model through container image scanning by validating that the Smart Check solution will be available for customers to run on Bottlerocket at launch.

More information can be found here: https://aws.amazon.com/bottlerocket/

If you are still interested in learning more, check out this AWS blog from Jeff Barr.

The post Smart Check Validated for New Bottlerocket OS appeared first on .

Trend Micro Cloud App Security Blocked 12.7 Million High-Risk Email Threats in 2019 – in addition to those detected by cloud email services’ built-in security

By Chris Taylor

On March 3, 2020, the cyber division of the Federal Bureau of Investigation (FBI) issued a private industry notification calling out Business Email Compromise (BEC) scams that exploit cloud-based email services. Microsoft Office 365 and Google G Suite, the two largest cloud-based email services, have been targeted by cybercriminals since 2014, based on FBI complaint information. The scams are initiated through credential phishing attacks in order to compromise business email accounts and request or misdirect transfers of funds. Between January 2014 and October 2019, the Internet Crime Complaint Center (IC3) received complaints totaling over $2.1 billion in actual losses from BEC scams targeting the two cloud services. The popularity of Office 365 and G Suite has made them attractive targets for cybercriminals.

Trend Micro™ Cloud App Security™ is an API-based service protecting Microsoft® Office 365™, Google G Suite, Box, and Dropbox. Using multiple advanced threat protection techniques, it acts as a second layer of protection after emails and files have passed through Office 365 and G Suite’s built-in security.

In 2019, Trend Micro Cloud App Security caught 12.7 million high-risk email threats in addition to what Office 365 and Gmail security had blocked. Those threats include close to one million malware detections, 11.3 million phishing attempts, and 386,000 BEC attempts. The blocked threats also include 4.8 million credential phishing attempts and 225,000 ransomware detections. These are potential attacks that could result in an organization’s monetary, productivity, or even reputation losses.

Trend Micro has published its Cloud App Security threat report since 2018. For the third year in a row, Trend Micro Cloud App Security has proven to provide effective protection for cloud email services. The following customer examples for different scenarios further show how Cloud App Security protects different organizations.

Customer examples: Additional detections after Office 365 built-in security (2019 data)

These five customers, ranging from 550 seats to 80K seats, are across different industries. All of them use E3, which includes basic security (Exchange Online Protection). This data shows the value of adding CAS to enhance Office 365 native security. For example, a transportation company with 80,000 Office 365 E3 users found an additional 16,000 malware, 510,000 malicious & phishing URLs and 27,000 BEC, all in 2019. With the average cost of a BEC attack at $75,000 each and the potential losses and costs to recover from credential phishing and ransomware attacks, Trend Micro Cloud App Security pays for itself very quickly.

Customer examples: Additional Detections after Office 365 Advanced Threat Protection (2019 data)

Customers using Office 365 Advanced Threat Protection (ATP) also need an additional layer of filtering as well. For example, an IT Services company with 10,000 users of E3 and ATP detected an additional 14,000 malware, 713,000 malicious and phishing URLs, and 6,000 BEC in 2019 with Trend Micro Cloud App Security.

Customer examples: Additional Detections after third-party email gateway (2019 data)

Many customers use a third-party email gateway to scan emails before they are delivered to their Office 365 environment. Despite these gateway deployments, many of the sneakiest and hardest-to-detect threats still slipped through. Plus, a gateway solution can’t detect internal email threats, which can originate from compromised devices or accounts within Office 365.

For example, a business with 120,000 Office 365 users with a third-party email gateway stopped an additional 27,000 malware, 195,000 malicious and phishing emails, and almost 6,000 BEC in 2019 with Trend Micro Cloud App Security.

Customer examples: Additional Detections after Gmail built-in security (2019 data)

*Trend Micro Cloud App Security supports Gmail starting April 2019.

For customers choosing G Suite, Trend Micro Cloud App Security can provide additional protection as well. For example, a telecommunications company with 12,500 users blocked almost 8,000 high-risk threats with Cloud App Security in just five months.

Email gateway or built-in security for cloud email services is no longer enough to protect organizations from email-based threats. Businesses, no matter the size, are at risk from a plethora of dangers that these kinds of threats pose. Organizations should consider a comprehensive multilayered security solution such as Trend Micro Cloud App Security. It supplements the included security features in email and collaboration platforms like Office 365 and G Suite.

Check out the Trend Micro Cloud App Security Report 2019 to get more details on the type of threats blocked by this product and common email attacks analyzed by Trend Micro Research in 2019.

The post Trend Micro Cloud App Security Blocked 12.7 Million High-Risk Email Threats in 2019 – in addition to those detected by cloud email services’ built-in security appeared first on .

Four Reasons Your Cloud Security Is Keeping You Up At Night

By Trend Micro

We are excited to introduce guest posts from our newest Trenders from Cloud Conformity, now Trend Micro Cloud One – Conformity. More insights will be shared from this talented team to help you be confident and in control of the security of your cloud environments!

Why your cloud security is keeping you up at night

We are all moving to the cloud for speed, agility, scalability, and cost-efficiency and have realized that it demands equally powerful security management. As the cloud keeps on attracting more businesses, security teams are spending sleepless nights securing the infrastructure.

Somewhere, a cyber con artist has a target set on you and is patiently waiting to infiltrate your security. Managing your security posture is as critical as wearing sunscreen even if the sun is hiding behind a cloud. You may not feel the heat instantly, but it definitely leaves a rash for you to discover later.

Analyzing the volume of issues across the global Trend Micro Cloud One – Conformity customer base clearly shows that ‘Security’ is the most challenging area within AWS infrastructure.

According to an internal study in June 2019, more than 50% of issues belonged to the ‘Security’ category.

We can definitely reduce the number of security issues affecting cloud infrastructure, but first need to conquer the possible reasons for security vulnerabilities.

 1. Not scanning your accounts regularly enough

If you deploy services and resources multiple times a day, you must continuously scan all your environments and instances at regular intervals. Tools like the Conformity Bot scan your accounts against 530 rules across the five pillars of the Well-Architected Framework to help you identify potential security risks and prioritize them. You can even set up the frequency of scans or run them manually as required.

2. Not investing in preventative measures

Seemingly harmless misconfigurations can cause enormous damage that can rapidly scale up and result in a security breach. You can prevent potential security risks from entering live environments by investing some time in scanning your staging or test accounts before launching any resources or services. You can use the Template Scanner to check your CloudFormation templates and identify any security and compliance issues before deployment.

3. Not monitoring real-time activity

Catastrophes don’t wait! It may take only a few minutes for someone to barge into your cloud infrastructure while you are away for the weekend. You need to watch activity in real time to act on threats without delay. A tool such as the Real-Time Monitoring Add-on tracks your account’s activity in real time and triggers alerts for suspicious activity based on your configurations. For example, you can set up alerts to monitor account activity from a specific country or region.

4. Not communicating risks in a timely manner

The information trickling in from your monitoring controls is fruitless until you get the right people to act quickly. One of the best practices for maintaining smooth security operations is to merge the flow of security activity and events into your information channels. Conformity allows you to integrate your AWS accounts with communication channels such as Jira, email, SMS, Slack, PagerDuty, Zendesk, ServiceNow ITSM, and Amazon SNS. Moreover, configuring communication triggers sends notifications and alerts to designated teams through the selected channels.
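
The Conformity side of these integrations is configured in its console, but the AWS side of an Amazon SNS channel is just a topic and a subscription. A minimal sketch, with a hypothetical topic name and address:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SecurityAlertsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: conformity-security-alerts         # hypothetical topic name
  OnCallSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref SecurityAlertsTopic            # Ref on an SNS topic returns its ARN
      Protocol: email
      Endpoint: security-oncall@example.com         # hypothetical on-call address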

AWS provides you with the services and resources to host your apps and infrastructure, but remember – Security is a shared responsibility in which you must take an active role.

See how Trend Micro can support your part of the shared responsibility model for cloud security: https://www.trendmicro.com/cloudconformity.

Stay Safe!

The post Four Reasons Your Cloud Security Is Keeping You Up At Night appeared first on .

The Summit of Cybersecurity Sits Among the Clouds

By Trend Micro

Trend Micro Apex One™ as a Service

You have heard it before, but it needs to be said again—threats are constantly evolving and getting sneakier, more malicious, and harder to find than ever before.

It’s a hard job to stay one step ahead of the latest threats and scams organizations come across, but it’s something Trend Micro has done for a long time, and something we do very well! At the heart of Trend Micro security is the understanding that we have to adapt and evolve faster than hackers and their malicious threats. When we released Trend Micro™ OfficeScan™ 11.0, we were facing browser exploits, the start of advanced ransomware and many more new and dangerous threats. That’s why we launched our connected threat defense approach—allowing all Trend Micro solutions to share threat information and research, keeping our customers one step ahead of threats.

 

With the launch of Trend Micro™ OfficeScan™ XG, we released a set of new capabilities such as anti-exploit prevention, ransomware enhancements, and pre-execution and runtime machine learning, protecting customers from a wider range of fileless and file-based threats. Fast forward to last year: we saw a huge shift not only in the threats in the security landscape, but also in how we architected and deployed our endpoint security. This led to Trend Micro Apex One™, our redesigned endpoint protection solution, delivered as a single agent. Trend Micro Apex One brought to market enhanced fileless attack detection and advanced behavioral analysis, and combined our powerful endpoint threat detection capabilities with our sophisticated endpoint detection and response (EDR) investigative capabilities.

 

We all know that threats evolve but, as user protection product manager Kris Anderson says, “with Trend Micro, your endpoint protection evolves as well. While we have signatures and behavioral patterns that are constantly being updated through our Smart Protection Network, attackers keep discovering new tactics that threaten your company. At Trend Micro, we constantly develop and fine-tune our detection engines to combat these threats in real time, with the least performance hit to the endpoint. This is why we urge customers to stay updated with the latest version of endpoint security—Apex One.”

Trend Micro Apex One has the broadest set of threat detection capabilities in the industry today, and staying updated with the latest version allows you to benefit from this cross-layered approach to security.

 

One easy way to ensure you are always protected with the latest version of Trend Micro Apex One is to migrate to Trend Micro Apex One™ as a Service. By deploying a SaaS model of Trend Micro Apex One, you can benefit from automatic updates of the latest Trend Micro Apex One security features without having to go through the upgrade process yourself. Trend Micro Apex One as a Service deployments will automatically get updated as new capabilities are introduced and existing capabilities are enhanced, meaning you will always have the most recent and effective endpoint security protecting your endpoints and users.

 

Trend Micro takes cloud security seriously, and endpoint security is no different. You can get the same gold standard endpoint protection of Trend Micro Apex One, but delivered as a service, allowing you to benefit from easy management and ongoing maintenance.


How To Get The Most Out Of Industry Analyst Reports

By Trend Micro

Whether you’re trying to inform purchasing decisions or just want to better understand the cybersecurity market and its players, industry analyst reports can be very helpful. Following our recent recognition from Forrester and IDC in their respective cloud security reports, we want to help customers understand how to use this information.

Our VP of cybersecurity, Greg Young, taps into his past experience at Gartner to explain how to discern the most value from industry analyst reports.


Network security simplified with Amazon VPC Ingress Routing and Trend Micro

By Trend Micro

Today, Amazon Web Services (AWS) announced the availability of a powerful new service, Amazon Virtual Private Cloud (Amazon VPC) Ingress Routing. As a Launch Partner for Amazon VPC Ingress Routing, we at Trend Micro are proud to continue innovating alongside AWS to provide solutions to customers, enabling new approaches to network security. Trend Micro™ TippingPoint™ and Trend Micro™ Cloud One™ integrate with Amazon VPC Ingress Routing to deliver network security that allows customers to quickly achieve compliance by inspecting both ingress and egress traffic, with a deployment experience designed to eliminate any disruption to your business.

Cloud network layer security by Trend Micro

A defense-in-depth, or layered, security approach is important to organizations, especially at the cloud network layer. That said, customers need to be able to deploy a solution without re-architecting or slowing down their business; the problem is that previous solutions in the marketplace couldn’t meet both requirements.

So, when our customers asked us to bring TippingPoint intrusion prevention system (IPS) capabilities to the cloud, we responded. Backed by industry-leading research from Trend Micro Research, including the Zero Day Initiative™, we created a solution with cloud network IPS capabilities, incorporating detection, protection, and threat disruption, without any disruption to the network.

At AWS re:Invent 2018, AWS announced the launch of Amazon Transit Gateway. This powerful architecture enables customers to route traffic through a hub-and-spoke topology. We leveraged this as the primary deployment model for Cloud Network Protection, powered by TippingPoint, our cloud IPS solution announced in July 2019. This enabled our customers to quickly gain broad security and compliance without re-architecting. Now, we’re adding a flexible new deployment model.

 

Enhancing security through partnered innovation

This year we are excited to be a Launch Partner for Amazon VPC Ingress Routing, a new service that gives customers additional flexibility and control over their network traffic routing. Learn more about this new feature here.

Amazon VPC Ingress Routing is a service that helps customers simplify the integration of network and security appliances within their network topology. With Amazon VPC Ingress Routing, customers can define routing rules at the Internet Gateway (IGW) and Virtual Private Gateway (VGW) to redirect ingress traffic to third-party appliances, before it reaches the final destination. This makes it easier for customers to deploy production-grade applications with the networking and security services they require within their Amazon VPC.

Customers can now redirect north-south traffic flowing in and out of a VPC, through the internet gateway or virtual private gateway, to the Trend Micro cloud network security solution. Not only does this let customers screen all external traffic before it reaches the subnet, it also allows traffic flowing into different subnets to be intercepted by different instances of the Trend Micro solution.
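Under the hood, this is an edge association: a route table attached to the internet gateway sends traffic destined for a protected subnet to the appliance’s elastic network interface first. The hypothetical Python (boto3) sketch below shows that mechanism with placeholder IDs; it is not Trend Micro’s deployment tooling.

import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"         # placeholder VPC
IGW_ID = "igw-0123456789abcdef0"         # placeholder internet gateway
APPLIANCE_ENI = "eni-0123456789abcdef0"  # placeholder appliance network interface
PROTECTED_SUBNET_CIDR = "10.0.1.0/24"    # placeholder subnet behind the appliance

# Create a route table that will be associated with the internet gateway itself.
edge_route_table = ec2.create_route_table(VpcId=VPC_ID)["RouteTable"]["RouteTableId"]

# Send ingress traffic destined for the protected subnet to the appliance first.
ec2.create_route(
    RouteTableId=edge_route_table,
    DestinationCidrBlock=PROTECTED_SUBNET_CIDR,
    NetworkInterfaceId=APPLIANCE_ENI,
)

# Edge-associate the route table with the internet gateway (the capability
# that Amazon VPC Ingress Routing introduces).
ec2.associate_route_table(RouteTableId=edge_route_table, GatewayId=IGW_ID)

In practice, the protected subnet’s own route table also points outbound traffic back at the appliance interface, so return and egress traffic is inspected as well.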

Trend Micro customers can now deploy powerful cloud network layer security in AWS by leveraging Amazon VPC Ingress Routing. With this enhancement, customers can deploy in any VPC without disruptive re-architecture and without introducing additional routing or proxies. Deploying directly inline is the ideal approach and enables simplified network security in the cloud, without disruption.

 

What types of protection can customers expect?

When you think of classic IPS capabilities, of course you think of preventing inbound attacks. Now, with Amazon VPC Ingress Routing and Trend Micro, customers can protect their VPCs in even more scenarios. Here is what our customers are thinking about:

  • Protecting physical and on-premises assets by routing that traffic to AWS via AWS Direct Connect or VPN
  • Detecting compromised cloud workloads (cloud native or otherwise) and disrupting those attacks, including with DNS filtering and geo-blocking capabilities
  • Preventing lateral movement between multi-tiered applications or between connected partner ecosystems
  • Preventing cloud-native threats, including Kubernetes® and Docker® vulnerabilities and compromised container images or repositories pulled into VPCs

 

Trend Micro™ Cloud One – Network Security

Amazon VPC Ingress Routing will soon be available as a deployment option for Cloud Network Protection, powered by TippingPoint, which is available in AWS Marketplace. It will also be supported upon the release of our recently announced Trend Micro™ Cloud One – Network Security, a key service in Trend Micro’s new Cloud One™, a cloud security services platform.

