FreshRSS


S3 Ep142: Putting the X in X-Ops

By Paul Ducklin
How to get all your corporate "Ops" teams working together, with cybersecurity correctness as a guiding light.

Introducing AI-guided Remediation for IaC Security / KICS

By The Hacker News
While the use of Infrastructure as Code (IaC) has gained significant popularity as organizations embrace cloud computing and DevOps practices, the speed and flexibility that IaC provides can also introduce the potential for misconfigurations and security vulnerabilities.  IaC allows organizations to define and manage their infrastructure using machine-readable configuration files, which are

GitHub Extends Push Protection to Prevent Accidental Leaks of Keys and Other Secrets

By Ravie Lakshmanan
GitHub has announced the general availability of a new security feature called push protection, which aims to prevent developers from inadvertently leaking keys and other secrets in their code. The Microsoft-owned cloud-based repository hosting platform, which began testing the feature a year ago, said it's also extending push protection to all public repositories at no extra cost. The
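
Push protection works by checking pushed content against known credential formats before it reaches the repository. The sketch below illustrates the general idea only; the patterns are simplified stand-ins, not GitHub's actual detectors, which are provider-specific and tuned for low false positives.

```python
import re

# Illustrative patterns only -- real push protection uses many more,
# higher-confidence detectors supplied by the secret's issuer.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

pushed = "config = {'key': 'AKIAABCDEFGHIJKLMNOP'}\nprint('hello')\n"
print(scan_for_secrets(pushed))  # -> [(1, 'aws_access_key_id')]
```

A hook or server-side check that rejects the push when this list is non-empty captures the essence of the feature.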

Are Source Code Leaks the New Threat Software Vendors Should Care About?

By The Hacker News
Less than a month ago, Twitter indirectly acknowledged that some of its source code had been leaked on the code-sharing platform GitHub by sending a copyright infringement notice to take down the incriminated repository. The latter is now inaccessible, but according to the media, it was accessible to the public for several months. A user going by the name FreeSpeechEnthousiast committed

The Secret Vulnerability Finance Execs are Missing

By The Hacker News
The (Other) Risk in Finance A few years ago, a Washington-based real estate developer received a document link from First American – a financial services company in the real estate industry – relating to a deal he was working on. Everything about the document was perfectly fine and normal. The odd part, he told a reporter, was that if he changed a single digit in the URL, suddenly, he could see

Regular Pen Testing Is Key to Resolving Conflict Between SecOps and DevOps

By The Hacker News
In an ideal world, security and development teams would be working together in perfect harmony. But we live in a world of competing priorities, where DevOps and security departments often butt heads. Agility and security are frequently at odds: if a new feature is delivered quickly but contains security vulnerabilities, the SecOps team will need to scramble the release

CircleCI Urges Customers to Rotate Secrets Following Security Incident

By Ravie Lakshmanan
DevOps platform CircleCI on Wednesday urged its customers to rotate all their secrets following an unspecified security incident. The company said an investigation is currently ongoing, but emphasized that "there are no unauthorized actors active in our systems." Additional details are expected to be shared in the coming days. "Immediately rotate any and all secrets stored in CircleCI,"

Hackers Using Fake CircleCI Notifications to Hack GitHub Accounts

By Ravie Lakshmanan
GitHub has put out an advisory detailing what may be an ongoing phishing campaign targeting its users to steal credentials and two-factor authentication (2FA) codes by impersonating the CircleCI DevOps platform. The Microsoft-owned code hosting service said it learned of the attack on September 16, 2022, adding the campaign impacted "many victim organizations." The fraudulent messages claim to

Integrating Live Patching in SecDevOps Workflows

By The Hacker News
SecDevOps is, just like DevOps, a transformational change that organizations undergo at some point during their lifetime. Just like many other big changes, SecDevOps is commonly adopted after a reality check of some kind: a big damaging cybersecurity incident, for example. A major security breach or, say, consistent problems in achieving development goals signals to organizations that the

GitLab Issues Patch for Critical Flaw in its Community and Enterprise Software

By Ravie Lakshmanan
DevOps platform GitLab this week issued patches to address a critical security flaw in its software that could lead to arbitrary code execution on affected systems. Tracked as CVE-2022-2884, the issue is rated 9.9 on the CVSS vulnerability scoring system and impacts all versions of GitLab Community Edition (CE) and Enterprise Edition (EE) starting from 11.3.4 before 15.1.5, 15.2 before 15.2.3,

GitLab Issues Security Patch for Critical Account Takeover Vulnerability

By Ravie Lakshmanan
GitLab has moved to address a critical security flaw in its service that, if successfully exploited, could result in an account takeover. Tracked as CVE-2022-1680, the issue has a CVSS severity score of 9.9 and was discovered internally by the company. The security flaw affects all versions of GitLab Enterprise Edition (EE) starting from 11.10 before 14.9.5, all versions starting from 14.10

Yes, Containers Are Terrific, But Watch the Security Risks

By The Hacker News
Containers revolutionized the development process, acting as a cornerstone for DevOps initiatives, but containers bring complex security risks that are not always obvious. Organizations that don’t mitigate these risks are vulnerable to attack.  In this article, we outline how containers contributed to agile development, which unique security risks containers bring into the picture – and what

Connected Security Solutions Helps City of Tyler’s CIO to Reduce Costs While Enabling Delivery of Enhanced Community & Public Safety Services

By Trend Micro

“We’re here to serve” is Benny Yazdanpanahi’s motto as CIO for the City of Tyler, Texas. Supporting a population of approximately 107,000, Yazdanpanahi’s vision for his city relies on the use of data to deliver exceptional services to citizens, today and into the future.

Since joining the city nearly 19 years ago, Yazdanpanahi has continually challenged himself and his small IT team to stay agile and to keep the needs of the city’s citizens at the forefront. Today, Yazdanpanahi and his team use IT systems to make more informed decisions, enhance community services, and improve public safety.

“Our citizens, and especially the younger generation, want immediate access to information and online services,” said Yazdanpanahi. “We want to keep pace with the latest technologies, not only for citizens but also to make our city employees more effective and efficient.”

But Yazdanpanahi knows that a highly secure IT environment is essential to their continued success. “Many US cities have been hacked, so security is on top of everyone’s mind. As a city, we want to provide great services, but we have to provide them in a highly secure manner.”

To accomplish those security goals with limited resources and staff, Tyler’s leaders have been collaborating with Trend Micro for several years. The cybersecurity giant has brought a hands-on approach and an ability to stay ahead of the threats. Their adaptability to the threat landscape strengthens the city’s security posture and empowers the IT team to focus on serving the community.

The city has been able to stay secure without additional staff and resources. City employees don’t spend time resolving IT issues, which improves their productivity and frees them to focus on the things that matter for the city.

“If you don’t collaborate with a partner that’s highly experienced in the security field, you can easily get blindsided,” said Yazdanpanahi. “We need someone there, day in and out, focused on security. Trend Micro knows how to protect cities like us. They provide the kind of north, south, east, and west protection that makes my job easier and allows us to use our data to accomplish new, exciting things for our city.”

Read more about Benny’s journey to securing the city:

https://www.trendmicro.com/en_ca/about/customer-stories/city-of-tyler.html

The post Connected Security Solutions Helps City of Tyler’s CIO to Reduce Costs While Enabling Delivery of Enhanced Community & Public Safety Services appeared first on .

Have You Considered Your Organization’s Technical Debt?

By Madeline Van Der Paelt

TL;DR Deal with your dirty laundry.

Have you ever skipped doing your laundry and watched as that pile of dirty clothes kept growing, just waiting for you to get around to it? You’re busy, you’re tired and you keep saying you’ll get to it tomorrow. Then suddenly, you realize that it’s been three weeks and now you’re running around frantically, late for work because you have no clean socks!

That is technical debt.

Those little things that you put off, which can grow from a minor inconvenience into a full-blown emergency when they’re ignored long enough.

Piling Up

How many times have you had an alarm go off, or a customer issue arise from something you already knew about and meant to fix, but “haven’t had the time”? How many times have you been working on something and thought, “wow, this would be so much easier if I just had the time to …”?

That is technical debt.

But back to you. In your rush to leave for work, you manage to find two old mismatched socks. One of them has a hole in it. You don’t have time for this! You throw them on and run out the door, on your way to solve real problems. Throughout the day, that hole grows and your foot starts to hurt.

This is really not your day. In your panicked state this morning you actually managed to add more pain to your already stressed system, plus you still have to do your laundry when you get home! If only you’d taken the time a few days ago…

Coming Back to Bite You

In the tech world, where one seemingly small hole – one tiny vulnerability – can bring down your whole system, managing technical debt is critical. Fixing issues before they become emergencies is necessary in order to succeed.

If you’re always running at full speed to solve the latest issue in production, you’ll never get ahead of your competition and only fall further behind.

It’s very easy to get into a pattern of leaving the little things for another day. Build optimizations, that random unit test that’s missing, that playbook you meant to write up after the last incident – these are technical debt too! By spending just a little time each day to tidy up a few things, you can make your system more stable and provide a better experience for both your customers and your fellow developers.

Cleaning Up

Picture your code as that mountain of dirty laundry. Each day that passes, you add just a little more to it. The more debt you add on, the more daunting your task seems. It becomes a thing of legend. You joke about how you haven’t dealt with it, but really you’re growing increasingly anxious and wary about actually tackling it, and what you’ll find when you do.

Maybe if you put it off just a little bit longer a hero will swoop in and clean up for you! (A woman can dream, right?) The more debt you add, the longer it will take to conquer, the harder it will be, and the higher the risk of introducing a new issue.

This added stress and complexity doesn’t sound too appealing, so why do we do it? It’s usually caused by things like having too much work in progress, conflicting priorities and (surprise!) neglected work.

Managing technical debt requires only one important thing – a cultural change.

As much as possible we need to stop creating technical debt, otherwise we will never be able to get it under control. To do that, we need to shift our mindset. We need to step back and take the time to see and make visible all of the technical debt we’re drowning in. Then we can start to chip away at it.

Culture Shift

My team took a page out of “The Unicorn Project” (Kim, 2019) and started by running “debt days” when we caught our breath between projects. Each person chose a pain point, something they were interested in fixing, and we started there. We dedicated two days to removing debt and came out the other side having completed tickets that were on the backlog for over a year.

We also added new metrics and dashboards for better incident response, and improved developer tools.

Now, with each new code change, we’re on the lookout. Does this change introduce any debt? Do we have the ability to fix that now? We encourage each other to fix issues as we find them whether it’s with the way our builds work, our communication processes or a bug in the code.

We need to give ourselves the time to breathe, in both our personal lives and our work days. Taking a pause between tasks not only allows us to mentally prepare for the next one, but it gives us time to learn and reflect. It’s in these pauses that we can see if we’ve created technical debt in any form and potentially go about fixing it right away.

What’s Next?

The improvement of daily work ultimately enables developers to focus on what’s really important: delivering value. It enables them to move faster and find more joy in their work.

So how do you stay on top of your never-ending laundry? Your family chooses to make a cultural change and dedicates time to it. You declare Saturday as laundry day!

Make the time to deal with technical debt – your developers, security teams, and customers will thank you for it.

Risk Decisions in an Imperfect World

By Mark Nunnikhoven (Vice President, Cloud Research)

Risk decisions are the foundation of information security. Sadly, they are also one of the most often misunderstood parts of information security.

This is bad enough on its own but can sink any effort at education as an organization moves towards a DevOps philosophy.

To properly evaluate the risk of an event, two components are required:

  1. An assessment of the impact of the event
  2. The likelihood of the event

Unfortunately, teams—and humans in general—are reasonably good at the first part and unreasonably bad at the second.

This is a problem.

It’s a problem that is amplified when security starts to integrate with teams in a DevOps environment. Originally presented as part of AllTheTalks.online, this talk examines the ins and outs of risk decisions and how we can start to work on improving how our teams handle them.
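
The two components above are often combined quantitatively as annualized loss expectancy (ALE): the impact of a single event times its expected yearly frequency. A minimal sketch, with made-up figures:

```python
# ALE = SLE (single loss expectancy, the impact) x ARO (annualized rate
# of occurrence, the likelihood). The numbers below are illustrative only.

def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """Combine impact and likelihood into an expected yearly loss."""
    return single_loss * annual_rate

# Example: a breach costing $250,000 that we estimate occurs once every
# four years (rate 0.25 per year).
ale = annualized_loss_expectancy(250_000, 0.25)
print(f"${ale:,.0f} per year")  # -> $62,500 per year
```

The formula is easy; as the talk argues, the hard part is that humans estimate the impact term far better than the rate term.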

Principles of a Cloud Migration

By Jason Dablow

Development and application teams can be the initial entry point of a cloud migration as they start looking at faster ways to accelerate value delivery. One of the main tools they might adopt along the way is “Infrastructure as Code” (IaC), where the cloud resources that run their applications are created with lines of code.

In the video below, recorded as part of a NADOG (North American DevOps Group) event, I describe some additional techniques your development staff can use to run Well-Architected Framework and other compliance scanning against their Infrastructure as Code before it is launched into your cloud environment.
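
As a toy illustration of that kind of pre-launch check, the sketch below lints a parsed template against two invented rules. The rule names and template shape are made up for illustration; real scanners (Conformity, KICS, and the like) work against actual CloudFormation or Terraform with far larger rule sets.

```python
# A toy pre-deployment compliance check, loosely inspired by
# Well-Architected security guidance.

def check_template(resources: dict) -> list[str]:
    """Return human-readable findings for a parsed IaC template."""
    findings = []
    for name, props in resources.items():
        if props.get("type") == "storage_bucket" and props.get("public_read"):
            findings.append(f"{name}: bucket allows public read access")
        if props.get("type") == "security_group":
            for rule in props.get("ingress", []):
                if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
                    findings.append(f"{name}: SSH open to the world")
    return findings

template = {
    "logs": {"type": "storage_bucket", "public_read": True},
    "bastion": {"type": "security_group",
                "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
}
for finding in check_template(template):
    print(finding)
```

Running a check like this in the pipeline, before any resource exists, is what catches misconfigurations at their cheapest point to fix.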

If this content has sparked additional questions, please feel free to reach out to me on my LinkedIn. Always happy to share my knowledge of working with large customers on their cloud and transformation journeys!

8 Cloud Myths Debunked

By Trend Micro

Many businesses have misperceptions about cloud environments, providers, and how to secure it all. We want to help you separate fact from fiction when it comes to your cloud environment.

This list debunks 8 myths to help you confidently take the next steps in the cloud.

Knowing your shared security responsibility in Microsoft Azure and avoiding misconfigurations

By Trend Micro

Trend Micro is excited to launch new Trend Micro Cloud One™ – Conformity capabilities that will strengthen protection for Azure resources.

As with any launch, there is a lot of new information, so we decided to sit down with one of the founders of Conformity, Mike Rahmati. Mike is a technologist at heart, with a proven track record of success in the development of software systems that are resilient to failure and grow and scale dynamically through cloud, open-source, agile, and lean disciplines. In the interview, we picked Mike’s brain on how these new capabilities can help customers prevent or easily remediate misconfigurations on Azure. Let’s dive in.

What are the common business problems that customers encounter when building on or moving their applications to Azure or Amazon Web Services (AWS)?

The common problem is there are a lot of tools and cloud services out there. Organizations are looking for tool consolidation and visibility into their cloud environment. Shadow IT and business units spinning up their own cloud accounts is a real challenge for IT organizations to keep on top of. Compliance, security, and governance controls are not necessarily top of mind for business units that are innovating at incredible speeds. That is why it is so powerful to have a tool that can provide visibility into your cloud environment and show where you are potentially vulnerable from a security and compliance perspective.

Common misconfigurations on AWS are an open Amazon Elastic Compute Cloud (EC2) instance or a misconfigured IAM policy. What is the equivalent for Microsoft Azure?

The common misconfigurations are actually quite similar to what we’ve seen with AWS. During the product preview phase, we saw customers with many of the same kinds of misconfiguration issues as on AWS. For example, Microsoft Azure Blob Storage is the equivalent of Amazon S3, and it is a common source of misconfigurations. We have observed misconfigurations in two main areas: Firewall and Web Application Firewall (WAF), the latter being equivalent to AWS WAF. The Firewall is similar to networking configuration in AWS, providing inbound protection for non-HTTP protocols and network-related protection for all ports and protocols. It is important to note that this is based on the 100 best practices and 15 services we currently support for Azure (and growing), whereas for AWS we have over 600 best practices in total, with over 70 controls offering auto-remediation.

Can you tell me about the CIS Microsoft Azure Foundations Benchmark?

We are thrilled to support the CIS Microsoft Azure Foundations Benchmark. It includes automated checks and remediation recommendations for the following: Identity and Access Management, Security Center, Storage Accounts, Database Services, Logging and Monitoring, Networking, Virtual Machines, and App Service. There are over 100 best practices in this framework, and we have rules built to check all of them to ensure cloud builders are avoiding risk in their Azure environments.

Can you tell me a little bit about the Microsoft Shared Responsibility Model?

In terms of the shared responsibility model, it is very similar to AWS’s. The security OF the cloud is Microsoft’s responsibility, but security IN the cloud is the customer’s responsibility. Microsoft’s ecosystem is growing rapidly, and there are a lot of services that you need to know in order to configure them properly. With Conformity, customers only need to know how to properly configure the core services, according to best practices, and then we can help you take it to the next level.

Can you give an example of how the shared responsibility model is used?

Yes. Imagine you have a Microsoft Azure Blob Storage that includes sensitive data. Then, by accident, someone makes it public. The customer might not be able to afford an hour, two hours, or even days to close that security gap.

In just a few minutes, Conformity will alert you to your risk status and provide remediation recommendations, and for our AWS checks it gives you the ability to set up auto-remediation. Auto-remediation can be very helpful, as it can close the gap in near-real time for customers.
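
The detect-and-remediate flow just described can be sketched as a simple loop. The client object below is a hypothetical stand-in for a real storage SDK (such as azure-storage-blob), not Conformity's actual API; the control flow is the point.

```python
# Detect publicly readable containers, alert on them, and optionally
# flip them back to private -- the essence of auto-remediation.

def remediate_public_containers(client, dry_run=True):
    """Return names of publicly readable containers; fix them unless dry_run."""
    flagged = []
    for container in client.list_containers():
        if container["public_access"] is not None:  # e.g. 'blob' or 'container'
            if not dry_run:
                client.set_access(container["name"], public_access=None)
            flagged.append(container["name"])
    return flagged

class FakeClient:  # minimal stand-in so the sketch runs end to end
    def __init__(self):
        self.containers = [
            {"name": "invoices", "public_access": "container"},
            {"name": "assets", "public_access": None},
        ]
    def list_containers(self):
        return self.containers
    def set_access(self, name, public_access):
        for c in self.containers:
            if c["name"] == name:
                c["public_access"] = public_access

client = FakeClient()
print(remediate_public_containers(client, dry_run=False))  # -> ['invoices']
print(client.containers[0]["public_access"])  # -> None (now private)
```

Running this on a schedule, or in response to configuration-change events, is what shrinks the exposure window from hours to minutes.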

What are next steps for our readers?

I’d say that whether your cloud exploration is just taking shape, you’re midway through a migration, or you’re already running complex workloads in the cloud, we can help. You can gain full visibility of your infrastructure with continuous cloud security and compliance posture management. We can do the heavy lifting so you can focus on innovating and growing. Also, you can ask anyone from our team to set you up with a complimentary cloud health check. Our cloud engineers are happy to provide an AWS and/or Azure assessment to see if you are building a secure, compliant, and reliable cloud infrastructure. You can find out your risk level in just 10 minutes.

Get started today with a 60-day free trial >

Check out our knowledge base of Azure best practice rules>

Learn more >

Do you see value in building a security culture that is shifted left?

Yes, we have done this for our customers using AWS and it has been very successful. The more we talk about shifting security left, the better; I think that’s where we help customers build a security culture. Every cloud customer is struggling to implement security earlier in the development cycle, and they need tools. Conformity is a DevOps- and DevSecOps-friendly tool that helps customers build a security culture that is shifted left.

We help customers shift security left by integrating the Conformity API into their CI/CD pipeline. The product also has preventative controls, which our API and template scanners provide. The idea is we help customers shift security left to identify those misconfigurations early on, even before they’re actually deployed into their environments.

We also help them scan their infrastructure-as-code templates before being deployed into the cloud. Customers need a tool to bake into their CI/CD pipeline. Shifting left doesn’t simply mean having a reporting tool, but rather a tool that allows them to shift security left. That’s where our product, Conformity, can help.
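
In practice, a shift-left integration like this boils down to a pipeline step that fails the build when scan findings cross a severity threshold. A minimal sketch, where the hard-coded findings list stands in for the results of a real scanner or API call:

```python
# A minimal CI/CD gate over scanner findings. In a real pipeline the
# findings would come from a template scan or an HTTP call to a
# security API rather than a hard-coded list.

SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

def gate(findings, fail_at="HIGH"):
    """Return a process exit code: 1 if any finding meets the threshold."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f[0]] >= threshold]
    for severity, message in blocking:
        print(f"[{severity}] {message}")
    return 1 if blocking else 0

findings = [("LOW", "bucket missing tags"),
            ("HIGH", "security group open to 0.0.0.0/0")]
exit_code = gate(findings)
print(exit_code)  # a non-zero code here would fail the pipeline step
```

Because the gate runs before deployment, a HIGH finding blocks the misconfiguration from ever reaching the cloud environment.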

The Fear of Vendor Lock-in Leads to Cloud Failures

By Trend Micro

Vendor lock-in has been an often-quoted risk since the mid-1990s.

The fear is that by investing too much with one vendor, an organization reduces its options in the future.

Was this a valid concern? Is it still today?

The Risk

Organizations walk a fine line with their technology vendors. Ideally, you select a set of technologies that not only meet your current need but that align with your future vision as well.

This way, as the vendor’s tools mature, they continue to support your business.

The risk is that if you have all of your eggs in one basket, you lose all of the leverage in the relationship with your vendor.

If the vendor changes directions, significantly increases their prices, retires a critical offering, the quality of their product drops, or if any number of other scenarios happen, you are stuck.

Locking in to one vendor means that the cost of switching to another or changing technologies is prohibitively expensive.

All of these scenarios have happened and will happen again. So it’s natural that organizations are concerned about lock-in.

Cloud Maturity

When the cloud started to rise to prominence, the spectre of vendor lock-in reared its ugly head again. CIOs around the world thought that moving the majority of their infrastructure to AWS, Azure, or Google Cloud would lock them into that vendor for the foreseeable future.

Trying to mitigate this risk, organizations regularly adopt a “cloud neutral” approach. This means they only use “generic” cloud services that can be found across all the providers. Often hidden under the guise of a “multi-cloud” strategy, it’s really a hedge so as not to lose position in the vendor/client relationship.

In isolation, that’s a smart move.

Taking a step back and looking at the bigger picture starts to show some of the issues with this approach.

Automation

The first issue is that the heavy use of automation in cloud deployments means vendor “lock-in” is not nearly as significant a risk as it was in past decades. The manual effort required to make a vendor change for your storage network used to be monumental.

Now? It’s a couple of API calls and a consumption-based bill adjusted by the megabyte. This pattern is echoed across other resource types.

Automation greatly reduces the cost of switching providers, which reduces the risk of vendor lock-in.
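
One way to see why: when provisioning happens through code against a narrow interface, switching vendors becomes a one-line change rather than a manual project. Both provider classes below are invented stand-ins, not real SDK calls.

```python
# When provisioning is automated behind a small interface, the provider
# is just an injected dependency -- switching it is trivial.

class ProviderA:
    def create_bucket(self, name):
        return f"providerA://{name}"

class ProviderB:
    def create_bucket(self, name):
        return f"providerB://{name}"

def provision_storage(provider, names):
    """Provision every bucket through whichever provider is injected."""
    return [provider.create_bucket(n) for n in names]

# Switching vendors is now a change to this single line.
storage = provision_storage(ProviderA(), ["logs", "backups"])
print(storage)  # -> ['providerA://logs', 'providerA://backups']
```

The same dependency-injection pattern is what infrastructure-as-code tools apply at scale across whole environments.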

Missing Out

When your organization sets the mandate to only use the basic services (server-based compute, databases, network, etc.) from a cloud service provider, you’re missing out on one of the biggest advantages of moving to the cloud: doing less.

The goal of a cloud migration is to remove all of the undifferentiated heavy lifting from your teams.

You want your teams directly delivering business value as much of the time as possible. One of the most direct routes to this goal is to leverage more and more managed services.

Using AWS as an example, you don’t want to run your own database servers in Amazon EC2, or even standard RDS, if you can help it. Amazon Aurora and DynamoDB generally offer lower operational impact, higher performance, and lower costs.

When organizations are worried about vendor lock-in, they typically miss out on the true value of cloud: a laser focus on delivering business value.

But Multi-cloud…

In this new light, a multi-cloud strategy takes on a different aim. Your teams should be trying to maximize business value (which includes cost, operational burden, development effort, and other aspects) wherever that leads them.

As organizations mature in their cloud usage and use of DevOps philosophies, they generally start to cherry pick managed services from cloud providers that best fit the business problem at hand.

They use automation to reduce the impact if they have to change providers at some point in the future.

This leads to a multi-cloud split that typically falls around 80% in one cloud and roughly 10% in each of the other two. That can vary depending on the situation, but the premise is the same: organizations that thrive have a primary cloud and use other services when and where it makes sense.

Cloud Spanning Tools

There are some tools that are more effective when they work in all clouds the organization is using. These tools range from software products (like deployment and security tools) to metrics to operational playbooks.

Following the principles of focusing on delivering business value, you want to actively avoid duplicating a toolset unless it’s absolutely necessary.

The maturity of the tooling in cloud operations has reached the point where it can deliver support to multiple clouds without reducing its effectiveness.

This means automation playbooks can easily support multi-cloud (e.g., Terraform). Security tools can easily support multi-cloud (e.g., Trend Micro Cloud One™). Observability tools can easily support multi-cloud (e.g., Honeycomb.io).

The guiding principle for a multi-cloud strategy is to maximize the amount of business value the team is able to deliver. You accomplish this by becoming more efficient (using the right service and tool at the right time) and by removing work that doesn’t matter to that goal.

In the age of cloud, vendor lock-in should be far down on your list of concerns. Don’t let a long-standing fear slow down your teams.

Principles of a Cloud Migration – Security W5H – The HOW

By Jason Dablow

“How about… ya!”

Security needs to be treated much like DevOps in evolving organizations; everyone in the company has a responsibility to make sure it is implemented. It is not just a part of operations, but a cultural shift in doing things right the first time – Security by default. Here are a few pointers to get you started:

1. Security should be a focus from the top on down

Executives should be thinking about security as a part of the cloud migration project, and not just as a step of the implementation. Security should be top of mind in planning, building, developing, and deploying applications as part of your cloud migration. This is why the Well Architected Framework has an entire pillar dedicated to security. Use it as a framework to plan and integrate security at each and every phase of your migration.

2. A cloud security policy should be created and/or integrated into existing policy

Start with what you know: least privilege permission models, cloud native network security designs, etc. This will help you start creating a framework for these new cloud resources that will be in use in the future. Your cloud provider and security vendors, like Trend Micro, can help you with these discussions in terms of planning a thorough policy based on the initial migration services that will be used. Remember from my other articles, a migration does not just stop when the workload has been moved. You need to continue to invest in your operation teams and processes as you move to the next phase of cloud native application delivery.
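
As a concrete example of "starting with what you know," the sketch below lints an IAM-style policy document for grants that violate least privilege. The policy shape mirrors AWS's JSON format, but the rule set is invented for illustration, not a complete audit.

```python
# A toy least-privilege lint: flag Allow statements that use wildcard
# actions or apply to every resource.

def overly_broad_statements(policy: dict) -> list[str]:
    """Flag statements that grant wildcard actions or resources."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"Statement {i}: wildcard action {actions}")
        if stmt.get("Resource") == "*":
            findings.append(f"Statement {i}: applies to all resources")
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-logs/*"},
    ],
}
for finding in overly_broad_statements(policy):
    print(finding)
```

Folding a check like this into policy reviews gives the "least privilege" principle an enforceable, repeatable form.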

3. Trend Micro’s Cloud One can check off a lot of boxes!

Using a collection of security services, like Trend Micro’s Cloud One, can be a huge relief when it comes to implementing runtime security controls to your new cloud migration project. Workload Security is already protecting thousands of customers and billions of workload hours within AWS with security controls like host-based Intrusion Prevention and Anti-Malware, along with compliance controls like Integrity Monitoring and Application Control. Meanwhile, Network Security can handle all your traffic inspection needs by integrating directly with your cloud network infrastructure, a huge advantage in performance and design over Layer 4 virtual appliances requiring constant changes to route tables and money wasted on infrastructure. As you migrate your workloads, continuously check your posture against the Well Architected Framework using Conformity. You now have your new infrastructure secure and agile, allowing your teams to take full advantage of the newly migrated workloads and begin building the next iteration of your cloud native application design.

This is part of a multi-part blog series on things to keep in mind during a cloud migration project. You can start at the beginning, which was kicked off with a webinar, here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html. To have a more personalized conversation, please add me on LinkedIn!

Principles of a Cloud Migration – Security W5H – The WHERE

By Jason Dablow

“Wherever I go, there I am” -Security

I recently had a discussion with a large organization that had a few workloads in multiple clouds and was assembling a cloud security focused team to build out its security policy moving forward. It’s one of my favorite conversations to have, since I’m not just talking about Trend Micro solutions and how they can help organizations be successful, but more about how a business approaches the creation of its security policy to achieve a successful center of operational excellence. While I will talk more about the COE (center of operational excellence) in a future blog series, I want to dive into the core of the discussion: where do we add security in the cloud?

We started discussing how to secure these new cloud native services like hosted services, serverless, container infrastructures, etc., and how to add these security strategies into their ever-evolving security policy.

Quick note: If your cloud security policy is not ever-evolving, it’s out of date. More on that later.

A colleague and friend of mine, Bryan Webster, presented a concept that traditional security models have always been about three things: Best Practice Configuration for Access and Provisioning, Walls that Block Things, and Agents that Inspect Things.  We have relied heavily on these principles since the first computer was connected to another. I present to you this handy graphic he used to illustrate the last two points.

But as we move to secure cloud native services, some of these are outside our walls, and some don’t allow the ability to install an agent.  So WHERE does security go now?

Actually, it’s not all that different; what changes is how it’s deployed and implemented. Start by removing the assumption that security controls are tied to specific implementations. You don’t need a hardware appliance to do intrusion prevention, just as you don’t need an installed agent to do anti-malware. There will also be a big focus on your configuration, permissions, and other best practices.  Use security benchmarks like the AWS Well-Architected Framework, CIS, and SANS to help build an adaptable security policy that can meet the needs of the business moving forward.  You might also want to consider consolidating technologies into a cloud-centric service platform like Trend Micro Cloud One, which enables builders to protect their assets regardless of what’s being built.  Need IPS for your serverless functions or containers?  Try Cloud One Application Security!  Do you want to push security further left into your development pipeline? Take a look at Trend Micro Container Security for pre-runtime container scanning or Cloud One Conformity for helping developers scan your Infrastructure as Code.

Keep in mind – wherever you implement security, there it is. Make sure that it’s in a place to achieve the goals of your security policy using a combination of people, process, and products, all working together to make your business successful!

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html.

Also, feel free to give me a follow on LinkedIn for additional security content to use throughout your cloud journey!


Principles of a Cloud Migration – Security W5H – The When

By Jason Dablow
cloud

If you have to ask yourself when to implement security, you probably need a time machine!

Security is as important to your migration as the actual workload you are moving to the cloud. Read that again.

It is essential to plan and integrate security at every single layer of both architecture and implementation. What I mean by that is: if you’re doing a disaster recovery migration, you need to make sure that security is ready for the infrastructure (your shiny new cloud space) as well as the operations supporting it. Will your current security tools be effective in the cloud? Will they still be able to do their job there? Do your teams have a method of gathering the same security data from the cloud? More importantly, if you’re doing an application migration to the cloud, when you actually implement security also matters a great deal for your cost optimization.

NIST Planning Report 02-3

In this graph, it’s easy to see that the earlier you can find and resolve security threats, not only do you lessen the workload of infosec, but you also significantly reduce your costs of resolution. This can be achieved through a combination of tools and processes to really help empower development to take on security tasks sooner. I’ve also witnessed time and time again that there’s friction between security and application teams often resulting in Shadow IT projects and an overall lack of visibility and trust.

Start there. Start with bringing these teams together, uniting them under a common goal: Providing value to your customer base through agile secure development. Empower both teams to learn about each other’s processes while keeping the customer as your focus. This will ultimately bring more value to everyone involved.

At Trend Micro, we’ve curated a number of security resources designed for DevOps audiences through our Art of Cybersecurity campaign.  You can find it at https://www.trendmicro.com/devops/.

Also highlighted on this page is Mark Nunnikhoven’s #LetsTalkCloud series, which is a live stream series on LinkedIn and YouTube. Seasons 1 and 2 have some amazing content around security with a DevOps focus – stay tuned for Season 3 to start soon!

This is part of a multi-part blog series on things to keep in mind during a cloud migration project.  You can start at the beginning which was kicked off with a webinar here: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html.

Also, feel free to give me a follow on LinkedIn for additional security content to use throughout your cloud journey!


Shift Well-Architecture Left. By Extension, Security Will Follow

By Raphael Bottino, Solutions Architect

A story on how Infrastructure as Code can be your ally on Well-Architecting and securing your Cloud environment

First posted as a Medium article
Using Infrastructure as Code (IaC for short) is the norm in the Cloud. CloudFormation, CDK, Terraform, Serverless Framework, ARM… the options are endless! And there are so many because IaC makes total sense! It allows architects and DevOps engineers to version the application infrastructure just as developers already version their code. So any bad change, no matter if on the application code or infrastructure, can be easily inspected or, even better, rolled back.

For the rest of this article, let’s use CloudFormation as reference. And, if you are new to IaC, check how to create a new S3 bucket on AWS as code:
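The original post embedded the template as a screenshot. A minimal CloudFormation template along those lines would look roughly like this (the resource and bucket names here are illustrative):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Optional: if omitted, CloudFormation generates a unique name
      BucketName: my-example-bucket
```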

Pretty simple, right? And you can easily create as many buckets as you need using the above template (if you plan to do so, remove the BucketName line, since bucket names are globally unique on S3!). For sure, way simpler and less prone to human error than clicking a bunch of buttons in the AWS console or running commands on the CLI.


Well, it’s not that simple…

Although this is a functional and useful CloudFormation template that correctly follows all of CloudFormation’s rules, it doesn’t follow the rules of something bigger and more important: the AWS Well-Architected Framework. This amazing tool is a set of whitepapers describing how to architect on top of AWS from 5 different views, called Pillars: Security, Cost Optimization, Operational Excellence, Reliability, and Performance Efficiency. As the pillar names suggest, an architecture that follows the framework will be more secure, cheaper, easier to operate, more reliable, and better performing.

Among others, this template will generate an S3 bucket that doesn’t have encryption enabled, doesn’t enforce said encryption, and doesn’t log any kind of access to it, all of which the Well-Architected Framework recommends. Even worse, these misconfigurations are really hard to catch in production and are not visibly alerted on by AWS. Even the great security tools AWS provides, such as Trusted Advisor or Security Hub, won’t give an easy-to-spot list of buckets with those misconfigurations. It is not for nothing that Gartner states that 95% of cloud security failures will be the customer’s fault¹.

The DevOps movement brought to the masses a methodology of failing fast, which is not exactly compatible with the above scenario, where a failure is often only discovered when unencrypted data is leaked or an access log is needed. The question, then, is how to improve. Spoiler alert: the answer lies in the IaC itself 🙂

Shifting Left

Even before making sure a CloudFormation template follows AWS’ own best practices, the first obvious requirement is to make sure the template is valid. A fantastic open-source tool called cfn-lint is made available by AWS on GitHub² and can be easily adopted in any CI/CD pipeline, failing the build if the template is not valid and saving precious time. To shorten the feedback loop even further and fail even faster, the same tool can be adopted in the developer’s IDE³ as an extension, so the template is validated as it is coded. Pretty cool, right? But it still doesn’t help us with the misconfiguration problem we created with that really simple template at the beginning of this post.
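As one sketch of what that pipeline adoption could look like, a hypothetical GitHub Actions job fragment might gate merges on template validity (cfn-lint itself is real and installs via pip; the job name and template path are assumptions):

```yaml
# Hypothetical CI job fragment: fail the build if any template is invalid
lint-templates:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: pip install cfn-lint
    # cfn-lint exits non-zero on errors, which fails the build
    - run: cfn-lint templates/*.yaml
```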

Conformity⁴ provides, among other capabilities, an API endpoint to scan CloudFormation templates against the Well-Architected Framework, and that’s exactly how I know that template is not adhering to its best practices. This API can be implemented on your pipeline, just like the cfn-lint. However, I wanted to move this check further left, just like the cfn-lint extension I mentioned before.

The Cloud Conformity Template Scanner Extension

With that challenge in mind, but also needing to scan my own templates for misconfigurations quickly, I came up with a Visual Studio Code extension that, leveraging Conformity’s API, allows the developer to scan the template as it is coded. The extension can be found here⁵ or by searching for “Conformity” in your IDE.

After installing it, scanning a template is as easy as running a command in VS Code. Below, it is running against our example template:

This tool allows anyone to shift misconfiguration and compliance checking as far left as possible, right into developers’ hands. To use the extension, you’ll need a Conformity API key. If you don’t have one and want to try it out, Conformity provides a 14-day free trial, no credit card required. If you like it but feel that this time period is not enough for you, let me know and I’ll try to make it available to you.

But… What about my bucket template?

Oh, by the way, if you are wondering what an S3 bucket CloudFormation template looks like when following the best practices, take a look:
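The original screenshot is not reproduced here, but a template following those recommendations (default encryption, enforced encryption on upload, and access logging) would look roughly like this sketch; the resource names, logging bucket, and prefix are illustrative assumptions:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Encrypt objects at rest by default
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      # Log every access to a separate (pre-existing) logging bucket
      LoggingConfiguration:
        DestinationBucketName: my-access-logs-bucket
        LogFilePrefix: my-bucket-logs/
      # Block any form of public access
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref MyBucket
      PolicyDocument:
        Statement:
          # Enforce encryption: deny uploads without the encryption header
          - Sid: DenyUnencryptedUploads
            Effect: Deny
            Principal: '*'
            Action: s3:PutObject
            Resource: !Sub '${MyBucket.Arn}/*'
            Condition:
              'Null':
                s3:x-amz-server-side-encryption: true
```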

A Well-Architected bucket template

Not as simple, right? That’s exactly why this kind of tool is really powerful, allowing developers to learn as they code and organizations to fail the deployment of any resource that goes against the AWS recommendations.

References

[1] https://www.gartner.com/smarterwithgartner/why-cloud-security-is-everyones-business

[2] https://github.com/aws-cloudformation/cfn-python-lint

[3] https://marketplace.visualstudio.com/items?itemName=kddejong.vscode-cfn-lint

[4] https://www.cloudconformity.com/

[5] https://marketplace.visualstudio.com/items?itemName=raphaelbottino.cc-template-scanner


Cloud Native Application Development Enables New Levels of Security Visibility and Control

By Trend Micro

We are in unique times and it’s important to support each other through unique ways. Snyk is providing a community effort to make a difference through AllTheTalks.online, and Trend Micro is proud to be a sponsor of their virtual fundraiser and tech conference.

In today’s threat landscape new cloud technologies can pose a significant risk. Applying traditional security techniques not designed for cloud platforms can restrict the high-volume release cycles of cloud-based applications and impact business and customer goals for digital transformation.

When organizations are moving to the cloud, security can be seen as an obstacle. Often, the focus is on replicating security controls used in existing environments, however, the cloud actually enables new levels of visibility and controls that weren’t possible before.

With today’s increased attention on cyber threats, cloud vulnerabilities provide an opportunistic climate for novice and expert hackers alike, as a result of dependencies on modern application development tools and a lack of awareness of security gaps in build pipelines and deployment environments.

Public clouds are capable of auditing API calls to the cloud management layer. This gives in-depth visibility into every action taken in your account, making it easy to audit exactly what’s happening, investigate and search for known and unknown attacks and see who did what to identify unusual behavior.

Join Mike Milner, Global Director of Application Security Technology at Trend Micro on Wednesday April 15, at 11:45am EST to learn how to Use Observability for Security and Audit. This is a short but important session where we will discuss the tools to help build your own application audit system for today’s digital transformation. We’ll look at ways of extending this level of visibility to your applications and APIs, such as using new capabilities offered by cloud providers for network mirroring, storage and massive data handling.

Register for a good cause and learn more at https://www.allthetalks.org/.


Cloud Transformation Is The Biggest Opportunity To Fix Security

By Greg Young (Vice President for Cybersecurity)

This overview builds on the recent report from Trend Micro Research on cloud-specific security gaps, which can be found here.

Don’t be cloud-weary. Hear us out.

Recently, a major tipping point was reached in the IT world when more than half of new IT spending went to cloud over non-cloud. So rather than being the exception, cloud-based operations have become the rule.

However, too many security solutions and vendors still treat the cloud like an exception – or at least not as a primary use case. The approach remains “and cloud” rather than “cloud and.”

Attackers have made this transition. Criminals know that business security is generally behind the curve with its approach to the cloud and take advantage of the lack of security experience surrounding new cloud environments. This leads to ransomware, cryptocurrency mining and data exfiltration attacks targeting cloud environments, to name a few.

Why Cloud?

There are many reasons why companies transition to the cloud. Lower costs, improved efficiencies and faster time to market are some of the primary benefits touted by cloud providers.

These benefits come with common misconceptions. While efficiency and time to market can be greatly improved by transitioning to the cloud, this does not happen overnight. It can take years to move complete data centers and operational applications to the cloud, and the benefits won’t be fully realized until the majority of functional data has been transitioned.

Misconfiguration at the User Level is the Biggest Security Risk in the Cloud

Cloud providers have built in security measures that leave many system administrators, IT directors and CTOs feeling content with the security of their data. We’ve heard it many times – “My cloud provider takes care of security, why would I need to do anything additional?”

This way of thinking ignores the shared responsibility model for security in the cloud. While cloud providers secure the platform as a whole, companies are responsible for the security of their data hosted in those platforms.

Misunderstanding the shared responsibility model leads to the No. 1 security risk associated with the cloud: Misconfiguration.

You may be thinking, “But what about ransomware and cryptomining and exploits?” Other attack types are primarily possible when one of the 3 misconfiguration categories below is present.

You can forget about all the worst-case, overly complex attacks: Misconfigurations are the greatest risk and should be the No. 1 concern. These misconfigurations are in 3 categories:

  1. Misconfiguration of the native cloud environment
  2. Not securing equally across multi-cloud environments (i.e. different brands of cloud service providers)
  3. Not securing equally to your on-premises (non-cloud) data centers

How Big is The Misconfiguration Problem?

Trend Micro Cloud One™ – Conformity identifies an average of 230 million misconfigurations per day.

To further understand the state of cloud misconfigurations, Trend Micro Research recently investigated cloud-specific cyber attacks. The report found a large number of websites partially hosted in world-writable cloud-based storage systems. Despite these environments being secure by default, settings can be manually changed to allow more access than actually needed.

These misconfigurations are typically put in place without knowing the potential consequences. But once in place, it is simple to scan the internet to find this type of misconfiguration, and criminals are exploiting them for profit.

Why Do Misconfigurations Happen?

The risk of misconfigurations may seem obvious in theory, but in practice, overloaded IT teams are often simply trying to streamline workflows to make internal processes easier. So, settings are changed to give read and/or write access to anyone in the organization with the necessary credentials. What is not realized is that this level of exposure can be found and exploited by criminals.

We expect this trend will increase in 2020, as more cloud-based services and applications gain popularity with companies using a DevOps workflow. Teams are likely to misconfigure more cloud-based applications, unintentionally exposing corporate data to the internet – and to criminals.

Our prediction is that through 2025, more than 75% of successful attacks on cloud environments will be caused by missing or misconfigured security by cloud customers rather than cloud providers.

How to Protect Against Misconfiguration

Nearly all data breaches involving cloud services have been caused by misconfigurations. This is easily preventable with some basic cyber hygiene and regular monitoring of your configurations.

Your data and applications in the cloud are only as secure as you make them. There are enough tools available today to make your cloud environment – and the majority of your IT spend – at least as secure as your non-cloud legacy systems.

You can secure your cloud data and applications today, especially knowing that attackers are already cloud-aware and delivering vulnerabilities as a service. Here are a few best practices for securing your cloud environment:

  • Employ the principle of least privilege: Access is only given to users who need it, rather than leaving permissions open to anyone.
  • Understand your part of the Shared Responsibility Model: While cloud service providers have built in security, the companies using their services are responsible for securing their data.
  • Monitor your cloud infrastructure for misconfigured and exposed systems: Tools are available to identify misconfigurations and exposures in your cloud environments.
  • Educate your DevOps teams about security: Security should be built in to the DevOps process.
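Applying the first bullet in Infrastructure as Code terms, a least-privilege policy grants only the specific actions and resources a user needs. A hypothetical CloudFormation fragment (the policy and bucket names are illustrative) might look like:

```yaml
# Sketch: read-only access to one bucket, instead of blanket S3 access
ReadOnlyReportsPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    Description: Least-privilege read access to a single reports bucket
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:ListBucket
          Resource:
            - arn:aws:s3:::example-reports-bucket
            - arn:aws:s3:::example-reports-bucket/*
```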

To read the complete Trend Micro Research report, please visit: https://www.trendmicro.com/vinfo/us/security/news/virtualization-and-cloud/exploring-common-threats-to-cloud-security.

For additional information on Trend Micro’s approach to cloud security, click here: https://www.trendmicro.com/en_us/business/products/hybrid-cloud.html.


Principles of a Cloud Migration – From Step One to Done

By Jason Dablow
cloud

Boiling the ocean with the subject, sous-vide deliciousness with the content.

Cloud migrations are happening every day.  Analysts predict over 75% of mid-to-large enterprises will migrate a workload to the cloud by 2021 – but how can you make sure your migration is successful? There are factors not just for IT teams, operations, and security, but also for business leaders, finance, and many other parts of your business. In this multi-part series, I’ll explore best practices, forward thinking, and use cases around creating a successful cloud migration from multiple perspectives.  Whether you’re a builder in the cloud or an executive overseeing the transformation, you’ll learn from my firsthand experience and knowledge on how to bring value to your cloud migration project.

Here are just a few advantages of a cloud migration:

  • Technology benefits like scalability, high availability, simplified infrastructure maintenance, and an environment compliant with many industry certifications
  • The ability to switch from a CapEx to an OpEx model
  • Leaving the cost of a data center behind

While there can certainly be several perils associated with your move, with careful planning and a company focus, you can make your first step into cloud a successful one.  And the focus of a company is an important step to understand. The business needs to adopt the same agility that the cloud provides by continuing to learn, grow, and adapt to this new environment. The Phoenix Project and The Unicorn Project are excellent examples that show the need and the steps for a successful business transformation.

To start us off, let’s take a look at some security concepts that will help you secure your journey into this new world. My webinar on Principles to Make Your Cloud Migration Journey Secure is a great place to start: https://resources.trendmicro.com/Cloud-One-Webinar-Series-Secure-Cloud-Migration.html


The AWS Service to Focus On – Amazon EC2

By Trend Micro
cloud services

If we ran a contest for Mr. Popular of Amazon Web Services (AWS), without a doubt Amazon Simple Storage Service (S3) would have ‘winner’ written all over it. However, what’s popular is not always what is critical for your business to focus on. There is popularity, and then there is dependability. Let’s acknowledge how reliant we are, as AWS infrastructure-led organizations, on Amazon Elastic Compute Cloud (EC2).

We reflected upon our in-house findings for the AWS ‘Security’ pillar in our last blog, Four Reasons Your Cloud Security is Keeping You Up at Night, explicitly leaving out over-caffeination and excessive screen time!

Drilling further down into the most affected AWS services, Amazon EC2-related issues topped the list with 32% of all issues, whereas Mr. Popular, Amazon S3, contributed 12%. While cloud providers like AWS offer a secure infrastructure and best practices, many customers are unaware of their role in the shared responsibility model. The number of issues impacting Amazon EC2 customers demonstrates the security gap that can open up when the customer’s part of the shared responsibility model is not well understood.

While these AWS services and infrastructure are secure, customers also have a responsibility to secure their data and to configure environments according to AWS best practices. So how do we ensure that we keep our focus on this crucial service and ensure the flexibility, scalability, and security of a growing infrastructure?

Introducing Rules

If you thought you were done with rules after passing high school and moving out of your parent’s house, you would have soon realized that you were living a dream. Rules seem to be everywhere! Rules are important, they keep us safe and secure. While some may still say ‘rules are made to be broken’, you will go into a slump if your cloud infrastructure breaks the rules of the industry and gets exposed to security vulnerabilities.

It is great if you are already following the Best Practices for Amazon EC2, but if not, how do you monitor the performance of your services day in and day out to ensure their adherence to these best practices? How can you track if all your services and resources are running as per the recommended standards?

We’re here to help with that. Trend Micro Cloud One – Conformity ‘Rules’ provide you with that visibility for some of the most critical services like Amazon EC2.

What is the Rule?

A ‘Rule’ is the definition of a best practice, used as the basis for an assessment that Conformity runs against a particular piece of your cloud infrastructure. When a rule is run against the infrastructure (resources) associated with your AWS account, the result of the scan is referred to as a Check. For example, Amazon EC2 may have 60 Rules, and therefore 60 Checks, scanning for various risks and vulnerabilities. Checks are either a SUCCESS or a FAILURE.

Conformity has about 540 Rules, and 60 of them monitor your Amazon EC2 services against best practices. The Conformity Bot scans your cloud accounts against these Rules and presents you with the resulting Checks so you can prioritize and remediate issues, keeping your services healthy and preventing security breaches.

Amazon EC2 Best Practices and Rules

Here are just a few examples of how Conformity Rules have got you covered for some of the most critical Amazon EC2 best practices:

  1. For Security, use IAM users and roles, and establish management policies for access.
  2. For Storage, keep separate EBS volumes for operating systems and data, and check that Amazon EC2 instances provisioned outside of AWS Auto Scaling Groups (ASGs) have the Termination Protection safety feature enabled to protect them from being accidentally terminated.
  3. For efficient Resource Management, use custom tags to track and identify resources, and keep on top of your stated Amazon EC2 limits.
  4. For confident Backup and Recovery, regularly test the process of recovering instances and EBS volumes should they fail, and create and use approved AMIs for easier and more consistent future instance deployments.

See how Trend Micro can support your part of the shared responsibility model for cloud security: https://www.trendmicro.com/cloudconformity.

Stay Safe!


Smart Check Validated for New Bottlerocket OS

By Trend Micro

Containers provide a list of benefits to organizations that use them. They’re light, flexible, add consistency across the environment and operate in isolation.

However, security concerns prevent some organizations from employing containers. This is despite containers having an extra layer of security built in – they don’t run directly on the host OS.

To make containers even easier to manage, AWS released an open-source Linux-based operating system meant for hosting containers. While Bottlerocket AMIs are provided at no cost, standard Amazon EC2 and AWS charges apply for running Amazon EC2 instances and other services.

Bottlerocket is purpose-built to run containers. By including only the software essential to run them, it improves resource utilization and reduces the attack surface compared to general-purpose OSes.

At Trend Micro, we’re always focused on the security of our customers’ cloud environments. We’re proud to be a launch partner for AWS Bottlerocket, with our Smart Check component validated for the OS prior to launch.

Why use additional security in cloud environments

While an OS specifically for containers that includes native security measures is a huge plus, there seems to be a larger question of why third-party security solutions are even needed in cloud environments. We often hear a misconception with cloud deployment that, since the cloud service provider has built in security, users don’t have to think about the security of their data.

That’s simply not accurate and leaves a false sense of security. (Pun intended.)

Yes – cloud providers like AWS build in security measures and have addressed common problems by adding built in security controls. BUT cloud environments operate with a shared responsibility model for security – meaning the provider secures the environment, and users are responsible for their instances and data hosted therein.

That’s for all cloud-based hosting, whether in containers, serverless or otherwise.


Why Smart Check in Bottlerocket matters

Smooth execution without security roadblocks

DevOps teams leverage containerized applications to deploy fast and don’t have time for separate security roadblocks. Smart Check is built for the DevOps community with real-time image scanning at any point in the pipeline to ensure insecure images aren’t deployed.

Vulnerability scanning before runtime

We have the largest vulnerability data set of any security vendor, which is used to scan images for known software flaws before they can be exploited at runtime. This not only includes known vendor vulnerabilities from the Zero Day Initiative (ZDI), but also vulnerability intelligence for bugs patched outside the ZDI program and open source vulnerability intelligence built in through our partnership with Snyk.

Flexible enough to fit with your pipeline

Container security needs to be as flexible as containers themselves. Smart Check has a simple admin process to implement role-based access rules and multiple concurrent scanning scenarios to fit your specific pipeline needs.

Through our partnership with AWS, Trend Micro is excited to help customers continue to execute on their portion of the shared responsibility model through container image scanning, having validated that the Smart Check solution is available to run on Bottlerocket at launch.

More information can be found here: https://aws.amazon.com/bottlerocket/

If you are still interested in learning more, check out this AWS blog from Jeff Barr.

