
FireEye Launches XDR Platform to Help Security Operations Teams

FireEye (NASDAQ: FEYE) on Monday launched FireEye XDR, a unified platform designed to help security operations teams strengthen threat detection, accelerate response capabilities, and simplify investigations.  

The FireEye XDR platform provides native security protections for Endpoint, Network, Email, and Cloud with a focus on improving organizations’ capabilities for controlling incidents from detection to response. FireEye Helix unifies the security operations platform by providing next-generation security information and event management (SIEM), security orchestration, automation and response (SOAR), and correlation capabilities, along with threat intelligence powered by Mandiant.

“Our XDR platform translates insight to action across more than 600 security technologies,” said Bryan Palma, EVP of FireEye Products.

FireEye Helix’s cloud-native design provides an improved analyst experience, allowing for the seamless integration of disparate security tools regardless of vendor or data source. FireEye’s XDR platform is best suited for enterprise and mid-market security operations teams that are increasingly at risk from cyber attacks due to an array of factors, including the sophistication of threats, suboptimal security tool management, and personnel shortages.

[ Related: XDR is a Destination, Not a Solution ]

Over the next few quarters, FireEye says that it plans to introduce new features to the FireEye XDR platform including enhanced Endpoint cloud capabilities, FireEye Helix upgraded dashboards and threat graphing capabilities, additional support for leading third-party security tools, and continued integration with the Mandiant Advantage platform which includes Automated Defense.  

“Forward-thinking security and risk leaders are looking to defend their enterprises in ways that can reduce complexity and upfront investment, while at the same time speeding the time it takes to detect and respond to pervasive threats,” said Jon Oltsik, Senior Principal Analyst and ESG Fellow. “Leveraging an approach to XDR built on threat intelligence can help security leaders improve efficacy and avoid becoming the next headline.”  

The FireEye XDR Platform is available now and includes FireEye Helix plus any combination of FireEye products – Endpoint, Network, Email, and Cloud – delivered via cloud subscription licenses with per-user or data-consumption options.

  • August 17th 2021 at 10:34

Five Practical Steps to Implementing a Zero-Trust Network

While the concept of Zero Trust was created 10 years ago, the events of 2020 thrust it to the top of enterprise security agendas. The COVID-19 pandemic has driven mass remote working, which means that organizations’ traditional perimeter-based security models were broken up, in many cases literally overnight. For the foreseeable future, an organization's network is no longer a single thing in one location: it is everywhere, all of the time. Even if we look at organizations that use a single data center located in one place, this data center is accessed by multiple users on multiple devices.

With the sprawling, dynamic nature of today’s networks, if you don’t adopt a Zero-Trust approach, then a breach in one part of the network could quickly cripple your organization as malware, and especially ransomware, makes its way unhindered throughout the network. We have seen multiple examples of ransomware attacks in recent years: organizations spanning all sectors, from hospitals to local government and major corporations, have suffered large-scale outages. Put simply, few could argue that a purely perimeter-based security model makes sense anymore.

Five Practical Steps

So how should organizations go about applying the Zero Trust blueprint to address their new and complex network reality? These five steps represent the most logical way to achieve Zero-Trust networking, by finding out what data is of value, where that data is going and how it's being used. The only way to do this successfully is with automation and orchestration.

1. Identifying and segmenting data

This is one of the most complicated areas of implementing Zero-Trust, since it requires organizations to figure out what data is sensitive.

Businesses that operate in highly regulated environments probably already know what that data is, since the regulators have been requiring oversight of such data. Another approach is to separate systems that humans have access to from other parts of the environment, for example parts of the network that can be connected to by smartphones, laptops or desktops. Unfortunately, humans are often the weakest link and the first source of a breach, so it makes sense to separate these types of network segments from servers in the data center. Naturally, all home-user connections into the organization need to be terminated in a segregated network segment.

2. Mapping the traffic flows of your sensitive data and associating them with your business applications

Once you’ve identified your sensitive data, the next step is knowing where the data is going, what it is being used for and what it is doing. Data flows across your networks. Systems and users access it all the time, via many business applications. If you don’t know this information about your data, you can’t effectively defend it.

Automated discovery tools can help you to understand the intent of your data – why is that flow there? What is its purpose? What data is it transferring? Which application is a particular flow serving? With the right tooling, you can start to grow your understanding of which flows need to be allowed. Once you have that, you can then get to the Zero-Trust part of saying “and everything else will not be allowed.”
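As a concrete illustration, here is a minimal sketch of what annotated flow discovery output might look like. The zone names, ports and intent strings are hypothetical; a real discovery tool would populate them from traffic captures and application inventories:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_zone: str   # where the connection originates
    dst_zone: str   # where it terminates
    port: int       # service port observed
    intent: str     # business purpose, annotated during discovery

# Flows observed and annotated during discovery (illustrative only).
discovered_flows = [
    Flow("web-tier", "app-tier", 8443, "order-processing API calls"),
    Flow("app-tier", "db-tier", 5432, "order database reads/writes"),
    Flow("laptops", "web-tier", 443, "employee access to intranet portal"),
]

def is_known(src_zone: str, dst_zone: str, port: int) -> bool:
    """A flow outside the annotated list is a candidate for 'deny by default'."""
    return any(
        f.src_zone == src_zone and f.dst_zone == dst_zone and f.port == port
        for f in discovered_flows
    )

print(is_known("app-tier", "db-tier", 5432))  # True – known, annotated flow
print(is_known("laptops", "db-tier", 5432))   # False – no business intent recorded
```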

3. Architecting the network

Once you know which flows should be allowed (and everything else deserves to be blocked), you can move on to designing a network architecture and a filtering policy that enforces your network’s micro-perimeters. In other words, architecting the controls to make sure that only legitimate flows are allowed.

Current virtualization technologies allow you to architect such networks much more easily than in the past. Software-defined networking (SDN) platforms within data centers and public-cloud providers all allow you to deploy filters within the network fabric – so placing the filtering policies anywhere in your networks is technically possible. However, actually defining the content of these filtering policies – the rules governing the allowed flows – is where the automated discovery really pays off.

After going through the discovery process, you are able to understand the intent of the flows and can place boundaries between the different zones and segments. This is a balancing act between how much control you want to achieve and how secure you want to be. With many tiny islands of connectivity or micro-segments, you have to think about how much time you want to invest in setting that up and managing it over time. Discovering intent is a way to make this simple because it helps you decide where to logically put these segments.
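To make the “allow what you know, deny everything else” idea concrete, here is a minimal sketch that turns a list of legitimate flows into an ordered filtering policy. The rule format is a generic assumption, not the syntax of any particular firewall or SDN platform:

```python
# Legitimate flows identified during discovery (illustrative zone names).
allowed_flows = [
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
]

def build_policy(flows):
    """One explicit allow rule per legitimate flow, then a catch-all deny."""
    rules = [
        {"action": "allow", "src": s, "dst": d, "port": p}
        for (s, d, p) in flows
    ]
    # The Zero-Trust part: everything not explicitly allowed is denied.
    rules.append({"action": "deny", "src": "any", "dst": "any", "port": "any"})
    return rules

for rule in build_policy(allowed_flows):
    print(rule)
```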

4. Monitoring

Once the microsegments and policies are deployed, it’s essential to monitor everything. This is where visibility comes into its own. The only way to know if there is a problem is by monitoring traffic across the entire infrastructure, all the time.

There are two important facets to monitoring. Firstly, you need continuous compliance. You don’t want to be in a situation where you only check you are compliant when the auditors drop in. This means that you need to be monitoring configurations and traffic all the time, and when the auditor does come, you can just show them the latest report.

Secondly, organizations have to make the distinction between the learning phase of monitoring and the enforcement stage. In the learning phase of discovery, you are monitoring the network to learn all the flows that are there and to annotate them with their intent. This allows you to see which flows are necessary before writing the policy rules. There comes a point, however, where you have to stop learning and decide that any flow you haven’t seen is an anomaly which you will block by default. This is where you make the big switch from a default ‘allow’ policy to a default ‘deny’ – organizational ‘D-Day.’

At this stage, you can switch to monitoring for enforcement purposes, as sketched below. From then on, any developer who wants to allow another flow through the data center will have to file a change request and get permission for that connectivity to be allowed.
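A minimal sketch of that learning-to-enforcement switch, assuming flows are reduced to (source, destination, port) tuples:

```python
# Learning phase: record every flow tuple observed on the network.
baseline = set()

def learn(flow):
    baseline.add(flow)

# Enforcement phase (post 'D-Day'): any flow outside the learned
# baseline is an anomaly, blocked by default and flagged for review.
def check(flow):
    return "allow" if flow in baseline else "block-and-alert"

learn(("app-tier", "db-tier", 5432))
print(check(("app-tier", "db-tier", 5432)))  # allow – seen during learning
print(check(("laptops", "db-tier", 5432)))   # block-and-alert – never observed
```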

5. Automate and orchestrate

Finally, the only way you will ever get to D-Day is with the help of a policy engine, the central ‘brain’ behind your whole network policy. Without one, you have to do everything manually across the entire infrastructure every time there is a need for a change.

Your policy engine, enabled by automation and orchestration, is able to compare any change request against what you have defined as your legitimate business connectivity requirements. If the additional connectivity being requested is in line with what is defined as acceptable use, then it should move ahead zero-touch, in a fully automated manner, with the necessary updates to the filters deployed in minutes. Only requests that fall outside the guidelines of acceptable use need to be reviewed and approved by human experts.
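As a sketch of how such a policy engine might triage requests – the acceptable-use definitions below are purely hypothetical:

```python
# Acceptable-use definitions: which zone-to-zone flows the business
# has approved, and on which ports (illustrative values).
acceptable_use = {
    ("web-tier", "app-tier"): {8443},
    ("app-tier", "db-tier"): {5432},
}

def handle_change_request(src_zone, dst_zone, port):
    """Zero-touch approval when a request matches acceptable use."""
    if port in acceptable_use.get((src_zone, dst_zone), set()):
        return "auto-approved: deploy filter update"
    return "escalated: human review required"

print(handle_change_request("web-tier", "app-tier", 8443))  # auto-approved
print(handle_change_request("laptops", "db-tier", 5432))    # escalated
```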

Once approved (automatically or after review), a change needs to be deployed. If you have to deploy a change to potentially hundreds of different enforcement points, using all kinds of different technologies, each with their own intricacies and configurations, this change request process is almost impossible to do without an intelligent automation system.

Focus on business outcomes, rather than security outcomes

Removing the complexity of security enables real business outcomes, since processes become faster and more flexible without compromising security or compliance. Right now in many organizations, even with the limited segmentation that they have in place already, pushing through a change post ‘D-Day’ is very slow – sometimes taking weeks to get through the approval stage on the security side because there is a lot of manual work involved. Micro-segmentation can make this even more complex.

However, using the steps I’ve outlined here to automate Zero Trust practices means that the end-to-end time from making a change request to deployment and enforcement goes down to one day, or even a few hours – without introducing risk.  Put simply, automation means organizations spend less time and budget on dealing with managing their security infrastructure, and more on enabling the business.  That’s a true win-win situation.  

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

  • May 26th 2021 at 12:26

Facebook Shuts Down Two Hacking Groups in Palestine

Social media giant Facebook today announced that it took action against two groups of hackers originating from Palestine that abused its infrastructure for malware distribution and account compromise across the Internet. 

One of the dismantled networks was linked to the Preventive Security Service (PSS), one of the several intelligence services of Palestine, while the other was associated with Arid Viper, an established threat actor in the Gaza region.

The two clusters of activity, Facebook says, were not connected to one another, as one was focused on domestic audiences, while the other primarily targeted Palestinian territories and Syria, but also hit Turkey, Iraq, Lebanon and Libya.

As part of the shutdown operation, Facebook took down accounts, blocked domains, sent alerts to people who were targeted, and released malware hashes to the public.

“The groups behind these operations are persistent adversaries, and we know they will evolve their tactics in response to our enforcement,” Facebook says.

The PSS-linked activity originated in the West Bank and focused on targets outside Palestine, employing social engineering to lure individuals into clicking on malicious links and getting infected with malware.

Targets included journalists, opponents of the Fatah-led government, human rights activists, the Syrian opposition, Iraqi military, and other military groups.

An in-house-built Android malware family associated with the operation masqueraded as a chat application and collected device metadata, call logs, text messages, contacts, and location, and only rarely exhibited keylogging capabilities. All data was sent to the mobile app development platform Firebase.

The group also employed SpyNote, a publicly available Android malware family that offers remote access to devices, and deployed publicly available Windows malware such as NJRat and HWorm. Fake and compromised accounts were used to build trust with targeted individuals.

Also referred to as Desert Falcons and DHS, Arid Viper has been active for more than half a decade and is likely closely connected to the Molerats APT. The newly observed activity, Facebook says, targeted government officials in Palestine, members of the Fatah party, students, and security forces.

The threat actor employed a large infrastructure of more than one hundred websites that hosted iOS and Android malware, were designed for phishing, or functioned as command and control (C&C) servers.

“They appear to operate across multiple internet services, using a combination of social engineering, phishing websites and continually evolving Windows and Android malware in targeted cyber espionage campaigns,” Facebook says.

As part of the observed activity, the adversary used custom-built iOS surveillanceware dubbed Phenakite and tricked users into installing a mobile configuration profile for the malware to be effective. The malware was packed inside a Trojanized, fully-functional chat application and could direct victims to phishing pages for Facebook and iCloud.

While the app could be installed without jailbreak, the malware did require one to elevate privileges and access sensitive user information. The publicly available Osiris jailbreak tool was used for this purpose.

Arid Viper also employed Android malware that resembled FrozenCell and VAMP and which required installation of apps from third-party sources. Variants of the Micropsia malware family were also used.

The distribution of malware relied on social engineering, with 41 attacker-controlled phishing sites used to distribute the Android malware and a third-party Chinese app development site employed for the delivery of the iOS malware.

Facebook says that, for roughly two years, it has been in contact with industry partners to share information about the discovered activity and proceed with the identification and blocking of the threat actors. 

Related: Facebook Removes 14 Networks Fueling Deceptive Campaigns

Related: Facebook Says Hackers 'Scraped' Data of 533 Million Users in 2019 Leak

Related: Facebook Disrupts Chinese Spies Using iPhone, Android Malware

  • April 21st 2021 at 18:59

Cloud Security Alliance Shares Security Guidance for Crypto-Assets Exchange

The Cloud Security Alliance (CSA) has released new Crypto-Asset Exchange Security Guidelines, a set of guidelines and best practices for crypto-asset exchange (CaE) security.  

Drafted by CSA’s Blockchain/Distributed Ledger Working Group, the document provides readers with a comprehensive set of guidelines for effective exchange security to help educate users, policymakers, and cybersecurity professionals on the pros and cons of further securing cryptocurrency exchanges, including both Decentralized Exchanges (DEX) and hosted wallets at cloud-based exchanges, OTC desks, and cryptocurrency swap services.  

Cryptocurrency exchanges are increasingly becoming targets of hackers. For instance, in December 2020, cryptocurrency exchange Exmo “detected suspicious withdrawal activity” to the tune of more than $10 million.   

CSA's document includes a model that identifies the top 10 threats to crypto exchanges, plus a reference architecture and a set of security best practices for end-users, exchange operators, and auditors. Also covered are security control measures across a wide range of administrative and physical domains.

“As the digital assets industry evolves and matures, crypto-asset exchanges increasingly cover areas that were, for decades, the sole dominion of long-established financial service institutions,” said Bill Izzo, co-chair of CSA’s Blockchain/Distributed Ledger Working Group. “It’s our hope that this document will provide a roadmap for those tasked with ushering new and existing financial services organizations into the future in a controlled and secure manner.”  

The Crypto-Asset Exchange Security Guidelines can be downloaded here.

  • April 13th 2021 at 20:05

Intel Corp. to Speak at SecurityWeek Supply Chain Security Summit

Join Intel on Wednesday, March 10, at SecurityWeek’s Supply Chain Security Summit, where industry leaders will examine the current state of supply chain attacks. Hear Intel’s experts discuss the need for transparency and integrity across the complete product lifecycle, from build to retire.  

Into the Spotlight: Is Supply Chain Ready for the Magnifying Glass?  

Listen in on a live conversation with Intel’s Jackie Sturm, corporate vice president of Global Supply Chain Operations, and Tom Garrison, vice president and general manager of Client Security Strategy & Initiatives. They will discuss the benefits of cybersecurity and transparency across the digital supply chain, and share their insights on what it means to be prepared in 2021.

The session will be moderated by Camille Morhardt, director of Security Initiatives & Communications at Intel.  

When: 8-8:45 a.m. PST, Wednesday, March 10, 2021  

Where: https://register.securityweek.com/supply-chain-security-summit

Registration: Free    

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, go to newsroom.intel.com and intel.com.

  • March 9th 2021 at 01:11

GitHub Hires Former Cisco Executive Mike Hanley as Chief Security Officer

Software development platform GitHub announced on Wednesday that it has hired Mike Hanley as its new Chief Security Officer (CSO).  

Hanley joins GitHub from Cisco, where he served as Chief Information Security Officer (CISO). He arrived at Cisco via its $2.35 billion acquisition of Duo Security in 2018.  

“As the largest global network of developers, GitHub is also crucial to supply chain security, giving developers the tools and knowledge to secure software following major breaches like SolarWinds,” a spokesperson told SecurityWeek.  

“As a security practitioner, this is also an exciting transition for me as much of the security community, and many of my favorite security projects, live on GitHub, like CloudMapper, stethoscope, GoPhish, and osquery,” Hanley wrote in a blog post. “I couldn’t be more excited to help secure the platform that’s made these influential projects possible and expanded their reach in incredible ways.”  

GitHub, which Microsoft acquired for $7.5 billion in 2018, said last year that it had paid out a total of more than $1 million through its bug bounty program on HackerOne, where it has no maximum reward limit for critical vulnerabilities.  

News of Hanley’s hire is one of several prominent industry moves announced this week: Reddit announced that former Bank of America security executive Allison Miller would be its new CISO, and stock trading firm Robinhood hired veteran cybersecurity practitioner Caleb Sima as Chief Security Officer.

  • February 24th 2021 at 20:34

Reddit Names Allison Miller as Chief Information Security Officer (CISO)

Social news community site Reddit announced on Monday that it has hired Allison Miller as Chief Information Security Officer (CISO) and VP of Trust. 

Miller joins Reddit from Bank of America where she most recently served as SVP Technology Strategy & Design, and had been overseeing technology design and engineering delivery for the bank’s information security organization. She previously held technical and leadership roles at Google, Electronic Arts, Tagged/MeetMe, PayPal/eBay, and Visa. 

According to a blog post announcing Miller’s hire, she will be tasked with expanding trust and safety operations and data security, and with redesigning Reddit’s trust frameworks and transparency efforts.

Miller has already started in the role and reports directly to Reddit CTO Chris Slowe. 

She has a B.S. in Economics from the University of Pennsylvania and a Master of Business Administration from the University of California at Berkeley.  

Reddit has been operating for more than 16 years, and announced a $250 million Series E funding round earlier this month.

The company says more than 50 million users visit the site daily.

  • February 23rd 2021 at 01:23

SecurityWeek Names Ryan Naraine as Editor-at-Large

SecurityWeek has named Ryan Naraine as Editor-at-Large, adding a veteran cybersecurity journalist and podcaster to its editorial team.

Naraine joins SecurityWeek from Intel Corp., where he most recently served as Director of Security Strategy and leader of the chipmaker’s security community engagement initiatives. Prior to Intel, he managed Kaspersky’s Global Research and Analysis Team (GReAT) in the U.S., a team that researched and documented some of the most well-known Advanced Persistent Threat (APT) groups and targeted attacks around the world. During a career that spanned a decade at Kaspersky, Naraine also co-managed the global Security Analyst Summit (SAS) conference series.

Prior to Kaspersky, he was the Founding Editor at Threatpost, and a security journalist with bylines at ZDNet and eWEEK.

In this newly created role, Naraine will work to expand SecurityWeek’s innovative multimedia content offerings and help execute the publication’s editorial vision.

In addition to editorial responsibilities, Naraine will join the management team of SecurityWeek’s industry-leading cybersecurity events portfolio, including its high-profile CISO Forum, the Industrial Control Systems (ICS) Cyber Security Conference series, and the company’s Security Summits event series, a lineup of eight (8) fully immersive, topic-specific virtual events.

“Despite the headwinds stemming from a pandemic, SecurityWeek experienced record growth in 2020,” said Mike Lennon, Managing Director at SecurityWeek. “Ryan’s journalistic background, combined with his technical knowledge and vast network in the industry, will help keep the momentum going as we enter our next stage of growth. We are beyond thrilled to have Ryan join the SecurityWeek team and could not be more excited about our company positioning.”

“It’s exciting to return to my roots in journalism,” Naraine said, noting that his work will focus on showcasing the work of innovators building groundbreaking security technologies and executing effective security plans. “Too much of today’s security news focuses on data breaches, zero-day attacks and sensational topics, ignoring the defenders in the trenches building the tools and security programs to keep us all safe. I want to help change that by highlighting the important work being done in the background to help defenders,” Naraine added.

  • January 19th 2021 at 01:49

Why Cyber Security Should Be at the Top of Your Christmas List

Santa has been making his list and checking it twice. Will you (and your organization's cyber security practices) make the Nice list? Or did you fall on the naughty side this year?

Either way, now is the best time to begin preparation so that you are set up for a good Christmas in 2021.

Right up to the end of the year, massive cyber-attacks and high-profile data breaches made headlines in 2020. In the year ahead, organizations must prepare for the unknown, so they have the flexibility to endure unexpected and high impact security events. To take advantage of emerging trends in both technology and cyberspace, businesses need to manage risks in ways beyond those traditionally handled by the information security function, since innovative attacks will most certainly impact both business reputation and shareholder value.

Based on comprehensive assessments of the threat landscape, businesses should focus on the following security topics in 2021:

  • Cybercrime: Malware, ID Theft, Ransomware and Network Attacks
  • Insider Threats are Real
  • The Digital Generation Becomes the Scammer’s Dream
  • Edge Computing Pushes Security to the Brink
  • Rushed Digital Transformations Destroy Trust

An overview for each of these areas can be found below:

Cybercrime: Malware, ID Theft, Ransomware and Network Attacks

We have seen an increase in cybercrime targeting the COVID-19 “opportunity”. This is not restricted to ransomware attacks on hospitals; it also includes the targeting of remote workers who are accessing corporate systems. Fraudulent charities, fraudulent loans and extortion, along with traditional phishing and malware, are all on the increase. The changing threat landscape requires risk management and security practitioners to pay close attention to how exposures change over the coming months and the circumstances that influence the level of protection.

Insider Threats are Real

The insider threat is one of the greatest drivers of security risks that organizations face as a malicious insider utilizes credentials to gain access to a given organization’s critical assets. Many organizations are challenged to detect internal nefarious acts, often due to limited access controls and the ability to detect unusual activity once someone is already inside their network. The threat from malicious insider activity is an increasing concern, especially for financial institutions, and will continue to be so in 2021.

The Digital Generation Becomes the Scammer’s Dream

The next generation of employees will enter the workplace, introducing new information security concerns to organizations. Their attitudes toward sharing information will fall short of the requirements for good information security. Reckless attitudes to sharing information online will set new norms for security and privacy, undermining awareness activities; attackers will use sophisticated social engineering techniques to manipulate individuals into giving up their employer’s critical information assets.

Edge Computing Pushes Security to the Brink

Edge computing will be an attractive architectural choice for organizations; however, it will also become a key target for attackers. It will create numerous points of failure and will lose many benefits of traditional security solutions. Organizations will lose the visibility, security and analysis capabilities associated with cloud service providers; attackers will exploit blind spots, targeting devices on the periphery of the network environment, causing significant downtime.

Rushed Digital Transformations Destroy Trust

Organizations will undertake ever more complex digital transformations – deploying AI, blockchain or robotics – expecting them to seamlessly integrate with underlying systems. Those that get it wrong will have their data compromised. Consumers and dependent supply chains will lose trust in organizations that do not integrate systems and services effectively; new vulnerabilities and attack vectors will be introduced, attracting opportunistic attackers.

A Continued Need to Involve the Board

The role of the C-Suite has undergone significant transformation over the last decade. Public scrutiny of business leaders is at an all-time high, in part due to massive hacks and data breaches. It’s become increasingly clear in the last two years that in the event of a breach, the hacked organization will be blamed and held accountable.

The executive team sitting at the top of an organization has the clearest, broadest view. A serious, shared commitment to common values and strategies is at the heart of a good working relationship between the C-suite and the board. Without sincere, ongoing collaboration, complex challenges like cyber security will be unmanageable. Covering all the bases—defense, risk management, prevention, detection, remediation, and incident response—is better achieved when leaders contribute from their expertise and use their unique vantage point to help set priorities and keep security efforts aligned with business objectives.

Incidents will happen as it is impossible to avoid every breach. But you can commit to building a mature, realistic, broad-based, collaborative approach to cyber security and resilience. Maturing your organization’s ability to detect intrusions quickly and respond expeditiously will be of the highest importance in 2021 and beyond.

Don't forget. Santa is watching. Make sure you end up on his Nice list in the year to come!

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

  • December 17th 2020 at 12:26

United States Federal Government’s Shift to Identity-Centric Security

Across the globe, government agencies have begun transforming and modernizing their IT ecosystems to deliver services in an agile, secure, and timely manner. This means broad and rapid adoption of cloud infrastructure and services at a pace we've never seen, and we are now thrust into adopting changes to how we interact and connect to business applications, systems and data remotely.

Governments are increasingly facing new legislation, standards, frameworks, and policies – such as those from NIST, among others – intended to protect critical and sensitive information.

The adversary continues to become more advanced: we must protect our organizations from a broad array of threat actors with increasing complexity, resources, and persistence. The growing number and overall impact of cybersecurity breaches is staggering and has shown us that identity is the new attack vector.

Federal agencies maintain critical information that, if accessed by the wrong person, could do grave harm to the country, national security, and, more importantly, its citizens.

User communities have expanded beyond humans to machine identities and processes, and the amount of data being created is growing exponentially. It is no longer feasible to protect our most sensitive assets behind a single network wall, and, as noted, the fastest path for a threat actor to steal data is through a compromised identity.

These challenges leave the agency open to the risks and costs of cyber-attacks, non-compliance, and simple human error. It’s time for a shift in our approach to security.

Taking an identity-centric approach to modern security architecture helps organizations protect the weapon that is being used against us – the identity itself. But are federal agencies ready to shift to an identity-centric security model?

Nearly half of the US federal government agencies are substantially on their way to adopting an identity-focused approach to protecting access to agency resources, but many agencies still rely heavily on perimeter defense tools or policies.

The Zero Trust concept is forcing them to evolve to a model made up of many micro-perimeters at each identity domain – data, credentials, privileges, roles and entitlements, and behavior analytics. Instead of building many layers of security from the outside in, it proposes the idea of protecting data from the inside out and building out security controls only where you need them.

In 2019, the White House’s Office of Management and Budget (OMB) released M-19-17, the ICAM modernization strategy. The memo outlines the objectives for securing federal IT systems, including a common vision for using identity and access management controls. Some agencies are still developing their approach, and many are focusing on creating a baseline of users, objects, and access. Some have started to look to modern security architecture – rooted in identity and device security – extending what has been done in HSPD-12, derived credentials, and assured identities and credentialing.

Thanks to the US Department of Homeland Security’s Continuous Diagnostics and Mitigation program, the 2015 government-wide “cyber sprint” and other recent efforts, US federal agencies now have much better data on their users, devices and network traffic than just a few years ago.

These programs and activities have provided agencies with key objectives, tools and support to establish a baseline of what is connecting to the network, who is connecting to it, what data is on it and how access is being used. They provide continuous monitoring of who has access to what, and what they are doing with that access, building a picture of privileged and non-privileged users alike, as well as non-person entities. Much of the discovery, correlation and visibility is a result of Identity Governance controls and practices agencies have implemented in the SailPoint platform.

As US federal agencies continue to support large numbers of remote workers, IT leaders have started to evolve their thinking on zero-trust security architectures. Increasingly, they are becoming more comfortable with the concept and are seeking to lay the foundation for deployments.

"The new normal" has become an overused term since the global pandemic upended workplaces, but the surge in telework has indeed changed security conversations - It's been a catalyst for people to think about how that strong network perimeter isn't what they thought it was. 

New or old, however, establishing what is normal in a network is essential to a zero-trust approach.

The Zero Trust concept represents this paradigm shift in cybersecurity – from perimeter-based to identity- and device-centric – in which every transaction is verified before access is granted to users and devices. In the US federal government, it is still a relatively nascent approach, with some mature agencies implementing it and conducting pilot programs. However, IT leaders seem to recognize that cybersecurity models are increasingly going to be defined by a zero-trust architecture.

In other words, rather than focusing on a perimeter-based defense, practitioners are focusing on the controls around sensitive data stores, applications, systems, and networks themselves, thereby directly guarding the assets that matter. Identity-defined Zero Trust is a complex topic and touches almost every aspect of an organization’s IT and security infrastructure. Forward-thinking organizations are achieving Zero Trust through the integration of existing identity and security technologies, and have implemented architectures that share identity context and provide risk-based access to critical resources, improving security without compromising compliance with government directives, standards, and frameworks.
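As a rough sketch of what a risk-based access decision could look like in code – the signals, weights and thresholds here are entirely hypothetical, and a real deployment would draw them from identity governance and device-posture telemetry:

```python
def access_decision(user_risk: int, device_trusted: bool, resource_sensitivity: int) -> str:
    """Verify each transaction: combine identity and device context into a
    risk score and gate access accordingly (illustrative logic only)."""
    score = user_risk + (0 if device_trusted else 30) + resource_sensitivity
    if score < 40:
        return "grant"
    if score < 70:
        return "grant with step-up authentication"
    return "deny"

print(access_decision(user_risk=10, device_trusted=True, resource_sensitivity=20))   # grant
print(access_decision(user_risk=50, device_trusted=False, resource_sensitivity=40))  # deny
```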

Identity is the new perimeter and has never been more important in protecting a nation’s secrets and citizens. Cybersecurity has become a team sport, requiring many disciplines, stakeholders, and vendors to work together. Is your Identity Governance program ready for modern security architecture?

About the author: Frank Briguglio, Public Sector Identity Governance Strategist at SailPoint, specializes in Government Security and Compliance.

  • December 17th 2020 at 10:26

How Extreme Weather Will Create Chaos on Infrastructure

Extreme weather events will soon become more frequent and widespread, devastating areas of the world that typically don’t experience them and amplifying the destruction in areas that do. We have already seen devastating wildfires and an increase in hurricane activity this year in the United States. Uncovering shortcomings in technical and physical infrastructure, these events will cause significant disruption and damage to IT systems and assets. Data centers will be considerably impacted, with dependent organizations losing access to services and data, and Critical National Infrastructure (CNI) will be put at risk.

Extensive droughts will force governments to divert water traditionally used to cool data centers, resulting in unplanned outages. In coastal areas and river basins, catastrophic flooding, hurricanes, typhoons or monsoons will hit key infrastructure such as the electrical grid and telecommunication systems. Wildfires will lead to prolonged power outages, stretching continuity arrangements to breaking point. The impact of extreme weather events on local staff, who may be unwilling or unable to get to their workplace, will put operational capability in jeopardy. The magnitude of extreme weather events – and their prevalence in areas that have not previously been prone to them – will create havoc for organizations that have not prepared for their impact.

In addition to natural factors, environmental activists will establish a link between global warming and data center power consumption and will consider them to be valid targets for action. For data-centric organizations, the capabilities of data centers and core technical infrastructure will be pushed to the extreme, as business continuity and disaster recovery plans are put to the test like never before.

What are the Global Consequences of This Threat?

Extreme weather events have frightening consequences for people’s lives and have the potential to degrade or destroy critical infrastructure. From wildfires on the West Coast of the United States that wreck power lines, to extreme rainfall and flooding in South Asian communities that poison fresh water supplies and disrupt other critical services, the impacts of extreme weather are pronounced and deadly. They have severe ramifications for the availability of services and information – for example, in 2015 severe flooding in the UK city of Leeds caused a telecommunications data center to lose power, resulting in a large-scale outage.

According to the Intergovernmental Panel on Climate Change (IPCC), human-induced warming from fossil fuel usage, overbreeding of animals and deforestation will contribute to, and exacerbate, the damage caused by extreme weather events. The impact on human lives, infrastructure and organizations around the world will be destructive.

The probability and impact of extreme weather events are increasing and will soon spread to areas of the world that haven’t historically experienced them. Overall, up to 60% of locations across North America, Europe, East Asia and South America are expected to see a threefold increase in various extreme weather events over the coming years. Moreover, the US Federal Emergency Management Agency released new proposed flood maps along the west coast of Florida, showing that many companies that once assumed their data backup solutions were safe will find themselves struggling to deal with rising water levels. These increasingly volatile weather conditions will result in severe damage to infrastructure including telecommunication towers, pipelines, cables and data centers.

A study performed by the Uptime Institute found that 71% of organizations are not preparing for severe weather events and 45% are ignoring the risk of environmental disruption to their data centers, highlighting the need to take more action to ensure preparedness and resilience.

Data centers are some of the biggest users of energy in the world, using up to 416 terawatt hours of energy annually and accounting for 1–3% of the global electricity demand, doubling every four years. According to Greenpeace, only 20% of the energy used by data centers is from renewable resources. Criticism will soon turn to action, with environmental activists targeting organizations that use technical infrastructure that contributes towards harming the environment.

With the likelihood of extreme weather events increasing and becoming more damaging, organizations will be caught off guard as their core infrastructure is crippled and CNI is taken offline. Combined with greater scrutiny from environmental activists, data centers and core infrastructure will be put at risk.

How Should Your Business Prepare?

Extreme weather events, coupled with environmental activism, should prompt a fundamental re-examination of and re-investment in organizational resilience. It is critical that organizations risk assess their physical infrastructure and decide whether to relocate, harden it or transfer risk to cloud service providers.

In the short term, organizations should review risk exposure to extreme weather events, considering the location of data centers. Additionally, revise business continuity and disaster recovery plans and conduct a cyber security exercise with an extreme weather scenario.

In the long term, consider relocation of strategic assets that are at high risk and transfer risk to cloud or outsourced service providers. Finally, invest in infrastructure that is more durable in extreme weather conditions.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

  • October 21st 2020 at 10:40

BSIMM11 Observes the Cutting Edge of Software Security Initiatives

If you want to improve the security of your software—and you should—then you need the Building Security In Maturity Model (BSIMM), an annual report on the evolution of software security initiatives (SSIs). The latest iteration, BSIMM11, is based on observations of 130 participating companies, primarily in nine industry verticals and spanning multiple geographies.

The BSIMM examines software security activities, or controls, on which organizations are actually spending time and money. This real-world view—actual practices as opposed to someone’s idea of best practices—is reflected in the descriptions written for each of the 121 activities included in the BSIMM11.

Since the BSIMM is completely data-driven, this report is different from any earlier ones. That’s because the world of software security evolves. The changes in BSIMM11 reflect that evolution. Among them:

New software security activities 

BSIMM10 added new activities to reflect the reality that some organizations were working on ways to speed up security to match the speed with which the business delivers functionality to market.

To those, BSIMM11 adds activities for implementing event-driven security testing and publishing risk data for deployable artifacts. Those directly reflect the ongoing DevOps and DevSecOps evolution and its intersection with traditional software security groups.

Don’t just shift left: Shift everywhere

When the BSIMM’s authors began writing about the concept of shifting left around 2006, it was addressing a niche audience. But the term rapidly became a mantra for product vendors and at security conferences, dominating presentations and panel discussions. At the February 2020 RSA conference in San Francisco, you couldn’t get through any of the sessions in the DevSecOps Days track without hearing it multiple times.

And the point is an important one: Don’t wait until the end of the SDLC to start looking for security vulnerabilities.

But the concept was never meant to be taken literally, as in “shift (only) left.”

“What we really meant is more accurately described as shift everywhere—to conduct an activity as quickly as possible, with the highest fidelity, as soon as the artifacts on which that activity depends are made available,” said Sammy Migues, principal scientist at Synopsys and a co-author of the BSIMM since its beginning.

Engineering demands security at speed

Perhaps you could call it moving security to the grassroots. Because while in some organizations tracked in the BSIMM there is only a small, centralized software security group focused primarily on governance, in a growing number of cases engineering teams now perform many of the software security efforts, including CloudSec, ContainerSec, DeploymentSec, ConfigSec, SecTools, OpsSec, and so on.

That is yielding mixed results. Being agile, those teams can perform those activities quickly, which is good, but it can be too fast for management teams to assess the impact on organizational risk. Not so good. Few organizations so far have completely harmonized centralized governance software security efforts and engineering software security efforts into a cohesive, explainable, defensible risk management program.

Still, engineering groups are making it clear that feature velocity is a priority. Security testing tools that run in cadence and invisibly in their toolchains—even free and open source tools—likely have more value today than more thorough commercial tools that create, or appear to create, more friction than benefit. The message: We’d love to have security in our value streams—if you don’t slow us down.

The cloud: Division of responsibility

The advantages of moving to the cloud are well known. It’s cheaper, it makes collaboration of a dispersed workforce easier, and it increases mobility, which is practically mandatory during an extended pandemic.

But using the cloud effectively also means outsourcing to the cloud vendor at least parts of your security architecture, feature provisioning, and other software security practice areas that are traditionally done locally.

As the BSIMM notes, “cloud providers are 100% responsible for providing security software for organizations to use, but the organizations are 100% responsible for software security.”

Digital transformation: Everybody’s doing it

Digital transformation efforts are pervasive, and software security is a key element of it at every level of an organization.

At the executive (SSI) level, the organization must move its technology stacks, processes, and people toward an automate-first strategy.

At the SSG level, the team must reduce analog debt, replacing documents and spreadsheets with governance as code.

At the engineering level, teams must integrate intelligence into their tooling, toolchains, environments, software, and everywhere else.

Security: Getting easier—and more difficult

Foundational software security activities are simultaneously getting easier and harder. Software inventory used to be an Excel spreadsheet with application names. It then became a (mostly out-of-date) configuration management database.

Now organizations need inventories of applications, APIs, microservices, open source, containers, glue code, orchestration code, configurations, source code, binary code, running applications, etc. Automation helps but there are an enormous number of moving parts.

 “Primarily, we see this implemented as a significant acceleration in process automation, in applying some manner of intelligence through sensors to prevent people from becoming process blockers, and in the start of a cultural acceptance that going faster means that not everything (all desired security testing) can be done in-band of the delivery lifecycle,” Migues said.

Your roadmap to a better software security initiative starts here

There is much more detail in BSIMM11, which reports in depth on the 121 activities grouped under 12 practices that are, in turn, grouped under four domains: governance, intelligence, secure software development life cycle (SSDL) touchpoints, and deployment.

In addition to helping an organization start an SSI, the BSIMM also gives them a way to evaluate the maturity of their SSI, from “emerging,” or just starting; to “maturing,” meaning up and running, including some executive support and expectations; to “optimizing,” which describes organizations that are fine-tuning their existing security capabilities to match their risk appetite and right-size their investment for the desired posture.

Wherever organizations are on that journey, the BSIMM provides a roadmap to help them reach their goals.

About the author: Taylor Armerding is an award-winning journalist who has been covering the field of information security for years.

  • October 21st 2020 at 10:35

Sustaining Video Collaboration Through End-to-End Encryption

The last several months have been the ultimate case study in workplace flexibility and adaptability. With the onset of the COVID-19 pandemic and widespread emergency activation plans through March and April, businesses large and small have all but abandoned their beautiful campuses and co-working environments. These communal, collaborative and in-person working experiences have been replaced by disparate remote environments that rely on a combination of video, chat and email to ease the transition and keep businesses productive.

The embrace of remote collaboration, and specifically video collaboration, has been swift and robust. In the first few months of the pandemic, downloads of video conferencing apps skyrocketed into the tens of millions, and traffic at many services surged anywhere from 10-fold to 100-fold. While uncertainty remains on what exactly a post-pandemic working experience will look like, it is without a doubt that video will remain a fundamental part of the collaboration tool kit.

While video has proven to be an effective bulwark against a disconnected workforce, the relative newness of the channel combined with its massive spike in popularity has revealed some fault lines – most notably, several high-profile intrusions by ill-intended and disruptive individuals into private meetings. From a wider security perspective, this represents one of the most significant barriers to the long-term viability of video collaboration. Highly sensitive information and data are now shared over video – board meetings, product development brainstorms, sales reviews, negotiations – and the possibility that any of this information could be seen by the wrong eyes is a business-critical risk.

Yet, the vulnerabilities and threats presented by video conferencing are not insurmountable. In fact, there is a growing movement among CIOs and IT executives to further educate themselves on the nature of these platforms and identify the right solutions that fit the unique needs, opportunities and challenges of their businesses. As a result, there’s been a robust interest in encryption.

The most common forms of encryption protect data when it is most vulnerable: in transit between one system and another.  However, in these common forms, communications are often not encrypted when they go through a variety of intermediaries, like internet or application service providers.  That leaves them susceptible to intrusion at varying points. If just one link in the chain is weak – or broken entirely – the entire video stream could be compromised.

Comprehensive and thorough protection of sensitive data requires a more robust solution – what’s known as end-to-end encryption. That means only the authorized participants in a video chat are able to access the video or audio streams. Consider it the structural equivalent of a digital storage locker. You may rent the space from the provider, but only the approved participants have the key.
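For readers who want to see the “only the keyholders can open it” property in code, here is a minimal sketch using the open source PyNaCl library (`pip install pynacl`). This illustrates the general public-key technique, not how any particular video platform implements end-to-end encryption:

```python
from nacl.public import PrivateKey, Box

# Each participant generates a keypair; private keys never leave their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"quarterly board meeting notes")

# A server relaying `ciphertext` sees only opaque bytes; it holds no key.
# Only Bob, holding his private key, can decrypt what Alice sent.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext).decode())
```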

It is important to note that secure video conferencing isn’t only important for large enterprises. Startups and small businesses are just as (if not more) vulnerable and benefit greatly from setting a high bar for security. Whether it’s protecting customers, meeting standards for business partnerships or even leaning into security as an additional value-add, higher levels of security can profoundly impact the growth of an organization.

As the future of work relies increasingly on digital workplace tools like video conferencing, security-first instincts and strong encryption are essential to prevent malicious actors from disrupting business continuity and productivity amid times of uncertainty. Video conferencing has enabled dispersed teams to achieve new opportunities and has a bright future ahead of it. By infusing end-to-end encryption into any video strategy, it ensures not only the sustainability of the channel, but the businesses that rely on it.

About the author: Michael Armer is Vice President and Chief Information Security Officer at 8x8.

  • October 21st 2020 at 10:27

Will Robo-Helpers Help Themselves to Your Data?

Over the coming years, organizations will experience growing disruption as threats from the digital world have an impact on the physical. Invasive technologies will be adopted across both industrial and consumer markets, creating an increasingly turbulent and unpredictable security environment. The requirement for a flexible approach to security and resilience will be crucial as a hybrid threat environment emerges.

While robots may seem like the perfect helpers, by 2022, the Information Security Forum (ISF) anticipates that a range of robotic devices, developed to perform a growing number of both mundane and complex human tasks, will be deployed in organizations and homes around the world. Friendly-faced, innocently-branded, and loaded with a selection of cameras and sensors, these constantly connected devices will roam freely. Poorly secured robo-helpers will be weaponized by attackers, committing acts of corporate espionage and stealing intellectual property. Attackers will exploit robo-helpers to target the most vulnerable members of society, such as the elderly or sick at home, in care homes or hospitals, resulting in reputational damage for both manufacturers and corporate users.

Organizations will be caught unawares as compromised robo-helpers such as autonomous vacuum cleaners, remote telepresence devices and miniature delivery vehicles roam unattended and unmonitored. The potential for these invasive machines to steal intellectual property and corporate secrets through a range of onboard cameras and sensors will become a significant concern. Organizations developing and using care-bots, a type of robo-helper designed for healthcare, will face significant financial and reputational damage when compromised care-bots cause vulnerable individuals emotional, physical, psychological or financial harm.

This proliferation of robo-helpers into the home, offices, factories and hospitals will provide attackers with a range of opportunities to make financial gains and cause operational damage. Nation states and competitors will target robo-helpers that have access to sensitive areas in order to steal critical information. Organized criminal groups and hackers will also use manipulative techniques to frighten and coerce individuals into sending money or giving up sensitive information.

Imagine this scenario: the building maintenance division of a large pharmaceutical organization decides to replace its staff at the research and development (R&D) site with a range of outsourced, automated robots. These robo-helpers carry out building maintenance and sanitation operations in place of their human counterparts. Each unit is fitted with cameras and sensors and requires network connectivity in order to operate. Shortly after their deployment, details of an early phase experimental drug trial are leaked to the media.

Are you sure that your robo-helpers are secure?

What is the Justification for This Threat?

The extent to which robo-helpers are adopted and used, especially in homes and office spaces, currently differs significantly depending on geography and culture. Japan, China and South Korea, amongst other Asian nations, are typically more accepting of robots, whereas Western nations are currently less so. Robo-helpers are seen in a particularly positive light in Japan, with the International Federation of Robotics citing the cultural influence of the Japanese religion of Shinto – where both people and objects are believed to possess a spirit – as a key enabler of the country’s high rate of robotics adoption. China, the US and Japan are currently the biggest exporters of robots in the world, with adoption expected to grow worldwide.

There is a growing acceptance of robots in the home and workplace, which may indicate that organizations are ready to accelerate the rate of robo-helper adoption. In offices and homes, a growing number of semi-autonomous robo-helpers are due to hit global consumer markets as early as 2020, all built with a range of networked cameras and sensors. As with poorly secured IoT devices that are constantly connected to an organization’s network, a security flaw or vulnerability in a robo-helper will further broaden attack surfaces, presenting yet another access point for attackers to exploit.

Robots have been used in manufacturing for decades, but as they become more popular these robo-helpers will perform a greater range of tasks, giving them access to a wealth of sensitive data and locations. In education, robo-helpers will soon be used in schools, with developers in Silicon Valley creating robo-helpers for teachers that can scan students’ facial expressions and provide one-to-one support for logical subjects such as languages and mathematics. In healthcare there have also been breakthroughs – in November 2019 the world’s first brain aneurysm surgery using a robo-helper was completed, demonstrating that robot-assisted procedures enhance flexibility, control and precision.

As these robots gain greater autonomy and perform a greater number of surgeries over time, the need to secure them will become ever more urgent. In logistics, delivery-bots have seen significant investment and improvement, now using onboard cameras and sensors to navigate difficult terrain and unfamiliar environments.

Robo-helpers will make their way into the lives of more vulnerable individuals in care homes, schools and community centers, and people will increasingly feel comfortable sharing sensitive information about their lives with them. Attackers will realize this, aiming to manipulate these non-tech-savvy members of society into transferring funds or giving up sensitive information. Organizations developing these products or using them in their business will face serious reputational damage, as well as legal and financial repercussions, when their customers become victims.

With the proliferation of robo-helpers across a growing number of countries and into a greater number of industries and homes, the opportunities for attackers to compromise individuals and organizations that use them will be alarming.

How Should Your Organization Prepare?

Organizations using robo-helpers in their business, or providing them to others, should ensure that devices are properly protected against attacks and cannot be used to compromise the privacy and rights of customers.

In the short term, organizations should restrict robo-helper access to sensitive locations. We recommend that they segregate access, monitor traffic between robo-helpers and the corporate network, and ensure that robo-helpers using cameras and sensors comply with data protection regulations. Finally, dispose of robo-helpers securely at end of life.
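
To make the traffic-monitoring recommendation concrete, here is a minimal sketch that flags robo-helpers contacting hosts outside an approved list. The network segment, the CSV flow-record format and the allow-list addresses are all illustrative assumptions for the example, not part of the ISF guidance.

```python
import csv
import ipaddress

# Illustrative assumptions: robo-helpers live on their own network segment,
# and flow records are exported as CSV rows (src_ip, dst_ip, dst_port).
ROBOT_SEGMENT = ipaddress.ip_network("10.42.0.0/24")   # hypothetical robo-helper VLAN
ALLOWED_DESTINATIONS = {
    "10.42.0.1",       # local gateway
    "203.0.113.10",    # vendor update service (example address)
}

def flag_unexpected_flows(flow_csv_path):
    """Yield flows where a robo-helper talks to a non-allow-listed host."""
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            src = ipaddress.ip_address(row["src_ip"])
            if src in ROBOT_SEGMENT and row["dst_ip"] not in ALLOWED_DESTINATIONS:
                yield row

if __name__ == "__main__":
    for flow in flag_unexpected_flows("flows.csv"):
        print(f"ALERT: robo-helper {flow['src_ip']} contacted "
              f"{flow['dst_ip']}:{flow['dst_port']}")
```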

In the long term, gain assurance over robo-helpers used in the organization and limit the capabilities of robo-helpers to ensure that ethical norms are not breached. Monitor specific robo-helpers for signs of fraudulent or dangerous activities and provide training and awareness around appropriate use and behaviors.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
  • September 8th 2020 at 08:20

Securing the Hybrid Workforce Begins with Three Crucial Steps

The global shift to a remote workforce has redefined the way organizations structure their business models. As executives reestablish work policies to accommodate remote employees well beyond the initially anticipated duration, a new era of work will emerge: the hybrid workforce, one more largely split between office and remote environments. While this transition brings a wave of opportunity for organizations and employees, it also opens new doors for bad actors to capitalize on strained IT departments that have taken on additional responsibility for ensuring sensitive data remains secure, whether on or off the corporate network.

While threats to company data vary in attack method, ransomware continues to be the most prominent risk known to organizations worldwide, with a 41% increase in 2019 alone. It’s important that companies acknowledge this threat and deploy strategies to prepare for, defend against and remediate incidents before adapting to a hybrid workforce model. This preparation will keep organizations from falling victim to attacks where data loss or ransom payment are the only unfortunate options. To win the war on ransomware, organizations should give IT teams a plan that ensures they have the resilience needed to overcome any attack. Let’s explore three crucial steps for ransomware resilience in more detail.

Focus on education first, avoid reactive approaches to threats later

Education – starting with how threat actors gain entry – should be the first step taken on the path towards resilience. To avoid being caught in a reactive position should a ransomware incident arise, it’s important to understand the three main mechanisms for entry: internet-connected RDP or other remote access, phishing attacks and software vulnerabilities. Once organizations know where the threats lie, they can tactfully approach training with strategies to refine IT and user security, putting additional preparation tactics in place. Identifying these three mechanisms can help IT administration isolate RDP servers used with backup components, integrate tools to assess the threat of phishing attacks and help users spot and respond to them correctly, and keep critical categories of IT assets – operating systems, applications, databases and device firmware – consistently updated.

Additionally, rehearsing how to use the recovery tools in place will help IT organizations familiarize themselves with different restore scenarios. Whether it be a secure restore process that aborts when malware is detected or software that can detect ransomware ahead of restoring a system, the ability to perform different restore scenarios will become invaluable to organizations. When an attack does happen, they will recognize, understand and have confidence in the process of working towards recovery. By taking the education aspect of these steps seriously, organizations can reduce the risk, cost and pressure of dealing with a ransomware incident unprepared.

Implement backup solutions that maintain business continuity 

An important part of ransomware resiliency is the implementation of backup infrastructure to create and maintain strong business continuity. Organizations need to have a reliable system in place that protects their servers and keeps them from ever having to pay to get their data back. Consider keeping the backup server isolated from the internet and limiting shared accounts that grant access to all users. Instead, assign specific tasks within the server that are relevant for users and require two-factor authentication for remote desktop access. Additionally, backups with an air-gapped, offline or immutable copy of data, paired with the 3-2-1 rule (at least three copies of the data, on two different media, with one copy off-site), will provide one of the most critical defenses against ransomware, insider threats and accidental deletion.
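
For illustration, the 3-2-1 rule lends itself to a simple automated check. The sketch below assumes a hypothetical backup inventory with media, offsite and immutable fields; a real deployment would pull this data from the backup platform’s own reporting rather than a hand-written list.

```python
# Minimal 3-2-1 rule check over a hypothetical backup inventory.
# The field names (media, offsite, immutable) are illustrative assumptions.
backups = [
    {"media": "disk",   "offsite": False, "immutable": False},
    {"media": "object", "offsite": True,  "immutable": True},   # e.g. a cloud copy
    {"media": "tape",   "offsite": True,  "immutable": True},   # air-gapped copy
]

def check_3_2_1(inventory):
    """Return a list of 3-2-1 violations for one protected workload."""
    problems = []
    if len(inventory) < 3:
        problems.append("fewer than 3 copies of the data")
    if len({copy["media"] for copy in inventory}) < 2:
        problems.append("fewer than 2 distinct media types")
    if not any(copy["offsite"] for copy in inventory):
        problems.append("no off-site copy")
    if not any(copy["immutable"] for copy in inventory):
        # The extra ransomware safeguard discussed above.
        problems.append("no immutable or air-gapped copy")
    return problems

print(check_3_2_1(backups) or "3-2-1 compliant")
```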

Furthermore, detecting a ransomware threat as early as possible gives IT organizations a significant advantage. This requires tools that flag possible threat activity. For remote endpoint devices, backup repositories that are set up to identify risks give IT insight into a very large surface area where threats may be introduced. If these measures don’t stop an attack, another viable option is encrypting backups wherever possible for an additional layer of protection – threat actors who charge ransom by threatening to leak data do not want to have to decrypt it first. When it comes to a ransomware incident, there isn’t one single way to recover, but there are many options aside from these that organizations can take. The important thing to remember is that resiliency will be predicated on how backup solutions are implemented, the behavior of the threat and the course of remediation. Take time to research the options available and ensure that solutions are implemented to protect your company.

Prepare to remediate an incident in advance

Even when there are steps in place that leverage education and implementation techniques to combat ransomware before an attack hits, organizations should still be prepared to remediate a threat if one is introduced. Layers of defense against attacks are invaluable, but organizations also need to map out specifically what to do when a threat is discovered. Should a ransomware incident happen, organizations need to have support in place to guide the restore process so that backups aren’t put at risk. Communication is key: having a list of security, incident response and identity management contacts – inside the organization or externally – will help ease the path to remediation.

Next, have a pre-approved chain of decision makers in place. When it comes time to make decisions, like whether to restore or to fail over company data in the event of an attack, organizations should know who to turn to for decision authority. If conditions are ready to restore, IT should be familiar with recovery options based on the ransomware situation. Implement additional safety checks before putting systems on the network again – like an antivirus scan before restoration completes, as sketched below – and ensure the right process is underway. Once the process is complete, implement a sweeping forced change of passwords to reduce the chance of the threat resurfacing.
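
One way such a pre-restore check could be wired up is sketched here: restored data lands in a staging area, is scanned, and is only promoted if the scan comes back clean. The use of ClamAV’s clamscan and the paths shown are assumptions for the example, not a prescribed toolchain.

```python
import shutil
import subprocess
from pathlib import Path

QUARANTINE = Path("/restore/quarantine")   # placeholder staging area
PRODUCTION = Path("/restore/production")   # placeholder final location

def scan_is_clean(path: Path) -> bool:
    """Run ClamAV over the staged restore; exit code 0 means no detections."""
    result = subprocess.run(["clamscan", "--recursive", str(path)])
    return result.returncode == 0

def gated_restore():
    if scan_is_clean(QUARANTINE):
        # Promote the scanned staging area into place (simplified).
        shutil.move(str(QUARANTINE), str(PRODUCTION))
        print("Restore promoted to production; now force a password sweep.")
    else:
        print("Malware detected in restored data - aborting restore.")

if __name__ == "__main__":
    gated_restore()
```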

The threat that ransomware poses to organizations both large and small is real. While no one can predict when or how an attack will happen, IT organizations that have a strong, multi-layered defense and strategy in place have a greater chance for recovery. With the right preparation, the steps outlined here can increase any organization’s resiliency – whether in office, remote or a combination of the two – against a ransomware incident and avoid data loss, financial loss, reputational damage and more.

About the author: Rick Vanover is senior director of product strategy for Veeam.

Copyright 2010 Respective Author at Infosec Island
  • September 2nd 2020 at 08:30

A New Strategy for DDoS Protection: Log Analysis on Steroids

Anyone whose business depends on online traffic knows how critical it is to protect that business against Distributed Denial of Service (DDoS) attacks. And with cyber attackers more persistent than ever – Q1 2020 DDoS attacks surged by 80% year over year and their average duration rose by 25% – you also know how challenging this can be.

Now imagine you’re responsible for blocking, mitigating, and neutralizing DDoS attacks where the attack surface is tens of thousands of websites. That’s exactly what HubSpot, a top marketing and sales SaaS provider, was up against. How they overcame the challenges they faced makes for an interesting case study in DDoS response and mitigation.

Drinking from a Firehose

HubSpot’s CMS Hub powers thousands of websites across the globe. Like many organizations, HubSpot uses a Content Delivery Network (CDN) solution to help bolster security and performance.

CDNs, which are typically associated with improving web performance, are built to make content available at the edges of the network, providing both performance and data about access patterns across the network. To handle the CDN log data spikes inherent in DDoS attacks, organizations often guesstimate how much compute they may need and maintain that higher level of resources (and expenditure) for their logging solution. Or, if budgets don’t allow, they dial back the amount of log data they retain and analyze.

In HubSpot’s case, they use the Cloudflare CDN as the first layer of protection for all incoming traffic on the websites they host. This equates to about 136,000 requests/second, or roughly 10TB/day, of Cloudflare log data that HubSpot has at its disposal to help triage and neutralize DDoS attacks. Talk about drinking from a firehose!

HubSpot makes use of Cloudflare’s Logpush service to push Cloudflare logs that contain headers and cache statuses for each request directly to HubSpot’s Amazon S3 cloud object storage. In order to process that data and make it searchable, HubSpot’s dedicated security team deployed and managed their own open-source ELK Stack consisting of Elasticsearch (a search database), Logstash (a log ingestion and processing pipeline), and Kibana (a visualization tool for log search analytics). They also used open source Kafka to queue logs into the self-managed ELK cluster.

To prepare the Cloudflare logs for ingestion into the ELK cluster, HubSpot had created a pipeline that would download the Cloudflare logs from S3 into a Kafka pipeline, apply some transformations to the data, and insert it into a second Kafka queue, from which Logstash would process the data and output it into the Elasticsearch cluster. The security team would then use Kibana to interact with the Cloudflare log data to triage DDoS attacks as they occur.
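
In outline, one stage of such a pipeline might look like the sketch below. This is not HubSpot’s actual code: it assumes the boto3 and kafka-python libraries, gzipped newline-delimited JSON Logpush files, and placeholder bucket, key and topic names.

```python
import gzip
import json

import boto3
from kafka import KafkaProducer  # kafka-python package

s3 = boto3.client("s3")
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",                 # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode(),
)

def ship_logpush_file(bucket, key, topic="cloudflare-logs"):
    """Download one gzipped NDJSON Logpush file and queue each record."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    for line in gzip.decompress(body).splitlines():
        record = json.loads(line)
        # Example transformation: keep only fields useful for DDoS triage.
        producer.send(topic, {
            "ip": record.get("ClientIP"),
            "host": record.get("ClientRequestHost"),
            "status": record.get("EdgeResponseStatus"),
        })
    producer.flush()

ship_logpush_file("example-logpush-bucket", "20200801/logs.gz")
```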

Managing an Elasticsearch cluster dedicated to this Cloudflare/DDoS mitigation use case presented a number of continuing challenges. It required constant maintenance by members of the HubSpot Elasticsearch team. The growth in log data from HubSpot’s rapid customer base expansion was compounded by the fact that DDoS attacks themselves inherently generate a massive spike in log data while they are occurring. Unfortunately, these spikes often triggered instability in the Elastic cluster just when it was needed most, during the firefighting and mitigation process.

Cost was also a concern. Although Elasticsearch, Logstash, and Kibana open source applications can be acquired at no cost, the sheer volume of existing and incoming log data from Cloudflare required HubSpot to manage a very large and increasingly expensive ELK cluster. Infrastructure costs for storage, compute, and networking to support the growing cluster grew faster than the data. And the human cost – the time spent monitoring, maintaining, and keeping the cluster stable and secure – was certainly significant. The team constantly had discussions about whether to add more compute to the cluster or reduce data retention time. To accommodate their Cloudflare volume, which was exceeding 10TB/day and growing, HubSpot was forced to limit retention to just five days.

The Data Lake Way

Like many companies whose business solely or significantly relies on online commerce, HubSpot wanted a simple, scalable, and cost-effective way to handle the continued growth of their security log data volume.

They were wary of solutions that might ultimately force them to reduce data retention to a point where the data wasn’t useful. They also needed to be able to keep up with huge data throughput at a low latency so that when it hit Amazon S3, HubSpot could quickly and efficiently firefight DDoS attacks.

HubSpot decided to rethink its approach to security log analysis and management. They embraced a new approach that consisted primarily of these elements:

- Using a fully managed log analysis service so internal teams wouldn’t have to manage the scaling of ingestion or query-side components and could eliminate dedicated compute resources

- Leveraging the Kibana UI that the security team is already proficient with

- Turning their S3 cloud object storage into a searchable analytic data lake so Cloudflare CDN and other security-related log data could be easily cleaned, prepared, and analyzed in place, without data movement or schema management

By doing this, HubSpot can effectively tackle DDoS challenges. They significantly cut their costs and can easily handle the 10TB+/day flow of Cloudflare log data, without impacting performance.

HubSpot no longer has to sacrifice data retention time. They can retain Cloudflare log data for much longer than 5 days, without worrying about costs, and can dynamically scale resources so there is no need to invest in compute that’s not warranted. This is critical for long-tail DDoS protection planning and execution, and enables HubSpot to easily meet SLAs for DDoS attack response time.

Data lake-based approaches also enable IT organizations to unify all their security data sources in one place for better and more efficient overall protection. Products that empower data lake thinking allow new workloads to be added on the fly with no provisioning or configuration required, helping organizations gain even greater value from log data for security use cases. For instance, in addition to storing and analyzing externally generated log data within their S3 cloud object storage, HubSpot will be storing and monitoring internal security log data to enhance insider threat detection and prevention.

Incorporating a data lake philosophy into your security strategy is like putting log analysis on steroids. You can store and process exponentially more data volume and types, protect better, and spend much less.

About the author: Dave Armlin is VP of Customer Success and Solutions Architecture at ChaosSearch. Dave has spent his 25+ year career building, deploying, and evangelizing secure enterprise and cloud-based architectures.

Copyright 2010 Respective Author at Infosec Island
  • August 26th 2020 at 06:49

COVID-19 Aside, Data Protection Regulations March Ahead: What To Consider

COVID-19 may be complicating organizations’ cybersecurity efforts as they shift more of their operations online, but that doesn’t lessen the pressure to comply with government regulations that are placing increased scrutiny on data privacy.

Despite the pandemic, companies are obligated to comply with many laws governing data security and privacy, including the two most familiar to consumers -- the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). With CCPA enforcement set to begin July 1, organizations’ regulatory responsibilities just got tougher.

The CCPA is similar to GDPR in that it is designed to improve privacy rights and consumer protection, giving Californians the right to know when their personal data is being collected, whether their personal data is being disclosed or sold, and to whom. It allows them to access their personal data, say no to its sale, and request that a business delete it.

The law applies to any business with gross revenues over $25 million, or that holds personal information on 50,000 or more California residents, whether the company is based in California or not. Violations can result in stiff fines.

Like GDPR before it, CCPA makes data security and regulatory compliance more of a challenge and requires businesses to create a number of new processes to fully understand what data they have stored in their networks, who has access to it, and how to protect it.

The challenge is especially rigorous for large organizations that collect and store high volumes of data, which is often spread across multiple databases and environments. And CCPA’s enforcement date comes as companies have already been scrambling to deal with COVID-19’s impact – enabling remote workforces while guarding against hackers trying to exploit fresh openings to infiltrate networks.

Here are four things that every business should consider in maintaining a rigid security posture to protect its most important asset – its data – and meet rising regulatory requirements:

1.    Protect headcount.

We may be in an economic downturn, but now is not the time to lay off anyone with data security and privacy responsibility. Oftentimes when a company is forced to fire people, the pain is spread equally across the organization – say 10 percent for each department. Because the CISO organization (as well as the rest of IT) is usually considered “general and administrative” overhead, the target on its back can be just as large.

In the current environment, security staff certainly needs to be exempt from cuts. Most security teams have little to no overlap – there is a networking expert, an endpoint specialist, someone responsible for cloud, etc. And one person who focuses on data and application security, if you’re lucky enough to have this as a dedicated resource.

The data and application security role has never been more vital, both to safeguard the organization as more data and applications move online and to handle data security regulatory compliance, an onus companies continue to carry despite the pandemic. This person should be considered untouchable in any resource action.

2.    Don’t drop the ball on breach notification.

It’s unclear to what extent officials are aggressively conducting audits and vigorously enforcing these laws during the pandemic. However, I would advise companies to assume that stringent enforcement remains the norm.

This is another reason that fostering strong security is all the more crucial now. For example, companies are still required to notify the relevant governing body if they suffer a breach. This initiates a process involving IT, security, and legal teams, and any other relevant departments. Who wants that distraction at any time, let alone during a global crisis?

Beyond regulatory factors, companies simply owe it to their customers to handle their data responsibly. This was of course true before COVID-19 and CCPA enforcement, but its importance has intensified. A Yahoo-style scandal now could cause reputational damage that the company never recovers from.

3.    Ask the critical questions that regulations raise.

Where is personal data stored? Companies must scan their networks and servers to find any unknown databases, identify sensitive data using dictionary and pattern-matching methods, and pore through database content for sensitive information such as credit card numbers, email addresses, and system credentials.
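
A dictionary- and pattern-matching pass can be sketched in a few lines, as below. The regular expressions and keyword dictionary are deliberately naive illustrations; production discovery tools validate matches (for example with Luhn checks) and use far larger dictionaries.

```python
import re

# Naive illustrative patterns - real scanners validate matches before alerting.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
CREDENTIAL_KEYWORDS = {"password", "passwd", "secret", "api_key", "token"}

def scan_value(column_name, value):
    """Return the kinds of sensitive data suspected in one table cell."""
    hits = [kind for kind, rx in PATTERNS.items() if rx.search(str(value))]
    if column_name.lower() in CREDENTIAL_KEYWORDS:
        hits.append("credential")
    return hits

# Example usage against one row of a hypothetical table:
row = {"email": "jane@example.com", "password": "hunter2", "note": "4111 1111 1111 1111"}
for col, val in row.items():
    hits = scan_value(col, val)
    if hits:
        print(f"{col}: suspected {', '.join(hits)}")
```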

Which data has been added or updated within the last 12 months? You need to monitor all user database access -- on-premises or in the cloud -- and retain all the audit logs so you can identify the user by role or account type, understand whether the data accessed was sensitive, and detect non-compliant access behaviors.

Is there any unauthorized data access or exfiltration? Using machine learning and other automation technologies, you need to automatically surface unusual data activity, uncovering threats before they become breaches.

Are we pseudonymizing data? Data masking techniques safeguard sensitive data from exposure in non-production or DevOps environments by substituting fictional data for sensitive data, reducing the risk of sensitive data exposure.
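
As a sketch of the pseudonymization idea, deterministic masking replaces each sensitive value with a stable token, so joins between masked tables still line up in non-production environments. The salt handling and token format here are illustrative choices only, not a complete masking design.

```python
import hashlib
import hmac

SALT = b"rotate-me-outside-source-control"  # illustrative; keep in a secrets manager

def pseudonymize(value: str) -> str:
    """Deterministically mask a value: equal inputs map to equal tokens,
    so referential integrity between masked tables is preserved."""
    digest = hmac.new(SALT, value.encode(), hashlib.sha256).hexdigest()
    return "user_" + digest[:12]

def mask_record(record: dict, sensitive_fields: set) -> dict:
    return {
        k: pseudonymize(v) if k in sensitive_fields else v
        for k, v in record.items()
    }

prod_row = {"email": "jane@example.com", "plan": "enterprise"}
print(mask_record(prod_row, {"email"}))
# e.g. {'email': 'user_3f5b1c...', 'plan': 'enterprise'}
```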

4.    Assume more regulation will come.

As digital transformation makes more and more data available everywhere, security and privacy concerns keep growing. One can assume that GDPR and CCPA may just be the tip of the regulatory iceberg. Similar initiatives in Wisconsin, Nevada, and other states show that it behooves organizations to get their data protection houses very much in order. Compliance will need to be a top priority for organizations for many years into the future.

About the author: Terry Ray has global responsibility for Imperva's technology strategy. He was the first U.S.-based Imperva employee and has been with the company for 14 years. He works with organizations around the world to help them discover and protect sensitive data, minimize risk for regulatory governance, set data security strategy and implement best practices.

Copyright 2010 Respective Author at Infosec Island
  • August 26th 2020 at 05:53

SecurityWeek Extends ICS Cyber Security Conference Call for Presentations to August 31, 2020

The official Call for Presentations (speakers) for SecurityWeek’s 2020 Industrial Control Systems (ICS) Cyber Security Conference, being held October 19 – 22, 2020 in SecurityWeek’s Virtual Conference Center, has been extended to August 31st.

As the premier ICS/SCADA cyber security conference, the event was originally scheduled to take place at the InterContinental Atlanta, but will now take place in a virtual environment due to COVID-19.

“Due to the impact of COVID-19 and transition to a fully virtual event, we have extended the deadline for submissions to allow more time for speakers to put together their ideas under the new format,” said Mike Lennon, Managing Director at SecurityWeek. “Given SecurityWeek’s global reach and scale, we expect this to be the largest security-focused gathering of its kind serving the industrial and critical infrastructure sectors.” 

The 2020 Conference is expected to attract thousands of attendees from around the world, including large critical infrastructure and industrial organizations, the military, and state and federal government agencies.

SecurityWeek has developed a fully immersive virtual conference center on a cutting-edge platform that provides attendees with the opportunity to network and interact from anywhere in the world.

As the original ICS/SCADA cyber security conference, the event is the longest-running cyber security-focused event series for the industrial control systems sector. 

With an 18-year history, the conference has proven to bring value to attendees through the robust exchange of technical information, actual incidents, insights, and best practices to help protect critical infrastructures from cyber-attacks.

Produced by SecurityWeek, the conference addresses ICS/SCADA topics including protection for SCADA systems, plant control systems, engineering workstations, substation equipment, programmable logic controllers (PLCs), and other field control system devices.

Through the Call for Speakers, a conference committee will accept speaker submissions for possible inclusion in the program at the 2020 ICS Cyber Security Conference.

The conference committee encourages proposals for main track presentations, panel discussions, and “In Focus” sessions. Most sessions will run between 30 and 45 minutes in length, including time for Q&A.

Submissions will be reviewed on an ongoing basis so early submission is highly encouraged. Submissions must include proposed presentation title, an informative session abstract, including learning objectives for attendees if relevant; and contact information and bio for the proposed speaker.

All speakers must adhere to the 100% vendor neutral / no commercial policy of the conference. If speakers cannot respect this policy, they should not submit a proposal.

To be considered, interested speakers should submit proposals by email to events(at)securityweek.com with the subject line “ICS2020 CFP” by August 31, 2020.

Plan on Attending the 2020 ICS Cyber Security Conference? Online registration is open, with discounts available for early registration.

Copyright 2010 Respective Author at Infosec Island
  • August 12th 2020 at 17:08

SecurityWeek to Host Cloud Security Summit Virtual Event on August 13, 2020

Enterprise Security Professionals to Discuss Latest Cloud Security Trends and Strategies Via Fully Immersive Virtual Event Experience

SecurityWeek will host its 2020 Cloud Security Summit virtual event on Thursday, August 13, 2020.

Through a fully immersive virtual environment, attendees will be able to interact with leading solution providers and other end users tasked with securing various cloud environments and services.

“As enterprises adopt cloud-based services to leverage benefits such as scalability, increased efficiency, and cost savings, security has remained a top concern,” said Mike Lennon, Managing Director at SecurityWeek. “SecurityWeek’s Cloud Security Summit will help organizations learn how to utilize tools, controls, and design models needed to properly secure cloud environments.”

The Cloud Security Summit kicks off at 11:00AM ET on Thursday, August 13, 2020 and features sessions, including:

  • Augmenting Native Cloud Security Services to Achieve Enterprise-grade Security
  • Measuring and Mitigating the Risk of Lateral Movement
  • Weathering the Storm: Cyber AI for Cloud and SaaS
  • Securing Cloud Requires Network Policy and Segmentation
  • Managing Digital Trust in the Era of Cloud Megabreaches
  • The Rise of Secure Access Service Edge (SASE)
  • Fireside Chat with Gunter Ollmann, CSO of Microsoft’s Cloud and AI Security Division

Sponsors of the 2020 Cloud Security Summit include: DivvyCloud by Rapid7, Tufin, Darktrace, SecurityScorecard, Bitglass, Orca Security, Auth0 and Datadog.

Register for the Cloud Security Summit at: https://bit.ly/CloudSec2020

Copyright 2010 Respective Author at Infosec Island
  • August 12th 2020 at 12:18

Avoiding Fuelling the Cyber-Crime Economy

We all know that the prices of key commodities such as oil, gold, steel and wheat don’t just impact individual business sectors as they fluctuate according to supply and demand:  they also power international trading markets and underpin the global economy. And it’s exactly the same with cyber-crime.

The prices of key commodities in the cyber-crime economy – such as stolen credentials, hacked accounts, or payment card details – not only reflect changes in supply and usage, but also influence the types of attack that criminals will favor. After all, criminals are just as keen to maximize return on their investments and create ‘value’ as any legitimate business.

A recent report gave the current average prices during 2020 for some of these cyber-crime commodities on the Dark Web. Stolen credit-card details start at $12 each, and online banking details at $35. ‘Fullz’ (full identity) prices are typically $18, which is cheaper than just two years ago due to an oversupply of personally identifiable information following several high-profile breaches. A very basic malware-as-a-service attack against European or U.S. targets starts at $300, and a targeted DDoS attack starts at $10 per hour.

Extortion evolves

These prices help to explain one of the key shifts in cyber crime over the past two years:  the move away from ransomware to DDoS attacks for extortion. Ransomware has been around for decades, but on a relatively small scale, because most types of ransomware were unable to spread without users’ intervention. This meant attacks were limited in their scope to scrambling data on a few PCs or servers, unless the attacker got lucky.

But in 2017, the leak of the ‘EternalBlue’ exploit changed the game. Ransomware designed to take advantage of it – 2017’s WannaCry and NotPetya – could spread automatically to any vulnerable computer in an organization. All that was needed was a single user to open the malicious attachment, and the organization’s network could be paralyzed in minutes – making it much easier for criminals to monetize their attacks.

While this drove an 18-month bubble of ransomware attacks, it also forced organizations to patch against EternalBlue and deploy additional security measures, meaning attacks became less effective. Sophisticated malware like WannaCry and NotPetya cost time and money to develop, and major new exploits like EternalBlue are not common. As such, use of ransomware has declined, returning to its roots as a targeted attack tool.

DDoS deeds, done dirt cheap

DDoS attacks have replaced ransomware as the weapon of choice for extortion attempts. As mentioned earlier, a damaging attack is cheap to launch, using one of the many available DDoS-for-hire services at just $10 per hour or $60 for 24 hours (like any other business looking to attract customers, these services offer discounts on bigger orders).

Why are DDoS attacks so cheap? One of the key reasons is that DDoS-for-hire service operators are increasingly using the scale and flexibility of public cloud services, just as legitimate organizations do. Link11’s research shows the proportion of attacks using public clouds grew from 31% in H2 2018 to 51% in H2 2019. It’s easy to set up public cloud accounts using an $18 fake ID and a $12 stolen credit card, and simply hire out instances as needed to whoever wants to launch a malicious attack. When that credit card stops working, buy another.

Operating or renting these services is also very low-risk: the World Economic Forum’s ‘Global Risks Report 2020’ states that in the US, the likelihood of a cybercrime actor being caught and prosecuted is as low as 0.05%. Yet the impact on the businesses targeted by attacks can be huge: over $600,000 on average, according to the Ponemon Institute’s Cost of Cyber Crime Study.

Further, the Covid-19 pandemic has made organizations more vulnerable than ever to the loss of online services, with the mass shift to home working and consumption of remote services – making DDoS attacks even more attractive as an extortion tool, as they cost so little but have a strong ROI. This means any organization could find itself in attackers’ cross-hairs: from banks and financial institutions to internet infrastructure, retailers, online gaming sites, as well as public sector organizations and local governments. If services are taken offline, or slowed to a crawl for just a few hours, employees’ normal work will be disrupted, customers won’t be able to transact, and revenues and reputation will take a hit.

Make sure crime doesn’t pay

To avoid falling victim to the new wave of DDoS extortion attacks, and fuelling the cyber-crime economy through ransom payments, organizations need to defend their complex, decentralized and hybrid environments with cloud-based protection. This should route all traffic to the organization’s networks via an external cloud service that identifies and filters out all malicious traffic instantly, using AI techniques, before an attack can impact critical services – helping to ensure that those services are not disrupted. Online crime may continue to be profitable for threat actors – but with the right defences, individual organizations can ensure that they’re not contributing.

Copyright 2010 Respective Author at Infosec Island
  • August 11th 2020 at 14:22

Expect Behavioral Analytics to Trigger a Consumer Backlash

In the coming years, organizations’ insatiable desire to understand consumers through behavioral analytics will result in an invasive deployment of cameras, sensors and applications in public and private places. A consumer and regulatory backlash against this intrusive practice will follow as individuals begin to understand the consequences.

Highly connected ecosystems of digital devices will enable organizations to harvest, repurpose and sell sensitive behavioral data about consumers without their consent, with attackers targeting and compromising poorly secured systems and databases at will.

Impacts will be felt across industries such as retail, gaming, marketing and insurance that are already dependent on behavioral analytics to sell products and services. There are also a growing number of sectors that will see an increased dependency on behavioral analytics, including finance, healthcare and education.

Organized criminal groups, hackers and competitors will begin stealing and compromising these treasure troves of sensitive data. Organizations whose business model is dependent on behavioral analytics will be forced to backtrack on costly investments as their practices are deemed to be based on mass surveillance and seen as a growing privacy concern by regulators and consumers alike.

What is the Justification for This Threat?

Data gathered from sensors and cameras in the physical world will supplement data already captured by digital platforms to build consumer profiles of unprecedented detail. The gathering and monetization of data from social media has already faced widespread condemnation, with regulators determining that some organizations’ practices are unethical.

For example, Facebook’s role in using behavioral data to affect political advertising for the European Referendum resulted in the UK's Information Commissioner’s Office fining the organization the maximum penalty of £500,000 in late 2019 – citing a lack of protection of personal information and privacy and failing to preserve a strong democracy.

Many organizations and governments will become increasingly dependent on behavioral analytics to underpin business models, as well as for monitoring the workforce and citizens. The development of ‘smart cities’ will only serve to amplify the production and gathering of behavioral data, with people interacting with digital ecosystems and technologies throughout the day in both private and public spaces. Data will be harvested, repurposed and sold to third parties, while the analysis will provide insights about individuals that they didn’t even know themselves.

An increasing number of individuals and consumer-rights groups are realizing how invasive behavioral analytics can be. One example of a backlash involved New York’s Hudson Yards in 2019, where the management required visitors to sign away the rights to their own photos taken of a specific building. However, this obligation was hidden within the small print of the contract signed by visitors upon entry. Visitors boycotted the building and sent thousands of complaints, resulting in the organization backtracking and rewriting the contracts.

Another substantial backlash surrounding invasive data collection occurred in London, where Argent, the property developer behind the King’s Cross estate, used facial recognition software to track individuals across the 67-acre site surrounding King’s Cross Station without consent.

Attackers will also see this swathe of highly personal data as a key target. For example, data relating to individuals’ personal habits and medical and insurance details will present an enticing prospect. Organizations that do not secure this information will face further scrutiny and potential fines from regulators.

How Should Your Organization Prepare?

Organizations that have invested in a range of sensors, cameras and applications for data gathering and behavioral analysis should ensure that current technical infrastructure is secure by design and is compliant with regulatory requirements.

In the short term, organizations should build data gathering principles into corporate policy. Additionally, they need to create transparency over data gathering practices and use, and fully understand the legal and contractual exposure of harvesting, repurposing and selling data.

In the long term, implement privacy by design across the organization and identify the use of data in supply chain relationships. Finally, ensure that algorithms used in behavioral analytical systems are not skewed or biased towards particular demographics.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
  • August 10th 2020 at 16:16

Holding public cloud security to account

At one of the last cyber-security events I attended before the Covid-19 enforced lockdowns, I was talking with an IT director about how his organization secures its public cloud deployments. He told me: “We have over 500 separate AWS accounts in use, it helps all our development and cloud teams to manage the workloads they are responsible for without crossover or account bloat, and it also makes it easier to control cloud usage costs: all the accounts are billed centrally, but each account is a separate cost center with a clear owner.”

I asked about security, and he replied that each AWS account had different logins, meaning fewer staff had access to each account, which helped to protect it.

While it’s true that having hundreds of separate public cloud accounts will help to keep a closer eye on cloud costs, it also creates huge complexity when trying to manage the connectivity and security of applications and workloads – especially when making changes to applications that cross different public cloud accounts, or when introducing infrastructure changes that touch many, or even all, accounts.

As I covered in my recent article on public cloud security, securing applications and data in these environments can be challenging. It’s far easier for application teams to spin up cloud resources and move applications to them, than it is for IT and security teams to get visibility and control across their growing cloud estates.

Even if you are using a single public cloud platform like AWS, each account has its own security controls – and many of them. Each VPC in every region within the account has separate security groups and access lists: even if they embody the same policy, you need to write and deploy them individually. Any time you need to make a change, you need to duplicate the work across each of these elements.

Then there’s the question of how security teams get visibility into all these cloud accounts with their different configurations, to ensure they are all properly protected according to the organization’s security policy. It’s almost impossible to do this using manual processes without overlooking – or introducing – potential vulnerabilities.

So how do the teams in charge of those hundreds of accounts manage them effectively? Here are my three key steps:

1. Gain visibility across your networks

The first challenge to address is a lack of visibility into all your AWS cloud accounts, from one vantage point. The security teams need to be able to observe all the security controls, across all account/region/VPC combinations.
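
To give a flavor of what that observation could look like in practice, the sketch below assumes one common (but here hypothetical) setup: a uniformly named cross-account IAM role that a central security account can assume, plus a scan of each region’s security groups for rules open to the internet. The role name and account IDs are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

ACCOUNTS = ["111111111111", "222222222222"]   # placeholder account IDs
ROLE_NAME = "SecurityAudit"                   # hypothetical cross-account role name

def session_for(account_id):
    """Assume the audit role in a member account and return a boto3 session."""
    creds = boto3.client("sts").assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{ROLE_NAME}",
        RoleSessionName="sg-visibility",
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

def wide_open_groups(session):
    """Yield (region, group id) for any rule allowing ingress from 0.0.0.0/0."""
    for region in session.get_available_regions("ec2"):
        ec2 = session.client("ec2", region_name=region)
        try:
            groups = ec2.describe_security_groups()["SecurityGroups"]
        except ClientError:
            continue  # region not enabled for this account
        for sg in groups:
            for perm in sg["IpPermissions"]:
                if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
                    yield region, sg["GroupId"]

for account in ACCOUNTS:
    for region, group in wide_open_groups(session_for(account)):
        print(f"{account}/{region}: {group} is open to the internet")
```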

2. Manage changes from a single console

The majority of network security policy changes need to touch a mix of the cloud providers’ own security controls as well as other controls, both in the cloud and on-premise. No cloud application is an island that is entire of itself – it needs to access resources in other parts of the organization’s estate. When changes to network security policies in all these diverse security controls are managed from a single system, security policies can be applied consistently, efficiently, and with a full audit trail of every change.

3. Automate security processes

In order to manage multiple public cloud accounts efficiently, automation is essential. Security automation dramatically accelerates change processes, avoids manual processing mistakes and misconfigurations, and enables better enforcement and auditing for regulatory compliance. It also helps organizations overcome skill gaps and staffing limitations.

With an automation solution handling these steps, organizations can get holistic, single-console security management across all their public cloud accounts, as well as their private cloud and on-premise deployments – which ensures they can count on robust security across their entire IT estate. 

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Copyright 2010 Respective Author at Infosec Island
  • August 10th 2020 at 15:15

No Silver Bullet for Addressing Cybersecurity Challenges During Pandemic

Infosec professionals have always had their work cut out for them, as the threat landscape continuously challenges existing security measures to adapt, improve and cope with the unexpected. As the coronavirus pandemic forced organizations to migrate their entire workforce to a work-from-home context, practically overnight, security professionals faced a new challenge for which half of them had not planned.

A recent Bitdefender survey reveals that 83 percent of US security and IT professionals believe the COVID-19 pandemic will change the way their business operates, mostly because their infrastructure had to adapt to accommodate remote work. Another concern for companies is that employees tend to be more relaxed about security (34 percent) and that working remotely means they will not be as vigilant in identifying and flagging suspicious activity and sticking to security protocols (34 percent).

Lessons learned

Having managed the initial work-from-home technology transition challenges, 1 in 4 security professionals now understands the significant value of deploying endpoint risk assessment tools. As mobility shifted to 100% for all employees, organizations could no longer rely on infrastructure-embedded and perimeter defense technologies to protect endpoints. Augmenting the endpoint security stack with risk assessment and risk analytics tools became mandatory in order to give infosec professionals needed visibility and more control over remote employee devices.

In addition to deploying risk analytics, 31 percent of infosec professionals indicated they would also increase employee training, as the current threat landscape has seen more socially engineered threats than advances in malware sophistication. Employees are more at risk of clicking the wrong link or opening a tainted attachment, potentially compromising both their devices and company infrastructure.

With a greater need for visibility of weak spots within their infrastructure, 28 percent of security professionals have also had to adjust security policies. For instance, pre-pandemic policies that took into account infrastructure hardware and security appliances became useless in a remote work context.

The New Normal

While some companies have transitioned to the new normal faster than others, businesses understand they need to provide additional cybersecurity measures for employees, and to permanently increase their capability to monitor and protect devices outside of the office. There’s never been a silver bullet for addressing cybersecurity challenges, and the current post-pandemic era is further proof that security is a living organism that needs to adapt to ensure business continuity.

This is nothing new to the role of the infosec professional: they still need to deploy the right people, the proper processes and products, and the correct procedures to achieve long-term safety and success.

About the author: Liviu Arsene is a Senior E-Threat analyst for Bitdefender, with a strong background in security and technology. Reporting on global trends and developments in computer security, he writes about malware outbreaks and security incidents while coordinating with technical and research departments.

Copyright 2010 Respective Author at Infosec Island
  • August 10th 2020 at 15:12

Could the Twitter Social Engineering Hack Happen to You?

Learning from the experiences of others should be a key job requirement for all cybersecurity, AppSec, DevSecOps, CISO, CRMO and SecSDLC professionals. The recent attack against Twitter where high-profile accounts were compromised to promote a Bitcoin scam is one such opportunity.

As new information comes to light (and I sincerely hope that Twitter continues to provide meaningful details), everyone within the cybersecurity realm should look to both their internal IT and application development practices, as well as those of their suppliers, for evidence that this particular attack pattern couldn’t be executed against their organization.

What we know as of now is that on July 15th, an attack was launched against Twitter that targeted 130 accounts. Of those 130, 45 had their passwords reset and eight had their Twitter data downloaded. While the initial public focus was on Twitter Verified accounts, those eight accounts were not verified.

The attack itself was based on the concept of social engineering: the targets were Twitter employees with access to an administrative tool capable of modifying the account settings of individual Twitter users.

The attacker’s actions included posting a Bitcoin scam on prominent accounts, but it has also been reported that there was an effort to acquire Twitter accounts with valuable names.

Given that the attack had a prominent Bitcoin-scam component and a secondary account-harvesting component, there is an obvious first question we should be thinking about: with the level of access the attackers had, why wasn’t their attack more disruptive? This is a perfect example of attackers defining the success criteria, and thus the rules, of their attack.

That being said, it’s entirely plausible that the true goal of this attack has yet to be identified and that the attackers might easily have installed backdoors in Twitter’s systems that could lie dormant for some time.

Looking solely at the known information, everyone working with user data should be asking these types of questions:

  • Which accounts have administrator, super administrator or God-mode privileges?
  • Can a normal user possess administrator capabilities, or do they need to request them with specific justification?
  • Are all administrator-level changes logged and auditable?
  • Can an administrator modify logs of their activities?
  • Are there automated alerts to identify abnormal administrator activity, such as actions from rarely used accounts? (See the sketch after this list.)
  • What limits are in place surrounding administrator access to user data?
  • What controls are in place to limit damage should an administrator misuse their credentials, either intentionally or as the result of a credential hack?
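
As a deliberately simple illustration of the automated-alerts question above, the sketch below flags administrative actions performed by accounts that have been dormant beyond a threshold. The log format and the 30-day window are assumptions made for the example, not details from the Twitter incident.

```python
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=30)   # illustrative threshold

# Hypothetical audit log: time-ordered (timestamp, admin account, action) tuples.
audit_log = [
    (datetime(2020, 5, 1, 9, 0), "bob", "view_dashboard"),
    (datetime(2020, 6, 1, 9, 0), "alice", "reset_user_password"),
    (datetime(2020, 7, 15, 22, 3), "bob", "change_account_email"),  # >30 days dormant
]

def dormant_account_alerts(events):
    """Flag admin actions from accounts not seen within the dormancy window."""
    last_seen = {}
    for ts, admin, action in events:
        prev = last_seen.get(admin)
        if prev is not None and ts - prev > DORMANCY_THRESHOLD:
            yield f"ALERT {ts}: rarely used admin '{admin}' performed {action}"
        last_seen[admin] = ts

for alert in dormant_account_alerts(audit_log):
    print(alert)
```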

For most organizations, administrator access is something given to their most trusted employees. For some, this trust might stem from how long the employee has been with the organization. For others, trust might stem from a variety of background checks. Nonetheless, administrators are humans, and humans make errors in judgement – precisely the type of scenario social engineering targets.

Knowing that an administrator, particularly one with God-mode access rights, will be a prime target for social engineering efforts, any access granted to an administrator should be as limited as possible. This includes scenarios where an administrator is called upon to resolve user access issues.

After all, someone claiming to be locked out of their account could easily be an attacker attempting to coerce someone in tech support into transferring rightful ownership into their hands. This implies that on occasion a successful account takeover will occur, and that the legitimate owner will retain control of the original contact methods, such as email address, phone numbers and authenticator apps.

If the business sends a confirmation notice to the previous contact method when it changes, that then offers an additional level of warning for users who may be potential targets. The same situation should play out with any security settings such as recovery questions or 2FA configuration.

Since this attack on Twitter exploited weaknesses in their account administration process, it effectively targeted some of the most trusted people and processes within Twitter. Every business has trusted processes and people, which means that they could be equally vulnerable to such an attack.

This then serves as an opportunity for all businesses to reassess how they build and deploy applications with an eye on how they would be administered and what process weaknesses could be exploited.

About the author: Tim Mackey is Principal Security Strategist, CyRC, at Synopsys. Within this role, he engages with various technical communities to understand how to best solve application security problems. He specializes in container security, virtualization, cloud technologies, distributed systems engineering, mission critical engineering, performance monitoring, and large-scale data center operations.

Copyright 2010 Respective Author at Infosec Island
  • August 10th 2020 at 15:04

Augmented Reality Will Compromise the Privacy and Safety of Attack Victims

In the coming years, new technologies will further invade every element of daily life with sensors, cameras and other devices embedded in homes, offices, factories and public spaces. A constant stream of data will flow between the digital and physical worlds, with attacks on the digital world directly impacting the physical and creating dire consequences for privacy, well-being and personal safety.

Augmented Reality (AR) technologies will provide new opportunities for attackers to compromise the privacy and safety of their victims. Organizations rushing to adopt AR to enhance products and services will become an attractive target for attackers.

Compromised AR technologies will have an impact on a range of industries as they move beyond the traditional entertainment and gaming markets into areas such as retail, manufacturing, engineering and healthcare. Attackers will perform man-in-the-middle attacks on AR-enabled devices and infrastructure, gaining access to intimate and sensitive information in real-time. Ransomware and denial of service attacks will affect the availability of AR systems used in critical processes such as surgical operations or engineering safety checks. Attacks on the integrity of data used in AR systems will threaten the health and safety of individuals and the reputations of organizations.

As AR begins to pervade many elements of life, organizations, governments and consumers will begin using it more frequently and with greater dependency. AR will bridge the digital and physical realms. But as a relatively immature technology it will present nation states, organized criminal groups, terrorists and hackers with new opportunities to distort reality.

What is the Justification for This Threat?

AR has been heralded as the future visual interface to digital information systems. With 5G networks reducing latency between devices, AR technologies will proliferate across the world, with significant investment in the UK, US and Chinese markets.

The estimated global market value for AR technologies is set to grow from $4 billion in 2017 to $60 billion by 2023, with use cases already being developed in the entertainment, retail, engineering, manufacturing and healthcare industries. There are increasing signs that AR will be promoted by major technology vendors such as Apple, which is said to be developing an AR headset for launch in 2020.

Vulnerabilities in devices, mobile apps and systems used by AR will give attackers the opportunity to compromise information, steal highly valuable and sensitive intellectual property, send false information to AR headsets and prevent access to AR systems.

The development of AR technologies across the manufacturing and engineering sectors is being driven by digital transformation and the desire for lower operational costs, increased productivity and streamlined processes. As AR systems and devices become the chosen medium for displaying schematics, blueprints and manuals to workers, attackers will be able to manipulate the information provided in real-time to compromise the quality and safety of products, as well as threatening the lives of users.

Many industries will become dependent on AR technologies for their products and services. For example, within air traffic control, AR displays are being evaluated as an aid to understanding aircraft movements in conditions of poor visibility. In the logistics and transport industries, AR will build upon systems such as GPS and voice assistants. With the help of Internet of Things (IoT) sensors, AI technologies, 5G and edge computing, AR systems will be able to overlay information to drivers in real-time. This will include demonstrating where live traffic accidents are happening, assisting during poor weather conditions, providing accurate journey times, and highlighting vehicle performance.

If the integrity or availability of data used in such systems is compromised, it will lead to significant operational disruption as well as risks to health and safety.

The healthcare industry is already a major target for cyber-attacks and the adoption of immature and vulnerable AR technologies in medical administration and surgical environments is likely to accelerate this trend. Medical professionals will be able to access sensitive records such as medical history, medication regimens and prescriptions through AR devices. This will create a greater attack surface as data is made available on more devices, resulting in a growing number of breaches and thefts of sensitive personal information.

AR promises much, but organizations will soon find themselves targeted by digital attacks that distort the physical world, disrupting operations and causing significant financial and reputational damage.

How Should Your Organization Prepare?

Organizations should be wary of the risks posed by AR. Many of the opportunities that AR ushers in will need to be risk assessed, with mitigating controls introduced to ensure that employees and consumers are safe and that privacy requirements are upheld.

In the short term, organizations should enhance vulnerability scanning and risk assessments of AR devices and software. They should also ensure that AR systems and devices holding records relating to personal data are secure, and create workarounds, business continuity plans and redundancy processes to cover the failure of critical AR systems and devices.

In the long term, organizations should limit data propagation and sharing across AR environments, ensure that security requirements are included when procuring AR devices, and purchase comprehensive insurance coverage for AR technology. Finally, they should establish and maintain the skillsets required for individuals in roles that rely on AR technology.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
  • July 8th 2020 at 05:38

Ending the Cloud Security Blame Game

Like many things in life, network security is a continuous cycle. Just when you’ve completed the security model for your organization’s current network environment, the network will evolve and change – which will in turn demand changes to the security model. And perhaps the biggest change that organizations’ security teams need to get to grips with is the cloud.

This was highlighted by a recent survey, in which over 75% of respondents said the cloud service provider is entirely responsible for cloud security. This rather worrying finding was offset by some respondents stating that security is also the responsibility of the customer to protect their applications and data in the cloud service, which shows at least some familiarity with the ‘shared responsibility’ cloud security model. 

What exactly does ‘shared responsibility’ mean? 

In reality, the responsibility for security in the cloud is only shared in the same way that an auto manufacturer installs locks and alarms in its cars. The security features are certainly there, but they offer no protection at all unless the vehicle owner actually activates and uses them.

In other words, responsibility for security in the public cloud isn’t really ‘shared’. Ensuring that applications and data are protected rests entirely on the customer of those services. Over recent years we’ve seen how several high-profile companies unwittingly exposed large volumes of data in AWS S3 buckets. These issues were not caused by problems at Amazon: they were the result of users misconfiguring the Amazon S3 services they were using, and not applying proper controls when uploading sensitive data. The data was placed in buckets with weak access controls (and in some cases, none at all).
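
To make this concrete, below is a minimal sketch of the kind of control that falls on the customer rather than on Amazon: enabling S3 Block Public Access on a bucket through the AWS SDK. It assumes Python with boto3, configured AWS credentials, and a hypothetical bucket name.

```python
# A minimal sketch, assuming boto3 is installed and AWS credentials are
# configured; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

def lock_down_bucket(bucket_name: str) -> None:
    """Enable all four S3 Block Public Access settings for one bucket."""
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # restrict access to AWS principals
        },
    )

lock_down_bucket("example-sensitive-data-bucket")
```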

Cloud exposure

It’s important to remember that cloud servers and resources are much more exposed than physical, on-premise servers. For example, if you make a mistake when configuring the security for an on-premise server that stores sensitive data, it is still likely to be protected by other security measures by default. It will probably sit behind the main corporate gateway, or other firewalls used to segment the network internally. Its databases will be accessible only from well-defined network segments. Users logging into it will have their accounts controlled by the centralized passwords management system. And so on.

In contrast, when you provision a server in the public cloud, it may easily be exposed to and accessible from any computer, anywhere in the world. Apart from a password, it might not have any other default protections in place. Therefore, it’s up to you to deploy the controls to protect the public cloud servers you use, and the applications and data they process. If you neglect this task and a breach occurs, the fault will be yours, not the cloud provider’s.

This means that it is the responsibility of your security team to establish perimeters, define security policies and implement controls to manage connectivity to those cloud servers. They need to set up controls to manage the connection between the organization’s public cloud and on-premise networks, for example using a VPN, and consider whether encryption is needed for data in the cloud. These measures will also require a logging infrastructure to record actions for management and audits, to get a record of what changes were made and who made them.
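
As one illustration of such a control, the sketch below audits AWS security groups for rules that leave ports open to the whole internet – exactly the kind of exposure that is easy to miss in the public cloud. The AWS calls are real boto3 APIs; what you do with the findings (alert, revoke, ticket) is a policy decision left open here.

```python
# A hedged sketch: list security group rules reachable from 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2")

def find_world_open_rules():
    """Return (group id, from port, to port) for rules open to 0.0.0.0/0."""
    findings = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((group["GroupId"],
                                     rule.get("FromPort", "all"),
                                     rule.get("ToPort", "all")))
    return findings

for group_id, from_port, to_port in find_world_open_rules():
    print(f"{group_id}: ports {from_port}-{to_port} open to the internet")
```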

Of course, all these requirements across both on-premise and cloud environments add significant complexity to security management, demanding that IT and security teams use multiple different tools to make network changes and enforce security. However, using a network security policy management solution will greatly simplify these processes, enabling security teams to have visibility of their entire estate and enforce policies consistently across public clouds and the on-premise network from a single console.

The solution’s network simulation capabilities can be used to easily answer questions such as: ‘is my application server secure?’, or ‘is the traffic between these workloads protected by a security gateway?’ It can also quickly identify issues that could block an application’s connectivity (such as misconfigured or missing security rules, or incorrect routes) and then plan how to correct the connectivity issue across the relevant security controls. What’s more, the solution keeps an audit trail of every change for compliance reporting.

Remember that in the public cloud, there’s almost no such thing as ‘shared responsibility.’ Security is primarily your responsibility – with help from the cloud provider. But with the right approach to security management, that responsibility and protection is easy to maintain, without having to play the blame game.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Copyright 2010 Respective Author at Infosec Island
  • July 8th 2020 at 05:34

Edge Computing Set to Push Security to the Brink

In the coming years, the requirement for real-time data processing and analysis will drive organizations to adopt edge computing in order to reduce latency and increase connectivity between devices – but adopters will inadvertently bring about a renaissance of neglected security issues. Poorly secured edge computing environments will create multiple points of failure, and a lack of security oversight will enable attackers to significantly disrupt operations.

Organizations in industries such as manufacturing, utilities, or those using IoT and robotics will be dependent upon edge computing to connect their ever-expanding technical infrastructure. However, many will not have the visibility, security or analysis capabilities that have previously been associated with cloud service providers – information risks will be transferred firmly back within the purview of the organization. Attackers will exploit security blind spots, targeting devices on the periphery of the network environment. Operational capabilities will be crippled by sophisticated malware attacks, with organizations experiencing periods of significant downtime and financial damage.

Poor implementation of edge computing solutions will leave organizations open to attack. Nation states, hacking groups, hacktivists and terrorists aiming to disrupt operations will target edge computing devices, pushing security to the brink of failure and beyond.

What is the Justification for This Threat?

As the world moves into the fourth industrial revolution, the requirement for high-speed connectivity, real-time data processing and analytics will be increasingly important for business and society. With the combined IoT market size projected to reach $520 billion by 2021, the development of edge computing solutions alongside 5G networks will be required to provide near-instantaneous network speed and to underpin computational platforms close to where data is created.

The transition of processing from cloud platforms to edge computing will be a requirement for organizations demanding speed and significantly lower latency between devices. With potential use cases of edge computing ranging from real-time maintenance in vehicles, to drone surveillance in defense and mining, to health monitoring of livestock, securing this architecture will be a priority.

With edge computing solutions, security blind spots will provide attackers with an opportunity to access vital operational data and intellectual property. Moreover, organizations will be particularly susceptible to espionage and sabotage from nation states and other adversarial threats. Edge computing environments, by their nature, are decentralized and unlikely to benefit from initiatives such as security monitoring. Many devices sitting within this type of environment are also likely to have poor physical security while also operating in remote and hostile conditions. This creates challenges in terms of maintaining these devices and detecting any vulnerabilities or breaches.

Organizations that adopt edge computing will see an expansion of their threat landscape. With many organizations valuing speed and connectivity over security, the vast number of IoT devices, robotics and other technologies operating within edge computing environments will become unmanageable and hard to secure.

Edge computing will underpin critical national infrastructure (CNI) and many important services, reinforcing the necessity to secure them against a range of disruptive attacks and accidental errors. Failures in edge computing solutions will result in financial loss, regulatory fines and significant reputational damage. An inability to secure this infrastructure will be detrimental to the operational capabilities of the business as attackers compromise both physical and digital assets alike. Human lives may also be endangered, should systems in products such as drones, weaponry and vehicles be compromised.

How Should Your Organization Prepare?

Organizations that are planning to adopt edge computing should consider whether this architectural approach is suitable for their requirements.

In the short term, organizations should review physical security and potential points of failure for edge computing environments in the context of operational resilience, carry out penetration testing on those environments (including hardware components), and identify blind spots in security event and network management systems.

In the long term, they should develop a hybrid security approach that incorporates both cloud and edge computing, create a secure architectural framework for edge computing, and ensure security specialists are suitably trained to deal with edge computing-related threats.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
  • June 13th 2020 at 12:29

Make It So: Accelerating the Enterprise with Intent-Based Network Security

Sometimes, it seems that IT and security teams can’t win. They are judged on how quickly they can deploy their organization’s latest application or digital transformation initiative, but they’re also expected to safeguard those critical applications and data in increasingly complex hybrid networks – and in an ever more sophisticated threat landscape. That’s not an easy balancing act. 

When an enterprise rolls out a new application, or migrates a service to the cloud, it can take days, or even weeks, to ensure that all the servers and network segments can communicate with each other, while blocking access to hackers and unauthorized users. This is because the network fabric can include hundreds of servers and devices (such as firewalls and routers) as well as virtualized devices in public or private clouds.

When making changes to all these devices, teams need to ensure that they don’t disrupt the connectivity that supports the application, and don’t create any security gaps or compliance violations. But given the sheer complexity of today’s networks, it’s not too surprising that many organizations struggle with doing this. Our 2019 survey of managing security in hybrid and multi-cloud environments found that over 42% of organizations had experienced application or network outages caused by simple human errors or misconfigurations. 

What’s more, most organizations already have large network security policies in place with thousands, or even millions of policy rules deployed on their firewalls and routers. Removing any of these rules is often a very worrisome task, because the IT teams don’t have an answer to the big question of “why does this rule exist?”

The same question arises in many other scenarios, such as planning a maintenance window or handling an outage (“which applications are impacted when this device is powered off?”, “who should be notified?”), dealing with an insecure rule flagged by an audit, or limiting the blast radius of a malware attack (“what will be impacted if we remove this rule?”).

Intent-based networking (IBN) promises to solve these problems. Once security policies are properly annotated with the intent behind them, these operational tasks become much clearer and can be handled efficiently and with minimal damage. Instead of “move fast and break things” (which is unattractive in a security context, because “breaking” might mean “become vulnerable”) – wouldn’t it be better to “move fast and NOT break things”?

Intentions versus reality

As such, it’s no surprise that IBN is appealing to larger enterprises: it has the potential to ensure that networks can quickly adapt to the changing needs of the business, boosting agility without creating additional risk. However, while there are several IBN options available today, the technology is not yet fully mature. Some solutions offer IBN capabilities only in single-vendor network environments, while others have limited automation features. 

This means many current solutions are of limited use in the majority of enterprises which have hybrid network environments. To satisfy security and compliance demands, an enterprise’s network management and automation processes must cover its entire heterogeneous fabric, including all security devices and policies (whether in the data center, at its perimeter, across on-premise networks or in the cloud) to enable true agility without compromising protection.

So how can enterprises with these complex, hybrid environments align their network and security management processes closely to the needs of the business? Can they automate the management of business-driven application and network changes with straightforward, high level ‘make it so’ commands?

Also, where would the “intent” information come from? In an existing “brown-field” environment, how can we find out, in retrospect, what was the intent behind the existing policies?

The answer is that it is possible to do all this with network security policy management (NSPM) solutions. These can already deliver on IBN’s promise of enabling automated, error-free handling of business-driven changes, and faster application delivery across heterogenous environments – without compromising the organizations’ security or compliance postures. 

Intent-based network security

The right solution starts with the ability to automatically discover and map all the business applications in an enterprise, by monitoring and analyzing the network connectivity flows that support them. Through clustering analysis of netflow traffic summaries, modern NSPM solutions can automatically identify correlated business applications, and label the security policies supporting them – thereby automatically identifying the intent.
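
The sketch below shows the clustering idea in miniature. It is a toy example, not any vendor's actual algorithm: hypothetical netflow summaries are clustered with scikit-learn, and flows that land in the same cluster become candidates for a single business application whose label can then be attached to the firewall rules permitting them.

```python
# A toy sketch of application discovery via flow clustering; the column
# names and flow data are hypothetical.
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical flow summaries: source subnet, destination subnet, port, volume
flows = pd.DataFrame({
    "src_subnet_id": [10, 10, 10, 22, 22, 31],
    "dst_subnet_id": [40, 40, 41, 40, 40, 55],
    "dst_port":      [443, 443, 5432, 443, 443, 8080],
    "bytes_per_day": [9e6, 8e6, 2e7, 1e6, 1.2e6, 5e5],
})

features = StandardScaler().fit_transform(flows)
flows["app_cluster"] = DBSCAN(eps=1.0, min_samples=2).fit_predict(features)

# Flows sharing a cluster label are candidates for one business application.
print(flows)
```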

NSPM solutions can also identify the security devices and policies that support those connectivity flows across heterogeneous on-premise, SDN and cloud environments. This gives a ‘single source of truth’ for the entire network, storing and correlating all the application’s attributes in a single pane of glass, including configurations, IP addresses and policies.

With this holistic application and network map, the solution enables business application owners to request changes to network connectivity for their business applications without having to understand anything about the underlying network and security devices that the connectivity flows pass through.

The application owner simply makes a network connectivity request in their own high-level language, and the solution automatically understands and defines the technical changes required directly on the relevant network security devices. 
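
A much-simplified sketch of that translation step follows; every name in it (the tiers, addresses and rule fields) is hypothetical, and a real NSPM product would handle far more device types and rule semantics, plus the risk checks described next.

```python
# A simplified sketch: map a business-level connectivity request onto a
# concrete firewall rule. All names and mappings are hypothetical.
from dataclasses import dataclass

# Logical tiers to network addresses, as maintained by the NSPM tool's map
TIER_ADDRESSES = {
    "crm-web-tier": "10.1.0.0/24",
    "crm-db-tier":  "10.2.0.0/24",
}

@dataclass
class ConnectivityRequest:      # what the application owner actually writes
    source_tier: str
    destination_tier: str
    port: int
    protocol: str = "tcp"

def translate(request: ConnectivityRequest) -> dict:
    """Turn business-level intent into a device-level firewall rule."""
    return {
        "action":      "allow",
        "source":      TIER_ADDRESSES[request.source_tier],
        "destination": TIER_ADDRESSES[request.destination_tier],
        "port":        request.port,
        "protocol":    request.protocol,
        # recording the intent answers "why does this rule exist?" later
        "intent":      f"{request.source_tier} -> {request.destination_tier}",
    }

print(translate(ConnectivityRequest("crm-web-tier", "crm-db-tier", 5432)))
```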

As part of this process the solution assesses the change requests for risk and compliance with the organization’s own policies, as well as industry regulations. If the changes carry no significant security risk, the solution automatically implements them directly on the relevant devices, and then verifies the process has been completed – all with zero touch. 

This means normal change requests are processed automatically — from request to implementation — in minutes, with little or no involvement of the networking team. Manual intervention is only required if a problem arises during the process, or if a request is flagged by the solution as high risk. Meanwhile, IT, security and application teams can continuously monitor the state of the network and the business applications it supports.

Network security management solutions realize the potential of IBN, as they: 

  1. Offer an application discovery capability that automatically assigns the intent to existing policies.
  2. Translate and validate high-level business application requests into the relevant network configuration changes.
  3. Automate the implementation of those changes across existing heterogenous network infrastructure, with the assurance that changes are processed compliantly.
  4. Maintain awareness of the state of the enterprise network to ensure uptime, security and compliance. 
  5. Automatically alert IT staff to changes in network and application behaviors, such as an outage or break in connectivity, and recommend corrective action to maintain security and compliance.

These intent-based network security capabilities allow business application owners to express their high-level business needs, and automatically receive a continuously maintained, secure and compliant end-to-end connectivity path for their applications. They also enable IT teams to provision, configure and manage networks far more easily, quickly and securely. This achieves the delicate balance of meeting business demands for speed and agility, while ensuring that risks are minimized.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Copyright 2010 Respective Author at Infosec Island
  • June 13th 2020 at 10:24

Threat Horizon 2022: Cyber Attacks Businesses Need to Prepare for Now

The digital and physical worlds are on an irreversible collision course. By 2022, organizations will be plunged into crisis as ruthless attackers exploit weaknesses in immature technologies and take advantage of an unprepared workforce. At the same time, natural forces will ravage infrastructure.

Over the coming years organizations will experience growing disruption as threats from the digital world have an impact on the physical. Invasive technologies will be adopted across both industrial and consumer markets, creating an increasingly turbulent and unpredictable security environment. The requirement for a flexible approach to security and resilience will be crucial as a hybrid threat environment emerges.

The impact of threats will be felt on an unprecedented scale as ageing and neglected infrastructure is attacked, with services substantially disrupted due to vulnerabilities in the underlying technology. Mismanagement of connected assets will provide attackers with opportunities to exploit organizations.

A failure to understand the next generation of workers, the concerns of consumers and the risk posed by deceptive technology will erode the trust between organizations, consumers and investors. As a result, the need for a digital code of ethics will arise in order to protect brand reputation and profitability.

Organizations will have to adapt quickly to survive when digital and physical worlds collide. Those that don’t will find themselves exposed to threats that will outpace and overwhelm them.

At the Information Security Forum, we recently released Threat Horizon 2022, the latest in an annual series of reports that provide businesses with a forward-looking view of the increasing threats in today’s always-on, interconnected world. In Threat Horizon 2022, we highlighted the top three threats to information security emerging over the next two years, as determined by our research.

Let’s take a quick look at these threats and what they mean for your organization:

THREAT #1: INVASIVE TECHNOLOGY DISRUPTS THE EVERYDAY

New technologies will further invade every element of daily life with sensors, cameras and other devices embedded in homes, offices, factories and public spaces. A constant stream of data will flow between the digital and physical worlds, with attacks on the digital world directly impacting the physical and creating dire consequences for privacy, well-being and personal safety.

Augmented Attacks Distort Reality: The development and acceptance of AR technologies will usher in new immersive opportunities for businesses and consumers alike. However, organizations leveraging this immature and poorly secured technology will provide attackers with the chance to compromise the privacy and safety of individuals when systems and devices are exploited.

Behavioral Analytics Trigger A Consumer Backlash: Organizations that have invested in a highly connected nexus of sensors, cameras and mobile apps to develop behavioral analytics will find themselves under intensifying scrutiny from consumers and regulators alike as the practice is deemed invasive and unethical. The treasure trove of information harvested and sold will become a key target for attackers aiming to steal consumer secrets, with organizations facing severe financial penalties and reputational damage for failing to secure their information and systems.

Robo-Helpers Help Themselves to Data: A range of robotic devices, developed to perform a growing number of both mundane and complex human tasks, will be deployed in organizations and homes around the world. Friendly-faced, innocently branded, and loaded with a selection of cameras and sensors, these constantly connected devices will roam freely. Poorly secured robo-helpers will be weaponized by attackers, committing acts of corporate espionage and stealing intellectual property. Attackers will exploit robo-helpers to target the most vulnerable members of society, such as the elderly or sick at home, in care homes or hospitals, resulting in reputational damage for both manufacturers and corporate users.

THREAT #2: NEGLECTED INFRASTRUCTURE CRIPPLES OPERATIONS

The technical infrastructure upon which organizations rely will face threats from a growing number of sources: man-made, natural, accidental and malicious. In a world where constant connectivity and real-time processing is vital to doing business, even brief periods of downtime will have severe consequences. It is not just the availability of information and services that will be compromised – opportunistic attackers will find new ways to exploit vulnerable infrastructure, steal or manipulate critical data and cripple operations.

Edge Computing Pushes Security to the Brink: In a bid to deal with ever-increasing volumes of data and process information in real time, organizations will adopt edge computing – an architectural approach that reduces latency between devices and increases speed – in addition to, or in place of, cloud services. Edge computing will be an attractive choice for organizations, but will also become a key target for attackers, creating numerous points of failure. Furthermore, security benefits provided by cloud service providers, such as oversight of particular IT assets, will also be lost.

Extreme Weather Wreaks Havoc on Infrastructure: Extreme weather events will increase in frequency and severity year-on-year, with organizations suffering damage to their digital and physical estates. Floodplains will expand; coastal areas will be impacted by rising sea levels and storms; extreme heat and droughts will become more damaging; and wildfires will sweep across even greater areas. Critical infrastructure and data centers will be particularly susceptible to extreme weather conditions, with business continuity and disaster recovery plans pushed to breaking point.

The Internet of Forgotten Things Bites Back: IoT infrastructure will continue to expand, with many organizations using connected devices to support core business functions. However, with new devices being produced more frequently than ever before, the risks posed by multiple forgotten or abandoned IoT devices will emerge across all areas of the business. Unsecured and unsupported devices will be increasingly vulnerable as manufacturers go out of business, discontinue support or fail to deliver the necessary patches to devices. Opportunistic attackers will discover poorly secured, network-connected devices, exploiting organizations in the process.

THREAT #3: A CRISIS OF TRUST UNDERMINES DIGITAL BUSINESS

Bonds of trust will break down as emerging technologies and the next generation of employees tarnish brand reputations, compromise the integrity of information and cause financial damage. Those that lack transparency, place trust in the wrong people and controls, and use technology in unethical ways will be publicly condemned. This crisis of trust between organizations, employees, investors and customers will undermine organizations’ ability to conduct digital business.

Deepfakes Tell True Lies: Digital content that has been manipulated by AI will be used to create hyper-realistic copies of individuals in real-time – deepfakes. These highly plausible digital clones will cause organizations and customers to lose trust in many forms of communication. Credible fake news and misinformation will spread, with unwary organizations experiencing defamation and reputational damage. Social engineering attacks will be amplified using deepfakes, as attackers manipulate individuals with frightening believability.

The Digital Generation Become the Scammer’s Dream: Generation Z will start to enter the workplace, introducing new information security concerns to organizations. Attitudes, behaviors, characteristics and values exhibited by the newest generation will transcend their working lives. Reckless approaches to security, privacy and consumption of content will make them obvious targets for scammers, consequently threatening the information security of their employers.

Activists Expose Digital Ethics Abuse: Driven by huge investments in pervasive surveillance and tracking technologies, the ethical element of digital business will enter the spotlight. Activists will begin targeting organizations that they deem immoral, exposing unethical or exploitative practices surrounding the technologies they develop and who they are sold to. Employees motivated by ethical concerns will leak intellectual property, becoming whistle-blowers or withdrawing labor entirely. Brand reputations will suffer, as organizations that ignore their ethical responsibilities are placed under mounting pressure.

Preparation Must Begin Now

Information security professionals are facing increasingly complex threats—some new, others familiar but evolving. Their primary challenge remains unchanged: to help their organizations navigate mazes of uncertainty where, at any moment, they could turn a corner and encounter information security threats that inflict severe business impact.

In the face of mounting global threats, organizations must make methodical and extensive commitments to ensure that practical plans are in place to adapt to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles.

The three themes listed above could impact businesses operating in cyberspace at break-neck speeds, particularly as the use of the Internet and connected devices spreads. Many organizations will struggle to cope as the pace of change intensifies. These threats should stay on the radar of every organization, both small and large, even if they seem distant. The future arrives suddenly, especially when you aren’t prepared.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
  • May 1st 2020 at 19:32

Why the Latest Marriott Breach Should Make Us "Stop and Think" About Security Behaviors

Marriott International has experienced its second data breach after two franchise employee logins were used to access more than five million guest records beginning in January. Contact details, airline loyalty program account numbers, birth dates and more were collected – but likely not Bonvoy loyalty account numbers, PINs or payment information.

As noted, this is the second breach that Marriott has undergone in recent times, the first being through its acquired Starwood brand of hotels back in 2018, when it lost a large amount of personal information relating to its customers. So here we go again. While this breach may not be as serious this time around, the big question is what it will do to customer trust in Marriott’s brand and reputation.

“Fool me once, shame on you, fool me twice, shame on me” comes to mind.

Most organizations that have gone through a breach review their security procedures and policies – no one wants it to happen to them again. Traditionally, extra funding is provided to deal with the necessary remediation, which can itself run into millions of dollars once funding a personal information monitoring service for victims, the inevitable fines, and the cost of rebuilding the brand are taken into account.

Therefore, the issue that Marriott will need to address is how this happened again, so soon after the last breach. For some, particularly those accustomed to the European GDPR notification period, the question may also be why it took a month from discovery for Marriott to notify those affected.

Well, the answer to the second question is simple: the U.S. has no national data breach notification requirement, and the patchwork quilt of 48 state laws that exist typically require notification within 30 to 45 days – clearly quite a bit longer than the mandatory 72-hour GDPR breach notification period in Europe. As for the bigger question of how it happened again, only time will tell, but for me this highlights a key challenge for many organizations, not just in the hospitality sector: how do you secure your third-party suppliers?

The breach occurred at one of Marriott’s franchise properties, via the login credentials of two employees at the property. From a security standpoint this shines a light on two key challenges for security professionals today: third-party suppliers and awareness of the insider threat. Unfortunately, third parties are becoming more of a vulnerability than ever before.

Organizations of all sizes need to think about the consequences of a trusted third party, in this case a franchisee, providing accidental, but harmful, access to their corporate information. Information shared in the supply chain can include intellectual property, customer or employee data, commercial plans or negotiations, and logistics. To address information risk, breach or data leakage across third parties, organizations should adopt robust, scalable and repeatable processes – obtaining assurance proportionate to the risk faced. Whether or not this was the case with Marriott remains to be seen. 

Supply chain information risk management should be embedded within existing procurement and vendor management processes, so that it becomes part of regular business operations. Will this also help address the insider threat? It should certainly help raise awareness, but the reality is that the insider threat is unlikely to diminish in the coming years. Efforts to mitigate this threat, such as additional security controls and improved vetting of new employees, will remain at odds with efficiency measures.

Organizations need to shift from promoting awareness of the problem to creating solutions and embedding information security behaviors that affect risk positively. The risks are real because people remain a ‘wild card’, and businesses today depend on sharing critical information with third-party providers.

Many organizations recognize people as their biggest asset, yet many still fail to recognize the need to secure ‘the human element’ of information security. In essence, people should be an organization’s strongest control. Instead of simply making people aware of their information security responsibilities and how they should respond, the answer for businesses of all sizes is to embed positive information security behaviors that will result in “stop and think” behavior becoming a habit and part of an organization’s information security culture.

While many organizations have compliance activities which fall under the general heading of ‘security awareness’, the commercial driver should be risk, and how new behaviors can reduce that risk. For some, that message may come too late and it may take a breach or two to drive the message home.  

The real question is for how much longer will consumers accept that the loss of their data is a cost of doing business before voting with their feet and taking their business to more trusted providers?

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
  • April 7th 2020 at 07:07

Examining Potential Election Vulnerabilities: Are They Avoidable?

In the U.S. and global communities, election security is a major concern because so many aspects of it are insecure and open to attacks that may shift public opinion or be used for personal gain. Not only does the complexity of the U.S. government raise concerns about security, but campaigns also have weak points that make them targets for attacks.

Limited IT Resources Put Campaigns and Voters at Risk

Given limited IT budgets, volunteers, who often work directly with voters, sometimes use their own personal devices and applications to communicate with other team members and supporters; they also have access to key private data belonging to candidates and team members. These personal devices are also used to access campaign systems such as the Voter Activation Network (NGP VAN) that include voter information to support operations such as phone banking and door-to-door canvassing. Without proper security controls, these personal devices can be used by adversaries to put both the campaign and voters at risk. Additionally, the threat of fake news has evolved with the advent of deepfake technology, which uses artificial intelligence (AI) to create video and audio media that appears authentic but is not.

Although security controls such as two-factor authentication (2FA) are helpful, campaigns and voters may still be at risk. Abel Morales, a security engineer at Exabeam, recommends that campaigns use user and entity behavior analytics (UEBA) to detect anomalous authentications. “By monitoring staffers’ behaviors and detecting anomalies from their typical workflows, IT would be able to reduce the impact of threats introduced through social engineering, phishing and other malicious techniques.” This method can be used to detect voter anomalies as well.
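
As a rough illustration of the UEBA idea (a minimal sketch, not Exabeam's actual model), the code below flags an authentication whose hour-of-day deviates sharply from a user's historical baseline:

```python
# A minimal sketch: flag logins far outside a user's usual hours.
# The baseline data and the z-score threshold are illustrative only.
from statistics import mean, stdev

def is_anomalous(history_hours, new_hour, z_threshold=3.0):
    """Return True if the login hour is far outside the user's baseline."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:              # identical history: any change is notable
        return new_hour != mu
    return abs(new_hour - mu) / sigma > z_threshold

# A staffer who always logs in during office hours...
baseline = [9, 10, 9, 11, 10, 9, 10, 11, 9, 10]
print(is_anomalous(baseline, 10))  # False: routine login
print(is_anomalous(baseline, 3))   # True: a 3 a.m. login triggers review
```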

The continuing threat of ransomware attacks and nation-state attacks 

Ransomware attacks on voter databases and systems can be used to extort payments in exchange for restoring voter information. Ransomware encrypts data until a ransom is paid, and could also be used to manipulate voting results or lock administrators out of critical data during an election, thereby compromising voter confidence. Additionally, the increase in nation-state attacks is another major concern. Some officials believe that foreign influence on our elections will more likely come through social media to shape public opinion towards whatever direction serves their specific goals. In particular, the FBI is worried that Russia will use social media to cause further division between the political parties or hack campaign websites to spread misinformation.

Does the government’s structure make election security more difficult?

The intricacies of the U.S. voting system also affect the security of elections because state and local governments are not forced to use the federal government’s testing standards. State and local governments have the option to adopt these security standards, use their own, or a hybrid. Also, testing for state and local governments can be completed by private companies or local universities, as there is no single federal test certification program. This deviation from the federal standard is also seen in the lack of mandatory audits to verify the integrity of the machines and testing procedures, and the management of the voter registration database system which contains voter records. Many of these database systems are outdated and ill-equipped to handle today’s cybersecurity threats, making it easier for adversaries to delete or add voters. Although these differences can be detrimental to the security of elections, they make it difficult for attackers to launch a large-scale, coordinated attack. 

The makeup of the voting machine market is a huge risk

Three companies make up more than 90 percent of the voting machine market, suggesting that a compromise of just one of these three companies could have a significant impact on any election. Manipulation is not a formidable task given many of these machines are running outdated software with existing vulnerabilities. As transitioning to machines running newer Windows operating systems in time for the 2020 election may not be possible, Microsoft has committed to providing free updates for all certified voting machines in operation running on Windows 7.

Internet-connected devices increase risk

Our U.S. voting system comprises many different types of devices with varying functions, including tallying and reporting votes. Security experts note that web-based systems such as election-reporting websites, candidate websites and voter roll websites are easier to attack than a voting machine. Many of these systems are IoT devices that have their own unique security challenges. Often, they are shipped with factory-set, hardcoded passwords; they’re unable to be patched or updated; and they use outdated protocols and lack encryption. They are also susceptible to botnets that can exploit large numbers of devices in a short period. IoT attacks could also compromise a user’s browser to manipulate votes, or cut power to polling stations.

Proactive responses to help understaffed election IT teams

To prevent targeted attacks, campaign IT teams and staffers are taking training courses to learn how to detect and report suspicious emails. The DNC has created a security checklist for campaigns with recommendations, and the Center for Internet Security has also developed a library of resources to help campaigns, including a Handbook for Elections Infrastructure Security. Machine learning-based systems enable limited teams to operate 50 percent more efficiently through automation – which is essential given the scale and number of elections. Security orchestration, automation, and response (SOAR) as part of a modern SIEM can also orchestrate remediation in response to an identified anomaly through playbooks. SOAR automatically identifies and prioritizes cybersecurity risks and responds to low-level security events, which is extremely useful for state and local government agencies that operate with small cybersecurity teams.
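
The fragment below sketches what such a SOAR-style playbook can boil down to: close known-benign alerts, auto-remediate low-severity ones, and escalate the rest to an analyst. The alert fields, sources and response actions are all hypothetical.

```python
# A stripped-down playbook sketch; fields and actions are hypothetical.
KNOWN_BENIGN_SOURCES = {"internal-vuln-scanner", "backup-agent"}

def run_playbook(alert: dict) -> str:
    source = alert.get("source", "")
    severity = alert.get("severity", "high")

    if source in KNOWN_BENIGN_SOURCES:
        return "closed: known benign source"
    if severity == "low":
        # e.g., force a password reset or quarantine a single workstation
        return "auto-remediated: standard low-severity response applied"
    return "escalated: queued for a human analyst"

alerts = [
    {"source": "internal-vuln-scanner", "severity": "low"},
    {"source": "county-voter-db", "severity": "low"},
    {"source": "county-voter-db", "severity": "critical"},
]
for alert in alerts:
    print(run_playbook(alert))
```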

Republicans and Democrats unite to offer a helping hand

In late 2019, recognizing the seriousness of election attacks and the lack of security resources, former campaign managers for Hillary Clinton and Mitt Romney launched a non-profit organization, Defending Digital Campaigns (DDC), which offers free to low-cost security technology and services to federal election campaigns. Some experts predict that the 2020 election will be one of the most anticipated digital security events in U.S. history. Given the complexity of the election process and voting system, security automation, behavior analytics and security education can be a part of the solution for managing a secure voting process.

About the author: Tim Matthews brings over 20 years of experience building and running software marketing teams and a focus on the security market. Prior to Exabeam, he was Vice President of Marketing at Imperva, where he led a worldwide marketing team.

Copyright 2010 Respective Author at Infosec Island
  • April 7th 2020 at 06:58

Google Skips Chrome 82, Resumes Stable Releases

Google is on track to resume the roll-out of stable Chrome releases next week, but says it will skip one version of the browser.

Last week, the Internet search giant said it was pausing upcoming releases of the browser, following an adjusted work schedule due to the COVID-19 (coronavirus) pandemic, and that both Chrome and Chrome OS releases would be affected.

At the time, the company revealed it would focus on the stability and security of releases, and that it would prioritize security updates for Chrome 80.

Now, Google says it is ready to resume pushing releases to the Stable channel as soon as next week, with security and critical fixes meant for version 80 of the browser.

Moving forward, the company is planning the release of Chrome 81 in early April, but says it will then jump directly to Chrome 83, which is set to arrive in mid-May, thus skipping Chrome 82.

“M83 will be released three weeks earlier than previously planned and will include all M82 work as we cancelled the M82 release (all channels),” Google said.

This week, the company will resume the Canary, Dev and Beta channels, with Chrome 83 moving to Dev.

“We continue to closely monitor that Chrome and Chrome OS are stable, secure, and work reliably. We’ll keep everyone informed of any changes on our schedule,” the Internet giant said.

The company hasn’t shared any details on when Chrome 84 releases would start arriving, but said it would provide the information in a future update.

Following Google’s announcement last week, Microsoft said it would pause stable Edge releases, to align with the Chromium Project. Today, the Redmond-based tech company announced that Edge build 83.0.461.1 was released to the Dev channel.

“As you can see, this is the first update from major version 83.  This is a slight deviation from our normal schedule due to current events,” Microsoft says, adding that version 81 is heading for the Stable channel soon.

Related: Google Patches High-Risk Chrome Flaws, Halts Upcoming Releases

Related: Chrome 80 Released With 56 Security Fixes

Related: Chrome Will Block Insecure Downloads on HTTPS Pages

Copyright 2010 Respective Author at Infosec Island
  • March 29th 2020 at 16:14

Benchmarking the State of the CISO in 2020

Driving digital transformation initiatives while safeguarding the enterprise is a mammoth task. In some aspects, it might even sound counter-intuitive when it comes to opening up IT infrastructure, or converging IT and OT networks to allow external parties such as partners and customers to closely interact with the organization to embrace new business models and collaboration (think cloud applications, APIs, sensors, mobile devices, etc.).

Although new technology is being adopted quickly, especially web frontends, applications and APIs, much of the underlying IT infrastructure as well as the supporting processes and governance models are somewhat legacy, and struggle to keep up.

For its 2020 CISO Benchmark Report, Cisco surveyed some 2,800 CISOs and other IT decision-makers from 13 countries about how they cope with these challenges, and came up with a number of interesting findings.

Cyber-threats are a global business risk

The World Economic Forum says business leaders view cyber-attacks as the #2 global risk to business in advanced economies, taking a back seat only to financial crises. Not surprisingly, 89 percent of the respondents in the Cisco study say their executives still view security as a high priority, but this number is down by 7 percent from previous years.

Nine out of ten respondents felt their company executives had solid measures for gauging the effectiveness of their security programs. This is encouraging, as clear metrics are key to a security framework, and it’s often difficult to get diverse executives and security players to agree on how to measure operational improvement and security results.

Leadership matters

The share of companies that have clarified the security roles and responsibilities on the executive team has risen and fallen in recent years, but it settled at 89 percent in 2020. Given that cyber-security is being taken more seriously and there is a major need for security leaders at top levels, the need to continue clarifying roles and responsibilities will remain critical.

The frequency with which companies are building cyber-risk assessments into their overall risk assessment strategies has shrunk by five percent from last year. Still, 91 percent of the survey respondents reported that they’re doing it. Similarly, 90 percent of executive teams are setting clear metrics to assess the effectiveness of their security programs, although this figure too is down by six percent from last year.  

Cloud protection is not solid

It’s almost impossible for a company to go digital without turning to the cloud. The Cisco report found that in 2020, over 83 percent of organizations will be managing (internally or externally) more than 20 percent of their IT infrastructure in the cloud. But protecting off-premises assets remains a challenge.

A hefty 41 percent of the surveyed organizations say their data centers are very or extremely difficult to defend from attacks. Thirty-nine percent report that they struggle to keep applications secure. Similarly, private cloud infrastructure is a major security issue for organizations; half of the respondents said it was very or extremely difficult to defend.

The most problematic data of all is data stored in the public cloud. Just over half (52 percent) of the respondents find it very or extremely challenging to secure. Another 41 percent of organizations find network infrastructure very or extremely challenging to defend.

Time-to-remediate scores most important

The Cisco study enquired about the after-effects of breaches using measures such as downtime, records, and finances. How much and how often are companies suffering from downtime? It turns out that organizations across the board issued similar answers. Large enterprises (10,000 or more employees) are more likely to have less downtime (between zero and four hours) because they typically have more technology, money, and people available to help respond and recover from the threats. Small to mid-sized organizations made up most of the five- to 16-hour recovery timespans. Potentially business-killing downtimes of 17-48 hours were infrequent among companies of all sizes.

After a security incident, rapid recovery is critical to keeping disruption and damages to a minimum. As a result, of all the metrics, time-to-remediate (also known as “time-to-mitigate”) scores are the ones most important when reporting to the C-suite or the company’s board of directors, the study concludes.

Automating security is not optional – it’s mandatory

The total number of daily security alerts that organizations are faced with is constantly growing. Three years ago, half of organizations had 5,000 or fewer alerts per day. Today, that number is only 36 percent. The number of companies that receive 100,000 or more alerts per day has risen to 17 percent this year, from 11 percent in 2017. Due to the greater alert volumes and the considerable resources needed to process them, investigation of alerts is at a four-year low: just under 48 percent of companies say they can keep up. That number was 56 percent in 2017, and it’s been shrinking every year since. The rate of legitimate incidents (26 percent) has remained more or less constant, which suggests that a lot of investigations are coming up with false positives.
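
Treating those survey percentages, purely for illustration, as per-organization rates (the share of alerts that get investigated, and the share of investigations that turn out to be legitimate) gives a feel for the scale of the false-positive burden:

```python
# Back-of-the-envelope arithmetic only; the survey reports percentages of
# companies, reinterpreted here as per-organization rates for illustration.
daily_alerts = 100_000               # volume now reported by 17% of firms
investigated = daily_alerts * 0.48   # assume ~48% of alerts are investigated
legitimate = investigated * 0.26     # assume ~26% of those are real incidents

print(f"Investigated per day:    {investigated:,.0f}")  # 48,000
print(f"Real incidents per day:  {legitimate:,.0f}")    # 12,480
print(f"False positives per day: {investigated - legitimate:,.0f}")  # 35,520
```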

Perhaps the biggest side-effect of this never-ending alert activity is cyber-security fatigue. Of the companies that report it exists among their ranks, 93 percent receive more than 5,000 security warnings every day.

A sizeable majority (77 percent) of Cisco’s survey respondents expect to implement more automated security solutions to simplify and accelerate their threat response times. No surprise here. These days, they basically have no choice but to automate.

Vigilance pays dividends

The share of organizations that had 100,000 or more records affected by their worst security incident increased to 19 percent this year, up four percentage points from 2019. The study also found that a major breach can impact nine critical areas of a company, including operations and brand reputation, finances, intellectual property, and customer retention.

Three years ago, 26 percent of the respondents said their brand reputation had taken a hit from a security incident; this year, 33 percent said the same. This is why, to help minimize damages and recover fast, it’s key to incorporate crisis communications planning into the company’s broader incident response strategy.

Finally, the share of survey respondents that reported that they voluntarily disclosed a breach last year (61 percent) is the highest in four years. The upshot is that overall, companies are actively reporting breaches. This may be due to new privacy legislation (GDPR and others), or because they want to maintain the trust and confidence of their customers. In all likelihood, it’s both.

In conclusion, the CISO Benchmark report shows a balance of positives and negatives. Organizations are looking to automate security processes to accelerate response times, security leadership is strengthening and setting metrics to improve overall protection, and more breaches are being identified and reported.  But there’s still work to be done to embed security into everything organizations do as they evolve their business.

About the author: Marc Wilczek is Chief Operating Officer at Link11, an IT security provider specializing in DDoS protection, and has more than 20 years of experience within the information and communication technology (ICT) space.

Copyright 2010 Respective Author at Infosec Island
  • March 27th 2020 at 16:14

Cyberattacks a Top Concern for Gov Workers

More than half of city and state employees in the United States are more concerned about cyberattacks than they are of other threats, a new study discovered.

Conducted by The Harris Poll on behalf of IBM, the survey shows that over 50% of city and state employees are more concerned about cyberattacks than natural disasters and terrorist attacks. Moreover, nearly three in four government employees (73% of the respondents) are concerned about impending ransomware threats.

With over 100 cities across the U.S. reported as being hit with ransomware in 2019, the concern is not surprising. However, the survey suggests that ransomware attacks might be even more widespread, as 1 in 6 respondents admitted that their department was impacted.

Alarmingly though, despite the increase in the frequency of these attacks, only 38% of the surveyed government employees said they received general ransomware prevention training, and 52% said that budgets for managing cyberattacks haven’t seen an increase.

“The emerging ransomware epidemic in our cities highlights the need for cities to better prepare for cyber-attacks just as frequently as they prepare for natural disasters,” said Wendi Whitmore, VP of threat intelligence at IBM Security.

While 30% of the respondents believe their employer is not investing enough in prevention, 29% believe their employer is not taking the threat of a cyberattack seriously enough. More than 70% agreed that responses and support for cyberattacks should be on-par with those for natural disasters.

On the other hand, when asked about their ability to overcome cyberattacks, 66% said their employer is prepared, while 74% said they were confident in their own ability to recognize and prevent an attack.

“The data in this new study suggests local and state employees recognize the threat but demonstrate overconfidence in their ability to react to and manage it. Meanwhile, cities and states across the country remain a ripe target for cybercriminals,” Whitmore also said.

The respondents also expressed concerns regarding the impending 2020 election in the U.S., with 63% admitting concern that a cyberattack could disrupt the process.

While half of them say they expect attacks in their community to increase in the following year, six in ten even expect their workplace to be hit. Administrative offices, utilities and the board of elections were considered the most vulnerable.

Employees in education emerged as those less prepared to face a cyberattack, with 44% saying they did not receive basic cyber-security training, and 70% admitting to not receiving adequate training on how to respond to cyberattacks.

The survey was conducted online, from January 16 through February 3, 2020, among 690 employees who work for state or local government organizations in the United States. All respondents were adults over 18, employed full time or part time.

Related: Christmas Ransomware Attack Hit New York Airport Servers

Related: Ransomware Attack Hits Louisiana State Servers

Related: Massachusetts Electric Utility Hit by Ransomware

Copyright 2010 Respective Author at Infosec Island
  • March 3rd 2020 at 14:30

Hackers Target Online Gambling Sites

Threat Actor Targets Gambling and Betting in Southeast Asia

Gambling and betting operations in Southeast Asia have been targeted in a campaign active since May 2019, Trend Micro reports. 

Dubbed DRBControl, the adversary behind the attacks is using a broad range of tools for cyber-espionage purposes, including publicly available and custom utilities that allow it to elevate privileges, move laterally in the compromised environments, and exfiltrate data. 

The intrusion begins with spear-phishing emails carrying malicious Microsoft Word documents, with three different document versions identified: they embed an executable, a BAT file, and PowerShell code, respectively. Two very similar variations of the phishing content were observed.

The first two document versions deliver the same payload to the target system, and the third is believed to lead to the same piece of malware as well.

DRBControl employed two previously unknown backdoors in this campaign, but also used known malware families, such as the PlugX RAT, the Trochilus RAT, and the HyperBro backdoor, along with various custom post-exploitation tools, Trend Micro explains in a detailed report (PDF).

Both backdoors use DLL side-loading through the Microsoft-signed MSMpEng.exe, with the malicious code then injected into the svchost.exe process.

Written in C++, the first of the threat actor’s backdoors can bypass User Account Control (UAC), achieve persistence via a registry key, and send out information such as hostname, computer name, user privileges, Windows version, current time, and a campaign identifier.

A recent version of the malware was observed using Dropbox for command and control (C&C), with multiple repositories employed to store the infected machine’s information, commands and post-exploitation tools, and files exfiltrated from the machine.

The Dropbox-downloaded backdoor has keylogging functions and can receive commands to enumerate drives and files, execute files, move/copy/delete/rename files, upload to Dropbox, execute commands, and run binaries via process hollowing. 

Also written in C++, the second backdoor likewise has UAC bypass and keylogging capabilities. The security researchers discovered an old version of this backdoor being delivered by a Word document from July 2017, suggesting that DRBControl has been active for a long time.

Post-exploitation tools employed by the threat actor include a clipboard stealer, the EarthWorm network traffic tunneling tool, a public IP address retriever, the NBTScan tool for enumerating NetBIOS shares, a brute-force tool, and an elevation-of-privilege tool exploiting CVE-2017-0213. Multiple password dumpers, tools for bypassing UAC, and code loaders were also identified.

The use of the same domain in one of the backdoors, a PlugX sample, and Cobalt Strike allowed the researchers to link DRBControl to all three malware families. Additionally, the researchers identified connections with Winnti (via mutexes, domain names, and issued commands) and Emissary Panda (the HyperBro backdoor appears to be exclusive to Emissary Panda). 

This cyber-espionage campaign was targeted at gambling and betting companies in Southeast Asia, with no attacks in other parts of the world being confirmed to date. 

“The threat actor described here shows solid and quick development capabilities regarding the custom malware used, which appears to be exclusive to them. The campaign exhibits that once an attacker gains a foothold in the targeted entity, the use of public tools can be enough to elevate privileges, perform lateral movements in the network, and exfiltrate data,” Trend Micro concludes. 

Related: New APT10 Activity Detected in Southeast Asia

Copyright 2010 Respective Author at Infosec Island
  • February 20th 2020 at 02:10

When Data Is Currency, Who’s Responsible for Its Security?

In a year that was all about data and privacy, it seems only fitting that we closed out 2019 in the shadow of a jumbo data leak where more than a billion records were found exposed on a single server.

Despite this being one of the largest data exposures from a single source in history, it didn’t cause nearly the public uproar that one might expect from a leak involving personal information such as names, email addresses, phone numbers, LinkedIn and Facebook profiles. Instead, this quickly became yet another case of consumer information being mishandled, impacting many of the same consumers that have been burned several times already by companies they trusted.

What’s different about this leak – and what should have given consumers and businesses alike pause – is the way in which this case highlights a more complex problem with data that exists today.

There’s no question that data is a very valuable asset. Organizations have done a great job figuring out how to capture consumer data over the last decade and are now beginning to use and monetize it. The problem is, that data can also be used in many different ways to inflict serious pain on victims in their personal and business lives. So, when that data goes through someone’s hands (business or individual), how much responsibility do they – and those up the lifecycle chain – have for where it ends up?

Beginning at the consumer level, users can opt out of sharing data and should do so at any chance they get if they are concerned about having their information exposed. The good news is that new regulations like the GDPR and CCPA are making this easier to do retroactively than ever before. The challenge is that the system isn’t perfect. Aliases and other databases can still be difficult to opt out of because although they may have information captured, errors like misspellings can prevent consumers from getting to their own data.

With this particular incident, we also caught a glimpse of the role that data enrichment firms, aggregators and brokers play in security. Although it didn’t come directly from their own servers, the exposed data was likely tied to enrichment firms People Data Labs (PDL) and OxyData. While several data brokers today are taking more responsibility and offering security and privacy education to their customers, it was alarming to see that neither data broker in this case could rule out the possibility that their data was mishandled by a customer. In fact, rather than pushing for a solution, OxyData seemed to shirk responsibility entirely when speaking with WIRED.

Data brokers need to own up to this challenge and look at better screening of their customers to ensure their use of data has valid purposes. A case study by James Pavur, DPhil student at Oxford University, underscored these failings in the system when he used GDPR Subject Access Requests to obtain his data from about 20 companies, many of which didn't ask for sufficient ID before sharing the information. He went on to try to get as much data as possible about his fiancée, finding he could access a range of sensitive data, including everything from addresses and credit card numbers to travel itineraries. None of this should be possible with proper screening in place.

Ultimately, whoever owns the server where the leak originated is the one that will be held legally and fiscally responsible. But should data brokers be emulating the shared responsibility model in use by cloud services like AWS? Either way, by understanding the lifecycle of data and taking additional responsibility upstream, we can begin to cut down on the negative impact when exposures like this inevitably occur.

About the author: Jason Bevis is the vice president of Awake Security Labs at Awake Security. He has extensive experience in professional services, cybersecurity MDR solutions, incident response, risk management and automation products.

Copyright 2010 Respective Author at Infosec Island
  • February 11th 2020 at 19:13

SEC Shares Cybersecurity and Resiliency Observations

The U.S. Securities and Exchange Commission (SEC) this week published a report detailing cybersecurity and operational resiliency practices that market participants have adopted. 

The 10-page document (PDF) contains observations from the SEC's Office of Compliance Inspections and Examinations (OCIE) that are designed to help other organizations improve their cybersecurity stance.

OCIE examines SEC-registered organizations such as investment advisers, investment companies, broker-dealers, self-regulatory organizations, clearing agencies, transfer agents, and others.

Through its reviews, OCIE has observed approaches that some organizations have taken in areas such as governance and risk management, access rights and controls, data loss prevention, mobile security, incident response and resiliency, vendor management, and training and awareness. 

Observed risk management and governance measures include senior level engagement, risk assessment, testing and monitoring, continuous evaluation and adapting to changes, and communication. Practices observed in the area of vendor management include establishing a program, understanding vendor relationships, and monitoring and testing. 

Strategies related to access rights and controls that were observed include access management and access monitoring. Utilized data loss prevention measures include vulnerability scanning, perimeter security, patch management, encryption and network segmentation, and insider threat monitoring, among others. 

In terms of mobile security, organizations adopted mobile device management (MDM) applications or similar technology, implemented security measures, and trained employees. Strategies for incident response include inventorying core business operations and systems, and assessing risk and prioritizing business operations.

By sharing these observations, the SEC hopes to prompt organizations to review their practices, policies and procedures and assess their level of preparedness.

The presented measures should help any organization become more secure, OCIE says, while noting that “there is no such thing as a ‘one-size fits all’ approach.” In fact, it also points out that not all of these practices may be appropriate for all organizations.

“Through risk-targeted examinations in all five examination program areas, OCIE has observed a number of practices used to manage and combat cyber risk and to build operational resiliency. We felt it was critical to share these observations in order to allow organizations the opportunity to reflect on their own cybersecurity practices,” Peter Driscoll, Director of OCIE, said. 

Related: Cyber Best Practices Requires a Security-First Approach

Related: Best Practices for Evaluating and Vetting Third Parties

Related: Perception vs. Reality in Federal Government Security Practices

Copyright 2010 Respective Author at Infosec Island
  • January 30th 2020 at 20:09

What Does Being Data-Centric Actually Look Like?

“Data-centric” can sometimes feel like a meaningless buzzword. While many companies are vocal about the benefits of this approach, in reality the term is not widely understood.

One source of confusion is that many companies have implemented an older approach – that of being “data-driven” – and just called this something else. Being data-centric is not the same as being data-driven. And, being data-centric brings new security challenges that must be taken into consideration. 

A good way of defining the difference is to talk about culture. In Creating a Data-Driven Organization, Carl Anderson starts off by saying, “Data-drivenness is about building tools, abilities, and, most crucially, a culture that acts on data.” In short, being data-driven is about acquiring and analyzing data to make better decisions.

Data-centric approaches build on this but change the managerial hierarchy that informs it. Instead of data teams collecting data, management teams making reports about it, and then CMOs taking decisions, data centrism aims to give everyone (or almost everyone) direct access to the data that drives your business. In short, creating a data-driven culture is no longer enough: instead, you should aim to make data the core of your business by ensuring that everyone is working with it directly.

This is a fairly high-level definition of the term, but it has practical implications. Implementing a data-centric approach includes the following processes.

1. Re-Think Your Organizational Structure

Perhaps the most fundamental aspect of data-centric approaches is that they rely on innovative (and sometimes radical) management structures. As Adam Chicktong put it a few years ago, these structures are built around an inversion of traditional hierarchies: instead of decisions flowing from executives through middle management to data staff, in data-centric approaches everyone’s “job is to empower their team to do their job and better their career”.

This has many advantages. In a recent CMO article, Maile Carnegie talked about the ‘frozen middle’, where middle management is inherently structured to resist change. By looking closely at your hierarchy and identifying departments and positions likely to resist change, you’ll be able to streamline the structure to allow transformation to more easily filter through the business. As she puts it, “Increasingly, most businesses are getting to a point where there are people in their organization who are no longer experts in a craft, and who have graduated from doing to managing and basically bossing other people around and shuffling PowerPoints.”

2. Empowering the Right People

Once these novel managerial structures are in place, the focus must necessarily shift toward empowering, rather than managing, staff. Effectively employing a data-centric approach means giving the right people access to the data that underpins your business, but also allowing them to affect the types of data you are collecting. 

Let’s take access first. At the moment, many businesses (and even many of those that claim to be data-driven) employ extremely long communicative chains to work with the data they collect. IT staff report their findings, ultimately, to the executive level, who then disseminate this to marketing, PR, risk and HR departments. One of the major advantages of new data infrastructures, and indeed one of the major advantages of cloud storage, is that you can grant these groups direct access to your cloud storage solution. 

This not only cuts down the time it takes for data to flow to the "correct" teams, making your business more efficient. If implemented skillfully, it can also be a powerful way of eliciting input from them on what kinds of data you should be collecting. Most businesses would agree, I think, that executives don't always have a granular appreciation for the kind of data that their teams need. Empowering these teams to drive novel forms of data collection short-circuits these problems by encouraging direct input into data structures.

3. Process Not Event

Third, transitioning to a data-centric approach entails more than a change in managerial structure, responsibility, and security. At the broadest level, it requires a change in the way that businesses think about development.

Nowadays, running an online business is not as simple as identifying a target audience, creating a website, and waiting to see if it is effective. Instead, the dissolution of the previously rigid divide between executive, marketing, and data teams means that every business decision should be seen as a process, not an event.

4. Security and Responsibility

Ultimately, it should also be noted that changing your managerial structure in this way, and empowering teams to take control of your data collection processes, also raises significant problems when it comes to security.

At a basic level, it’s clear that dramatically increasing the number of people with access to data systems simultaneously makes these systems less secure. For that reason, implementing a data-centric approach must also include the implementation of extra security measures and tools. 

These include managerial systems to ensure responsible data retention, but also training for staff who have not worked with data before, and who may not know how to take basic security steps like using secure browsers and connecting to the company network through a VPN when using public WiFi. On the other hand, data centrism can bring huge benefits to the overall security of organizations. 

Alongside the approach’s contribution to marketing and operational processes, data-centric security is also now a field of active research. In addition, the capability to share emerging threats with almost everyone in your organization greatly increases the efficacy of your cybersecurity team.

Data-centric approaches are a powerful way of increasing the adaptability and profitability of your business, but you should also note that becoming truly data-centric involves quite radical changes in the way that your business is organized. Done correctly, however, this transition can offer huge advantages for almost any business.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

Copyright 2010 Respective Author at Infosec Island
  • January 17th 2020 at 15:46

The Big 3: Top Domain-Based Attack Tactics Threatening Organizations

Nowadays, businesses across all industries are turning to owned websites and domains to grow their brand awareness and sell products and services. With this dominance in the e-commerce space, securing owned domains and removing malicious or spoofed domains is vital to protecting consumers and businesses alike. This is especially important because domain impersonation is an increasingly popular tactic among cybercriminals. One example of this is ‘look-alike’ URLs that trick customers by mimicking brands through common misspellings, typosquatting and homoglyphs. With brand reputation and customer security on the line, investing in domain protection should be a top priority for all organizations.

Domain-based attacks are so popular, simply because of how lucrative they can be. As mentioned above, attackers often buy ‘look-alike’ domains in order to impersonate a specific brand online. To do this, bad actors can take three main approaches: copycatting, piggybacking and homoglyphs/typosquatting. From mirroring legitimate sites to relying on slight variations that trick an untrained eye, it’s important to understand these top tactics cybercriminals use so you can defend your brand and protect customers. Let’s explore each in more detail.

1. Copycatting Domains

One tactic used by bad actors is to create a site that directly mirrors the legitimate webpage. Cybercriminals do so by registering the brand’s name under a top-level domain (TLD) that the real domain isn’t using, or by appending multiple TLDs to a domain name. With these types of attacks, users are more likely to be tricked into believing they are interacting with the legitimate organization online. This simplifies the bad actor’s journey as the website appears to be legitimate, and will be more successful than an attack using a generic, throwaway domain. To amplify these efforts, bad actors will also use text and visuals that customers would expect to see on a legitimate site, such as the logo, brand name, and products. This sense of familiarity and trust puts potential victims at ease and makes them less aware of the copycat’s red flags.

2. Piggybacking Name Recognition

Another approach attackers utilize is spoofed or look-alike domains that help them appear credible by piggybacking off the name recognition of established brands. These domains may be either parked or serving live content to potential victims. Parked domains are commonly leveraged to generate ad revenue, but can also be used to rapidly serve malicious content. They are also often used to distribute other brand-damaging content, like counterfeit goods.

3. Tricking Victims with Homoglyphs and Typosquatting

This last tactic has two main methods -- typosquatting and homoglyphs -- and tricks unsuspecting internet users where they are unlikely to look, or to notice they are being spoofed.

  • Typosquatting involves the use of common URL misspellings that either a user is likely to make of their own accord or that users may not notice at all, e.g., adding a letter to the organization’s name. If an organization has not registered domains that are close to their legitimate domain name, attackers will often purchase them to take advantage of typos. Attackers may also infringe upon trademarks by using legitimate graphics or other intellectual property to make malicious websites appear legitimate.
  • With homoglyphs, the basic principles of domain spoofing remain the same, but an attacker may substitute a look-alike character from an alphabet other than the Latin alphabet -- e.g., the Cyrillic “а” for the Latin “a.” Although these letters look identical, their Unicode values are different and, as such, they will be processed differently by the browser. With over 100,000 Unicode characters in existence, bad actors have an enormous opportunity. Another benefit of this type of attack is that such domains can fool traditional string matching and anti-abuse algorithms (see the detection sketch after this list).
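
To make the homoglyph problem concrete, here is a minimal Python sketch of how a defender might flag look-alike registrations of a brand domain. The brand name is a hypothetical placeholder, and real monitoring tools (dnstwist, for instance) generate and test far more permutations; this simply shows the mixed-script and normalization checks at the core of such tooling.

```python
import unicodedata

BRAND = "example.com"  # hypothetical brand domain to protect

def skeleton(label: str) -> str:
    """Fold a label to a comparable form: NFKD-normalize and drop
    combining marks, so accented look-alikes such as 'é' reduce to 'e'."""
    decomposed = unicodedata.normalize("NFKD", label)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def scripts_used(label: str) -> set:
    """The Unicode script prefix (LATIN, CYRILLIC, ...) of each letter."""
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

def is_homoglyph_suspect(candidate: str, brand: str = BRAND) -> bool:
    mixed_script = len(scripts_used(candidate)) > 1      # e.g. Latin + Cyrillic
    folds_to_brand = candidate != brand and skeleton(candidate) == skeleton(brand)
    return mixed_script or folds_to_brand

if __name__ == "__main__":
    # The second domain uses the Cyrillic 'а' (U+0430), not the Latin 'a'.
    for domain in ["example.com", "ex\u0430mple.com", "éxample.com"]:
        print(domain, "->", "SUSPECT" if is_homoglyph_suspect(domain) else "ok")
```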

Why domain protection is necessary

Websites are a brand’s mainstay in the digital age, as they are often the first point of engagement between a consumer, partner or prospective employee and your organization. Cyberattackers see this as an opportunity to capitalize on that interaction. If businesses don’t take this problem seriously, their brand image, customer loyalty and, ultimately, financial results will be at risk.

While many organizations monitor domains related to their brand in order to ensure that their brand is represented in the way it is intended, this is challenging for larger organizations composed of many subsidiary brands. Since these types of attacks are so common and the attack surface is so large, organizations tend to feel inundated with alerts and incidents. As such, it is crucial that organizations proactively and constantly monitor for domains that may be pirating their brand, products, trademarks or other intellectual property.

About the author: Zack Allen is both a security researcher and the director of threat intelligence at ZeroFOX. Previously, he worked in threat research for the US Air Force and Fastly.

Copyright 2010 Respective Author at Infosec Island
  • January 17th 2020 at 15:37

Security Compass Receives Funding for Product Development and Expansion

Toronto, Canada-based Security Compass has received additional funding from growth equity investment firm FTV Capital. The amount has not been disclosed, indicating that it is likely to be on the smaller side.  

According to the security firm, the purpose of the cash injection is to allow it to enhance its product portfolio and accelerate a planned global expansion.  

The company was founded by Nish Bhalla in 2005. Former COO Rohit Sethi becomes the new CEO. Bhalla remains on the Board, and is joined by Liron Gitig and Richard Liu from FTV Capital.  

Long-serving Sethi was Security Compass' first hire, and was an integral part of the creation of the company's SD Elements platform -- now the focus of the firm's operations. SD Elements helps customers put the Sec into DevOps without losing DevOps' development agility.

"The strong trends towards agile development in DevOps," he says, "increased focus on application security and on improving risk management are on course for collision. Security Compass is uniquely positioned to help organizations address the inherent conflicts. With FTV's investment, we're poised to accelerate our growth while maintaining the culture of excellence we've worked so hard to build."  

The worldwide growth in security and privacy regulations, such as GLBA, FedRAMP, GDPR, CCPA and many others, requires that security is built into the whole product development lifecycle. "Security Compass' SD Elements solution," says FTV Capital partner Gitig, "is uniquely focused on the software stack, enabling DevOps at scale by helping enterprises develop secure, compliant code from the start."  

He continued, "SD Elements provides both engineering and non-engineering teams with a holistic solution for managing software security requirements in an efficient and reliable manner, alleviating meaningful friction in the software development life cycle, accelerating release cycles and improving business results. We are excited to work with the Security Compass management team in its next phase of global growth as a trusted information security partner."  

Security Compass claims more than 200 enterprise customers in banks, federal government and critical industries use its solutions to manage the risk of tens of thousands of applications.  

Related: Chef Launches New Version for DevSecOps Automated Compliance

Related: ChatOps is Your Bridge to a True DevSecOps Environment

Related: Shifting to DevSecOps Is as Much About Culture as Technology and Methodology

Copyright 2010 Respective Author at Infosec Island
  • January 17th 2020 at 14:39

Password Shaming Isn’t Productive – Passwords Are Scary Business

We’ve all been in the situation of trying to set a new password – you need one uppercase character, one number and one character from a special list. Whatever password we come up with needs to be between 8 and 24 characters long. Once created, we need to remember that password, and heaven help us should we need to reset it. Yes, that’s the dreaded “you can’t reuse the last five passwords” message – but IT security requires the password to be changed every month. If you’ve lived in the corporate world, this experience is quite familiar. So too is this a common experience with most web properties.

Then along comes the dreaded “your account was part of a set of accounts which may have been breached” letter. As a consumer, you’re now left with some anxiety over what data might be in the hands of proverbial “bad guys”. Part of the anxiety comes from the prospect that these same bad guys might also now know your password, so you need to change it. If you’re like many people, that password likely was used in many places so the anxiety increases as you recall each of the websites you now need to update your password on – just to be safe.

Into this mess come security pundits suggesting that multiple security factors are the solution. The net result is that not only do users need to remember their password, but they also need to enter a second code – often a set of numbers – in order to access their account. While password complexity, password expiration, and multi-factor authentication rules can each deter attempts to compromise an account, they do nothing to simplify the experience, and when it comes to consumer-grade devices or consumer websites, simplification is what we should be striving for.

Consider the current situation with Ring customers. It’s being reported that some users of Ring video devices are experiencing random voices speaking through them; some users have even reported threats made against them. These users are rightfully concerned for their safety, but some observers have been quick to lay the blame for the situation at the feet of the user. When someone states that “you should have a more secure password” or “you should enable 2FA”, those statements are fundamentally a form of victim shaming. The end user likely isn’t a security expert, but an expectation is being set that they should know how best to secure these devices.

The current situation with Ring devices isn’t new. We need only look back to September of 2016, when the US saw a major internet outage caused by an attack on the DNS infrastructure. This attack originated from a large quantity of DVRs, webcams and other consumer-grade devices which weren’t properly password protected. At the time, there were similar cries that ‘password123’ wasn’t an effective password and users shouldn’t use it. This situation even prompted major service providers like GitHub to advise their customers to change their passwords – not because the users’ data had been part of a breach, but because the passwords had themselves been part of a set of data sold on the black market.

These examples highlight a key challenge with product security – how to properly prevent unauthorized access while maintaining ease of use. This goal can’t be met if we shame users based on their security choices. Instead, product designers should look at ways to use context to best secure systems. In the case of a video camera, access to the camera in all forms should be from approved devices. For example, if a user configured the camera from an Android phone, then that device is by definition an approved device to access the camera. Since the phone can’t be in two places at the same time, if the app is running on the phone, then there is only one possible way to access the camera until the user authorizes additional devices from within the app. This example doesn’t rely on password complexity to secure the camera, but rather uses user context as part of the overall system security, where passwords are but one component. The net result is that while a simple password may not be advised from a security pundit’s perspective, the contextual information helps ensure that users don’t harm themselves. With the complexity of consumer devices only increasing, contextual security should be a priority for all – a situation which would avoid password shaming.
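
As a rough illustration of that design, the Python sketch below models device-bound access for a camera. Everything here is hypothetical – the identifiers stand in for whatever device attestation a real product would use – but it shows how enrollment, not password strength, becomes the gate.

```python
# Devices enrolled per camera; seeded with the phone used at initial setup.
# IDs are hypothetical placeholders for real device attestation.
approved_devices = {"camera-001": {"android-phone-f3a9"}}

def can_access(camera_id, device_id):
    """Access requires an enrolled device, not merely a correct password."""
    return device_id in approved_devices.get(camera_id, set())

def authorize_device(camera_id, requesting_id, new_id):
    """Only an already-approved device may enroll another one, mirroring
    the 'authorize additional devices from within the app' flow above."""
    if not can_access(camera_id, requesting_id):
        return False
    approved_devices.setdefault(camera_id, set()).add(new_id)
    return True

# A stolen password alone is useless from an unknown device:
assert not can_access("camera-001", "attacker-laptop")
assert authorize_device("camera-001", "android-phone-f3a9", "tablet-22c1")
```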

About the author: Tim Mackey is Principal Security Strategist, CyRC, at Synopsys. Within this role, he engages with various technical communities to understand how to best solve application security problems.

Copyright 2010 Respective Author at Infosec Island
  • January 15th 2020 at 20:25

Five Key Cyber-Attack Trends for This Year

‘It’s not if, but when’ is a long-established trope in the world of cybersecurity, warning organizations that no matter how robust their defenses, nor how sophisticated their security processes, they cannot afford to be complacent.

In 2020, little has changed – and yet everything has changed. The potential scale and scope of distributed denial of service (DDoS) attacks is far greater than it has ever been. Attackers can call on massive botnets to launch attacks, thanks to the ongoing rapid growth in cloud usage and the expansion of the IoT, which has yielded more devices and resources that can be exploited. Furthermore, the vulnerabilities that these botnets target are challenging to protect against using standard network security solutions.

So what attack types will we see during this year? Here are five key trends that I expect to see developing over the coming months.

Attacks will reach unprecedented scale

According to the Department of Homeland Security (DHS), the scale of DDoS attacks has increased tenfold over the last five years. The DHS has also stated that if this trend continues, it is not certain whether corporate and critical national infrastructures will be able to keep up.

A perfect storm of factors is feeding into the growth in DDoS scale. Criminals are hijacking cloud resources, or simply renting public cloud capacity using stolen card details to massively amplify their attacks.  At the same time, the explosion in IoT devices gives criminals more potential recruits as soldiers for their botnet armies.  As a result, the gap between an organization’s available bandwidth on its internet connection and the size of an average DDoS attack is widening.  Even the biggest security appliances currently available cannot compete with attack volumes that in many cases are over 50 times greater than the capacity of an organization’s internet connection.

Game-changing industrialized attacks

Furthermore, DDoS attacks are no longer the realm of digital vandalism, launched primarily by individuals interested in testing their own capabilities or causing a nuisance. The underground economy is booming, with new marketplaces for cybercrime tools and techniques being introduced all the time. There is a clear recognition amongst bad actors that cyberattacks, including DDoS attacks, can be enormously profitable – whether for criminal or even political purposes.  Criminals are monetizing their investments in creating massive botnets by offering DDoS-for-hire services to anyone that wants to launch an attack, for just a few dollars per minute. 

And on the subject of politics, with a US presidential election coming up in 2020, and following recent destabilizing events in the Middle East, the potential for a major politically-motivated cyberattack is higher than ever. It would not be the first such attack – Estonia fell victim to a country-wide DDoS attack over a decade ago – but the blackout-level potential of today’s attacks is far greater. Simultaneously, it is becoming ever easier to obfuscate the true source of an attack, making definite attack attribution very difficult. From a political perspective, the ability to ‘frame’ an enemy for a large-scale attack has obvious, and worrying consequences.

Power infrastructures under targeted attack

On a related point, targeting industrial controls has become an increasing focus for nation-state attacks. The US power grid and power infrastructure in Ukraine are both known to have been targeted by state-sponsored Russian hackers.

As more industrial systems are exposed to the public internet, a targeted DDoS attack against these could easily cause outages that interrupt critical power, gas or water supplies (think industry 4.0). And at the other end of the supply chain, Trend Micro’s recent Internet of Things in the Cybercrime Underground report described how hackers are sharing information on how to hack Internet-connected gas pumps and related devices often found in industrial applications. These devices could either be flooded to cause a wide-ranging blackout, or infected and recruited into botnets for use in DDoS attacks, or to manipulate industrial processes. 

APIs are the weakest link

However, DDoS attacks are no longer limited to merely attacking or exploiting organizations’ infrastructure. In 2020, I expect attacks against APIs to move into the spotlight. As we know, more and more organizations are moving workloads into the cloud, and this means that APIs are increasing in volume.

Every single smart device within an IoT ecosystem, for example, is ultimately interacting with an API. Far less bandwidth is needed to attack APIs, and they can rapidly become hugely disruptive bottlenecks. Unlike a traditional DDoS attack, which bombards a website or network with bogus traffic so that infrastructure grinds to a halt, an API DDoS attack focuses on specific API requests which generate so much legitimate internal traffic that the system is effectively attacking itself – rather like a massive allergic reaction. Many cloud-based organizations are vulnerable to this, and APIs are harder to protect using conventional methods. So I expect attackers to increasingly exploit this vulnerable spot in organizations’ defensive armor.
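
One common mitigation is per-caller, per-endpoint rate limiting, so that no single client can trigger that internal amplification. The token-bucket sketch below is a minimal illustration in Python; in practice this logic belongs in an API gateway or WAF tier rather than application code, and the client/endpoint key shown is an assumption.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal per-key token bucket: `rate` tokens/second, up to `burst`."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.state = defaultdict(lambda: (self.burst, time.monotonic()))

    def allow(self, key):
        tokens, last = self.state[key]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        allowed = tokens >= 1
        self.state[key] = (tokens - 1 if allowed else tokens, now)
        return allowed

# Cap each caller to 5 calls/sec (burst of 10) on an expensive endpoint.
limiter = TokenBucket(rate=5, burst=10)
if not limiter.allow("client-42:/v1/reports"):
    pass  # respond with HTTP 429 Too Many Requests
```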

The cloud is not a safe haven

There is an assumption in the market that migrating workloads to public cloud providers automatically makes businesses better off – and in many ways of course, this is true. Flexibility, scalability, agility, cost-effectiveness – there are myriad business benefits to be gleaned from the cloud. Yet the assumption that the major providers automatically offer attack-proof security is an illusion. In October 2019, AWS was taken offline for eight hours, demonstrating that even the biggest public cloud providers are vulnerable to DDoS attacks, with hugely disruptive potential knock-on effects for their customers. Some studies estimate that knocking out a single cloud provider could cause $50 billion to $120 billion in economic damage – on a par with the aftermath of Hurricane Katrina or Hurricane Sandy.

In conclusion, these points may paint a bleak picture for 2020. But companies that adopt the mindset of ‘not if, but when’ will be well positioned to counter the escalating threats.  Using solutions which are capable of fending off high-volume DDoS attacks as well as resource-intensive exploits on protocols and application levels, organizations can stay a step ahead of threat actors, and avoid becoming their next victim.

About the author: Marc Wilczek is Chief Operating Officer at Link11, an IT security provider specializing in DDoS protection.

Copyright 2010 Respective Author at Infosec Island
  • January 14th 2020 at 13:21

20/20 Vision on 2020's Network Security Challenges

As the new year starts, it’s natural to think about the network security challenges and opportunities that organizations are likely to face over the next 12 months – and how they will address them. Of course, we are likely to see brand-new threats emerging and unpredictable events unfolding. But here are four key security challenges that I believe will be at the top of enterprise agendas this year.

Managing misconfigurations

The first challenge that organizations will address is data and security breaches due to misconfigurations. These have been a constant problem for enterprises for decades, with the most recent example being the large-scale incident which impacted Capital One in 2019. These are usually caused by simple human error, leaving a security gap that is exploited by actors from outside the organization. Unfortunately, humans are not getting any more efficient in avoiding mistakes, so breaches due to misconfigurations will continue to be a problem that needs to be fixed.

At the same time, the technology environment that the network security staff is working within is getting ever more complex. There are more network points to secure – both on-premise and in public or private clouds – and therefore a much larger attack surface. The situation is getting worse – as highlighted in our 2019 cloud security survey, which showed that two thirds of respondents use multiple clouds, with 35% using three or more cloud vendors, and over half operating hybrid environments. The only solution to this growing complexity is network security automation. Humans need tools to help them set and manage network configurations more accurately and more efficiently, so the demand for security automation is only going to increase.

Compliance complexity

Achieving and maintaining regulatory compliance has long been a major challenge for networking staff, and as networks become more complex it is only getting harder. In recent years, we have seen a raft of new compliance frameworks introduced across multiple verticals and geographical regions. Regulators worldwide are flexing their muscles.

The crucial point to understand is that new regulations typically don’t replace existing regimes – rather, they add to what is already in place. The list of regulatory demands facing organizations is getting longer and achieving and demonstrating compliance is becoming an ever-larger commitment for organizations.  Once again, the only solution is more automation: Being in “continuous compliance”, with automatic creation of audit-ready reports for all the relevant regulations, delivers both the time and resource savings that organizations need in order to meet their compliance demands.

The turn to intent-based network security

What do I mean by intent-based network security? It is ultimately about asking a simple question – why is this security control configured the way it is?

Understanding the intent behind individual network security rules is crucial for a wide range of network maintenance and management tasks, from responding to data breaches to undertaking network cleanups, from working through vulnerability reports to dealing with planned or unplanned downtime. In every scenario, you need to understand why the security setting is the way it is, and who to notify if something has gone wrong or if you want to amend or remove the rule.

And the answer is always that a particular business application needed connectivity from point A to point B. The organization “just” needs to find out which application that was – and that’s 95% of the intent.

The trouble is that organizations are usually not diligent enough about recording this intent.  The result is a huge number of undocumented rules whose intent is unclear. In other words, organizations are in a ‘brownfield’ situation; they have too many rules, and not enough information about their intent.

So, I believe that this year, we will see more and more deployment of technologies that allow a retrospective understanding of the intent behind security rules, all based on the traffic observed on the network. By listening to this traffic and applying algorithms, these new technologies can reverse-engineer and ultimately identify, and document, the original intent.
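
A highly simplified version of that idea: aggregate observed flows into (source, destination, port) groups and emit each recurring group as a candidate rule whose business owner still needs to be assigned. The flow-record format below is a hypothetical stand-in for whatever a real traffic collector produces.

```python
from collections import Counter

# Hypothetical flow records: (source_subnet, destination, port).
flows = [
    ("10.1.0.0/24", "10.9.0.5", 1433),   # app servers -> database
    ("10.1.0.0/24", "10.9.0.5", 1433),
    ("10.2.0.0/24", "10.9.0.9", 443),    # clients -> internal API
]

def candidate_rules(flows, min_hits=2):
    """Repeated flows presumably serve a business application, so surface
    them as intent records awaiting an owner; rare flows may be noise."""
    for (src, dst, port), hits in Counter(flows).items():
        if hits >= min_hits:
            yield {"source": src, "destination": dst, "port": port,
                   "hits": hits, "owner": "UNKNOWN - assign via review"}

for rule in candidate_rules(flows):
    print(rule)
```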

Embracing automation

Public cloud vendors are providing more and more security features and controls, and this trend looks set to continue, with more security controls becoming available as part of their core offerings. This is a good thing. The more controls available, the more secure organizations can be – if they take advantage of the additional capabilities.

But this doesn’t mean less work for IT and security teams. They need to take ownership of these new capabilities, and to configure and manage them properly – and this takes us straight back to the misconfiguration issue I outlined earlier.

In conclusion, if I were to distil my predictions for network security this year into a single point, it would be the need to embrace more automation across all security and compliance-related processes. This is at the core of enabling organizations to manage the ever-growing complexity of their networks and to respond to the constantly evolving threat landscape.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Copyright 2010 Respective Author at Infosec Island
  • January 13th 2020 at 18:20

Is Cybersecurity Getting Too Complex?

Weighing SMB Security Woes Against the Managed Security Promise

Looking strictly at the numbers, it appears small to mid-sized businesses (SMBs) are sinking under the weight of their own IT complexity. To be more efficient and competitive, SMBs are reaching for the same IT solutions that large enterprises consume: hybrid/multi-cloud solutions (61% have a multi-cloud strategy, with 35% claiming hybrid cloud use), remote work tools, and a dizzying array of platforms. But unlike the large enterprise, SMBs often have fewer dedicated information security staff to manage the increasing attack surface these systems create. As if to prove the point, attacks on the SMB are escalating: 66% experienced a cyberattack in the past year, with average incident costs on the rise. In a world where smaller business data is as monetizable as that of the large enterprise, it’s not surprising that bad actors target organizations they may reasonably assume have weaker defenses.

I think it’s safe to say the SMB is keeping pace with their larger brethren in terms of IT complexity (if not scale) but falling short in terms of the methods to keep a handle on it—and they appear to be suffering the consequences.

Are Managed Security Solutions the Answer?

While it appears many SMBs could use a lifeline, the extent to which managed security services (MSS) are that holistic answer requires a deeper analysis of the organization’s unique strengths and weaknesses. Cyber risk is not a simple problem, and solutions are not “one-size-fits-all.” On the plus side, MSS offers companies the ability to quickly augment internal capabilities with a high degree of specialized expertise, tools, and solutions they may lack, without having to take on the daily maintenance, hire from a competitive labor pool, or burden existing staff. By outsourcing these capabilities, companies can leverage teams that are highly specialized in security, enabling them to improve their security defenses in key areas at a lower overall cost as measured against the CapEx, OpEx, and time requirements of standing up the same capabilities internally. Any measure of relative costs must also include the value of mitigating cyber risk – such risks, if capitalized upon by malicious actors, carry significant costs of their own.

However, there is a wide range of managed security services out there—and most providers would happily sell them all to every prospective customer. The burden is on the SMB to fully understand whether and in what areas they need that extra support to supplement the tools, people, processes, and capabilities they already have.

Managed Security Services: Assessing for Optimal Value

Most organizations have made investments in information security tools and resources. A few outperformers (usually large enterprises) may already be at best-practice security in many areas, with dedicated staff, their own Security Operations Center and endpoint detection and response capabilities. Such enterprises may have little need to outsource security functions. Others may focus little on security and require across-the-board help. Most organizations will be somewhere in the middle. Ultimately, the goal should be to maximize the use of the investments already made and augment staff with MSS only where you can get the most strategic value for the expenditure.

To begin, organizations should consider executing a security risk assessment—preferably against a security framework such as the NIST Cybersecurity Framework (CSF) or other, potentially required industry-specific framework (HITRUST would be an example in the healthcare sector). These can be conducted in house or via third-party assessment firms. The output should enable the organization to take an in-depth look at their people, processes, and technology and get a realistic view of where their gaps lie. This up-front work should help isolate areas where MSS would be of great value; and it may identify areas where a few investments may be enough to build internal capabilities sufficiently to manage in house. 

At the end of the day, businesses must ensure they have enough resources to do everything from basic blocking and tackling on security – such as log monitoring, patching, and sorting through alerts (routine, repetitive, time-consuming tasks) – to incident readiness and response, and security for endpoints, cloud, and Software as a Service (SaaS), among others. Because the SMB environment is indeed getting vastly more complex and difficult to defend, gaps across this span of specialized security requirements often lie in obvious pockets of both tools and people, pointing directly to where MSS can provide a lifeline.

Managed Security Services for the SMB: The Net-Net

There is no across-the-board answer for whether MSS is right for every SMB and which services offer the most value. Yet applied strategically, MSS can greatly help SMBs bridge the divide between their growing complexity (and associated security vulnerabilities) and that elusive utopia called “Best-Practice Security.” MSS providers do nothing but security and can help address the cybersecurity skills shortage. But to find the right services that complement specific resource gaps, enterprises should first fully assess their current security state to find out where MSS will add the most value.

About the author: Sam Rubin is a Vice President at The Crypsis Group, where he leads the firm’s Managed Security Services business, assists clients, and develops the firm’s business expansion strategies.

Copyright 2010 Respective Author at Infosec Island
  • January 13th 2020 at 18:14

Global Security Threats Organizations Must Prepare for in 2020

As we kick off a new decade, it's time, once again, to gaze into our crystal ball and look at the year ahead.

In 2020, businesses of all sizes must prepare for the unknown, so they have the flexibility to withstand unexpected and high impact security events. To take advantage of emerging trends in both technology and cyberspace, businesses need to manage risks in ways beyond those traditionally handled by the information security function, since new attacks will most certainly impact both shareholder value and business reputation.

After reviewing the current threat landscape, there are three dominant security threats that businesses need to prepare for in 2020. These include, but are not limited to:

  • The Race for Technology Dominance 
  • Third Parties, the Internet of Things (IoT) and the Cloud 
  • Cybercrime – Criminals, Nation States and the Insider

An overview for each of these areas can be found below:

The Race for Technology Dominance 

Technology has changed the world in which we live. Old norms are changing, and the next industrial revolution will be entirely technology driven and technology dependent. In short, technology will enable innovative digital business models and society will be critically dependent on technology to function. Intellectual property will be targeted as the battle for dominance rages. 

Evidence of fracturing geopolitical relationships started to emerge in 2018, demonstrated by the US–China trade war and Brexit. In 2020, the US and China will increase restrictions and protectionist measures in pursuit of technology leadership, leading to a heightened digital cold war in which data is the prize. This race to develop strategically important next-generation technology will drive an intense nation-state-backed increase in espionage. The ensuing knee-jerk reaction of a global retreat into protectionism, increased trade tariffs and embargos will dramatically reduce the opportunity to collaborate on the development of new technologies. The UK’s exclusion from the EU Galileo satellite system, as a result of the anticipated Brexit, is one example.

New regulations and international agreements will not be able to fully address the issues powered by advances in technology and their impact on society. Regulatory tit-for-tat battles will manifest across nation states and, rather than encouraging innovation, are likely to stifle and constrain new developments, pushing up costs and increasing the complexity of trade for multinational businesses.

Third Parties, the IoT and the Cloud 

A complex interconnection of digitally connected devices and superfast networks will prove to be a security concern as modern life becomes entirely dependent on technology. Highly sophisticated and extended supply chains present new risks to corporate data as it is necessarily shared with third party providers. IoT devices are often part of a wider implementation that is key to the overall functionality.

Few devices exist in isolation, and it is the internet component of the IoT that reflects that dependency. For a home or commercial office to be truly 'smart', multiple devices need to work in cooperation. For a factory to be 'smart', multiple devices need to operate and function as an intelligent whole. However, this interconnectivity presents several security challenges, not least in the overlap of consumer and operational/industrial technology.

Finally, so much of our critical data is now held in the cloud that it presents an opportunity for cybercriminals and nation states to sabotage the cloud, aiming to disrupt economies and take down critical infrastructure through physical attacks and by exploiting operating vulnerabilities across the supply chain. 

Cybercrime – Criminals, Nation States and the Insider

Criminal organizations have a massive resource pool available to them and there is evidence that nation states are outsourcing as a means of establishing deniability. Nation states have fought for supremacy throughout history, and more recently, this has involved targeted espionage on nuclear, space, information and now smart technology. Industrial espionage is not new and commercial organizations developing strategically important technologies will be systematically targeted as national and commercial interests blur. Targeted organizations should expect to see sustained and well-funded attacks involving a range of techniques such as zero-day exploits, DDoS attacks and advanced persistent threats.

Additionally, the insider threat is one of the greatest drivers of security risk that organizations face, as a malicious insider utilizes credentials to gain access to a given organization’s critical assets. Many organizations are challenged to detect internal nefarious acts, often due to limited access controls and a limited ability to detect unusual activity once someone is already inside their network. 

The threat from malicious insider activity is an increasing concern, especially for financial institutions, and will continue to be so in 2020.

Don’t Get Left Behind

Today, the stakes are higher than ever before, and we’re not just talking about personal information and identity theft anymore. High level corporate secrets and critical infrastructure are constantly under attack and organizations need to be aware of the emerging threats that have shifted in the past year, as well as those that they should prepare for in the coming year.

By adopting a realistic, broad-based, collaborative approach to cyber-security and resilience, government departments, regulators, senior business managers and information security professionals will be better able to understand the true nature of cyber-threats and respond quickly and appropriately. This will be of the highest importance in 2020 and beyond.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

 

Copyright 2010 Respective Author at Infosec Island
  • January 8th 2020 at 20:43

Six Reasons for Organizations to Take Control of Their Orphaned Encryption Keys

A close analysis of the cybersecurity attacks of the past shows that, in most cases, the head of the cyber kill chain is formed by some kind of privilege abuse. In fact, Forrester estimates that compromised privileged credentials play a role in at least 80 percent of data breaches. This is the reason privileged access management (PAM) has gained so much attention over the past few years. With securing and managing access to business-critical systems at its core, PAM aims to provide enterprises with a centralized, automated mechanism to regulate access to superuser accounts. PAM solutions ideally do this by facilitating end-to-end management of the privileged identities that grant access to these accounts.

However, the scope of privileged access security is often misconceived and restricted to securing and managing root account passwords alone. Passwords, beyond a doubt, are noteworthy privileged access credentials. But the constant evolution of technology and the expanding cybersecurity perimeter call for enterprises to take a closer look at the other avenues of privileged access, especially encryption keys – which, despite serving as access credentials for huge volumes of privileged accounts, are often ignored. 

This article focuses on the importance of encryption key management – why enforcing SSH key and SSL certificate management is vital, and how, by doing so, you can effectively bridge the gaps in your enterprise privileged access security strategy. 

1. Uncontrolled numbers of SSH keys trigger trust-based attacks

The average organization houses over 23,000 keys and certificates, many of which grant sweeping access to root accounts, says a Ponemon survey. Also, a recent report on the impact of unsecured digital identities states that 71% of the respondents did not have any idea of the number of keys in their organization or the extent of their access. Without a centralized key management approach, anybody in the network can create or duplicate any number of keys. These keys are often randomly generated as needed and are soon forgotten once the task they are associated with is done. Malicious insiders can take advantage of this massive ocean of orphaned SSH keys to impersonate admins, hide comfortably using encryption, and take complete control of target systems.
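
Even a small script can begin to surface how many keys exist and which accounts they unlock. The sketch below, using only Python's standard library, walks authorized_keys files on a single host and fingerprints each entry the way `ssh-keygen -lf` does; the /home layout and root-level read access are assumptions, and entries prefixed with options would need richer parsing.

```python
import base64, hashlib
from collections import defaultdict
from pathlib import Path

def fingerprint(pubkey_line):
    """SHA256 fingerprint of an OpenSSH public key line, in the same
    SHA256:<base64> form that `ssh-keygen -lf` prints."""
    b64 = pubkey_line.split()[1]  # fields: type, key-blob, [comment]
    digest = hashlib.sha256(base64.b64decode(b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def inventory(root=Path("/home")):
    """Map each key fingerprint to every authorized_keys file it appears
    in; any fingerprint seen more than once is key sprawl to investigate."""
    seen = defaultdict(list)
    for ak in root.glob("*/.ssh/authorized_keys"):
        for line in ak.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            try:
                seen[fingerprint(line)].append(str(ak))
            except (IndexError, ValueError):
                pass  # options-prefixed or malformed entries: parse separately
    return seen

for fp, locations in inventory().items():
    marker = "  <-- shared key" if len(locations) > 1 else ""
    print(f"{fp}: {len(locations)} account(s){marker}")
```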

2. Static keys create permanent backdoors

Enterprises should periodically rotate their SSH keys to avoid privilege abuse, but huge volumes of unmanaged SSH keys make key rotation an intimidating task for IT administrators. Moreover, due to a lack of visibility into which keys can access what, there is widespread apprehension about rotating keys for fear of accidentally blocking access to critical systems. This leads to a surge of static SSH keys, which have the potential to function as permanent backdoors. 

3. Unintentional key duplication increases the chance of privilege abuse

For the sake of efficiency, SSH keys are often duplicated and circulated among various employees in an organization. Such unintended key duplication creates a many-to-many key-user relationship, which greatly increases the possibility of privilege abuse. This also makes remediation a challenge, since administrators have to spend a good amount of time revoking keys to untangle the existing relationships before creating and deploying fresh, dedicated key pairs.

4. Failed SSL certificate renewals hurt your brand's credibility

SSL certificates, unlike keys, have a set expiration date. Failing to renew SSL certificates on time can have serious consequences for website owners as well as end users. Browsers don't trust websites with expired SSL certificates; they display security warnings when end users try to access such sites. A single expired SSL certificate can drive away potential customers in an instant, or worse, expose site visitors to personal data theft.
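
Expiry, at least, is easy to monitor programmatically. Here is a minimal sketch using only the Python standard library; the host list is a placeholder for your own certificate inventory.

    import socket, ssl, time

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' is a text timestamp; the ssl module can parse it.
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    for host in ["www.example.com"]:  # replace with your inventory
        days = days_until_expiry(host)
        if days < 30:
            print("RENEW SOON:", host, "expires in", days, "days")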

5. Improper SSL implementations put businesses at risk

Many businesses rely completely on SSL for internet security, but they often don't realize that merely implementing SSL in their network is not enough to eliminate security threats. SSL certificates need to be thoroughly examined for configuration vulnerabilities after they are installed. When ignored, these vulnerabilities act as security loopholes that cybercriminals exploit to manipulate SSL traffic and launch man-in-the-middle (MITM) attacks.
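
A full configuration audit calls for a dedicated scanner (testssl.sh and the SSL Labs test are common choices), but even a quick check of what a server actually negotiates can catch gross misconfigurations. A small sketch, again with a placeholder hostname:

    import socket, ssl

    HOST = "www.example.com"  # placeholder
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            protocol = tls.version()   # e.g. 'TLSv1.3'
            cipher = tls.cipher()      # (name, protocol version, secret bits)

    print(protocol, cipher)
    # Recent Python/OpenSSL builds refuse pre-TLS-1.2 by default, so this
    # branch mainly matters on older client stacks.
    if protocol in ("TLSv1", "TLSv1.1"):
        print("WARNING:", HOST, "negotiated a deprecated protocol version")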

6. Weak certificate signatures go unheeded

The degree of security provided by any SSL certificate depends on the strength of the hashing algorithm used to sign it. Certificates with weak signature algorithms are vulnerable to collision attacks, which cybercriminals exploit to launch MITM attacks and eavesdrop on communication between users and web servers. Organizations need to isolate certificates that bear weak signatures and replace them with fresh certificates signed with stronger algorithms.
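
Signature algorithms can be audited in the same sweep. This sketch assumes the third-party pyca/cryptography package and a placeholder hostname:

    import ssl
    from cryptography import x509  # third-party: pyca/cryptography

    HOST = "www.example.com"  # placeholder
    pem = ssl.get_server_certificate((HOST, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    algo = cert.signature_hash_algorithm  # None for some schemes, e.g. Ed25519
    name = algo.name if algo else "n/a"
    if name in ("md5", "sha1"):
        print(HOST, "uses a weak signature hash:", name, "- replace the certificate")
    else:
        print(HOST, "signature hash:", name)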

Bridging the gaps in your PAM strategy

All the above scenarios highlight how important it is to widen the scope of your privileged access security strategy beyond password management. Even with a robust password manager in place, cybercriminals have plenty of room to circumvent security controls and gain access to superuser accounts by exploiting unmanaged authentication identities, including SSH keys and SSL certificates. Discovering all the identities capable of granting privileged access and bringing them under one roof is an important step enterprises should take to bridge gaps in their privileged access security strategy. After all, today's unaccounted-for authentication identities could become tomorrow's stolen privileged credentials.

About the author: Shwetha Sankari is an IT security product consultant at ManageEngine. With expertise in content marketing, she spends her time researching the latest trends in the IT security industry and creating informative user education content.

Copyright 2010 Respective Author at Infosec Island
  • December 19th 2019 at 21:32

The Cybersecurity Skills Gap: An Update for 2020

The gap in trained, experienced cybersecurity workers is one of those perennial problems: much ink is spilled every year in assessing the scale of the problem, and what can be done about it. We have recently pointed out, for instance, the importance of stopping attacks before they happen, and the fact that you can’t hire your way out of the skills shortage.

As we move into 2020, it's apparent that despite this focus on the problem, it has not been solved. There is still a huge skills gap when it comes to cybersecurity, and in many ways, it is getting worse. According to Cyber Crime Magazine, there may be as many as 3.5 million unfilled cybersecurity jobs by 2021, and recent high-profile cyber breaches provide further evidence that the problem is already becoming acute.

That said, there are some new trends emerging when it comes to managing this crisis. In this article, we'll take a look at some of the innovative ways that companies are getting around the problem.

The Widening Gap

First, some context. At the most basic level, the skills gap in cybersecurity is the product of a simple fact: there are more cybersecurity positions that need to be filled than there are qualified graduates to fill them. This is despite colleges encouraging students to study cybersecurity, and despite companies encouraging their existing employees to retrain.

Look a little deeper, however, and some other reasons for the shortage become apparent. One is that a worrying number of qualified professionals are leaving the cybersecurity sector. At cybersecurity conferences, it’s not uncommon to see entire tracks about managing mental health, addiction, and work stress. As experienced professionals leave the sector, more pressure falls on younger, less experienced colleagues.

Secondly, a major source of stress for cybersecurity professionals is that they are often assigned total (or at least partial) responsibility for the losses caused by data breaches. In many cases this is unfair, but the practice persists because many companies still see "security" as a discrete discipline that can be dealt with in isolation from other IT tasks, corporate processes, and reputation management.

Training and Development

Addressing these issues requires more than just increasing the number of qualified graduates. Instead, businesses need to take more innovative approaches to hire, train, and retain cybersecurity staff.

These approaches can be broken down into three types. The first is that cybersecurity training needs to change from an event into a process. Some have argued that traditional, classroom-based cybersecurity training doesn’t reflect the field and that this training needs to be delivered in a more vocational way. Instead of hiring one cybersecurity expert, companies should look to train all of their employees in the basics of cybersecurity. 

In fact, even cybersecurity professionals might benefit from this type of training. Although companies are often resistant to spending more on employee training, it offers one of the highest returns on investment a business can make. In addition, recent developments have made it clear that continuous training is needed: concerns about the security implications of 5G networks, for example, are now forcing seasoned professionals to go back to school.

Secondly, dramatic gains in cybersecurity can be achieved without employing dedicated staff. One of the major positive outcomes of the cybersecurity skills gap, in fact, has been the proliferation of free, easy to use security tools (like VPNs and secure browsers), which aim to make cybersecurity "fool-proof", even for staff with little or no technical training. These tools can be used to limit the risk of cyberattacks without the necessity of complex (and expensive) dedicated security solutions.

Third, the rise of "security as a service" suggests that the cybersecurity sector of the future is one that relies on outsourcing and subcontracting. Plenty of companies already outsource business processes that would have been done in-house just a few years ago – everything from creating a website to outsourcing pen testing – and taking this approach may provide a more efficient way to use the limited cybersecurity professionals that are available. 

AI Tools: The Future?

Another striking feature of the cybersecurity skills debate, and one which is especially apparent as we move into 2020, is the level of discussion around AI tools. 

Unfortunately, assessing how effective AI tools are at improving cybersecurity is difficult. That's because many cybersecurity professionals are skeptical that AI is a useful ally in this fight. In some ways, they are undoubtedly correct: in a recent study, one popular AI-powered antivirus was defeated with just a few lines of text appended to popular malware.

On the other hand, it must be recognized that cybersecurity pros have a vested interest in talking down how effective AI tools are. If AIs were able to protect networks on their own, after all, cybersecurity pros would be out of a job. Or rather they would be if there were not so many unfilled cybersecurity vacancies.

Ultimately, given the lack of qualified or trained professionals, AI tools are likely to remain a major focus of investment for companies from 2020 onwards. This, in turn, requires IT professionals to overcome some of their reticence about working with them, and to begin to see AIs less as competitors and more as collaborators.

The Bottom Line

It's also worth pointing out that the individual trends we've mentioned can be seen as working against each other. In some cases, companies have attempted to overcome the skills gap by training large numbers of employees to perform cybersecurity roles. Others have gone in the other direction – outsourcing specific aspects of their cybersecurity to hyper-specialized companies. Others are taking a gamble that AI tools are going to eventually replace the need for (at least some of their) cybersecurity professionals.

Which of these trends is eventually going to dominate the market remains to be seen, but one thing is clear: 2020 is a critical juncture for the entire cybersecurity sector.

Copyright 2010 Respective Author at Infosec Island
  • December 18th 2019 at 06:11

Modernizing Web Filtering for K-12 Schools

In today’s world of 24/7 internet access, network administrators need next-generation web filtering to effectively allow the internet traffic they do want and stop the traffic they don’t. How does this affect the education vertical, with students in K-12? Well, for starters, a lot has changed since the Children’s Internet Protection Act (CIPA) was enacted in 2000. Nearly two decades after that now-dated Act, the academic landscape has shifted drastically: we are no longer working from computer labs on a single network, but in a world of varied personal devices expecting consistent wi-fi and cloud access.

The internet is vast, and as it continues to rapidly evolve, so should the filtering tactics used to keep students safe. But while the law requires schools and public libraries to block or filter obscene and harmful content, that still leaves room for interpretation.

How Much Web Filtering is Too Much?

A 2017 survey shows that 63% of K-12 teachers regularly use laptops and computers in the classroom, making web filtering in K-12 environments a crucial topic. With the rise of tech-savvy students and classrooms, precautions must be taken. However, there is such a thing as ‘over-filtering’ and ‘over-blocking.’

Current laws and guidelines that prevent students from accessing crucial learning and research materials have become a rising issue that schools and parents are constantly battling with the FCC. As noted in The Atlantic, excessive filtering can limit students’ research on topics that are useful to, for example, debate teams or students seeking anti-bullying resources. Instead of enforcing the same rules across an entire school or district, network administrators need a solution that offers flexibility and customizable options, pinpointing the specific websites, applications and categories that each grade level may access.

Working Together to Clearly Define Web Access

In the past, schools practiced an over-zealous “block everything” approach. Now, it is important for school administrators and IT departments to work together to define web access by grade, age, project duration and keyword search. This allows students access to educational resources while administrators maintain acceptable parameters, blocking inappropriate content from sites or applications.

Assessing Network Necessities

Academic boards can take it one step further by putting access controls on all school networking, including wi-fi networks, to control the use of personal devices during school hours.

In addition to Web Filtering, adding controls such as enforcing safe search on popular search engines, and using restricted mode on YouTube will increase productivity, limit cyberbullying, and deny access to students searching for ways to inflict self-harm or perform other acts of violence.

Why limit students’ education by blocking crucial learning and research materials? By custom-configuring a network to meet the needs of each grade level and classroom, educators encourage students to become academically resourceful. IT departments and school administrators must form a partnership to generate a solution that allows students, teachers, and administrators access to the educational tools they need.

It’s time to break down the glass wall and acknowledge the wealth of educational materials and information now available through various media channels and platforms. The internet, once a luxury accessible to only a few, is now an amenity available to almost anyone, including young students, underscoring the importance of fine-tuned web filters and content security across K-12 networks.

Copyright 2010 Respective Author at Infosec Island
  • December 18th 2019 at 06:05

University of Arizona Researchers Going on Offense and Defense in Battle Against Hackers

The global hacker community continues to grow and evolve, constantly finding new targets and methods of attack. University of Arizona-led teams will be more proactive in the battle against cyberthreats thanks to nearly $1.5 million in grants from the National Science Foundation.

The first grant, for nearly $1 million, will support research contributing to an NSF program designed to protect high-tech scientific instruments from cyberattacks. Hsinchun Chen, Regents Professor of management information systems at the Eller College of Management, says the NSF's Cybersecurity Innovation for Cyberinfrastructure program is all about protecting intellectual property, which hackers can hold for ransom or sell on the darknet.

"You have infrastructure for people to collect data from instruments like telescopes," Chen said. "Scientists use that to collaborate in an open environment. Any environment that is open has security flaws."

A major hurdle to protecting scientific instruments, Chen said, is that the risks to science facilities have not been properly analyzed. He will lead a team using artificial intelligence to study hackers and categorize hundreds of thousands of risks, then connect those risks to two partner facilities at the University of Arizona.

Chen's team is working with CyVerse, a national cyberinfrastructure project led by the University of Arizona, and the Biosphere 2 Landscape Evolution Observatory (LEO) project. CyVerse develops infrastructure for life sciences research and had researchers involved in this year's black hole imaging. Biosphere 2's LEO project collects data from manmade landscapes to study issues including water supply and how climate change will impact arid ecosystems.

The team will comb through hacker forums to find software tools designed to take advantage of computer system flaws, scan CyVerse and LEO internal and external networks, and then link specific tools found in the forums to specific network vulnerabilities.

"The University of Arizona is a leader in scientific discovery, and we are actively working on solutions to the world's biggest challenges. To do that, it is imperative to keep our state-of-the-art instruments safe from cyberattacks," said UArizona President Robert C. Robbins. "Hsinchun Chen is once again at the forefront of innovation in cybersecurity infrastructure, and this funding will help ensure the data and discoveries at CyVerse and Biosphere 2 are protected, which ultimately enables our researchers to keep working toward a bright future for us all."

Chen's co-principal investigators on the project include: Mark Patton, senior lecturer in management information systems; Peter Troch, science director at Biosphere 2; Edwin Skidmore, director of infrastructure at the BIO5 Institute, which houses CyVerse; and Sagar Samtani, assistant professor in the University of South Florida information systems and decision sciences department and one of Chen's former students.

Chen is also leading an effort to improve the process of collecting and analyzing data from international hacker communities. The NSF, through its Secure and Trustworthy Cyberspace program, has awarded a $500,000 grant to Chen and a team of researchers to gather and analyze data on emerging threats in international hacker markets operating in Russia and China.

"We're creating infrastructure and technologies based on artificial intelligence to study darknet markets," Chen said, "meaning the places where you can buy credit cards, malware to target particular industries or government, service to hack other people, opioids, drugs, weapons — it's all part of the dark web."

The effort will focus on developing techniques to address challenges in combating international hacking operations, including the ability to collect massive amounts of data and understand common hacker terms and concepts in other countries and languages.

Chen's co-principal investigator on the research is Weifing Li, assistant professor of management information systems at the University of Georgia.

SOURCE: The University of Arizona

Copyright 2010 Respective Author at Infosec Island
  • December 4th 2019 at 18:01

Securing the Internet of Things (IoT) in Today's Connected Society

The Internet of Things (IoT) promises much: from enabling the digital organization, to making domestic life richer and easier. However, with those promises come risks: the rush to adoption has highlighted serious deficiencies in both the security design of IoT devices and their implementation.

Coupled with increasing governmental concerns around the societal, commercial and critical infrastructure impacts of this technology, the emerging world of the IoT has attracted significant attention.

While the IoT is often perceived as cutting edge, similar technology has been around since the last century. What has changed is the ubiquity of high-speed, low-cost communication networks, and a reduction in the cost of compute and storage. Combined with a societal fascination with technology, this has resulted in an expanding market opportunity for IoT devices, which can be split into two categories: consumer and industrial IoT.

Consumer IoT

Consumer IoT products typically aim to add convenience or value to services within a domestic or office environment, focusing on the end-user experience and providing a rich data source that can be useful in understanding consumer behavior.

The consumer IoT comprises a set of connected devices, whose primary customer is the private individual or domestic market. Typically, the device has a discrete function which is enabled or supplemented by a data-gathering capability through on-board sensors and can also be used to add functionality to common domestic items, such as refrigerators. Today’s 'smart' home captures many of the characteristics of the consumer IoT, featuring an array of connected devices and providing a previously inaccessible source of data about consumer behavior that has considerable value for organizations.

Whilst the primary target market for IoT devices is individuals and domestic environments, these devices may also be found in commercial office premises – either an employee has brought in the device or it has been installed as an auxiliary function.

Industrial IoT

Industrial IoT deployments offer tangible benefits associated with digitization of processes and improvements in supply chain efficiencies through near real-time monitoring of industrial or business processes.

The industrial IoT encompasses connected sensors and actuators associated with kinetic industrial processes, including factory assembly lines, agriculture and motive transport. Whilst these sensors and actuators have always been prevalent in the context of operational technology (OT), connectivity and the data processing opportunities offered by cloud technologies mean that deeper insight and near real-time feedback can further optimize industrial processes. Consequently, the industrial IoT is seen as core to the digitization of industry.

Examples of industrial usage relevant to the IoT extend from manufacturing environments, transport, utilities and supply chain, through to agriculture.

The IoT is a Reality

The IoT has become a reality and is already embedded in industrial and consumer environments. It will further develop and become a critical component of not just modern life, but critical services. Yet, at the moment, it is inherently vulnerable, often neglects fundamental security principles and is a tempting attack target. This requires a change.

There is a growing momentum behind the need for change, but much of that momentum is governmental and regulatory-focused which, as history tells us, can be problematic. The IoT can be seen as a form of shadow IT, often hidden from view and purchased through a non-IT route. Hence, responsibility for its security is often unassigned or misassigned. There is an opportunity for information security to take control of the security aspects of the IoT, but this is not without challenges, among them skills and resources. Nevertheless, there is a window of opportunity to tame this world by building security into it. As most information security professionals will know, this represents a cheaper and less disruptive option than the alternative.

In the face of rising, global security threats, organizations must make systematic and wide-ranging commitments to ensure that practical plans are in place to acclimate to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles. Enterprises with the appropriate expertise, leadership, policy and strategy in place will be agile enough to respond to the inevitable security lapses. Those who do not closely monitor the growth of the IoT may find themselves on the outside looking in.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
  • November 19th 2019 at 15:16

What Is Next Generation SIEM? 8 Things to Look For

The SIEM market has evolved, and today most solutions call themselves “Next Generation SIEM.” An effective NG-SIEM should provide better protection and, equally if not more important, a much more effective, next-gen user experience. So what should you look for when evaluating a next generation SIEM?

The state of cybersecurity has evolved one threat at a time, with organizations constantly adding new technologies to combat new threats. The result? Organizations are left with complex and costly infrastructures made up of many products that are out of sync with one another, and thus simply cannot keep pace with the velocity of today’s dizzying threat landscape.

Traditional security information and event management (SIEM) solutions tried to make sense of the mess but fell short. Then came “Next Generation SIEM” or NG-SIEM. No vendor today will admit that they sell legacy SIEM, but there is no ISO style organization doling out official NG SIEM stamps of approval. So how is a security professional to know if the technology in front of him or her really brings the benefits they need, or if it’s just another legacy vendor calling itself NG-SIEM?

The basic capabilities of legacy SIEM are well known – data ingestion, analytics engines, dashboards, alerting and so on. But with these legacy SIEM capabilities your security team will still drown in huge amounts of logs. That’s because even many NG-SIEMs in the market still let copious amounts of threats and logs pass through – straight to the doorstep of your security team.

Working Down the Pyramid

A true Next Generation SIEM will enable the security team to work from the top down, rather than bottom up. Picture the typical data pyramid: raw logs and alerts form the broad base, correlated events sit in the middle, and a small number of prioritized, high-confidence incidents sit at the top. Most security analysts have to sift through the bottom layer of logs and alerts, or create manual correlation rules for new attacks that can then move logs up the pyramid. This is extremely time-consuming and frustrating. Essentially, security teams (especially small teams of one or two analysts) simply don’t have the bandwidth to go through all the logs, meaning attacks slip through the cracks (and analysts burn out).

Artificial Intelligence technologies available today can help automatically create correlation rules for existing attacks, and even for new attacks before they occur. The significance of this for security teams is enormous: it means they can begin at the top of the pyramid by going through a small number of logs. For those threats the analyst deems worthy of further examination, the mid-level and raw data need to be readily available and easily searchable.

The Checklist for NG-SIEM

To make sure your NG-SIEM of choice will be effective, look for the following capabilities:

  1. Data lake – a solution able to ingest all types of data from various sources, support the required data retention, and deliver very high search performance, while securing the data in transit and at rest.
  2. Data classification – relies on structured and unstructured data classification technologies (such as NLP) to sort all collected data into security classes, such as MITRE ATT&CK techniques and tactics, so that the data is represented through one common language. This allows much faster investigation (a toy sketch follows this list).
  3. Behavioral analytics – built-in NTA and UEBA engines. These engines by themselves cannot cover the entire cyber kill chain, so they need to be part of the NG-SIEM, allowing their signals to be correlated with others and reducing the noise that typifies them.
  4. Auto-investigation (or SOAR) can mean many things. The bottom line is that effective auto-investigation needs both to perform prioritization (entity prioritization, supporting all identity types including IP, host, user, email, etc.) and to allow impact analysis. Impact analysis is the ability to analyze the level of actual or potential impact that each risk-prioritized entity has on the organization, so that response actions can be prioritized effectively.
  5. Auto-mitigation – will not necessarily be implemented on day one; however, an NG-SIEM must have the ability to automatically execute mitigation actions, even if these are initially triggered only in very narrow security use cases.
  6. Automation – Automation – Automation – nothing can be 100% automated, but in general the NG-SIEM vendor needs to demonstrate at least 80% automation of legacy SIEM operations. Otherwise we are missing the whole point of what NG-SIEM is all about: supporting the data pyramid approach.
  7. Analyst support tools for data relevancy – manual investigation will always be part of the analyst’s job. An NG-SIEM must provide search and hunting tools that support the analyst’s advanced investigation and response actions. In this way the NG-SIEM efficiently guides the analyst from the top of the pyramid down through only the relevant (related) information at the bottom, ensuring advanced investigations are done quickly and efficiently.
  8. Community – solutions with an open-source component create a dynamic avenue for constant improvement of the NG-SIEM through community contributions.
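
As a toy illustration of the classification idea in item 2 (real products use trained NLP models rather than keyword tables), consider mapping raw log text onto MITRE ATT&CK tactic names:

    # Toy sketch only: a keyword table standing in for an NLP classifier.
    TACTIC_KEYWORDS = {
        "Credential Access": ["failed password", "brute force", "hashdump"],
        "Lateral Movement":  ["psexec", "smb session", "rdp connection"],
        "Exfiltration":      ["large outbound transfer", "dns tunnel"],
    }

    def classify(log_line):
        text = log_line.lower()
        return [tactic for tactic, words in TACTIC_KEYWORDS.items()
                if any(w in text for w in words)]

    print(classify("sshd: Failed password for root from 203.0.113.7"))
    # -> ['Credential Access']

Once every log line carries the same tactic vocabulary, correlation and top-down triage become far simpler.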

All of the above will create a SIEM with a user experience which allows security analysts to work top down rather than bottom up, starting with the highest risk data.

A SIEM platform that can tick all these boxes will provide performance that is truly “next generation,” enabling the organization to respond faster to relevant threats at lower cost and with improved ROI, and making for a stable and happy security team.

About the author: Avi Chesla is the founder and CEO of empow (empow.co), a cyber security startup disrupting the SIEM category with its "no rules" AI- and NLP-based i-SIEM, integrated with the Elastic Stack. Before empow he was CTO at Radware. Avi holds 25 patents in the cyber security arena.

Copyright 2010 Respective Author at Infosec Island
  • November 14th 2019 at 13:59

Cybersecurity and Online Trading: An Overview

Trade and cybersecurity are inherently linked. The promise of the information revolution was always that it would allow people to connect internationally, and that it would make international investment available for everyday citizens.

It has certainly done that, but as trade and investment grow ever more complex, the risks also grow. Alongside the development of international investment networks has developed another, shadowy network of hackers and unscrupulous investment companies. As the Internet of Things (IoT) and Artificial Intelligence (AI) technologies are adopted, the complexity and vulnerability of trading platforms is also going to increase. 

In this article, we’ll take a look at how and why the risks of international trade are increasing, and the political response to this.

The Security Risks Of Trade

There is one primary reason why digital trade is more at risk from cyberattack than ever before: a huge increase in the number of people using online trading platforms. While this growth has greatly expanded the ability of individuals to invest internationally, it has also opened up many opportunities for hackers.

In other cases, technologies that have been developed in order to increase the security of international trade can have the opposite effect. The move to cloud storage and Software as a Service (SaaS), for example, has been driven by the perception that there are many security benefits of cloud storage: as research firm BlueTree.ai notes, 83 percent of successful American businesses were planning a SaaS strategy for the coming year, due in part to data security concerns.

Whilst cloud storage can be a more secure way for traders to protect their data (and profits), cloud systems are also an order of magnitude more complex than more 'traditional' trading systems. That means they require similarly sophisticated cybersecurity protocols to stop the spread of malware infection, or simply the interception of sensitive commercial data.

The Political Response

These concerns have led many governments to seek to regulate and control digital trading, in order to protect both individuals and firms against cyberattack. According to some estimates, up to 50 countries have now put in place – or are planning to put in place – policies that seek to limit the vulnerability of their citizens.

At the moment, however, these measures have largely been adopted on a per-country basis. Since international trading is, by definition, international, this has severely limited the efficacy of these systems. 

Add to the simmering mix the reality that many individual investors simply don’t have the technical know-how to avoid scams and hacks. The foreign exchange (Forex) market, in particular, has had a reputation for being a sort of online Wild West ever since it opened to retail traders in the late ’90s. Many jumped in (and continue to do so) without even a rudimentary knowledge of basic currency trading strategy, which contributes to the steady and still almost unbelievable 96% failure rate. Combine these poor trading skills with a mostly unregulated brokerage industry and you have a perfect storm preying on mass ignorance.

And this was before cryptocurrency was even a glimmer of a whitepaper in Satoshi Nakamoto’s probably collective head. If Forex is the equivalent of facing down the fastest gun in Dodge City at high noon with a cap pistol, trading cryptocurrency is even more dangerous.  

Leading governments, to their credit, have recognized this minefield. The European Union has identified “a need for closer cooperation at a global level to improve security standards, improve information, and promote a common global approach to network and information security issues." The US has also made similar moves, and its most recent Cybersecurity Strategy reaffirms the need to “strengthen the capacity and interoperability of those allies and partners to improve our ability to optimize our combined skills, resources, capabilities, and perspectives against shared threats."

There is, however, a very fine balance to be drawn between security and freedom. Any restrictions put in place to improve the security of international trading networks risk limiting the ability of individuals and companies to invest across borders. Given the benefits that this kind of decentralized trading has brought the world economy, an over-eager implementation of cross-border cybersecurity systems also risks undermining the profitability of many firms.

The Future

Though these issues are far from being resolved, some consensus on the direction of travel is emerging. The Brookings Institution has recently outlined a number of key principles that will govern the way that international trade will be secured in the years to come.

One of the most important is to ensure access to information across international boundaries. Whilst this may sound like it would increase the opportunities for this data to be stolen, in reality this kind of information sharing limits the risks inherent in the localization of financial records. It is strange to note, in fact, that in this regard the way that international trade is being secured bears many similarities to the kinds of decentralized systems used in cryptocurrency exchanges.

Another key area for development will be the standardization of cybersecurity standards and policies across territories. The International Organization for Standardization (ISO) has recently developed a number of cybersecurity standards that aim to help countries develop compatible ways of securing international trade. These policies can then be integrated into international trade agreements, ensuring that criminals and unscrupulous companies cannot escape justice by fleeing to another jurisdiction.

Finally, there is a building consensus – not just in government but also in industry – that a risk-based approach to cybersecurity needs to be adopted when it comes to securing international trade. This approach is one that has been developed in order to assuage the fears that regulation could stifle trade flows: instead of adopting a 'tick-box' approach to cybersecurity compliance, companies should carefully assess their threat profile before deciding which counter-measures to put in place.

Trust and Security

Ultimately, international digital trade is built on trust, and this will need to be maintained in order to ensure profitability for both individual and institutional investors. 

At the broadest level, as complex networks get harder to secure, there will need to be much more dialogue between policy makers and cybersecurity experts. Building bridges between these communities will support the development of effective cybersecurity practices without putting in place unnecessary trade barriers.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

Copyright 2010 Respective Author at Infosec Island
  • October 25th 2019 at 19:52

Artificial Intelligence: The Next Frontier in Information Security

Artificial Intelligence (AI) is creating a brand new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. However, like humans, they will be flawed, but capable of achieving incredible results.

AI is already finding its way into many mainstream business use cases and business and information security leaders alike need to understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business. Organizations use variations of AI to support processes in areas including customer service, human resources and bank fraud detection. However, the hype can lead to confusion and skepticism over what AI actually is and what it really means for business and security. 

What Risks Are Posed by AI?

As AI systems are adopted by organizations, they will become increasingly critical to day-to-day business operations. Some organizations already have, or will have, business models entirely dependent on AI technology. No matter the function for which an organization uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.

Simultaneously, organizations are beginning to face sophisticated AI-enabled attacks – which have the potential to compromise information and cause severe business impact at a greater speed and scale than ever before.  Taking steps both to secure internal AI systems and defend against external AI-enabled threats will become vitally important in reducing information risk.

While AI systems adopted by organizations present a tempting target, adversarial attackers are also beginning to use AI for their own purposes. AI is a powerful tool that can be used to enhance attack techniques, or even create entirely new ones. Organizations must be ready to adapt their defenses in order to cope with the scale and sophistication of AI-enabled cyber-attacks.

Defensive Opportunities Provided by AI

Security practitioners are always trying to keep up with the methods used by attackers, and AI systems can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms. AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners. Protecting against many existing threats, AI can put defenders a step ahead. However, adversaries are not standing still – as AI-enabled threats become more sophisticated, security practitioners will need to use AI-supported defenses simply to keep up.

The benefit of AI in terms of response to threats is that it can act independently, taking responsive measures without the need for human oversight and at a much greater speed than a human could. Given the presence of malware that can compromise whole systems almost instantaneously, this is a highly valuable capability.

The number of ways in which defensive mechanisms can be significantly enhanced by AI provide grounds for optimism, but as with any new type of technology, it is not a miracle cure. Security practitioners should be aware of the practical challenges involved when deploying defensive AI.

Questions and Considerations Before Deploying Defensive AI

AI systems have narrow intelligence and are designed to fulfil one type of task. They require sufficient data and inputs in order to complete that task. One single defensive AI system will not be able to enhance all the defensive mechanisms outlined previously – an organization is likely to adopt multiple systems. Before purchasing and deploying defensive AI, security leaders should consider whether an AI system is required to solve the problem, or whether more conventional options would do a similar or better job.

Questions to ask include:

  • Is the problem bounded? (i.e. can it be addressed with one dataset or type of input, or does it require a high understanding of context, which humans are usually better at providing?)
  • Does the organization have the data required to run and optimize the AI system?

Security leaders also need to consider issues of governance around defensive AI, such as:

  • How do defensive AI systems fit into organizational security governance structures?
  • How can the organization provide security assurance for defensive AI systems?
  • How can defensive AI systems be maintained, backed up, tested and patched?
  • Does the organization have sufficiently skilled people to provide oversight for defensive AI systems?

AI will not replace the need for skilled security practitioners with technical expertise and an intuitive nose for risk. These security practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, especially as stories continue to emerge of AI proving unreliable or making poor or unexpected decisions.

AI systems will make mistakes – a beneficial aspect of human oversight is that human practitioners can provide feedback when things go wrong and incorporate it into the AI’s decision-making process. Of course, humans make mistakes too – organizations that adopt defensive AI need to devote time, training and support to help security practitioners learn to work with intelligent systems.

Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organization’s cyber defenses.

The Future is Now

Computer systems that can independently learn, reason and act herald a new technological era, full of both risk and opportunity. The advances already on display are only the tip of the iceberg – there is a lot more to come. The speed and scale at which AI systems ‘think’ will be increased by growing access to big data, greater computing power and continuous refinement of programming techniques. Such power will have the potential to both make and destroy a business.

AI tools and techniques that can be used in defense are also available to malicious actors including criminals, hacktivists and state-sponsored groups. Sooner rather than later these adversaries will find ways to use AI to create completely new threats such as intelligent malware – and at that point, defensive AI will not just be a ‘nice to have’. It will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks.

To thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defenses. AI is no longer a vision of the distant future: the time to start preparing is now.

Copyright 2010 Respective Author at Infosec Island
  • October 23rd 2019 at 10:17

Five Main Differences between SIEM and UEBA

Corporate IT security professionals are bombarded every week with information about the capabilities and benefits of various products and services. One of the most commonly mentioned security products in recent years has been Security Information and Event Management (SIEM) tools.

And for good reason.

SIEM products provide significant value as a log collection and aggregation platform, which can identify and categorize incidents and events. Many also provide rules-based searches on data.

While often compared to user and entity behavior analytics (UEBA) products, SIEMs are a blend of security information management (SIM) and security event management (SEM). This makes SIEMs adept at providing aggregated security event logs that analysts can query for known security threats.

In contrast, UEBA products utilize machine learning algorithms to analyze patterns of human and entity behavior in real time to uncover anomalies indicative of known and unknown threats.

Let’s consider the five ways in which SIEM and UEBA technology differs.

Point-in-time vs. Real-time Analysis

SIEM provides point-in-time analysis of event data and is generally limited by the number of events that can be processed in a particular time frame. SIEM tools also do not correlate physical security events with logical security events.

UEBA, meanwhile, operates in real-time, using machine learning, behavior-based security analytics and artificial intelligence. It can detect threats based on contextual information, and enforce immediate remediation actions.

“While SIEM is a core security technology it has not been successful at providing actionable security intelligence in time to avert loss or damage,” wrote Mike Small, a KuppingerCole analyst in a research note.

Manual vs. Automated Threat Hunting

SIEM does a very good job of providing IT pros with the data they need to manually hunt for threats, including details on what happened, when and where it happened. However, manual effort is needed to analyze the data, particularly to detect anomalies and threats.

UEBA performs real-time analysis using machine learning models and algorithms. These provide the machine speed needed to respond to security threats as they happen, while also offering predictive capabilities that anticipate what will or might happen in the future.

Logs vs. Multiple Data Types

SIEM ingests structured logs. Adding new data types often requires upgrading existing data stores and human intervention. In addition, SIEM does not correlate data on users and their activities, or make connections across applications, over time or user behavior patterns.

UEBA is built to process huge volumes of data from various sources, including structured and unstructured data sets. It can analyze data relationships over time, across applications and networks, and pore over millions of bits to find “meanings” that may help in detecting, predicting, and preventing threats.

Short vs. Long-Term Analysis

SIEM does a very good job of helping IT security staff compile valuable, short-term snapshots of events. It is less effective when it comes to storing, finding and analyzing data over time. For example, SIEM provides limited options for searching historical data.

UEBA is designed for real-time visibility into virtually any data type, both short-term and long-term. This generates insights that can be applied to various use cases such as risk-based access control, insider threat detection and entity-based threat detection associated with IoT, medical, and other devices.

Alerts vs. Risk Scores

SIEM, as the name implies, centralizes and manages security events from host systems, applications, and network and security devices such as firewalls, antivirus filters, etc. They deliver alerts based on events that may or may not be malicious threats. As a result, SIEMs generate a high proportion of false positive alerts which cannot all be investigated. This can lead to “actual” cyber threats going undetected.

UEBA provides risk scoring, which offers granular ranking of threats. By ranking risk for all users and entities in a network, UEBA enables enterprises to apply different controls to different users and entities based on the level of threat they pose. A major advantage of risk scoring is that it greatly reduces the number of false positives.
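
A stripped-down way to see the difference: instead of firing a binary alert on a fixed threshold, score each entity by how far its current behavior deviates from its own learned baseline. The sketch below uses a simple z-score and invented baseline data; real UEBA models combine many features and learned models, but the ranking idea is the same.

    from statistics import mean, stdev

    # Hypothetical learned baselines: logins per hour over the past week.
    baselines = {
        "alice":      [2, 3, 2, 4, 3, 2, 3],
        "svc-backup": [0, 0, 1, 0, 0, 0, 0],
    }

    def risk_score(user, observed):
        history = baselines[user]
        sigma = stdev(history) or 1.0  # guard against zero-variance history
        return max(0.0, (observed - mean(history)) / sigma)

    for user, observed in [("alice", 4), ("svc-backup", 25)]:
        print(user, round(risk_score(user, observed), 1))
    # svc-backup scores far higher and gets investigated first,
    # rather than both users generating identical alerts.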

Both SIEM and UEBA provide value for security operations teams. Each excels at specific use cases. When comparing these two technologies, it’s helpful to consider how they diverge. Namely, SIEM is oriented on point-in-time analyses of known threats. UEBA, meanwhile, provides real-time analysis of activity that can detect unknown threats as they happen and even predict a security incident based on anomalous behavior by a user or entity.

Copyright 2010 Respective Author at Infosec Island
  • October 23rd 2019 at 10:14

For Cybersecurity, It’s That Time of the Year Again

Autumn is the “hacking season,” when hackers work to exploit newly-disclosed vulnerabilities before customers can install patches. This cycle gives hackers a clear advantage and it’s time for a paradigm shift.

Each year, when the leaves start changing color you know the world of cybersecurity is starting to heat up.

This is because the cyber industry holds its two flagship events, DEFCON and BlackHat, over the same week in Las Vegas in late summer. Something akin to having the Winter and Summer Olympics back-to-back in the same week, these events and other similar ones present priceless opportunities for the world’s most talented hackers to show their chops and reveal new vulnerabilities they’ve uncovered.

It also means that each Fall there’s a mad race against time as customers need to patch these newly revealed vulnerabilities before hackers can pull off major attacks — with mixed results.

A good example began in August, after researchers from Devcore revealed vulnerabilities in enterprise VPN products during a briefing they held at BlackHat entitled “Infiltrating Corporate Intranet Like NSA: Pre-auth RCE on Leading SSL VPNs.”

The researchers also published technical details and proof-of-concept code of the vulnerabilities in a blog post two days after the briefing. Weaponized code for exploits is also widely available online, including on GitHub.

News of the vulnerability rang out like a starter pistol, sending hackers sprinting to attack two enterprise VPN products in use by hundreds of thousands of customers — Pulse Secure VPN and Fortinet FortiGate VPN.

In both cases, White Hat hackers discovered the flaws months earlier and disclosed them confidentially to the manufacturers, giving them the time and details needed to issue the necessary patches. Both Pulse Secure and Fortinet instructed customers to install the patches, but months later there were still more than 14,500 unpatched systems, according to a report from Bad Packets, and the number could be even higher.

Being that these are enterprise products, they are in use in some of the most sensitive systems, including military networks, state and local government agencies, health care institutions, and major financial bodies. And while these organizations tend to have trained security personnel in place to apply patches and mitigate threats, they tend to be far less nimble than hackers, who can seize a single device and use it to access devices across an entire network, with devastating consequences.

The potential for these attacks is vast, considering the sheer volume of targets. This was again demonstrated in the case of the “URGENT/11” zero-day vulnerabilities exposed by Armis in late July. The vulnerabilities affect the VxWorks OS used by more than 2 billion devices worldwide and include six critical vulnerabilities that can enable remote code execution attacks. Chances are that attackers are already on the move looking for lucrative targets to hit.

This is how it plays out — talented White Hat hackers sniff out security flaws and confidentially inform manufacturers, who then scramble to issue patches and inform users before hackers can pounce. And while manufacturers face the impossible odds of hoping that tens of thousands of customers — and often far more — install new security patches in time, the hackers looking to take advantage of these flaws only need to get lucky once.

It’s time for a paradigm shift. Manufacturers need to provide built-in security which doesn’t rely upon customer updates after the product is already in use. This “embedded security” creates self-protected systems that don’t wait for a vulnerability to be discovered before mounting a response.

This approach was outlined in a report from the US Department of Commerce’s National Institute of Standards and Technology (“NIST”) published in July. Entitled “Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks,” the report detailed the unique challenges of IoT security, and stated that these devices must be able to verify their own software and firmware integrity.

There are already built-in security measures that can stack the deck against hackers, including secure boot, application whitelisting, ASLR, and control flow integrity to name a few. These solutions are readily available and it is imperative that leading manufacturers provide runtime protection during the build process, to safeguard their customers’ data and assets.

It’s a race against time and a reactive security approach that waits for a vulnerability to be discovered and then issues patches is lacking, to put it lightly. There will always be users who don’t install the patches in time and hackers who manage to bypass the security solutions before manufacturers can get their feet on the ground. And with White Hat hackers constantly looking for the next vulnerability to highlight, it’s a vicious cycle and one that gives hackers every advantage against large corporations.

And as Fortinet and Pulse Secure lick their wounds from the recent exploits, the onus is upon other manufacturers to realize that the current security paradigm simply isn’t enough.

Copyright 2010 Respective Author at Infosec Island
  • October 18th 2019 at 03:17

Myth Busters: How to Securely Migrate to the Cloud

Security is top of mind for every company and every IT team – as it should be. The personal data of employees and customers is on the line and valuable company information is at risk. Security protocols are subject to even closer scrutiny when companies are considering migrating to the cloud.

More and more enterprises recognize that they need to pursue cloud adoption to future-proof their tech stack and achieve their business transformation objectives. The agility and cost savings the cloud provides is fast becoming a requirement for competing in today’s marketplace. Despite the growing sense that cloud is the future, many companies are hesitant to migrate their applications as they believe the cloud is not as secure as on-premise. This is a common myth, and far from the truth. While security must remain a top priority for IT professionals during the migration process, there is a successful pathway to safely and securely migrate.

Who Owns What in the Cloud?

In today’s “cloud wars” landscape, it can be difficult to separate fact from fiction – and it’s clear that many IT professionals feel the cloud is less secure. It’s time to address this myth. The cloud can be just as secure, if not more so, than a traditional on-premise environment. A survey by AlertLogic found that security issues do not vary greatly whether the data is stored on-premise or in a public cloud. Although there is the belief that public cloud servers are most at risk for an attack, on-premise systems are typically older, complex legacy systems, which can be more difficult to secure. The public cloud has the advantage of being less dependent on other legacy technologies.

Significant advancements have been made to ensure cloud migration and management can be executed in a highly secure fashion. For example, the major cloud providers today have developed a large partner network with cloud-native tools and services built from the ground up to specifically address cloud security. Public cloud providers have extensive security-focused teams and experts on staff to ensure that the cloud remains secure, supported by an ecosystem of cloud certified Managed Service Providers (“MSPs”) who can monitor and assess threat risk every step of the way. If done properly, organizations can take advantage of these advanced products and skilled resources to secure and harden their cloud environment. Most IT organizations, driven to be lean and efficient, simply can’t replicate the same level of security which leverages layers of security expertise and experience. The biggest threats are people-related, either through inadvertent implementation and configuration errors, lack of proactive management discipline (e.g. applying patches) or malicious exploitation of vulnerabilities which, unfortunately, originate most easily from someone inside.

Unlike an on-premise data center deployed and managed by internal IT staff in which the organization is solely responsible, security and compliance in public cloud operates under a shared responsibility model. The cloud provider is responsible for security of the cloud and the customer is responsible for security in the cloud. What this means is that providers such as Amazon Web Services (AWS), manage and control the host operating system, physical security of its facilities, hardware, software, virtualization layer and infrastructure including networking, database, storage and compute resources. Meanwhile, the customer is responsible for system security above the hypervisor – things like data encryption in-transit and at rest, guest operating systems, networking traffic protection, platform and application security including updates and security patches.  

The hybrid cloud is another valuable pathway for companies that aren’t ready or able, for various reasons, to make the full leap to the public cloud. The shared responsibility model for security and compliance applies to hybrid cloud which utilizes a combination of public cloud, private cloud and/or on-premise environment. This definition, understanding and execution of roles is critical for cloud security. According to Gartner, by 2020, 90 percent of companies will utilize some form of the hybrid cloud. In the end, security requires expertise, tools, discipline and governance. The ability for organizations to leverage and push responsibility to vendors is an underlying benefit of cloud.   

How to Move to Cloud Safely

The migration process isn’t a simple task. While there is no universal pathway to migrating securely, the following tips will help IT professionals make the move:

  • Assess and plan in advance for all source data to be transferred. The data should be encrypted at rest on the source, prior to transfer, with a strong encryption algorithm (a minimal sketch follows this list).
  • Harden the server before copying any data. Allow only a specific, minimal set of ports, restricted to specific IP addresses and CIDR ranges.
  • Implement proper authorization and access control according to organizational security permissions and roles. Restrict access as needed to data sourced, transmitted or stored in the cloud.
  • Finally, establish auditing and monitoring. Logs must be enabled, maintained, reviewed and archived so that ongoing and historical analysis is possible at any moment in time.
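
As an illustration of the first step above, the following minimal sketch, assuming Python’s cryptography package, encrypts a source file on the on-premise host before any transfer begins; the file names are hypothetical and key handling is simplified for brevity.

    from cryptography.fernet import Fernet

    # Generate a key and keep it away from the data path entirely;
    # in practice it belongs in a vault or HSM, never beside the data.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt the data at rest on the source, before the transfer.
    with open("customer-db-export.sql", "rb") as src:       # hypothetical file
        ciphertext = fernet.encrypt(src.read())

    with open("customer-db-export.sql.enc", "wb") as dst:
        dst.write(ciphertext)

    # Only the .enc file is copied to the cloud, so an intercepted or
    # misdirected transfer exposes nothing readable.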

Having a plan in place post-migration is also vital, as security doesn’t stop when the migration is complete. Companies should continue to assess their applications to ensure security remains a top priority. Working with a third-party provider or MSP skilled in cloud security can take some of the load off the IT team: cloud systems require continuous updates, maintenance and cost optimization, all of which must be monitored to ensure that resources deployed in the cloud are used as efficiently and safely as possible.

Cloud technology has advanced significantly over the past five years. IT pros may miss the reassurance of being able to physically see, restrict and manage access to their tech stack in an on-premise environment. But the tide has shifted: the benefits of the cloud, together with the maturity and ongoing evolution of cloud security products and services, enable organizations that implement them properly to achieve an equal, if not greater, level of security.

Copyright 2010 Respective Author at Infosec Island
  • October 18th 2019 at 03:06

Microsoft Makes OneDrive Personal Vault Available Worldwide

Microsoft this week announced that users all around the world can now keep their most important files protected in OneDrive Personal Vault.

Launched earlier this summer, the Personal Vault is a protected area in OneDrive that requires strong authentication or a second identification step to access. Thus, users can store their files and ensure that they can’t be accessed without a fingerprint, face, PIN, or code received via email or SMS.
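
Microsoft has not published the vault’s internals, but the “code received via email or SMS” step follows the familiar one-time-code pattern. Here is a minimal, generic sketch in Python, with a hypothetical delivery stub standing in for a real SMS gateway; it is illustrative only, not Microsoft’s implementation.

    import secrets

    def send_via_sms(code: str) -> None:
        """Hypothetical delivery stub; a real service would call an SMS gateway."""
        print(f"[stub] would text: {code}")

    # Generate a short-lived one-time code and deliver it out of band.
    code = f"{secrets.randbelow(1_000_000):06d}"
    send_via_sms(code)

    # Compare in constant time so the check itself leaks no timing
    # information about the expected value.
    submitted = input("Enter the code we sent you: ")
    if secrets.compare_digest(submitted, code):
        print("Second step verified - vault unlocked.")
    else:
        print("Verification failed - vault stays locked.")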

Now available worldwide on all OneDrive consumer accounts, Personal Vault lets users securely store their most important files, photos and videos, including copies of key documents.

The added security ensures that, even if an attacker manages to compromise the OneDrive account, they won’t have access to any of the files in Personal Vault. 

Personal Vault won’t slow users down, as they can easily access content from their PC, on OneDrive.com, or mobile device, Microsoft says.

On top of that, additional security measures are available, including the ability to scan documents or shoot photos directly into Personal Vault. Files moved into Personal Vault cannot be shared, and previously shared items stop being shared once moved there.

Personal Vault and any open files in it close and lock automatically after a period of inactivity, and Personal Vault files are automatically synced to a BitLocker-encrypted area of the local hard drive on the user’s Windows 10 PC.

“Taken together, these security measures help ensure that Personal Vault files are not stored unprotected on your PC, and your files have additional protection, even if your Windows 10 PC or mobile device is lost, stolen, or someone gains access to it or to your account,” Microsoft says.

OneDrive provides other security features as well, including file encryption, monitoring for suspicious sign-ins, ransomware detection and recovery, virus scanning on downloads, password-protection of sharing links, and version history for all file types.

To use Personal Vault, users only need to click the feature’s icon in OneDrive. On the free and standalone 100 GB plans, up to three files can be stored in Personal Vault; on Office 365 Personal and Office 365 Home plans, the limit is the user’s total storage allowance.

[ Related: DHS Highlights Common Security Oversights by Office 365 Customers ]

[ Related: Microsoft Adds New Security Features to Office 365 ]

Copyright 2010 Respective Author at Infosec Island
  • October 1st 2019 at 13:42

Human-Centered Security: What It Means for Your Organization

Humans are regularly referred to as the ‘weakest link’ in information security. However, organizations have historically relied on the effectiveness of technical security controls, instead of trying to understand why people are susceptible to mistakes and manipulation. A new approach is clearly required: one that helps organizations to understand and manage psychological vulnerabilities, and adopts technology and controls that are designed with human behavior in mind.

That new approach is human-centred security.

Human-centred security starts with understanding humans and their interaction with technologies, controls and data. By discovering how and when humans ‘touch’ data throughout the working day, organizations can uncover the circumstances where psychological-related errors may lead to security incidents.

For years, attackers have been using methods of psychological manipulation to coerce humans into making errors. Attack techniques have evolved in the digital age, increasing in sophistication, speed and scale. Understanding what triggers human error will help organizations make a step change in their approach to information security.

Identifying Human Vulnerabilities

Human-centred security acknowledges that employees interact with technology, controls and data across a series of touchpoints throughout any given day. These touchpoints can be digital, physical or verbal. During such interactions, humans will need to make decisions. Humans, however, have a range of vulnerabilities that can lead to errors in decision making, resulting in negative impacts on the organization, such as sending an email containing sensitive data externally, letting a tailgater into a building or discussing a company acquisition on a train. These errors can also be exploited by opportunistic attackers for malicious purposes.

In some cases, organizations can put preventative controls in place to mitigate errors being made, e.g. preventing employees from sending emails externally, strong encryption of laptops or physical barriers. However, errors can still get through, particularly if individuals decide to subvert or ignore these types of controls to complete work tasks more efficiently or when time is constrained. Errors may also manifest during times of heightened pressure or stress.

By identifying the fundamental vulnerabilities in humans, understanding how psychology works and what triggers risky behavior, organizations can begin to understand why their employees might make errors, and begin managing that risk more effectively.

Exploiting Human Vulnerabilities

Psychological vulnerabilities present attackers with opportunities to influence and exploit humans for their own advantage. The methods of psychological manipulation used by attackers have not changed since humans entered the digital era, but attack techniques are more sophisticated, cost-effective and expansive, allowing attackers to effectively target individuals or to attack at considerable scale.

Attackers use the ever-increasing volume of freely available information from online and social media sources to establish believable personas and backstories in order to build trust and rapport with their targets. This information is carefully used to heighten pressure on the target, which then triggers a heuristic decision-making response. Attack techniques are used to force the target to use a particular cognitive bias, resulting in predictable errors. These errors can then be exploited by attackers.

There are several psychological methods that can be used to manipulate human behavior; one such method that attackers can use to influence cognitive biases is social power.

There are many attack techniques that use the method of social power to exploit human vulnerabilities. Attack techniques can be highly targeted or conducted at scale, but they typically contain triggers designed to evoke a specific cognitive bias, resulting in a predictable error. While untargeted ‘spray and pray’ attacks rely on a small percentage of recipients clicking on malicious links, more sophisticated social engineering attacks are becoming prevalent and successful. Attackers have realized that it is far easier to target humans than to attack technical infrastructure.

The way in which the attack technique uses social power to trigger cognitive biases will differ between scenarios. In some cases, a single email may be enough to trigger one or more cognitive bias resulting in a desired outcome. In others, the attack may gradually manipulate the target over a period of time using multiple techniques. What is consistent is that the attacks are carefully constructed and sophisticated. By knowing how attackers use psychological methods, such as social power, to trigger cognitive biases and force errors, organizations can deconstruct and analyze real-world incidents to identify their root causes and therefore invest in the most effective mitigation.

For information security programs to become more human-centred, organizations must become aware of cognitive biases and their influence on decision-making. They should acknowledge that cognitive biases can arise from normal working conditions but also that attackers will use carefully crafted techniques to manipulate them for their own benefit. Organizations can then begin to readdress information security programs to improve the management of human vulnerabilities, and to protect their employees from a range of coercive and manipulative attacks.

Managing Human Vulnerabilities

Human vulnerabilities can lead to errors that can significantly impact an organization’s reputation or even put lives at risk. Organizations can strengthen information security programs in order to mitigate the risk of human vulnerabilities by adopting a more human-centred approach to security awareness, designing security controls and technology to account for human behavior, and enhancing the working environment to reduce the impact of pressure or stress on the workforce.

Reviewing the current security culture and perception of information security should give an organization a strong indication of which cognitive biases are impacting the organization. Increasing awareness of human vulnerabilities and the techniques attackers use to exploit them, then tailoring more human-centred security awareness training to account for different user groups should be fundamental elements of enhancing any information security program.

Organizations with successful human-centred security programs often have significant overlap between information security and human resource functions. The promotion of a strong mentoring network between senior and junior employees, coupled with the improvement of the structure of working days and the work environment, should help to reduce unnecessary stress that leads to the triggering of cognitive biases affecting decision-making.

Develop meaningful mentor-mentee relationships to create an equilibrium of knowledge and understanding. Create a working environment and work-life balance that reduce stress, exhaustion, burnout and poor time management, all of which significantly increase the likelihood of errors. Finally, consider how improving or enhancing workspaces can reduce stress or pressure on the workforce, and which environment is most appropriate for each part of it: working from home, remote working, or modernized office spaces, factories or outdoor locations.

From Your Weakest Link to Your Strongest Asset

Underlying psychological vulnerabilities mean that humans are prone to both making errors, and to manipulative and coercive attacks. Errors and manipulation now account for the majority of security incidents, so the risk is profound. By helping staff understand how these vulnerabilities can lead to poor decision making and errors, organizations can manage the risk of the accidental insider. To make this happen, a fresh approach to information security is required.

A human-centred approach to security can help organizations to significantly reduce the influence of cognitive biases that cause errors. By discovering the cognitive biases, behavioral triggers and attack techniques that are most common, tailored psychological training can be introduced into an organization’s awareness campaigns. Technology, controls and data can be calibrated to account for human behavior, while enhancement of the working environment can reduce stress and pressure.

Once information security is understood through the lens of psychology, organizations will be better prepared to manage and mitigate the risks posed by human vulnerabilities. Human-centred security will help organizations transform their weakest link into their strongest asset.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.


Copyright 2010 Respective Author at Infosec Island
  • September 24th 2019 at 18:57

How Ethical Hackers Find Weaknesses and Secure Businesses

When people hear about hackers, it typically conjures up images of a hooded figure in a basement inputting random code into a computer terminal. This Hollywood cliché is far from the reality of modern-day cybersecurity work, and it’s also important to note that not all hackers are malicious.

Hacking, and its role in information security, is a fast-growing career field on a global scale. Market research predicts that spending in the cybersecurity space will exceed $181.77 billion by 2021. The global market for cybersecurity is growing, and companies now consider security an imperative.

The cybersecurity landscape faces growing threats today, with data breaches and attacks happening constantly. For instance, it’s hard to forget the infamous WannaCry ransomware attack that spread through the world, targeting Microsoft machines and bringing multiple services worldwide to their knees. The attack hit an estimated 200,000 computers across 150 countries, encrypting files in health services, motor manufacturing, telephone companies, logistics companies, and more.

So, what can we do to secure our businesses and online infrastructure? One option is to look to ethical hackers, or white hat hackers: security experts who approach your data and services through the eyes of a malicious attacker. An engagement from an ethical hacker is designed to see how your infrastructure or applications would hold up against a real-world attack.

Turning to Ethical Hackers

A commonly used term for ethical hackers attacking your systems is the “Red Team.” The term covers a broad attack surface, including attacks against people, such as social engineering, and physical attacks, such as lock picking. Would your security stop dedicated, professional attackers, or would they find holes and weaknesses unknown to you and your internal security team (also known as the Blue Team)?

The job description for an ethical hacker is simple to break down: assess the target, scope out all functionality and weaknesses, attack the system, and then prove it can be exploited. While the job can be described easily, the work involved is often large and undoubtedly complex. Additionally, when carrying out a pen-test or assessment of a client’s application or network, production safety and legality are what separate the “good guys” (ethical hackers) from the “bad guys” (malicious hackers).

Assessing the Target

When beginning an assessment of a system or application, we must have a set scope before we begin. It is illegal to attack systems without prior consent, and furthermore a waste of time to work on assets outside the predefined scope. Target assessment can be one of the most important steps in a well-performed test. Simply jumping straight in and attacking the first IP or piece of functionality we come across is a bad way to start.

The best practice is to find everything that is part of the assessment and see how it works together. We must know what the system in place was designed to do and how data is transferred throughout. Building maps with various tools gives a much greater picture of the attack surface we can leverage. The assessment of the target is commonly known as the “enumeration phase.”

At the end of this phase we should have a great place to start attacking: a full map of the system or application, hopefully with information about operating systems, service packs, version numbers and any other fingerprinting data that can lead to an effective exploit of the target.
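
For a flavor of what this phase produces, here is a minimal enumeration sketch using only Python’s standard library; the target address is hypothetical and must be explicitly in scope before anything like this is run.

    import socket

    TARGET = "10.0.0.5"                  # hypothetical in-scope host
    COMMON_PORTS = [21, 22, 25, 80, 443, 3306, 8080]

    for port in COMMON_PORTS:
        try:
            # A short timeout keeps the scan quick and unobtrusive.
            with socket.create_connection((TARGET, port), timeout=1) as sock:
                sock.settimeout(1)
                try:
                    banner = sock.recv(128).decode(errors="replace").strip()
                except socket.timeout:
                    banner = "(no banner offered)"
                print(f"{port}/tcp open   {banner}")
        except (socket.timeout, OSError):
            pass  # closed or filtered; nothing to report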

Vulnerability Analysis

All of the information gathered about the machines or applications should immediately give a good hacker a solid attack surface and the ability to identify weaknesses in the system. The internet provides a vast amount of information that can easily be matched against the discovered architecture, including lists of all known exploits and vulnerabilities already found in those systems.

There are additional tools to help with vulnerability analysis, like scanners, that flag possible points of weakness in the system or application. All of the analytic data is much easier to find and test after a thorough assessment.
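
A sketch of how those fingerprints feed the analysis, with an entirely hypothetical advisory table standing in for a real vulnerability feed or scanner output:

    # Hypothetical advisory data; in a real engagement this comes from
    # CVE feeds, vendor advisories or a commercial scanner.
    KNOWN_VULNERABLE = {
        ("OpenSSH", "7.2"): "example advisory: user enumeration flaw",
        ("Apache", "2.4.29"): "example advisory: path traversal flaw",
    }

    # Service/version fingerprints collected during enumeration.
    fingerprints = [("OpenSSH", "7.2"), ("nginx", "1.17.3")]

    for service, version in fingerprints:
        advisory = KNOWN_VULNERABLE.get((service, version))
        if advisory:
            print(f"[!] {service} {version}: {advisory}")
        else:
            print(f"[ ] {service} {version}: no match in advisory table")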

Exploitation

Exploitation is where the services of an ethical hacker make their impact. All the assessment data and vulnerability analysis in the world is useless if the tester does not know how to perform strong attacks or bypass the security mechanisms in place. Exploiting a commonly known vulnerability can be fairly straightforward if it has write-ups from other security specialists, but hands-on experience crafting your own injections and obfuscated payloads, or bypassing the blacklists and whitelists in place, is invaluable.

Furthermore, it is imperative to test with production safety in mind. Having an ethical hacker run dangerous code or tests against the system may cause untold damage. This defeats the purpose of a secure test. The objective is to prove that it is vulnerable, without causing harm or disruption to the live system.

Providing Concepts

After a test has concluded, the results of all exploits, vulnerability analysis, and even enumeration data that returned valuable system information should be documented and presented to the client. Every vulnerability should be rated for severity and potential impact; standard rating systems such as CVSS v3 are the most commonly used.
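
The CVSS v3 qualitative bands are published by FIRST, so mapping a base score to the severity label used in a report takes only a few lines; a minimal sketch:

    def cvss3_severity(score: float) -> str:
        """Map a CVSS v3 base score to its published qualitative rating."""
        if not 0.0 <= score <= 10.0:
            raise ValueError("CVSS v3 base scores range from 0.0 to 10.0")
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    # Example: an unauthenticated remote code execution scored 9.8
    print(cvss3_severity(9.8))   # -> Critical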

Additionally, the report should include a step-by-step proof of concept showing how an attacker could perform each exploit. The client should be able to follow along and reproduce the same results, demonstrating the flaw in the system. Again, only non-malicious examples should be given in the report.

Providing these proof-of-concept reports to clients, with steps on how to reproduce the issues and give non-malicious examples of how the system can be breached, is paramount to success in securing your systems.

No Perfect System

Finally, it’s important to note that no system is ever flawless. Exploits and vulnerabilities are released almost daily against every type of machine, server, application and language. Security assessments and tests of modern applications must be a continual process. This is where the role of a hacker in your organization, simulating attacks in the style of a malicious outsider, becomes invaluable.

Approaching your currently implemented security as a target to beat or bypass, instead of a defense mechanism waiting to be hit, is the strongest and fastest way to find any flaws that may already exist! Modern-day web applications have been described as living, breathing things, and neglecting their security will surely result in digital disaster!

About the author: Jonathan Rice works as a vulnerability web application specialist for application security provider WhiteHat Security. In this role, Rice has focused on manual assessments, vulnerability verification and dynamic application security testing (DAST).

Copyright 2010 Respective Author at Infosec Island
  • September 11th 2019 at 14:41

New Passive RFID Tech Poses Threat to Enterprise IoT

As RFID technology continues to evolve, IoT security measures struggle to keep pace.

The Internet of Things (IoT) industry is growing at a staggering pace. The IoT market in China alone will hit $121.45 billion by 2022 and industry analysts predict that more than 3.5 billion devices will be connected through IoT globally by 2023. 

Among the most important technologies precipitating this breakneck growth is RFID or Radio Frequency Identification. RFID-tagged devices can help track inventory, improve the efficiency of healthcare and enhance services for customers in a variety of industries. 

For example, many hospitals across the world are beginning to test the use of on-metal RFID tags to not only track their inventory of surgical tools--such as scalpels, scissors, and clamps--but to ensure that each tool is properly sterilized and fully maintained prior to new operations. The implications of the widespread application of RFID tracking in the healthcare system would be a dramatic reduction in the number of avoidable infections due to unsterilized equipment and a sharp increase in the efficiency of surgical procedures.

IDenticard Vulnerabilities in PremiSys ID System

Although passive RFID technology shows much promise for streamlining and improving the management of IoT, unresolved vulnerabilities in the technology’s security remain a bottleneck for both the implementation of RFID and the growth of the IoT industry. 

In January, the research group at Tenable discovered multiple zero-day vulnerabilities in the PremiSys access control system developed by IDenticard, a US-based manufacturer of ID, access and security solutions. 

The vulnerabilities - which included weak encryption and a default username-password combination for database access - would have allowed an attacker to gain complete access to employee personal information of any organization using the PremiSys ID system. Though IDenticard released a patch to resolve the vulnerabilities, the incident points to growing security risks around network-connected, RFID-tagged devices.

In the summer of 2017, these security risks were put on full display when researchers from the KU Leuven university discovered a simple method to hack the Tesla Model S’s keyless entry fob. The researchers claim these attacks were possible (prior to the security patch Tesla rolled out in June 2018) because of the weak encryption used by the Pektron key fob system.

Despite the numerous security concerns that have surfaced in recent years, RFID is still one of the most tenable solutions for increasing the efficiency and safety of IoT. That said, for enterprise to take full advantage of the benefits of RFID technology, stronger security protocols and encryptions must be implemented. 

Compounding the threat is the fact that many RFID-enabled enterprise networks are at an increased risk of breaches (especially those in the Industrial IoT, IIoT) due to their inability to detect vulnerabilities and breaches in the first place. In fact, a recent study published in January by Gemalto discovered that nearly 48% of companies in all industries are unable to detect IoT device breaches. 

A Bain & Co. study pointed to security as the major obstacle to full-scale RFID/IoT adoption. With data breaches costing, on average, more than $3.86 million, or $148 per record, new security measures must be taken if IoT is to fulfill its promise of en masse real-time connection between businesses, consumers, and their devices. Unsurprisingly, in the Gemalto survey of 950 of the world’s leaders in IT and IoT businesses, more than 79% said they want more robust guidelines for comprehensive IoT security.

According to The Open Web Application Security Project (OWASP), there are ten primary vulnerabilities present in IoT and many of these risk factors are directly related to the implementation of RFID technology. 

Securing RFID-Enabled Enterprise IoT Devices

Of the many vulnerabilities in RFID/IoT devices and technologies, few impact consumers as directly as those presented by RFID scanners. 

RFID scanners can glean information from any RFID-enabled device, not just credit cards and phones. Our IoT and IIoT, both growing at a breakneck pace and with security features lagging behind, are prime targets for exploitation. 

Security analysts have raised concerns about the safety of data traveling on these networks for years. In fact, a study conducted by IBM found that fewer than 20% of organizations routinely test their IoT apps and devices for security vulnerabilities. With data breaches growing at an alarming pace--2018 alone resulted in the exposure of more than 47.2 million records--many customers are asking, “What protections do we have against the growing threat to connected devices?”

As it happens, quite a lot. In 2017, a research group at the IAIK Graz University of Technology created an RFID-based system aiming to secure RFID data on an open Internet of Things (IoT) network. The engineers designed a novel RFID tag that exclusively uses the Internet Protocol Security layer to secure the RFID tag and its sensor data, regardless of what type of RFID scanner attempts to steal the tag data.

Their innovation lies in collecting the RFID sensor data first through a virtual private network (VPN) application. Using the custom RFID tag, communications are routed through the IPsec protocol, which provides secure end-to-end encryption between an RFID-enabled IoT device and the network to which it’s connected. 

Solutions that identify and resolve potential IoT device vulnerabilities still need more work before we can expect widespread implementation. For one thing, the IPsec protocol, which is available on most consumer VPN applications, does not secure networks with 100% certainty.

Researchers at Horst Görtz Institute for IT Security (HGI) at Ruhr-Universität Bochum (RUB) recently discovered a Bleichenbacher vulnerability in numerous commercial VPNs, including those used by Cisco, Clavister, Huawei and Zyxel.

RFID Breaking Big in the Enterprise Market

When it comes to RFID security, conversations gravitate toward consumer applications like contactless payment fraud or bugs in wearable technology. Though RFID spending is mostly business-to-consumer, the next largest spending category is the enterprise, comprising nearly 30% of the total RFID market.

RFID’s market size is projected to grow an additional 30% through 2020, as enterprise embraces RFID tags in everything from supply-chain management to security keycard systems. One of the big enablers of IoT in enterprises has been the simple addition of “passive” RFID tags for day-to-day operational functions. 

Passive RFID systems are composed of RFID tags, readers/antennas, middleware and, in many cases, RFID printers.

With the rate at which the technology has evolved, the modern market now has access to thousands of tag types with increased range and sensitivity, plus a plethora of substance-specific designs (e.g. tags made specifically for metal, liquid, and other materials). This technology allows for unprecedented tracking and security of inventory, personnel, and other company assets.

Passive RFID tags, which carry no power source of their own, cost roughly 1/100th of the price of their “active” counterparts. Although they have a much shorter range, they need no battery, drawing their power instead from the electromagnetic energy emitted by nearby RFID readers. A tag itself cannot be assigned an IP address, but the reader is part of the IoT network and is identified by its IP address. That makes the reader vulnerable, as we’ve seen, to the same kinds of attacks that affect other networked devices when steps have not been taken to protect that address.

Because of these factors, passive RFID tags are ideal for companies and supply chains operating in extreme heat and cold, dust, debris and exposure to other elements.

Final Thoughts

With all of this taken into consideration, the question still remains, “What can the average consumer do to protect their IoT devices from hackers?”

One of the simplest solutions is to make a minor investment in some kind of RFID-blocking or jamming wallet card. If you have first-generation contactless cards, ask your bank or credit card company to upgrade you to the encrypted second generation. While your data might still be skimmed, it will be unreadable to the perpetrator thanks to the strength of modern encryption protocols.

For example, brute-forcing a standard 256-bit key would take 50 supercomputers many billions of years, and the impracticality of such an attack leads cybercriminals to target easier prey.
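
To see why a skimmed-but-encrypted payload is worthless, consider a minimal sketch, assuming Python’s cryptography package, of AES-256-GCM authenticated encryption; the payload is illustrative only, and the exact protocols used on real cards differ.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # A 256-bit key, provisioned into the card's secure element at issue time.
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    # A fresh nonce per transaction means identical data never produces
    # identical ciphertext over the air interface.
    nonce = os.urandom(12)
    payload = b"PAN=4111111111111111;EXP=12/22"   # illustrative track data

    ciphertext = aesgcm.encrypt(nonce, payload, None)

    # This is all a skimmer ever captures: without the key, the bytes
    # are indistinguishable from random noise.
    print(ciphertext.hex())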

Ultimately, the accelerating pace of RFID tech will make our lives more convenient. With greater convenience, however, comes a greater need for security solutions. When it comes to RFID, one can only hope that the good guys stay one step ahead in the ongoing crypto arms race.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.


Copyright 2010 Respective Author at Infosec Island
  • September 11th 2019 at 14:33