Infosec Island Latest Articles

Six Reasons for Organizations to Take Control of Their Orphaned Encryption Keys

A close analysis of the cybersecurity attacks of the past shows that, in most cases, the head of the cyber kill chain is formed by some kind of privilege abuse. In fact, Forrester estimates that compromised privileged credentials play a role in at least 80 percent of data breaches. This is the reason privileged access management (PAM) has gained so much attention over the past few years. With securing and managing access to business-critical systems at its core, PAM aims to provide enterprises with a centralized, automated mechanism to regulate access to superuser accounts. PAM solutions ideally do this by facilitating end-to-end management of the privileged identities that grant access to these accounts.

However, the scope of privileged access security is often misconceived and restricted to securing and managing root account passwords alone. Passwords are, beyond a doubt, noteworthy privileged access credentials. But the constant evolution of technology and the expanding cybersecurity perimeter call for enterprises to take a closer look at the other avenues of privileged access, especially encryption keys, which, despite serving as access credentials for huge volumes of privileged accounts, are often ignored.

This article focuses on the importance of encryption key management: why SSH key and SSL certificate management is vital, and how enforcing it can effectively bridge the gaps in your enterprise privileged access security strategy.

1. Uncontrolled numbers of SSH keys trigger trust-based attacks

According to a Ponemon survey, the average organization houses over 23,000 keys and certificates, many of which grant sweeping access to root accounts. Likewise, a recent report on the impact of unsecured digital identities states that 71% of respondents had no idea how many keys existed in their organization or the extent of their access. Without a centralized key management approach, anybody in the network can create or duplicate any number of keys. These keys are often generated ad hoc as needed and soon forgotten once the task they are associated with is done. Malicious insiders can take advantage of this massive ocean of orphaned SSH keys to impersonate admins, hide comfortably behind encryption, and take complete control of target systems.

2. Static keys create permanent backdoors

Enterprises should periodically rotate their SSH keys to avoid privilege abuse, but huge volumes of unmanaged SSH keys make key rotation an intimidating task for IT administrators. Moreover, without proper visibility into which keys can access what, there is widespread apprehension about rotating keys for fear of accidentally blocking access to critical systems. This leads to a surge of static SSH keys, which have the potential to function as permanent backdoors.

3. Unintentional key duplication increases the chance of privilege abuse

For the sake of efficiency, SSH keys are often duplicated and circulated among various employees in an organization. Such unintended key duplication creates a many-to-many key-user relationship, which greatly increases the possibility of privilege abuse. This also makes remediation a challenge, since administrators have to spend a good amount of time revoking keys to untangle the existing relationships before creating and deploying fresh, dedicated key pairs.

4. Failed SSL certificate renewals hurt your brand's credibility

SSL certificates, unlike keys, have a set expiration date. Failing to renew SSL certificates on time can have huge consequences for website owners as well as end users. Browsers don't trust websites with expired SSL certificates; they throw security error messages when end users try to access such sites. One expired SSL certificate can drive away potential customers in an instant, or worse, expose site visitors to personal data theft.

5. Improper SSL implementations put businesses at risk

Many businesses rely completely on SSL for internet security, but they often don't realize that a mere implementation of SSL in their network is not enough to eliminate security threats. SSL certificates need to be thoroughly examined for configuration vulnerabilities after they are installed. When ignored, these vulnerabilities act as security loopholes which cybercriminals exploit to manipulate SSL traffic and launch man-in-the-middle (MITM) attacks.

6. Weak certificate signatures go unheeded

The degree of security provided by any SSL certificate depends on the strength of the hashing algorithm used to sign the certificate. Weak certificate signatures make them vulnerable to collision attacks. Cybercriminals exploit such vulnerabilities to launch MITM attacks and eavesdrop on communication between users and web servers. Organizations need to isolate certificates that bear weak signatures and replace them with fresh certificates containing stronger signatures. 
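
A simple triage rule is to treat any MD5- or SHA-1-based signature as weak. The sketch below is illustrative only: the algorithm name would come from an inspection tool such as `openssl x509 -noout -text` (its "Signature Algorithm" line) or a certificate-parsing library, and the classification list is an assumption rather than an exhaustive standard:

```python
# Hash functions generally considered too weak for certificate signatures.
WEAK_HASHES = ("md2", "md5", "sha1")

def is_weak_signature(sig_alg: str) -> bool:
    """Classify an X.509 signature algorithm name, e.g.
    'sha1WithRSAEncryption' or 'ecdsa-with-SHA384', by the hash it uses."""
    name = sig_alg.lower().replace("-", "")
    return any(h in name for h in WEAK_HASHES)
```

Certificates flagged by a rule like this are candidates for reissuance with a SHA-2 family signature.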

Bridging the gaps in your PAM strategy

All the above scenarios highlight how important it is to widen the scope of your privileged access security strategy beyond password management. Even with an unyielding password manager in place, cybercriminals have plenty of room to circumvent security controls and gain access to superuser accounts by exploiting various unmanaged authentication identities, including SSH keys and SSL certificates. Discovering and bringing all such identities that are capable of granting privileged access under one roof is one important step enterprises should take to bridge gaps in their privileged access security strategy. After all, today's unaccounted-for authentication identities could become tomorrow's stolen privileged credentials.

About the author: Shwetha Sankari is an IT security product consultant at ManageEngine. With expertise in content marketing, she spends her time researching the latest trends in the IT security industry and creating informative user-education content.

Copyright 2010 Respective Author at Infosec Island
  • December 19th 2019 at 21:32

The Cybersecurity Skills Gap: An Update for 2020

The gap in trained, experienced cybersecurity workers is one of those perennial problems: much ink is spilled every year in assessing the scale of the problem, and what can be done about it. We have recently pointed out, for instance, the importance of stopping attacks before they happen, and the fact that you can’t hire your way out of the skills shortage.

As we move into 2020, it's apparent that despite this focus on the problem, it has not been solved. There is still a huge skills gap when it comes to cybersecurity, and in many ways, it is getting worse. According to Cyber Crime Magazine, there may be as many as 3.5 million unfilled cybersecurity jobs by 2021, and recent high-profile cyber breaches provide further evidence that the problem is already becoming acute.

That said, there are some new trends emerging when it comes to managing this crisis. In this article, we'll take a look at some of the innovative ways that companies are getting around the problem.

The Widening Gap

First, some context. At the most basic level, the skills gap in cybersecurity is the product of a simple fact: there are more cybersecurity positions that need to be filled than there are qualified graduates to fill them. This is despite colleges encouraging students to study cybersecurity, and despite companies encouraging their existing employees to retrain.

Look a little deeper, however, and some other reasons for the shortage become apparent. One is that a worrying number of qualified professionals are leaving the cybersecurity sector. At cybersecurity conferences, it’s not uncommon to see entire tracks about managing mental health, addiction, and work stress. As these experienced professionals leave the sector, this puts more pressure on younger, less experienced colleagues.

Secondly, a major source of stress for cybersecurity professionals is that they are often assigned total (or at least partial) responsibility for the losses caused by data breaches. In many cases, this is unfair, but persists because many companies still see "security" as a discrete discipline that can be dealt with in isolation from other IT tasks, corporate processes, and reputation management.

Training and Development

Addressing these issues requires more than just increasing the number of qualified graduates. Instead, businesses need to take more innovative approaches to hiring, training, and retaining cybersecurity staff.

These approaches can be broken down into three types. The first is that cybersecurity training needs to change from an event into a process. Some have argued that traditional, classroom-based cybersecurity training doesn’t reflect the field and that this training needs to be delivered in a more vocational way. Instead of hiring one cybersecurity expert, companies should look to train all of their employees in the basics of cybersecurity. 

In fact, even cybersecurity professionals might benefit from this type of training. Although companies are often resistant to spending more on employee training, investing in training offers one of the highest returns on investment a business can make. In addition, recent developments have made it clear that continuous training is needed – concerns about the security implications of 5G networks, for example, are now forcing seasoned professionals to go back to school.

Secondly, dramatic gains in cybersecurity can be achieved without employing dedicated staff. One of the major positive outcomes of the cybersecurity skills gap, in fact, has been the proliferation of free, easy to use security tools (like VPNs and secure browsers), which aim to make cybersecurity "fool-proof", even for staff with little or no technical training. These tools can be used to limit the risk of cyberattacks without the necessity of complex (and expensive) dedicated security solutions.

Third, the rise of "security as a service" suggests that the cybersecurity sector of the future is one that relies on outsourcing and subcontracting. Plenty of companies already outsource business processes that would have been done in-house just a few years ago – everything from creating a website to pen testing – and taking this approach may provide a more efficient way to use the limited cybersecurity professionals that are available.

AI Tools: The Future?

Another striking feature of the cybersecurity skills debate, and one which is especially apparent as we move into 2020, is the level of discussion around AI tools. 

Unfortunately, assessing how effective AI tools are at improving cybersecurity is difficult. That's because many cybersecurity professionals are skeptical that AI is a useful ally in this fight. In some ways, they are undoubtedly correct: in a recent study, one popular AI-powered antivirus was defeated with just a few lines of text appended to popular malware.

On the other hand, it must be recognized that cybersecurity pros have a vested interest in talking down how effective AI tools are. If AIs were able to protect networks on their own, after all, cybersecurity pros would be out of a job. Or rather they would be if there were not so many unfilled cybersecurity vacancies.

Ultimately, given the lack of qualified or trained professionals, AI tools are likely to continue to be a major focus of investment for companies from 2020 onwards. This, in turn, entails that IT professionals overcome some of their reticence about working with them, and begin to see AIs less as competitors and more as collaborators.

The Bottom Line

It's also worth pointing out that the individual trends we've mentioned can be seen as working against each other. In some cases, companies have attempted to overcome the skills gap by training large numbers of employees to perform cybersecurity roles. Others have gone in the other direction – outsourcing specific aspects of their cybersecurity to hyper-specialized companies. Others are taking a gamble that AI tools are going to eventually replace the need for (at least some of their) cybersecurity professionals.

Which of these trends is eventually going to dominate the market remains to be seen, but one thing is clear: 2020 is a critical juncture for the entire cybersecurity sector.

  • December 18th 2019 at 06:11

Modernizing Web Filtering for K-12 Schools

In today’s world of 24/7 internet access, network administrators need next-generation web filtering to effectively allow the internet traffic they do want and stop the traffic they don’t. How does this affect the education vertical, with students in K-12? Well, for starters, a lot has changed since the Children’s Internet Protection Act (CIPA) was enacted in 2000. Nearly two decades later, the academic landscape has shifted drastically: students are no longer working from computer labs on a single network, but from a variety of personal devices that expect consistent Wi-Fi and cloud access.

The internet is vast, and as it continues to rapidly evolve, so should the filtering tactics used to keep students safe. But while the law requires schools and public libraries to block or filter obscene and harmful content, that still leaves room for interpretation.

How Much Web Filtering is Too Much?

A 2017 survey shows that 63% of K-12 teachers regularly use laptops and computers in the classroom, making web filtering in K-12 environments a crucial topic. With the rise of tech-savvy students and classroom settings, precautions must be taken; however, there is such a thing as ‘over-filtering’ and ‘over-blocking.’

Filtering rules that prevent students from accessing crucial learning and research materials have become a rising issue, one that schools and parents are constantly raising with the FCC. As The Atlantic has noted, excessive filtering can limit students’ research on topics that can be useful to, for example, debate teams or students seeking anti-bullying resources. Instead of enforcing the same rules across the entire school or district, network administrators need a solution that offers flexibility and customizable options, pinpointing the specific websites, applications, and categories that each grade level may access.

Working Together to Clearly Define Web Access

In the past, schools practiced the over-zealous “block everything” approach. Now, it is important for school administrators and IT departments to work together to define web access by grade, age, project duration, and keyword search. This allows students access to educational resources while administrators keep acceptable parameters in place, blocking inappropriate content from sites or applications.

Assessing Network Necessities

Academic boards can take it one step further by putting access controls on all school networks, including Wi-Fi, to control the use of personal devices during school hours.

In addition to web filtering, adding controls such as enforcing safe search on popular search engines and using Restricted Mode on YouTube will increase productivity, limit cyberbullying, and deny access to students searching for ways to inflict self-harm or perform other acts of violence.
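
One common way to enforce these controls network-wide is at the DNS layer, using the restricted endpoints Google publishes for this purpose. The zone-file-style sketch below is conceptual; the exact configuration depends on the school's DNS server or filtering appliance:

```
; Force Google SafeSearch for all clients resolving through this server
www.google.com.   IN CNAME forcesafesearch.google.com.

; Force YouTube Restricted Mode (restrictmoderate.youtube.com is the
; moderate tier)
www.youtube.com.  IN CNAME restrict.youtube.com.
m.youtube.com.    IN CNAME restrict.youtube.com.
```

Because every device on the network resolves through the same server, this approach covers personal devices on school Wi-Fi without per-device configuration.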

Why limit students’ education by blocking crucial learning and research materials? By custom-configuring a network to meet the needs of each grade level and classroom, educators are encouraging students to become academically resourceful. IT departments and school administrators must form a partnership to generate a solution that will allow students, teachers, and administrators access to the educational tools they need.

It’s time to break down the glass wall and acknowledge the presence of educational materials and information that is now available through various media channels and platforms. The internet, which was once a luxury accessible to only a few, is now an amenity available to almost anyone - including young students - signifying the importance of fine-tuned web filters and content security across K-12 networks.

  • December 18th 2019 at 06:05

University of Arizona Researchers Going on Offense and Defense in Battle Against Hackers

The global hacker community continues to grow and evolve, constantly finding new targets and methods of attack. University of Arizona-led teams will be more proactive in the battle against cyberthreats thanks to nearly $1.5 million in grants from the National Science Foundation.

The first grant, for nearly $1 million, will support research contributing to an NSF program designed to protect high-tech scientific instruments from cyberattacks. Hsinchun Chen, Regents Professor of management information systems at the Eller College of Management, says the NSF's Cybersecurity Innovation for Cyberinfrastructure program is all about protecting intellectual property, which hackers can hold for ransom or sell on the darknet.

"You have infrastructure for people to collect data from instruments like telescopes," Chen said. "Scientists use that to collaborate in an open environment. Any environment that is open has security flaws."

A major hurdle to protecting scientific instruments, Chen said, is that the risks to science facilities have not been properly analyzed. He will lead a team using artificial intelligence to study hackers and categorize hundreds of thousands of risks, then connect those risks to two partner facilities at the University of Arizona.

Chen's team is working with CyVerse, a national cyberinfrastructure project led by the University of Arizona, and the Biosphere 2 Landscape Evolution Observatory project. CyVerse develops infrastructure for life sciences research and had researchers involved in this year's black hole imaging. Biosphere 2's LEO project collects data from manmade landscapes to study issues including water supply and how climate change will impact arid ecosystems.

The team will comb through hacker forums to find software tools designed to take advantage of computer system flaws, scan CyVerse and LEO internal and external networks, and then link specific tools found in the forums to specific network vulnerabilities.

"The University of Arizona is a leader in scientific discovery, and we are actively working on solutions to the world's biggest challenges. To do that, it is imperative to keep our state-of-the-art instruments safe from cyberattacks," said UArizona President Robert C. Robbins. "Hsinchun Chen is once again at the forefront of innovation in cybersecurity infrastructure, and this funding will help ensure the data and discoveries at CyVerse and Biosphere 2 are protected, which ultimately enables our researchers to keep working toward a bright future for us all."

Chen's co-principal investigators on the project include: Mark Patton, senior lecturer in management information systems; Peter Troch, science director at Biosphere 2; Edwin Skidmore, director of infrastructure at the BIO5 Institute, which houses CyVerse; and Sagar Samtani, assistant professor in the University of South Florida information systems and decision sciences department and one of Chen's former students.

Chen is also leading an effort to improve the process of collecting and analyzing data from international hacker communities. The NSF, through its Secure and Trustworthy Cyberspace program, has awarded a $500,000 grant to Chen and a team of researchers to gather and analyze data on emerging threats in international hacker markets operating in Russia and China.

"We're creating infrastructure and technologies based on artificial intelligence to study darknet markets," Chen said, "meaning the places where you can buy credit cards, malware to target particular industries or government, service to hack other people, opioids, drugs, weapons — it's all part of the dark web."

The effort will focus on developing techniques to address challenges in combating international hacking operations, including the ability to collect massive amounts of data and understand common hacker terms and concepts in other countries and languages.

Chen's co-principal investigator on the research is Weifing Li, assistant professor of management information systems at the University of Georgia.

SOURCE: The University of Arizona
  • December 4th 2019 at 18:01

Securing the Internet of Things (IoT) in Today's Connected Society

The Internet of Things (IoT) promises much: from enabling the digital organization, to making domestic life richer and easier. However, with those promises come risks: the rush to adoption has highlighted serious deficiencies in both the security design of IoT devices and their implementation.

Coupled with increasing governmental concerns around the societal, commercial and critical infrastructure impacts of this technology, the emerging world of the IoT has attracted significant attention.

While the IoT is often perceived as cutting edge, similar technology has been around since the last century. What has changed is the ubiquity of high-speed, low-cost communication networks, and a reduction in the cost of compute and storage. Combined with a societal fascination with technology, this has resulted in an expanding market opportunity for IoT devices, which can be split into two categories: consumer and industrial IoT.

Consumer IoT

Consumer IoT products often focus on convenience or adding value to services within a domestic or office environment, focusing on the end user experience and providing a rich data source that can be useful in understanding consumer behavior.

The consumer IoT comprises a set of connected devices, whose primary customer is the private individual or domestic market. Typically, the device has a discrete function which is enabled or supplemented by a data-gathering capability through on-board sensors and can also be used to add functionality to common domestic items, such as refrigerators. Today’s 'smart' home captures many of the characteristics of the consumer IoT, featuring an array of connected devices and providing a previously inaccessible source of data about consumer behavior that has considerable value for organizations.

Whilst the primary target market for IoT devices is individuals and domestic environments, these devices may also be found in commercial office premises – either an employee has brought in the device or it has been installed as an auxiliary function.

Industrial IoT

Industrial IoT deployments offer tangible benefits associated with digitization of processes and improvements in supply chain efficiencies through near real-time monitoring of industrial or business processes.

The industrial IoT encompasses connected sensors and actuators associated with kinetic industrial processes, including factory assembly lines, agriculture and motive transport. Whilst these sensors and actuators have always been prevalent in the context of operational technology (OT), connectivity and the data processing opportunities offered by cloud technologies mean that deeper insight and near real-time feedback can further optimize industrial processes. Consequently, the industrial IoT is seen as core to the digitization of industry.

Examples of industrial usage relevant to the IoT extend from manufacturing environments, transport, utilities and supply chain, through to agriculture.

The IoT is a Reality

The IoT has become a reality and is already embedded in industrial and consumer environments. It will further develop and become a critical component of not just modern life, but critical services. Yet, at the moment, it is inherently vulnerable, often neglects fundamental security principles and is a tempting attack target. This requires a change.

There is a growing momentum behind the need for change, but a lot of that momentum is governmental and regulatory-focused which, as history tells us, can be problematical. The IoT can be seen as a form of shadow IT, often hidden from view and purchased through a non-IT route. Hence, responsibility for its security is often not assigned or misassigned. There is an opportunity for information security to take control of the security aspects of the IoT, but this is not without challenges: amongst them skills and resources. Nevertheless, there is a window of opportunity to tame this world, by building security into it. As most information security professionals will know, this represents a cheaper and less disruptive option than the alternative.

In the face of rising, global security threats, organizations must make systematic and wide-ranging commitments to ensure that practical plans are in place to acclimate to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles. Enterprises with the appropriate expertise, leadership, policy and strategy in place will be agile enough to respond to the inevitable security lapses. Those who do not closely monitor the growth of the IoT may find themselves on the outside looking in.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

  • November 19th 2019 at 15:16

What Is Next Generation SIEM? 8 Things to Look For

The SIEM market has evolved, and today most solutions call themselves “Next Generation SIEM” (NG-SIEM). An effective NG-SIEM should provide better protection and, equally important if not more so, a much more effective user experience. What should you look for when evaluating a next generation SIEM?

The state of cybersecurity has evolved one threat at a time, with organizations constantly adding new technologies to combat new threats. The result? Organizations are left with complex and costly infrastructures made up of many products that are out of sync with one another, and thus simply cannot keep pace with the velocity of today’s dizzying threat landscape.

Traditional security information and event management (SIEM) solutions tried to make sense of the mess but fell short. Then came “Next Generation SIEM” or NG-SIEM. No vendor today will admit that they sell legacy SIEM, but there is no ISO style organization doling out official NG SIEM stamps of approval. So how is a security professional to know if the technology in front of him or her really brings the benefits they need, or if it’s just another legacy vendor calling itself NG-SIEM?

The basic capabilities of legacy SIEM are well known – data ingestion, analytics engines, dashboards, alerting and so on. But with these legacy SIEM capabilities your security team will still drown in huge amounts of logs. That’s because even many NG-SIEMs in the market still let copious amounts of threats and logs pass through – straight to the doorstep of your security team.

Working Down the Pyramid

A true Next Generation SIEM will enable the security team to work from the top down, rather than the bottom up. Picture a pyramid of security data, with raw logs and alerts at the bottom and risk-prioritized incidents at the top. Most security analysts have to sift through the bottom layer of logs and alerts – or create manual correlation rules for new attacks that can then move logs up the pyramid. This is extremely time-consuming and frustrating. Essentially, security teams (especially small teams of one or two analysts) simply don’t have the bandwidth to go through all the logs, meaning attacks slip through the cracks (and analysts burn out).

Artificial Intelligence technologies available today can help to automatically create correlation rules for existing attacks - and even new attacks - before they occur. The significance of this for security teams is enormous: It means they can begin at the top of the pyramid by going through a small number of logs.  For those threats the analyst deems require further examination, the mid-level and raw data needs to be readily available and easily searchable. 

The Checklist for NG-SIEM

To make sure your NG-SIEM of choice will be effective, look for the following capabilities:

  1. Data lake – a solution that is able to ingest ALL types of data from various sources, making sure data retention can be supported, with very high search performance, including securing the data in transit and at rest.
  2. Data classification – relies on structured and unstructured data classification technologies (such as NLP) to sort all collected data into security classes such as MITRE ATT&CK techniques and tactics, representing the data in one language. This allows much faster investigation.
  3. Behavioral analytics – Built in NTA and UEBA engines. These engines by themselves lack the ability to cover the entire cyber kill chain, therefore need to be part of the NG-SIEM in order to allow correlating them with other signals, thus reducing the noise that typifies them.
  4. Auto-Investigation (or SOAR) can mean many things. The bottom line is that effective auto-investigation needs both to perform prioritization (entity prioritization, supporting all identity types including ip, host, user, email, etc.) and allow impact analysis. Impact analysis is the ability to analyze the level of actual or potential impact that each risk-prioritized entity has on the organization, so that response actions can be prioritized effectively.  
  5. Auto-Mitigation – will not necessarily be implemented on day one, however, a NG-SIEM must have the ability to automatically execute mitigation actions, even if these, in the beginning, are triggered in very narrow security use cases.
  6. Automation – Automation – Automation – nothing can be 100% automated, but in general the NG-SIEM Vendor needs to present at least 80% automation of the legacy SIEM operations. Otherwise we are missing the whole point of what NG-SIEM is all about, supporting the data pyramid approach.
  7. Data relevancy analyst support tools – Manual investigation will always be part of the analyst’s job. A NG-SIEM must present search and hunting tools that support the analyst’s advanced investigation actions, and response. In this way the NG-SIEM will support the analyst efficiently in their route of investigating the data from the top of the pyramid, through only the relevant (related) information at the bottom of it. This way we make sure advanced investigations are done quickly and efficiently.
  8. Community - solutions which have an opensource component will create a dynamic avenue for constant improvement of the NG-SIEM, through community contributions.
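
To make the prioritization described in item 4 concrete, here is a deliberately simplified sketch: alerts are rolled up per entity (user, host, IP, email) and ranked so the analyst starts from the top of the pyramid. Real NG-SIEM scoring also weighs asset criticality, kill-chain stage, and signal confidence; the field names here are illustrative assumptions:

```python
from collections import defaultdict

def prioritize_entities(alerts: list[dict]) -> list[tuple[str, float]]:
    """Roll alert severities up to the entity level and rank entities by
    total risk, highest first. Each alert names one entity (of any identity
    type) and carries a numeric severity."""
    risk: dict[str, float] = defaultdict(float)
    for alert in alerts:
        risk[alert["entity"]] += alert["severity"]
    return sorted(risk.items(), key=lambda item: item[1], reverse=True)
```

An analyst working this ranked list top-down touches only a handful of high-risk entities, drilling into the underlying raw logs only when an entity warrants deeper investigation.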

All of the above will create a SIEM with a user experience which allows security analysts to work top down rather than bottom up, starting with the highest risk data.

A SIEM platform that can tick off all these boxes will provide performance that is truly “next generation” and enable the organization to respond faster to relevant threats, at lower cost, improved ROI, and will make for a stable and happy security team.

About the author: Avi Chesla is the founder and CEO of empow (empow.co), a cyber security startup disrupting the SIEM category with its "no rules" AI- and NLP-based i-SIEM, integrated with the Elastic Stack. Before empow he was CTO at Radware. Avi holds 25 patents in the cyber security arena.

  • November 14th 2019 at 13:59

Cybersecurity and Online Trading: An Overview

Trade and cybersecurity are inherently linked. The promise of the information revolution was always that it would allow people to connect internationally, and that it would make international investment available for everyday citizens.

It has certainly done that, but as trade and investment grow ever more complex, the risks also grow. Alongside the development of international investment networks has developed another, shadowy network of hackers and unscrupulous investment companies. As the Internet of Things (IoT) and Artificial Intelligence (AI) technologies are adopted, the complexity and vulnerability of trading platforms is also going to increase. 

In this article, we’ll take a look at how and why the risks of international trade are increasing, and the political response to this.

The Security Risks Of Trade

There is one primary reason why digital trade is more at risk from cyberattack than ever before: a huge increase in the number of people using online trading platforms. Whilst this growth has greatly expanded the ability of individuals to invest internationally, it has also opened up many opportunities for hackers.

In other cases, technologies that have been developed in order to increase the security of international trade can have the opposite effect. The move to cloud storage and Software as a Service (SaaS), for example, has been driven by the perception that there are many security benefits of cloud storage: as research firm BlueTree.ai notes, 83 percent of successful American businesses were planning a SaaS strategy for the coming year, due in part to data security concerns.

Whilst cloud storage can be a more secure way for traders to protect their data (and profits), cloud systems are also an order of magnitude more complex than more 'traditional' trading systems. That means they require similarly complex cybersecurity protocols to be put in place in order to stop the spread of malware infection, or simply the interception of sensitive commercial data.

The Political Response

These concerns have led many governments to seek to regulate and control digital trading, in order to protect both individuals and firms against cyberattack. According to some estimates, up to 50 countries have now put in place – or are planning to put in place – policies that seek to limit the vulnerability of their citizens.

At the moment, however, these measures have largely been adopted on a per-country basis. Since international trading is, by definition, international, this has severely limited the efficacy of these systems. 

Add to the simmering mix the reality that many individual investors simply don’t have the technical know-how to avoid scams and hacks. The foreign exchange (Forex) market, in particular, has had a reputation for being a sort of online Wild West ever since it opened to retail traders in the late ’90s. Many jumped in (and continue to do so) without even a rudimentary knowledge of basic currency trading strategy, which contributes to the steady and still almost unbelievable 96% failure rate. Combine these poor trading skills with a mostly unregulated brokerage industry and you have a perfect storm preying on mass ignorance.

And this was before cryptocurrency was even a glimmer of a whitepaper in Satoshi Nakamoto’s probably collective head. If Forex is the equivalent of facing down the fastest gun in Dodge City at high noon with a cap pistol, trading cryptocurrency is even more dangerous.  

Leading governments, to their credit, have recognized this minefield. The European Union has identified “a need for closer cooperation at a global level to improve security standards, improve information, and promote a common global approach to network and information security issues." The US has also made similar moves, and its most recent Cybersecurity Strategy reaffirms the need to “strengthen the capacity and interoperability of those allies and partners to improve our ability to optimize our combined skills, resources, capabilities, and perspectives against shared threats."

There is, however, a very fine balance to be drawn between security and freedom. Any restrictions put in place in order to improve the security of international trading networks risk limiting the ability of individuals and companies to invest across borders. Given the benefits that this kind of decentralized trading has brought the world economy, an over-eager implementation of cross-border cybersecurity systems also risks undermining the profitability of many firms.

The Future

Though these issues are far from being resolved, some consensus on the direction of travel is emerging. The Brookings Institution has recently outlined a number of key principles that will govern the way that international trade will be secured in the years to come.

One of the most important is to ensure access to information across international boundaries. Whilst this may sound like it would increase the opportunities for this data to be stolen, in reality this kind of information sharing limits the risks inherent in the localization of financial records. It is strange to note, in fact, that in this regard the way that international trade is being secured bears many similarities to the kinds of decentralized systems used in cryptocurrency exchanges.

Another key area for development will be the standardisation of cybersecurity policies across territories. The International Organization for Standardization (ISO) has recently developed a number of cybersecurity standards that aim to help countries develop compatible ways of securing international trade. These policies can then be integrated into international trade agreements, ensuring that criminals and unscrupulous companies cannot escape justice by fleeing to another jurisdiction.

Finally, there is a building consensus – not just in government but also in industry – that a risk-based approach to cybersecurity needs to be adopted when it comes to securing international trade. This approach is one that has been developed in order to assuage the fears that regulation could stifle trade flows: instead of adopting a 'tick-box' approach to cybersecurity compliance, companies should carefully assess their threat profile before deciding which counter-measures to put in place.

Trust and Security

Ultimately, international digital trade is built on trust, and this will need to be maintained in order to ensure profitability for both individual and institutional investors. 

At the broadest level, as complex networks get harder to secure, there will need to be much more dialogue between policy makers and cybersecurity experts. Building bridges between these communities will support the development of effective cybersecurity practices without putting in place unnecessary trade barriers.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

  • October 25th 2019 at 19:52

Artificial Intelligence: The Next Frontier in Information Security

Artificial Intelligence (AI) is creating a brand new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. However, like humans, they will be flawed, but capable of achieving incredible results.

AI is already finding its way into many mainstream business use cases and business and information security leaders alike need to understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business. Organizations use variations of AI to support processes in areas including customer service, human resources and bank fraud detection. However, the hype can lead to confusion and skepticism over what AI actually is and what it really means for business and security. 

What Risks Are Posed by AI?

As AI systems are adopted by organizations, they will become increasingly critical to day-to-day business operations. Some organizations already have, or will have, business models entirely dependent on AI technology. No matter the function for which an organization uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.

Simultaneously, organizations are beginning to face sophisticated AI-enabled attacks – which have the potential to compromise information and cause severe business impact at a greater speed and scale than ever before.  Taking steps both to secure internal AI systems and defend against external AI-enabled threats will become vitally important in reducing information risk.

While AI systems adopted by organizations present a tempting target, adversarial attackers are also beginning to use AI for their own purposes. AI is a powerful tool that can be used to enhance attack techniques, or even create entirely new ones. Organizations must be ready to adapt their defenses in order to cope with the scale and sophistication of AI-enabled cyber-attacks.

Defensive Opportunities Provided by AI

Security practitioners are always trying to keep up with the methods used by attackers, and AI systems can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms. AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners. Protecting against many existing threats, AI can put defenders a step ahead. However, adversaries are not standing still – as AI-enabled threats become more sophisticated, security practitioners will need to use AI-supported defenses simply to keep up.

The benefit of AI in terms of response to threats is that it can act independently, taking responsive measures without the need for human oversight and at a much greater speed than a human could. Given the presence of malware that can compromise whole systems almost instantaneously, this is a highly valuable capability.

The number of ways in which defensive mechanisms can be significantly enhanced by AI provide grounds for optimism, but as with any new type of technology, it is not a miracle cure. Security practitioners should be aware of the practical challenges involved when deploying defensive AI.

Questions and Considerations Before Deploying Defensive AI

AI systems have narrow intelligence and are designed to fulfil one type of task. They require sufficient data and inputs in order to complete that task. One single defensive AI system will not be able to enhance all the defensive mechanisms outlined previously – an organization is likely to adopt multiple systems. Before purchasing and deploying defensive AI, security leaders should consider whether an AI system is required to solve the problem, or whether more conventional options would do a similar or better job.

Questions to ask include:

  • Is the problem bounded? (i.e. can it be addressed with one dataset or type of input, or does it require a high understanding of context, which humans are usually better at providing?)
  • Does the organization have the data required to run and optimize the AI system?

Security leaders also need to consider issues of governance around defensive AI, such as:

  • How do defensive AI systems fit into organizational security governance structures?
  • How can the organization provide security assurance for defensive AI systems?
  • How can defensive AI systems be maintained, backed up, tested and patched?
  • Does the organization have sufficiently skilled people to provide oversight for defensive AI systems?

AI will not replace the need for skilled security practitioners with technical expertise and an intuitive nose for risk. These security practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, especially as stories continue to emerge of AI proving unreliable or making poor or unexpected decisions.

AI systems will make mistakes – a beneficial aspect of human oversight is that human practitioners can provide feedback when things go wrong and incorporate it into the AI’s decision-making process. Of course, humans make mistakes too – organizations that adopt defensive AI need to devote time, training and support to help security practitioners learn to work with intelligent systems.

Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organization’s cyber defenses.

The Future is Now

Computer systems that can independently learn, reason and act herald a new technological era, full of both risk and opportunity. The advances already on display are only the tip of the iceberg – there is a lot more to come. The speed and scale at which AI systems ‘think’ will be increased by growing access to big data, greater computing power and continuous refinement of programming techniques. Such power will have the potential to both make and destroy a business.

AI tools and techniques that can be used in defense are also available to malicious actors including criminals, hacktivists and state-sponsored groups. Sooner rather than later these adversaries will find ways to use AI to create completely new threats such as intelligent malware – and at that point, defensive AI will not just be a ‘nice to have’. It will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks.

To thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defenses. AI is no longer a vision of the distant future: the time to start preparing is now.

  • October 23rd 2019 at 10:17

Five Main Differences between SIEM and UEBA

Corporate IT security professionals are bombarded every week with information about the capabilities and benefits of various products and services. One of the most commonly mentioned security products in recent years has been Security Information and Event Management (SIEM) tools.

And for good reason.

SIEM products provide significant value as a log collection and aggregation platform, which can identify and categorize incidents and events. Many also provide rules-based searches on data.

While often compared to user and entity behavior analytics (UEBA) products, SIEMs are a blend of security information management (SIM) and security event management (SEM). This makes SIEMs adept at providing aggregated security event logs that analysts can query for known security threats.

In contrast, UEBA products utilize machine learning algorithms to analyze patterns of human and entity behavior in real time to uncover anomalies indicative of known and unknown threats.
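That contrast can be made concrete with a toy example. Neither function below reflects any vendor's implementation; the threshold, the event shape, and the z-score baseline are invented purely to illustrate the difference between a pre-written rule and a learned per-user baseline.

```python
import statistics

# Toy contrast: a SIEM-style static rule versus a UEBA-style check
# against a baseline learned from this user's own history.

def siem_rule(event: dict) -> bool:
    """Fires only on a pattern someone wrote down in advance."""
    return event["failed_logins"] > 10

def ueba_score(history: list, today: int) -> float:
    """Scores how far today deviates from the user's baseline (z-score)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero variance
    return (today - mean) / stdev

history = [2, 3, 1, 2, 3, 2]     # this user's normal daily failed-login counts
siem_rule({"failed_logins": 8})  # False: 8 is below the static threshold
ueba_score(history, 8)           # well above 3: highly anomalous for this user
```

Eight failed logins never trips the static rule, yet for a user who normally fails two or three times a day it is a strong statistical outlier; that is the kind of unknown threat behavior analytics is built to surface.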

Let’s consider the five ways in which SIEM and UEBA technology differs.

Point-in-time vs. Real-time Analysis

SIEM provides point-in-time analyses of event data and is generally limited by the number of events that can be processed in a particular time frame. It also does not correlate physical security events with logical security events.

UEBA, meanwhile, operates in real-time, using machine learning, behavior-based security analytics and artificial intelligence. It can detect threats based on contextual information, and enforce immediate remediation actions.

“While SIEM is a core security technology it has not been successful at providing actionable security intelligence in time to avert loss or damage,” wrote Mike Small, a KuppingerCole analyst in a research note.

Manual vs. Automated Threat Hunting

SIEM does a very good job of providing IT pros with the data they need to manually hunt for threats, including details on what happened, when and where it happened. However, manual effort is needed to analyze the data, particularly to detect anomalies and threats.

UEBA performs real-time analysis using machine learning models and algorithms. These provide the machine speed needed to respond to security threats as they happen, while also offering predictive capabilities that anticipate what will or might happen in the future.

Logs vs. Multiple Data Types

SIEM ingests structured logs. Adding new data types often requires upgrading existing data stores and human intervention. In addition, SIEM does not correlate data on users and their activities, or make connections across applications, over time or user behavior patterns.

UEBA is built to process huge volumes of data from various sources, including structured and unstructured data sets. It can analyze data relationships over time, across applications and networks, and pore over millions of bits to find “meanings” that may help in detecting, predicting, and preventing threats.

Short vs. Long-Term Analysis

SIEM does a very good job of helping IT security staff compile valuable, short-term snapshots of events. It is less effective when it comes to storing, finding and analyzing data over time. For example, SIEM provides limited options for searching historical data.

UEBA is designed for real-time visibility into virtually any data type, both short-term and long-term. This generates insights that can be applied to various use cases such as risk-based access control, insider threat detection and entity-based threat detection associated with IoT, medical, and other devices.

Alerts vs. Risk Scores

SIEM, as the name implies, centralizes and manages security events from host systems, applications, and network and security devices such as firewalls, antivirus filters, etc. It delivers alerts based on events that may or may not be malicious threats. As a result, SIEMs generate a high proportion of false-positive alerts, which cannot all be investigated. This can lead to “actual” cyber threats going undetected.

UEBA provides risk scoring, which offers granular ranking of threats. By ranking risk for all users and entities in a network, UEBA enables enterprises to apply different controls to different users and entities, based on the level of threat they pose. One of the major advantages of risk scoring is that it greatly reduces the number of false positives.
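A minimal sketch of that idea follows. The scores, entity names, and thresholds are invented for illustration; the point is that a continuous risk score lets controls scale with the threat instead of firing a binary alert for every event.

```python
# Invented scores and thresholds, purely for illustration.
scores = {"alice": 12, "svc-backup": 87, "bob": 45, "iot-cam-3": 66}

def control_for(score: int) -> str:
    """Map a risk score to a proportionate control."""
    if score >= 80:
        return "block-and-investigate"
    if score >= 50:
        return "step-up-auth"
    return "monitor"

# Rank all users and entities by risk, highest first.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
controls = {name: control_for(s) for name, s in ranked}
# svc-backup (87) is blocked, iot-cam-3 (66) gets step-up auth,
# bob (45) and alice (12) are merely monitored
```

Low-scoring entities generate no alerts at all, which is precisely how risk scoring cuts down the false-positive volume that plagues alert-centric tooling.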

Both SIEM and UEBA provide value for security operations teams. Each excels at specific use cases. When comparing these two technologies, it’s helpful to consider how they diverge. Namely, SIEM is oriented on point-in-time analyses of known threats. UEBA, meanwhile, provides real-time analysis of activity that can detect unknown threats as they happen and even predict a security incident based on anomalous behavior by a user or entity.

  • October 23rd 2019 at 10:14

For Cybersecurity, It’s That Time of the Year Again

Autumn is the “hacking season,” when hackers work to exploit newly-disclosed vulnerabilities before customers can install patches. This cycle gives hackers a clear advantage and it’s time for a paradigm shift.

Each year, when the leaves start changing color you know the world of cybersecurity is starting to heat up.

This is because the cyber industry holds its two flagship events — DEFCON and BlackHat — over the same week in Las Vegas in late summer. Something akin to having the Winter and Summer Olympics back-to-back in the same week, these events and other similar ones present priceless opportunities for the world’s most talented hackers to show their chops and reveal new vulnerabilities they’ve uncovered.

It also means that each fall there’s a mad race against time as customers need to patch these newly revealed vulnerabilities before hackers can pull off major attacks — with mixed results.

A good example began in August, after researchers from Devcore revealed vulnerabilities in enterprise VPN products during a briefing they held at BlackHat entitled “Infiltrating Corporate Intranet Like NSA: Pre-auth RCE on Leading SSL VPNs.”

The researchers also published technical details and proof-of-concept code of the vulnerabilities in a blog post two days after the briefing. Weaponized code for exploits is also widely available online, including on GitHub.

News of the vulnerability rang out like a starter pistol, sending hackers sprinting to attack two enterprise VPN products in use by hundreds of thousands of customers — Pulse Secure VPN and Fortinet FortiGate VPN.

In both cases, White Hat hackers discovered the flaws months earlier and disclosed them confidentially to the manufacturers, giving them the time and details needed to issue the necessary patches. Both Pulse Secure and Fortinet instructed customers to install the patches, but months later there were still more than 14,500 that had not been patched, according to a report from Bad Packets — and the number could be even higher.

Being that these are enterprise products, they are in use in some of the most sensitive systems, including military networks, state and local government agencies, health care institutions, and major financial bodies. And while these organizations tend to have trained security personnel in place to apply patches and mitigate threats, they tend to be far less nimble than hackers, who can seize a single device and use it to access devices across an entire network, with devastating consequences.

The potential for these attacks is vast, considering the sheer volume of targets. This was again demonstrated in the case of the “URGENT/11” zero-day vulnerabilities exposed by Armis in late July. The vulnerabilities affect the VxWorks OS used by more than 2 billion devices worldwide and include six critical vulnerabilities that can enable remote code execution attacks. Chances are that attackers are already on the move looking for lucrative targets to hit.

This is how it plays out — talented White Hat hackers sniff out security flaws and confidentially inform manufacturers, who then scramble to issue patches and inform users before hackers can pounce. And while manufacturers face the impossible odds of hoping that tens of thousands of customers — and often far more — install new security patches in time, the hackers looking to take advantage of these flaws only need to get lucky once.

It’s time for a paradigm shift. Manufacturers need to provide built-in security which doesn’t rely upon customer updates after the product is already in use. This “embedded security” creates self-protected systems that don’t wait for a vulnerability to be discovered before mounting a response.

This approach was outlined in a report from the US Department of Commerce’s National Institute of Standards and Technology (“NIST”) published in July. Entitled “Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks,” the report detailed the unique challenges of IoT security, and stated that these devices must be able to verify their own software and firmware integrity.

There are already built-in security measures that can stack the deck against hackers, including secure boot, application whitelisting, ASLR, and control flow integrity to name a few. These solutions are readily available and it is imperative that leading manufacturers provide runtime protection during the build process, to safeguard their customers’ data and assets.
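One of the controls named above, application whitelisting, reduces to a simple principle: refuse to run anything that was not approved at build time. The sketch below is illustrative only; a real implementation lives in the OS loader and boot chain (as in secure boot), and the example image and digests are invented.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At build time, the manufacturer records digests of every approved image.
# (Example data; a real allowlist would hold digests of signed binaries.)
approved_image = b"firmware v1.2 contents"
ALLOWLIST = {digest(approved_image)}

def verify_before_run(image: bytes) -> bool:
    """Whitelisting in one line: refuse anything not approved at build time."""
    return digest(image) in ALLOWLIST

verify_before_run(approved_image)        # True: digest matches the allowlist
verify_before_run(b"tampered firmware")  # False: refused without any patch
```

Note that the tampered image is rejected without anyone ever having discovered or patched a specific vulnerability, which is exactly the shift from reactive patching to embedded, self-protecting systems the article argues for.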

It’s a race against time and a reactive security approach that waits for a vulnerability to be discovered and then issues patches is lacking, to put it lightly. There will always be users who don’t install the patches in time and hackers who manage to bypass the security solutions before manufacturers can get their feet on the ground. And with White Hat hackers constantly looking for the next vulnerability to highlight, it’s a vicious cycle and one that gives hackers every advantage against large corporations.

And as Fortinet and Pulse Secure lick their wounds from the recent exploits, the onus is upon other manufacturers to realize that the current security paradigm simply isn’t enough.

  • October 18th 2019 at 03:17

Myth Busters: How to Securely Migrate to the Cloud

Security is top of mind for every company and every IT team – as it should be. The personal data of employees and customers is on the line and valuable company information is at risk. Security protocols are subject to even closer scrutiny when companies are considering migrating to the cloud.

More and more enterprises recognize that they need to pursue cloud adoption to future-proof their tech stack and achieve their business transformation objectives. The agility and cost savings the cloud provides is fast becoming a requirement for competing in today’s marketplace. Despite the growing sense that cloud is the future, many companies are hesitant to migrate their applications as they believe the cloud is not as secure as on-premise. This is a common myth, and far from the truth. While security must remain a top priority for IT professionals during the migration process, there is a successful pathway to safely and securely migrate.

Who Owns What in the Cloud?

In today’s “cloud wars” landscape, it can be difficult to separate fact from fiction – and it’s clear that many IT professionals feel the cloud is less secure. It’s time to address this myth. The cloud can be just as secure, if not more so, than a traditional on-premise environment. A survey by AlertLogic found that security issues do not vary greatly whether the data is stored on-premise or in a public cloud. Although there is the belief that public cloud servers are most at risk for an attack, on-premise systems are typically older, complex legacy systems, which can be more difficult to secure. The public cloud has the advantage of being less dependent on other legacy technologies.

Significant advancements have been made to ensure cloud migration and management can be executed in a highly secure fashion. For example, the major cloud providers today have developed a large partner network with cloud-native tools and services built from the ground up to specifically address cloud security. Public cloud providers have extensive security-focused teams and experts on staff to ensure that the cloud remains secure, supported by an ecosystem of cloud-certified Managed Service Providers (“MSPs”) who can monitor and assess threat risk every step of the way. If done properly, organizations can take advantage of these advanced products and skilled resources to secure and harden their cloud environment. Most IT organizations, driven to be lean and efficient, simply can’t replicate the same level of security, which leverages layers of security expertise and experience. The biggest threats are people-related, either through inadvertent implementation and configuration errors, lack of proactive management discipline (e.g. applying patches) or malicious exploitation of vulnerabilities which, unfortunately, originate most easily from someone inside.

Unlike an on-premise data center deployed and managed by internal IT staff in which the organization is solely responsible, security and compliance in public cloud operates under a shared responsibility model. The cloud provider is responsible for security of the cloud and the customer is responsible for security in the cloud. What this means is that providers such as Amazon Web Services (AWS), manage and control the host operating system, physical security of its facilities, hardware, software, virtualization layer and infrastructure including networking, database, storage and compute resources. Meanwhile, the customer is responsible for system security above the hypervisor – things like data encryption in-transit and at rest, guest operating systems, networking traffic protection, platform and application security including updates and security patches.  
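A simplified way to picture that split is as a lookup table. The mapping below is illustrative only, condensed from the paragraph above; it is not AWS's official responsibility matrix, and the exact boundary shifts with the service model (IaaS vs. PaaS vs. SaaS).

```python
# Illustrative mapping of the shared responsibility split described above.
RESPONSIBILITY = {
    "physical facilities":                "provider",
    "hardware and networking":            "provider",
    "hypervisor / virtualization layer":  "provider",
    "guest operating system":             "customer",
    "data encryption (transit and rest)": "customer",
    "application updates and patches":    "customer",
}

def owner(component: str) -> str:
    """Who secures this component under the shared responsibility model?"""
    return RESPONSIBILITY[component]
```

The dividing line is the hypervisor: everything below it is security *of* the cloud (the provider's job), everything above it is security *in* the cloud (the customer's job).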

The hybrid cloud is another valuable pathway for companies that aren’t ready or able, for various reasons, to make the full leap to the public cloud. The shared responsibility model for security and compliance applies to hybrid cloud which utilizes a combination of public cloud, private cloud and/or on-premise environment. This definition, understanding and execution of roles is critical for cloud security. According to Gartner, by 2020, 90 percent of companies will utilize some form of the hybrid cloud. In the end, security requires expertise, tools, discipline and governance. The ability for organizations to leverage and push responsibility to vendors is an underlying benefit of cloud.   

How to Move to Cloud Safely

The migration process isn’t a simple task. While there is no universal pathway to migrating securely, the following tips will help IT professionals make the move:

  • Assess and plan in advance for all source data to be transferred. The data should be encrypted at rest on the source, prior to transfer, with a strong encryption algorithm.
  • Perform a hardening of the server before copying any data. Allow only specific and minimal sets of ports with restrictions to specific IP and CIDR.
  • Implement proper authorization and access control according to organizational security permission and roles. Restrict access as needed to data sourced, transmitted or stored in the cloud.
  • Finally, establish audit and monitoring which must be enabled, maintained, monitored and archived for ongoing and historical analysis at any moment in time.
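The hardening step above (minimal ports, restricted to specific IPs and CIDR ranges) can be enforced mechanically before any data is copied. A minimal sketch follows, with an invented example policy; real policies would come from your security team and be applied through your provider's firewall or security-group tooling.

```python
import ipaddress

# Invented example policy: only SSH and HTTPS, and only from one
# administrative CIDR block.
ALLOWED = {(22, "203.0.113.0/24"), (443, "203.0.113.0/24")}

def rule_ok(port: int, cidr: str) -> bool:
    ipaddress.ip_network(cidr)  # raises ValueError on a malformed CIDR
    return (port, cidr) in ALLOWED

def violations(rules):
    """Return every proposed rule that the policy does not allow."""
    return [r for r in rules if not rule_ok(*r)]

violations([(22, "203.0.113.0/24"), (3389, "0.0.0.0/0")])
# the second rule (RDP open to the entire internet) is flagged
```

Running such a check in a pre-migration pipeline catches the classic configuration error, a sensitive port left open to 0.0.0.0/0, before the server ever holds production data.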

Having a plan in place post-migration is also vital, as security doesn’t stop when the migration is complete. Companies should continue to assess their applications to ensure security remains a top priority. Working with a third-party provider or MSP skilled in cloud security can help take some of the load off the IT team, as systems require continuous updates, maintenance and cost optimization that will need to be monitored to ensure that resources deployed in the cloud are being used as efficiently and safely as possible.

Cloud technology has advanced significantly over the past five years. While IT pros may miss the sense of security that comes from being able to physically see, restrict and manage access to their tech stack in an on-premise environment, the tide has shifted: the benefits of cloud, along with the maturity and ongoing evolution of cloud security products and services, have enabled organizations to achieve a high, if not increased, level of security if implemented properly.

  • October 18th 2019 at 03:06

Microsoft Makes OneDrive Personal Vault Available Worldwide

Microsoft this week announced that users all around the world can now keep their most important files protected in OneDrive Personal Vault.

Launched earlier this summer, the Personal Vault is a protected area in OneDrive that requires strong authentication or a second identification step to access. Thus, users can store their files and ensure that they can’t be accessed without a fingerprint, face, PIN, or code received via email or SMS.

Now available worldwide on all OneDrive consumer accounts, Personal Vault allows users to securely store important information such as files, photos, and videos, including copies of documents, and more. 

The added security ensures that, even if an attacker manages to compromise the OneDrive account, they won’t have access to any of the files in Personal Vault. 

Personal Vault won’t slow users down, as they can easily access content from their PC, on OneDrive.com, or mobile device, Microsoft says.

On top of that, additional security measures are available, including the ability to scan documents or shoot photos directly into Personal Vault. Files and shared items moved into Personal Vault cannot be shared. 

Both Personal Vault and files there will close and lock automatically after a period of inactivity, and Personal Vault files are automatically synced to a BitLocker-encrypted area of the user’s Windows 10 PC local hard drive. 

“Taken together, these security measures help ensure that Personal Vault files are not stored unprotected on your PC, and your files have additional protection, even if your Windows 10 PC or mobile device is lost, stolen, or someone gains access to it or to your account,” Microsoft says.

OneDrive provides other security features as well, including file encryption, monitoring for suspicious sign-ins, ransomware detection and recovery, virus scanning on downloads, password-protection of sharing links, and version history for all file types.

To use Personal Vault, users only need to click on the feature’s icon, available in OneDrive. On the OneDrive free and standalone 100 GB plans, only three files can be stored in Personal Vault; on Office 365 Personal and Office 365 Home plans, the limit is as high as the total storage limit.

Related: DHS Highlights Common Security Oversights by Office 365 Customers

Related: Microsoft Adds New Security Features to Office 365

  • October 1st 2019 at 13:42

Human-Centered Security: What It Means for Your Organization

Humans are regularly referred to as the ‘weakest link’ in information security. However, organizations have historically relied on the effectiveness of technical security controls, instead of trying to understand why people are susceptible to mistakes and manipulation. A new approach is clearly required: one that helps organizations to understand and manage psychological vulnerabilities, and adopts technology and controls that are designed with human behavior in mind.

That new approach is human-centred security.

Human-centred security starts with understanding humans and their interaction with technologies, controls and data. By discovering how and when humans ‘touch’ data throughout the working day, organizations can uncover the circumstances where psychological-related errors may lead to security incidents.

For years, attackers have been using methods of psychological manipulation to coerce humans into making errors. Attack techniques have evolved in the digital age, increasing in sophistication, speed and scale. Understanding what triggers human error will help organizations make a step change in their approach to information security.

Identifying Human Vulnerabilities

Human-centred security acknowledges that employees interact with technology, controls and data across a series of touchpoints throughout any given day. These touchpoints can be digital, physical or verbal. During such interactions, humans will need to make decisions. Humans, however, have a range of vulnerabilities that can lead to errors in decision making, resulting in negative impacts on the organization, such as sending an email containing sensitive data externally, letting a tailgater into a building or discussing a company acquisition on a train. These errors can also be exploited by opportunistic attackers for malicious purposes.

In some cases, organizations can put preventative controls in place to mitigate errors being made, e.g. preventing employees from sending emails externally, strong encryption of laptops or physical barriers. However, errors can still get through, particularly if individuals decide to subvert or ignore these types of controls to complete work tasks more efficiently or when time is constrained. Errors may also manifest during times of heightened pressure or stress.

By identifying the fundamental vulnerabilities in humans, understanding how psychology works and what triggers risky behavior, organizations can begin to understand why their employees might make errors, and begin managing that risk more effectively.

Exploiting Human Vulnerabilities

Psychological vulnerabilities present attackers with opportunities to influence and exploit humans for their own advantage. The methods of psychological manipulation used by attackers have not changed since humans entered the digital era, but attack techniques are more sophisticated, cost-effective and expansive, allowing attackers to effectively target individuals or to attack at considerable scale.

Attackers use the ever-increasing volume of freely available information from online and social media sources to establish believable personas and backstories in order to build trust and rapport with their targets. This information is carefully used to heighten pressure on the target, which then triggers a heuristic decision-making response. Attack techniques are used to force the target to use a particular cognitive bias, resulting in predictable errors. These errors can then be exploited by attackers.

There are several psychological methods that can be used to manipulate human behavior; one such method that attackers can use to influence cognitive biases is social power.

There are many attack techniques that use the method of social power to exploit human vulnerabilities. Attack techniques can be highly targeted or conducted at scale, but they typically contain triggers designed to evoke a specific cognitive bias, resulting in a predictable error. While untargeted, ‘spray and pray’ attacks rely on a small percentage of recipients clicking on malicious links, more sophisticated social engineering attacks are becoming prevalent and successful. Attackers have realized that it is far easier to target humans than to attack technical infrastructure.

The way in which the attack technique uses social power to trigger cognitive biases will differ between scenarios. In some cases, a single email may be enough to trigger one or more cognitive biases, resulting in the desired outcome. In others, the attack may gradually manipulate the target over a period of time using multiple techniques. What is consistent is that the attacks are carefully constructed and sophisticated. By knowing how attackers use psychological methods, such as social power, to trigger cognitive biases and force errors, organizations can deconstruct and analyze real-world incidents to identify their root causes and therefore invest in the most effective mitigation.

For information security programs to become more human-centred, organizations must become aware of cognitive biases and their influence on decision-making. They should acknowledge that cognitive biases can arise from normal working conditions but also that attackers will use carefully crafted techniques to manipulate them for their own benefit. Organizations can then begin to readdress information security programs to improve the management of human vulnerabilities, and to protect their employees from a range of coercive and manipulative attacks.

Managing Human Vulnerabilities

Human vulnerabilities can lead to errors that can significantly impact an organization’s reputation or even put lives at risk. Organizations can strengthen information security programs in order to mitigate the risk of human vulnerabilities by adopting a more human-centred approach to security awareness, designing security controls and technology to account for human behavior, and enhancing the working environment to reduce the impact of pressure or stress on the workforce.

Reviewing the current security culture and perception of information security should give an organization a strong indication of which cognitive biases are impacting the organization. Increasing awareness of human vulnerabilities and the techniques attackers use to exploit them, then tailoring more human-centred security awareness training to account for different user groups should be fundamental elements of enhancing any information security program.

Organizations with successful human-centred security programs often have significant overlap between information security and human resource functions. The promotion of a strong mentoring network between senior and junior employees, coupled with the improvement of the structure of working days and the work environment, should help to reduce unnecessary stress that leads to the triggering of cognitive biases affecting decision-making.

Develop meaningful relationships between a mentor and mentee to create an equilibrium of knowledge and understanding. Create a working environment and work-life balance that reduces stress, exhaustion, burnout and poor time management, which all significantly increase the likelihood of errors being made. Finally, consider how the improvement or enhancement of workspaces and environments can reduce stress or pressure on the workforce. Consider what is the most appropriate work environment for the workforce as there may be varying options, e.g. working from home, remote working, or modernizing office spaces, factories or outdoor locations.

From Your Weakest Link to Your Strongest Asset

Underlying psychological vulnerabilities mean that humans are prone to both making errors, and to manipulative and coercive attacks. Errors and manipulation now account for the majority of security incidents, so the risk is profound. By helping staff understand how these vulnerabilities can lead to poor decision making and errors, organizations can manage the risk of the accidental insider. To make this happen, a fresh approach to information security is required.

A human-centred approach to security can help organizations to significantly reduce the influence of cognitive biases that cause errors. By discovering the cognitive biases, behavioral triggers and attack techniques that are most common, tailored psychological training can be introduced into an organization’s awareness campaigns. Technology, controls and data can be calibrated to account for human behavior, while enhancement of the working environment can reduce stress and pressure.

Once information security is understood through the lens of psychology, organizations will be better prepared to manage and mitigate the risks posed by human vulnerabilities. Human-centred security will help organizations transform their weakest link into their strongest asset.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

 

  • September 24th 2019 at 18:57

How Ethical Hackers Find Weaknesses and Secure Businesses

When people hear about hackers, it typically conjures up images of a hooded figure in a basement inputting random code into a computer terminal. This Hollywood cliché is far from the truth for modern-day cybersecurity experts, and it’s also important to note that not all hackers are malicious.

Hacking, and its role in information security, is a rapidly growing career field on a global scale. Market research predicts spending in the cybersecurity space will exceed $181.77 billion by 2021. The global market for cybersecurity is growing, and companies now consider security an imperative.

The cybersecurity landscape faces growing threats today, with data breaches and attacks happening constantly. For instance, it’s hard to forget the infamous WannaCry ransomware attack that spread through the world, targeting Microsoft machines and bringing multiple services worldwide to their knees. The attack hit an estimated 200,000 computers across 150 countries, encrypting files in health services, motor manufacturing, telephone companies, logistics companies, and more.

So, what can we do to secure our businesses and online infrastructure? One option is to look to ethical hackers, or white hat hackers: security experts who approach your data and services through the eyes of a malicious attacker. An engagement from an ethical hacker is designed to see how your infrastructure or applications would hold up against a real-world attack.

Turning to Ethical Hackers

Ethical hackers attacking your system are commonly known as the “Red Team,” a term that covers a broader attack surface, including attacks against people (such as social engineering) and physical attacks (such as lock picking). Would your security stop dedicated, professional attackers, or would they find holes and weaknesses unknown to you and your internal security team (also known as the “Blue Team”)?

The job description for an ethical hacker is simple to break down: assess the target, scope out all functionality and weaknesses, attack the system, and then prove it can be exploited. While the job can be described quite easily, the work involved can be large and undoubtedly complex. Additionally, when carrying out a pen test or assessment of a client’s application or network, production safety and legality are what separate the “good guys” (ethical hackers) from the “bad guys” (malicious hackers).

Assessing the Target

When beginning an assessment of a system or application, we must have a set scope before we begin. It is illegal to attack systems without prior consent, and it is furthermore a waste of time to work on assets outside the predefined scope. Target assessment can be one of the most important steps in a well-performed test; simply jumping straight in and attacking the first IP or piece of functionality we come across is a bad way to start.

The best practice is to find everything that is part of the assessment and see how it works together. We must know what the system in place was designed to do and how data is transferred throughout. Building maps with various tools gives a much greater picture of the attack surface we can leverage. The assessment of the target is commonly known as the “enumeration phase.”

At the end of this phase we should have a great place to start attacking, with an entire structure of the system or application, hopefully with information regarding operating systems, services packs, version numbers and any other fingerprinting data that can lead to an effective exploit of the target.
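As a concrete illustration, the service fingerprinting described above often begins with something as simple as a TCP banner grab. The sketch below is a minimal Python example; the target hostname is a placeholder, and this should only ever be run against hosts explicitly within an agreed scope.

```python
import socket

def grab_banner(host, port, timeout=1.0):
    """Attempt a TCP connection and read whatever banner the service sends."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        # Closed port, timeout, or unresolvable host: nothing to fingerprint.
        return ""

# "scanme.example.com" is a placeholder -- substitute an in-scope target.
for port in (21, 22, 25, 80):
    banner = grab_banner("scanme.example.com", port)
    if banner:
        print(f"{port}/tcp: {banner}")
```

Version strings returned this way (an SSH or FTP greeting, for example) feed directly into the vulnerability analysis phase that follows.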

Vulnerability Analysis

All information gathered about the machines or applications should immediately give a good hacker a solid attack surface and the ability to identify weaknesses in the system. The internet provides a vast amount of information that can easily be associated with the architecture, along with lists of all known exploits or vulnerabilities already found against those systems.

There are additional tools to help with vulnerability analysis, like scanners, that flag possible points of weakness in the system or application. All of the analytic data is much easier to find and test after a thorough assessment.
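In practice, much of this analysis boils down to matching fingerprinted service versions against known-vulnerability data. A minimal sketch, using a hand-picked illustrative mapping rather than a real vulnerability feed:

```python
# Map of (service, version) pairs to known CVEs -- illustrative entries only,
# not a real or complete vulnerability database.
KNOWN_VULNS = {
    ("OpenSSH", "7.2p1"): ["CVE-2016-6210"],
    ("Apache httpd", "2.4.49"): ["CVE-2021-41773"],
}

def check_fingerprint(service, version):
    """Return any known CVEs for an exact service/version fingerprint."""
    return KNOWN_VULNS.get((service, version), [])

print(check_fingerprint("Apache httpd", "2.4.49"))  # ['CVE-2021-41773']
```

Real scanners do the same lookup at scale, with fuzzier version matching and continuously updated feeds.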

Exploitation

Exploitation is where the services of an ethical hacker make their impact. We may have all the assessment data and vulnerability analysis information, but if the tester does not know how to perform strong attacks or bypass the security mechanisms in place, the previous steps were useless. Exploiting a commonly known vulnerability can be fairly straightforward if write-ups from other security specialists exist, but hands-on experience crafting your own injections and obfuscated code, or bypassing the blacklists and whitelists in place, is invaluable.

Furthermore, it is imperative to test with production safety in mind. Having an ethical hacker run dangerous code or tests against the system may cause untold damage. This defeats the purpose of a secure test. The objective is to prove that it is vulnerable, without causing harm or disruption to the live system.

Providing Concepts

After a test has been concluded, the results of all exploits, vulnerability analysis and even enumeration data returning valuable system information should be documented and presented to the client. All vulnerabilities should be given ratings (standard rating systems such as CVSSv3 are most common) reflecting the severity of the issue and the potential impact of the exploit.
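For reference, the qualitative severity bands defined by the CVSS v3 specification map directly from the numeric score and can be expressed as:

```python
def cvss3_severity(score):
    """Qualitative severity rating per the CVSS v3.x specification."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss3_severity(9.8))  # Critical
print(cvss3_severity(5.0))  # Medium
```

Reporting both the numeric score and the band gives clients a consistent way to prioritize remediation.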

Additionally, a step-by-step proof of concept showing how an attacker could perform the exploit should be included. The client should be able to follow along with your report and end up with the same results, demonstrating the flaw in the system. Again, only non-malicious examples should be given in the report.

Providing these proof-of-concept reports to clients, with steps on how to reproduce the issues and give non-malicious examples of how the system can be breached, is paramount to success in securing your systems.

No Perfect System

Finally, it’s important to note that no system is ever considered flawless. Exploits and vulnerabilities are released on almost a daily basis on every type of machine, server, application and language. Security assessments and tests in modern applications must be a continual process. This is where the role of a hacker in your organization, simulating attacks in the style of a malicious outsider becomes invaluable.

Approaching your currently implemented security as a target to beat or bypass, instead of a defense mechanism waiting to be hit, is the strongest and fastest way to find any flaws that may already exist! Modern-day web applications have been described as a living, breathing thing and negligence for keeping it secure will surely result in a digital disaster!

About the author: Jonathan Rice works as a vulnerability web application specialist for application security provider WhiteHat Security. In this role, Rice has focused on manual assessments, vulnerability verification and dynamic application security testing (DAST).

  • September 11th 2019 at 14:41

New Passive RFID Tech Poses Threat to Enterprise IoT


As RFID technology continues to evolve, IoT security measures struggle to keep pace.

The Internet of Things (IoT) industry is growing at a staggering pace. The IoT market in China alone will hit $121.45 billion by 2022 and industry analysts predict that more than 3.5 billion devices will be connected through IoT globally by 2023. 

Among the most important technologies precipitating this breakneck growth is RFID or Radio Frequency Identification. RFID-tagged devices can help track inventory, improve the efficiency of healthcare and enhance services for customers in a variety of industries. 

For example, many hospitals across the world are beginning to test the use of on-metal RFID tags to not only track their inventory of surgical tools--such as scalpels, scissors, and clamps--but to ensure that each tool is properly sterilized and fully maintained prior to new operations. The implications of the widespread application of RFID tracking in the healthcare system would be a dramatic reduction in the number of avoidable infections due to unsterilized equipment and a sharp increase in the efficiency of surgical procedures.

IDenticard Vulnerabilities in PremiSys ID System

Although passive RFID technology shows much promise for streamlining and improving the management of IoT, unresolved vulnerabilities in the technology’s security remain a bottleneck for both the implementation of RFID and the growth of the IoT industry. 

In January, the research group at Tenable discovered multiple zero-day vulnerabilities in the PremiSys access control system developed by IDenticard, a US-based manufacturer of ID, access and security solutions. 

The vulnerabilities - which included weak encryption and a default username-password combination for database access - would have allowed an attacker to gain complete access to employee personal information of any organization using the PremiSys ID system. Though IDenticard released a patch to resolve the vulnerabilities, the incident points to growing security risks around network-connected, RFID-tagged devices.

In the summer of 2017, these security risks were put on full display when researchers from the KU Leuven university discovered a simple method to hack the Tesla Model S’s keyless entry fob. The researchers claim that these types of attacks were possible (prior to the security patch rolled out by Tesla in June of 2018) because of the weak encryption used by the Pektron key fob system.

Despite the numerous security concerns that have surfaced in recent years, RFID is still one of the most viable solutions for increasing the efficiency and safety of IoT. That said, for enterprises to take full advantage of the benefits of RFID technology, stronger security protocols and encryption must be implemented.

Compounding the threat is the fact that many RFID-enabled enterprise networks are at an increased risk of breaches (especially those in the Industrial IoT, IIoT) due to their inability to detect vulnerabilities and breaches in the first place. In fact, a recent study published in January by Gemalto discovered that nearly 48% of companies in all industries are unable to detect IoT device breaches. 

The Bain & Co. study pointed to security as the major obstacle to full-scale RFID/IoT adoption. With data breaches costing an average of more than $3.86 million, or $148 per record, new security measures must be taken if IoT is to fulfill its promise of en masse real-time connection between businesses, consumers, and their devices. Unsurprisingly, in the Gemalto survey of 950 of the world’s leaders in IT and IoT businesses, more than 79% said they want more robust guidelines for comprehensive IoT security.

According to The Open Web Application Security Project (OWASP), there are ten primary vulnerabilities present in IoT and many of these risk factors are directly related to the implementation of RFID technology. 

Securing RFID-Enabled Enterprise IoT Devices

Of the many vulnerabilities in RFID/IoT devices and technologies, few impact consumers as directly as those presented by RFID scanners. 

RFID scanners can glean information from any RFID-enabled device, not just credit cards and phones. Our IoT and IIoT, both growing at a breakneck pace and with security features lagging behind, are prime targets for exploitation. 

Security analysts have raised concerns about the safety of data traveling on these networks for years. In fact, a study conducted by IBM found that fewer than 20% of organizations routinely test their IoT apps and devices for security vulnerabilities. With data breaches growing at an alarming pace -- 2018 alone resulted in the exposure of more than 47.2 million records -- many customers are asking, “What protections do we have against the growing threat against connected devices?”

As it happens, quite a lot. In 2017, a research group at the IAIK Graz University of Technology created an RFID-based system aiming to secure RFID data on an open Internet of Things (IoT) network. The engineers designed a novel RFID tag that exclusively uses the Internet Protocol Security layer to secure the RFID tag and its sensor data, regardless of what type of RFID scanner attempts to steal the tag data.

Their innovation lies in collecting the RFID sensor data first through a virtual private network (VPN) application. Using the custom RFID tag, communications are routed through the IPsec protocol, which provides secure end-to-end encryption between an RFID-enabled IoT device and the network to which it’s connected. 

Solutions that identify and resolve potential IoT device vulnerabilities still need more work before we can expect widespread implementation. For one thing, the IPsec protocol, which is available on most consumer VPN applications, does not secure networks with 100% certainty.

Researchers at Horst Görtz Institute for IT Security (HGI) at Ruhr-Universität Bochum (RUB) recently discovered a Bleichenbacher vulnerability in numerous commercial VPNs, including those used by Cisco, Clavister, Huawei and Zyxel.

RFID Breaking Big in the Enterprise Market

When it comes to RFID security, conversations gravitate toward consumer applications like contactless payment fraud or bugs in wearable technology. Though RFID spending is mostly business-to-consumer, the next largest spending category is the enterprise, comprising nearly 30% of the total RFID market.

RFID’s market size is projected to grow an additional 30% through 2020, as enterprise embraces RFID tags in everything from supply-chain management to security keycard systems. One of the big enablers of IoT in enterprises has been the simple addition of “passive” RFID tags for day-to-day operational functions. 

Passive RFID systems consist of RFID tags, readers/antennas, middleware and, in many cases, RFID printers.

With the rate the technology has evolved, the modern market now has access to thousands of tag-types with increased range and sensitivity and a plethora of substance-specific designs (e.g. tags made specifically for metal, liquid, and other materials). This technology allows for unprecedented tracking for and security of inventory, personnel, and other company assets.

Passive RFID tags, which contain no battery or active electronics, cost roughly 1/100th of the price of their “active” counterparts. Although they have a much shorter range, they require no internal power source, instead drawing power from the electromagnetic energy emitted by nearby RFID readers. Though a tag cannot be assigned an IP address, the reader is part of the IoT network and is identified by its IP address, which makes the reader vulnerable, as we’ve seen, to the same kinds of hacks that affect other devices when steps have not been taken to hide the IP address.

Because of these factors, passive RFID tags are ideal for companies and supply chains operating in extreme heat and cold, dust, debris and exposure to other elements.

Final Thoughts

With all of this taken into consideration, the question still remains, “What can the average consumer do to protect their IoT devices from hackers?”

One of the simplest solutions is to make a minor investment in an RFID-blocking wallet or card-jamming sleeve. If you have first-generation contactless cards, ask your bank or credit card company to upgrade you to the encrypted second generation. While your data might be skimmed, it will be unreadable to the perpetrator thanks to the power of modern encryption protocols.

For example, a standard 256-bit key would take 50 supercomputers many billions of years to brute-force, and the impracticality of such an attack leads cybercriminals to target easier prey.
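A quick back-of-the-envelope check of that claim, assuming an extremely generous guessing rate of 10^18 keys per second:

```python
GUESSES_PER_SECOND = 10**18           # wildly optimistic attacker capability
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keyspace = 2**256
# On average, an attacker finds the key after searching half the keyspace.
years = (keyspace // 2) // GUESSES_PER_SECOND // SECONDS_PER_YEAR
print(f"{years:.1e} years")  # roughly 1.8e51 years
```

Even at that absurd rate, the expected search time dwarfs the age of the universe many times over, which is why attackers steal or skim keys rather than brute-force them.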

Ultimately, the accelerating pace of RFID tech will make our lives more convenient. With greater convenience, however, comes a greater need for security solutions. When it comes to RFID, one can only hope that the good guys stay one step ahead in the ongoing crypto arms race.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

 

  • September 11th 2019 at 14:33

Android RAT Exclusively Targets Brazil

A newly discovered Android remote access Trojan (RAT) is specifically targeting users in Brazil, Kaspersky reports. 

Called BRATA, which stands for Brazilian RAT Android, the malware could theoretically be used to target any other Android user, should the cybercriminals behind it want to. Widespread since January 2019, the threat was primarily hosted in Google Play, but also in alternative Android app stores. 

The malware targets Android 5.0 or later and infects devices via push notifications on compromised websites, messages delivered via WhatsApp or SMS, or sponsored links in Google searches.

After discovering the first RAT samples in January and February 2019, Kaspersky has observed over 20 different variants to date, in Google Play alone, most posing as updates to WhatsApp. 

One of the topics abused by BRATA is the CVE-2019-3568 WhatsApp patch. The infamous fake WhatsApp update had over 10,000 downloads in the official Android store when it was removed, Kaspersky says.

As soon as it has infected a device, BRATA enables its keylogging feature and starts abusing Android’s Accessibility Service feature to interact with other applications.

The commands supported by the malware allow it to capture and send the user’s screen output in real time, turn off the screen, or give the user the impression that the screen is off while performing actions in the background.

It can also retrieve Android system information, data on the logged user and their registered Google accounts, and hardware information, and can request the user to unlock the device or perform a remote unlock.

What’s more, BRATA can launch any application installed with a set of parameters sent via a JSON data file, send a string of text to input data in textboxes, and launch any particular application or uninstall the malware and remove traces of infection.
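As an illustration of how a JSON-driven command channel of this kind is typically structured (a generic, hypothetical sketch with made-up command names, not actual BRATA code):

```python
import json

# Hypothetical command message: a command name plus a parameter object,
# mirroring the "set of parameters sent via a JSON data file" pattern.
command_msg = '{"cmd": "launch_app", "args": {"package": "com.example.app"}}'

def dispatch(raw):
    """Parse a JSON command and route it to the matching handler."""
    msg = json.loads(raw)
    handlers = {
        "launch_app": lambda a: f"launching {a['package']}",
        "input_text": lambda a: f"typing into field: {a.get('text', '')}",
    }
    handler = handlers.get(msg["cmd"])
    return handler(msg["args"]) if handler else "unknown command"

print(dispatch(command_msg))  # launching com.example.app
```

Defenders can use the same structural insight in reverse: anomalous JSON payloads flowing to an app's command endpoint are a useful detection signal.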

“In general, we always recommend carefully review permissions any app is requesting on the device. It is also essential to install an excellent up-to-date anti-malware solution with real-time protection enabled,” Kaspersky concludes. 

Related: Malware Found in Google Play App With 100 Million Downloads

Related: Researchers Discover Android Surveillance Malware Built by Russian Firm

  • September 2nd 2019 at 14:59

Three Strategies to Avoid Becoming the Next Capital One

Recently, Capital One discovered a breach in their system that compromised Social Security numbers of about 140,000 credit card customers along with 80,000 bank account numbers. The breach also exposed names, addresses, phone numbers and credit scores, among other data.

What makes this breach even more disconcerting is Capital One has been the poster child for cloud adoption and most, if not all, of their applications are hosted in the cloud. They were one of the first financial companies -- a very technologically conservative industry -- to adopt the cloud and have always maintained the cloud has been a critical enabler of their business success by providing incredible IT agility and competitive strengths.

So, does this mean companies should rethink their cloud adoption? In two words: hell no! The agility and economic value of cloud are intact and accelerating. Leading-edge companies will continue to adopt cloud and SaaS technologies. The breach does, however, put a finer point on what it means to manage security in the cloud.

So how do you avoid becoming the next Capital One? At Sumo Logic, we are fully in the cloud and work with thousands of companies who have (or are planning to) adopt the cloud. Our experience enables us to offer three strategies to our enterprise CISO/security teams:

1. Know the “shared security” principles in the cloud environment.

The cloud runs on a shared security model. If you are using the cloud and building apps in the cloud, you should know that your app security is shared between you (the application owner) and the cloud platform.

Specifically, the cloud security model means that:

  • The cloud vendor manages and controls the host operating system, the virtualization layer, and the physical security of its facilities.
  • To ensure security within the cloud, the customer configures and manages the security controls for the guest operating system and other apps (including updates and security patches), as well as for the security group firewall. The customer is also responsible for encrypting data in-transit and at-rest.
  • Maintain a strong IAM strategy; access control and logging are key to stopping insider threats.
  • Consider a bug bounty program; this was an essential part of what Capital One did right in identifying the breach.

Hence, running in the cloud does not absolve you of managing the security of your application or its infrastructure, something all cloud enterprises should be aware of. It is also a good time to step up your security by inviting ethical hacking on your services. At Sumo Logic, we have been running bounties on our platform for two years, using both HackerOne and Bugcrowd, to earn our customers’ trust by showing that we are doing everything possible to secure their data in the cloud.
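As a rough sketch, the shared-responsibility split described above can be captured in a few lines of Python — the control categories and their owners below are illustrative, not an official AWS or Azure matrix:

```python
# Illustrative model of the cloud shared-responsibility split.
# Category names and assignments are examples, not a provider's official list.
RESPONSIBILITIES = {
    "physical_security": "provider",
    "hypervisor": "provider",
    "host_os": "provider",
    "guest_os_patching": "customer",
    "security_group_firewall": "customer",
    "iam_and_access_control": "customer",
    "data_encryption_in_transit": "customer",
    "data_encryption_at_rest": "customer",
}

def customer_duties(model=RESPONSIBILITIES):
    """Return the controls the application owner must manage."""
    return sorted(k for k, v in model.items() if v == "customer")
```

The point of even a toy model like this is that the customer column is long — which is exactly why "the cloud provider handles security" is a dangerous assumption.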

Your call to action: Know the model. Know what you are responsible for (at the end of the day, almost everything!).

2. Know and use the cloud native security services

Some elements of cloud infrastructure and systems are opaque, which is why all major cloud providers offer native security services to help you get control of access and security in the cloud. It’s imperative that enterprises in the cloud use these foundational services. In Sumo Logic’s third annual State of the Modern App Report, we analyzed these services in AWS and saw significant usage.

Your call to action: Implement the cloud platform security services. They are your foundational services and help implement your basic posture.

3. Get a “cloud” SIEM to mind the minder

A security information event management (SIEM) solution is like a radar system pilots and air traffic controllers use. Without one, enterprise IT is flying blind in regard to security. Today’s most serious threats are distributed, acting in concert across multiple systems and using advanced evasion techniques to avoid detection. Without a SIEM, attacks are allowed to germinate and grow into emergency incidents with significant business impact.
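To make the radar metaphor concrete, here is a minimal sketch — with made-up field names and thresholds — of the kind of cross-system correlation a SIEM performs: events that look benign on any single host are grouped by actor, so a distributed pattern stands out:

```python
from collections import defaultdict

# Hypothetical sketch of SIEM-style correlation: group events by actor
# across systems to surface distributed activity no single host log reveals.
def correlate(events, threshold=3):
    """Flag actors seen on `threshold` or more distinct systems."""
    systems_by_actor = defaultdict(set)
    for e in events:
        systems_by_actor[e["actor"]].add(e["host"])
    return {a for a, hosts in systems_by_actor.items() if len(hosts) >= threshold}

# Illustrative event stream: one actor touching three systems in concert.
events = [
    {"actor": "10.0.0.9", "host": "web-1"},
    {"actor": "10.0.0.9", "host": "db-1"},
    {"actor": "10.0.0.9", "host": "api-2"},
    {"actor": "10.0.0.7", "host": "web-1"},
]
```

A real SIEM layers time windows, enrichment, and behavioral baselines on top of this, but the core value is the same: correlation across systems, not inspection of any one of them.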

Securing the cloud is radically different from what traditional SIEMs were built for. There are several key differences:

  • The architecture of cloud apps (microservices, API based) is very different from traditional apps
  • The surface area of cloud applications (and therefore security incidents) is very large
  • The types of security incidents (malware, ransomware etc.) in the cloud could also be very different from traditional data center attacks

While you consider a SIEM, consider one focused on new threats in the cloud environment, built in the cloud, for the cloud.

So, there you have it -- three strategies for preventing catastrophic cloud security issues. These strategies will not fix everything, but they are the best starting points for improving your security posture as you move to the cloud.

About the author: As Sumo Logic's Chief Security Officer, George Gerchow brings over 20 years of information technology and systems management expertise to the application of IT processes and disciplines. His background includes the security, compliance, and cloud computing disciplines.

Copyright 2010 Respective Author at Infosec Island
  • August 30th 2019 at 14:00

Why a Business-Focused Approach to Security Assurance Should Be an Ongoing Investment

How secure is your organization’s information? At any given moment, can a security leader look an executive in the eye and tell them how well business processes, projects and supporting assets are protected?   

Security assurance should provide relevant stakeholders with a clear, objective picture of the effectiveness of information security controls. However, in a fast-moving, interconnected world where the threat landscape is constantly evolving, many security assurance programs are unable to keep pace. Ineffective programs that do not focus sufficiently on the needs of the business can provide a false level of confidence.  

A Business-Focused Approach

Many organizations aspire to an approach that directly links security assurance with the needs of the business, demonstrating the level of value that security provides. Unfortunately, there is often a significant gap between aspiration and reality.

Improvement requires time and patience, but organizations do not need to start at the beginning. Most already have the basics of security assurance in place, meeting compliance obligations by evaluating the extent to which required controls have been implemented and identifying gaps or weaknesses. 

Taking a business-focused approach to security assurance is an evolution. It means going a step further and demonstrating how well business processes, projects and supporting assets are really protected, by focusing on how effective controls are. It requires a broader view, considering the needs of multiple stakeholders within the organization.

Business-focused security assurance programs can build on current compliance-based approaches by:

  • Identifying the specific needs of different business stakeholders
  • Testing and verifying the effectiveness of controls, rather than focusing purely on whether the right ones are in place
  • Reporting on security in a business context
  • Leveraging skills, expertise and technology from within and outside the organization

A successful business-focused security assurance program requires positive, collaborative working relationships throughout the organization. Security, business and IT leaders should energetically engage with each other to make sure that requirements are realistic and expectations are understood by all.

A Change Will Do You Good

The purpose of security assurance is to provide business leaders with an accurate and realistic level of confidence in the protection of ‘target environments’ for which they are responsible. This involves presenting relevant stakeholders with evidence regarding the effectiveness of controls. However, common organizational approaches to security assurance do not always provide an accurate or realistic level of confidence, nor focus on the needs of the business.

Security assurance programs seldom provide reliable assurance in a dynamic technical environment, which is subject to a rapidly changing threat landscape. Business stakeholders often lack confidence in the accuracy of security assurance findings for a variety of reasons.

Common security assurance activities and reporting practices only provide a snapshot view, which can quickly become out of date: new threats emerge or existing ones evolve soon after results are reported. Activities such as security audits and control gap assessments typically evaluate the strengths and weaknesses of controls at a single point in time. While these types of assurance activities can be helpful in identifying trends and patterns, reports provided on a six-monthly or annual basis are unlikely to present an accurate, up-to-date picture of the effectiveness of controls. More regular reporting is required to keep pace with new threats.

Applying a Repeatable Process

Organizations should follow a clearly defined and approved process for performing security assurance in target environments. The process should be repeatable for any target environment, fulfilling specific business-defined requirements.

The security assurance process comprises five steps, which can be adopted or tailored to meet the needs of any organization. During each step of the process a variety of individuals, including representatives from operational and business support functions throughout the organization, might need to be involved.

The extent to which individuals and functions are involved during each step will differ between organizations. A relatively small security assurance function, for example, may need to acquire external expertise or additional specialists from the broader information security or IT functions to conduct specific types of technical testing. However, in every organization:

  • Business stakeholders should influence and approve the objectives and scope of security assurance assessments
  • The security assurance function should analyze results from security assurance assessments to measure performance and report the main findings

Organizations should:

  • Prioritize and select the target environments in which security assurance activities will be performed
  • Apply the security assurance process to selected target environments
  • Consolidate results from assessments of multiple target environments to provide a wider picture of the effectiveness of security controls
  • Make improvements to the security assurance program over time
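As an illustration of the consolidation step above, the sketch below rolls per-environment control-effectiveness scores into a business-level summary. The environments, control names, and 0–100 scale are assumptions for the example, not part of any formal ISF process:

```python
# Illustrative consolidation of control-effectiveness results across
# target environments; scoring scale and names are assumptions.
def consolidate(assessments):
    """assessments: {environment: {control: effectiveness score 0-100}}.
    Returns the weakest control and average effectiveness per environment."""
    summary = {}
    for env, controls in assessments.items():
        weakest = min(controls, key=controls.get)
        avg = sum(controls.values()) / len(controls)
        summary[env] = {"weakest_control": weakest, "average": round(avg, 1)}
    return summary

results = consolidate({
    "payments": {"access_control": 82, "patching": 55, "logging": 90},
    "hr_portal": {"access_control": 70, "patching": 88, "logging": 60},
})
```

Reporting the weakest control per business environment, rather than a flat compliance checklist, is one simple way to present assurance findings in terms stakeholders care about.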

An Ongoing Investment

In a fast-moving business environment filled with constantly evolving cyber threats, leaders want confidence that their business processes, projects and supporting assets are well protected. An independent and objective security assurance function should provide business stakeholders with the right level of confidence in controls – complacency can have disastrous consequences.

Security assurance activities should demonstrate how effective controls really are – not just determine whether they have been implemented or not. Focusing on what business stakeholders need to know about the specific target environments for which they have responsibility will enable the security assurance function to report in terms that resonate. Delivering assurance that critical business processes and projects are not exposed to financial loss, do not leak sensitive information, are resilient and meet legal, regulatory and compliance requirements, will help to demonstrate the value of security to the business.

In most cases, new approaches to security assurance should be more of an evolution than a revolution. Organizations can build on existing compliance-based approaches rather than replace them, taking small steps to see what works and what doesn’t.

Establishing a business-focused security assurance program is a long-term, ongoing investment.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

  • August 29th 2019 at 13:14

If You Don’t Have Visibility, You Don’t Have Security

If you’ve ever watched a thriller or horror movie, you’re probably familiar with the scene where someone is trying to keep a monster or attacker out so they barricade the doors and lock the windows and feel safe for 10 seconds…until someone remembers that the cellar door is unlocked and they discover the threat is already inside. That’s a pretty good metaphor for cybersecurity. IT security professionals scramble to protect and secure everything they’re aware of—but the one thing they’re not aware of is the Achilles heel that can bring everything crumbling down. That is why comprehensive visibility is crucial for effective cybersecurity.

You Can’t Protect What You Can’t See

As illustrated in the example above, you can have the best security possible protecting the attack vectors and assets you’re aware of, but that won’t do you any good if an attacker discovers an attack vector or asset you aren’t aware of and haven’t protected. It may not seem like a fair fight, but an attacker only needs one vulnerability to exploit. The burden is on the IT security team to make sure that everything is secured.

That’s easier said than done in today’s network environments. When you’re trying to keep a monster out of the house, you’re at least only dealing with a static and manageable number of doors and windows. In a dynamic, hybrid cloud, DevOps-driven, software-defined environment running containerized applications, the entire ecosystem can change in the blink of an eye and the number of assets to protect can increase exponentially. Since the dawn of networking, employees have installed unauthorized routers and wireless access points and connected to unsanctioned web-based services that expose the network and sensitive data to unnecessary risk, but the advent of the internet of things (IoT) has created an explosion in the volume of rogue devices.

Organizations need a tool that provides visibility of all IT assets—both known and unknown—including endpoints, cloud platforms, containers, mobile devices, and OT and IoT equipment across hybrid and multi-cloud environments. It’s urgent for IT and cybersecurity teams to have comprehensive visibility and the ability to assess their security and compliance posture and respond in real-time to address challenges as they arise.

Vulnerability and Patch Management Can’t Replace Visibility

Since the dawn of cybersecurity, vulnerability and patch management have formed the backbone of effective protection. It makes sense. If you can proactively discover vulnerabilities in the hardware and software you use and deploy patches to fix the flaws or take steps to mitigate the risk, you should be able to prevent almost any attack.

Vulnerability and patch management are still important elements of effective cybersecurity, but comprehensive visibility is crucial. Finding and patching vulnerabilities without visibility provides a false sense of security. The assumption is that the environment is secure if all of the discovered vulnerabilities have been patched, but the reality is that only the vulnerabilities of the hardware and software you’re aware of have been patched. If you aren’t confident that you have an accurate, real-time inventory of your hardware and software assets, you’re not really secure.

Continuous Visibility Leads to Better Cybersecurity

Ideally, organizations need to have visibility of all IT assets—both known and unknown—throughout the entire IT infrastructure, spanning local networks and hybrid cloud environments. Imagine how much better your security and compliance posture would be if you actually knew—with confidence—what is on your global hybrid-IT environment at any given moment rather than relying on periodic asset scans that are already obsolete. What would it be like to have a single source of truth that enables you to identify issues and respond in real-time?
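A minimal sketch of that idea — diffing the managed inventory against what is actually observed on the network — might look like this (the asset names are invented for illustration):

```python
# Sketch of a visibility check: anything discovered on the network that is
# absent from the managed inventory is an unknown, unprotected asset.
def find_unknown_assets(known, discovered):
    """Return discovered assets missing from the managed inventory."""
    return sorted(set(discovered) - set(known))

known = {"web-1", "db-1", "laptop-042"}
discovered = {"web-1", "db-1", "laptop-042", "rogue-ap-7", "iot-cam-3"}
```

Real asset-discovery platforms do this continuously with sensors and agents, but the underlying operation is exactly this set difference — and it is only meaningful if the "discovered" side is genuinely comprehensive.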

Visibility alone is not enough, though. It’s also crucial to have the right tools to do something with the information. Beyond visibility, you also need workflows to seamlessly connect to vulnerability and compliance solutions. For example, IT and cybersecurity teams should be able to add unmanaged devices and begin a scan, or tag unmanaged devices to initiate cloud agent installation to enable more comprehensive compliance checks.

Thankfully, the same platforms and technologies that make network visibility more complex and challenging also provide the power, scalability, and accessibility to deliver comprehensive, continuous visibility and tools and platforms that make it easier to run compliance and vulnerability programs. With the appropriate sensors placed strategically throughout the network and on devices, you can actively and continuously collect the necessary data.

The data can be stored in the cloud where the relevant IT, security and compliance information can be analyzed, categorized, enriched, and correlated. Because the data is stored and analyzed in the cloud, it has the flexibility and scalability to address spikes in assets resulting from high demand on containerized applications. It also simplifies and streamlines the ability to search for any asset and quickly determine its security posture.

With the right platform and tools, organizations have access to clean, reliable data—providing continuous visibility and relevant context to enable effective business decisions. It is also crucial for IT and cybersecurity teams to be able to quickly and easily find what they need. The information has to be available and accessible in seconds rather than minutes or hours or days so threats and issues can be addressed with urgency.

Knowledge Is Power

You can’t protect what you can’t see…or what you don’t know about. Don’t be the guy who thinks he is safe in the house while the monster crawls through an unlocked window at the back of the house. Effective cybersecurity is about knowing—with confidence and accuracy—what devices and assets are connected to your network and having the information and tools necessary to respond to threats in real-time.

Without comprehensive visibility, there will always be the chance that your false sense of security could be shattered at any time as attackers discover the vulnerable assets you aren’t aware of and exploit them to gain access to your network and data. Start with visibility. It is the foundation of effective cybersecurity, and it is absolutely essential.

About the author: Shiva Mandalam is Vice President, Asset Management & Secure Access Controls at Qualys.

  • August 20th 2019 at 10:01

Ransomware: Why Hackers Have Taken Aim at City Governments

When the news media reports on data breaches and other forms of cybercrime, the center of the story is usually a major software company, financial institution, or retailer. But in reality, these types of attacks are merely part of the damage that global hackers cause on a daily basis.

Town and city governments are becoming a more common target for online criminals. For example, Riviera Beach, a small city in Florida, had its office computers hacked and ended up paying $600,000 to try to reverse the damage. Hackers saw this as a successful breach and are now inspired to look at more public institutions that could be vulnerable.

Why are cities and towns so susceptible to hacking, how are these attacks carried out, and what steps should administrators take to protect citizen data?

How Hackers Choose Targets

While some cybercriminals seek out exploits for the sole purpose of causing destruction or frustration, the majority of hackers are looking to make money. Their aim is to locate organizations with poor security practices so that they can infiltrate their networks and online systems. Sometimes hackers will actually hide inside of a local network or database for an extended period of time without the organization realizing it.

Hackers usually cash in through one of two ways. The first way is to try to steal data, like email addresses, passwords, and credit card numbers, from an internal system and then sell that information on the dark web. The alternative is a ransomware attack, in which the hacker holds computer systems hostage and unusable until the organization pays for them to be released.

City and town governments are becoming a common target for hackers because they often rely on outdated legacy software or else have built tools internally that may not be fully secure. These organizations rarely have a dedicated cybersecurity team or extensive testing procedures.

The Basics of Ransomware

Ransomware attacks, like the one which struck the city government of Riviera Beach, can begin with one simple click of a dangerous link. Hackers will often launch targeted phishing scams at an organization's members via emails that are designed to look legitimate.

When a link within one of these emails is clicked, the hacker will attempt to hijack the user's local system. If successful, their next move will be to seek out other nodes on the network. Then they will deploy a piece of malware that will lock all internal users from accessing the systems.
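Because the chain starts with a single deceptive link, the very first triage step — does this link actually point at a domain we trust? — can be sketched in a few lines. The allowlist and URLs below are made up for illustration:

```python
from urllib.parse import urlparse

# Defensive sketch: flag email links whose host is not a trusted domain
# (or a subdomain of one). Domains here are hypothetical examples.
TRUSTED_DOMAINS = {"cityofexample.gov", "payroll.example.com"}

def is_suspicious(url, trusted=TRUSTED_DOMAINS):
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in trusted)
```

Note how a lookalike such as `cityofexample.gov.evil.io` fails the check even though it visually begins with the trusted name — which is exactly the trick phishing emails rely on.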

At this point, the town or city employees will usually see a message posted on their screen demanding a ransom payment. Some forms of ransomware will actually encrypt all individual files on an operating system so that the users have no way of opening or copying them.

Ways to Defend Yourself

Cybersecurity threats should be taken seriously by all members of an organization. The first step to stopping hackers is promoting awareness of potential attacks. This can be done through regular training sessions. Additionally, an organization’s IT department should evaluate the following areas immediately.

  • Security Tools: City governments should have a well-reviewed, full-featured, and updated virus scanning tool installed on the network to flag potential threats. At an organization level, firewall policies should be put in place to filter incoming traffic and only allow connections from reputable sources.
  • Web Hosting: With the eternal pressure to stick to a budget, cities often choose a web host based on the lowest price, which can lead to a disaster that far exceeds any cost savings. In a recent comparison of low-cost web hosts, community-supported research group Hosting Canada tracked providers using Pingdom and found that the ostensibly “free” and discount hosts had an average uptime of only 96.54%. For reference, 99.9% is considered by the industry to be the bare minimum. Excessive downtime often correlates with older hardware and outdated software that is more easily compromised.
  • Virtual Private Network (VPN): This one should be mandatory for any employee who works remotely or needs to connect to public wi-fi networks. A VPN encodes all data in a secure tunnel as it leaves your device and heads to the open internet. This means that if a hacker tries to intercept your web traffic, they will be unable to view the raw content. However, a VPN is not enough to stop ransomware attacks or other forms of malware. It simply provides you with an anonymous IP address to use for exchanging data.
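To put those uptime percentages in perspective, here is the arithmetic converting an uptime figure into hours of downtime per year:

```python
# Convert an uptime percentage into hours of downtime per year
# (a non-leap year has 8,760 hours).
def downtime_hours_per_year(uptime_percent):
    return round((1 - uptime_percent / 100) * 8760, 1)

budget_host = downtime_hours_per_year(96.54)    # the discount hosts measured
industry_floor = downtime_hours_per_year(99.9)  # the accepted bare minimum
```

The gap is stark: roughly 300 hours of downtime a year versus under nine — a difference of nearly twelve full days during which systems may be unpatched, unmonitored, or unreachable.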

Looking Ahead

Local governments need to maintain a robust risk management approach while preparing for potential attacks from hackers. Most security experts agree that Riviera Beach actually did the wrong thing by paying the hackers’ ransom. This is because there’s no guarantee that the payment will result in the unlocking of all systems and data.

During a ransomware attack, an organization needs to act swiftly. When the first piece of malware is detected, the infected hardware should be immediately shut down and disconnected from the local network to limit the spread of the virus. Any affected machine should then have its hard drive wiped and restored to a previous backup from before the attack began.

Preparing for different forms of cyberattack is a critical activity within a disaster recovery plan. Every organization should have its plan defined, with team members assigned to roles and responsibilities. Cities and towns should also consider investing in penetration testing from outside groups and exploring the increasingly popular zero-trust security strategy as a way to harden the network. During a penetration test, experts probe for gaps in your security approach and report the issues to you directly, allowing you to fix problems before hackers exploit them.

Final Thoughts

With ransomware attacks, a hacker looks to infiltrate an organization's network and hold their hardware and data files hostage until they receive a large payment. City and town government offices are becoming a common target for these instances of cybercrime due to their immature security systems and reliance on legacy software.

The only way to stop the trend of ransomware is for municipal organizations to build a reputation of having strong security defenses. This starts at the employee level, with people being trained to look for danger online and learning how to keep their own hardware and software safe.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

 

  • August 19th 2019 at 12:09

5 Limitations of Network-Centric Security in the Cloud

Traditional security solutions were designed to identify threats at the perimeter of the enterprise, which was primarily defined by the network. Whether called firewall, intrusion detection system, or intrusion prevention system, these tools delivered “network-centric” solutions. However, much like a sentry guarding the castle, they generally emphasized identification and were not meant to investigate activity that might have gotten past their surveillance.

Modern threats targeting public clouds (PaaS or IaaS platforms) require a different level of insight and action. Executables come and go instantaneously, network addresses and ports are recycled seemingly at random, and even the fundamental way traffic flows has changed compared to the traditional data center. To operate successfully in modern IT infrastructures, we must reset how we think about security in the cloud.

Surprisingly, many organizations continue to use network-based security and rely on available network traffic data as their security approach. It’s important for decision makers to understand the limitations inherent in this kind of approach so they don’t operate on a false sense of security.

To help security professionals understand the new world of security in the cloud, below are five specific use cases where network-centric security is inadequate to handle the challenges of security in modern cloud environments:

1. Network-based detection tends to garner false positives

Nothing has confounded network security as much as the demise of static IP addresses and endpoints in the cloud. Endpoints used to be physical; now they are virtual and exist as containers. In the cloud, everything is dynamic and transient; nothing is persistent. IP addresses and port numbers are recycled rapidly and continuously, making it impossible to identify and track over time which application generated a connection just by looking at network logs. Attempting to detect risks and threats using network activity creates too many irrelevant alerts and false positives.
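A toy example shows why IP-based attribution breaks down: the same address maps to different workloads depending on when you look, so a network log entry alone cannot tell you which application made a connection (the lease records below are invented):

```python
# Illustrative address-lease history: one IP, three different workloads.
# Times are abstract units; intervals are [start, end).
ip_leases = [
    ("10.0.1.5", "billing-api", 0, 10),
    ("10.0.1.5", "batch-worker", 10, 20),
    ("10.0.1.5", "debug-container", 20, 30),
]

def workload_for(ip, t, leases=ip_leases):
    """Resolve which workload held an IP at time t — context network
    logs alone do not carry."""
    for addr, workload, start, end in leases:
        if addr == ip and start <= t < end:
            return workload
    return None
```

Without this lease history, a flagged connection from 10.0.1.5 gets blamed on whatever workload holds the address at investigation time — a recipe for false positives.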

2. Network data doesn’t associate cloud sessions to actual users

The common DevOps practice of using service and root accounts has been a double-edged sword. On one hand, it removes administrative roadblocks for developers and further accelerates the pace of software delivery in cloud environments. On the other hand, it also makes it easier to initiate attacks from these “privileged” accounts and gives attackers another place to hide. By co-opting a user or service account, cybercriminals can evade identity-aware network defenses. Even correlating traffic with Active Directory can fail to provide insights into the true user. The only way to get to the true user of an application is to correlate and stitch SSH sessions, which is simply not possible with network-only information.
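As a hypothetical illustration of session stitching, the sketch below walks a chain of sessions — interactive login, then a hop into a service account, then root — back to the original named user. The log format is an assumption for the example, not any vendor’s actual schema:

```python
# Hypothetical session-stitching sketch: follow parent links from the
# final privileged session back to the original interactive login.
def true_user(session_log):
    """Return the user who started the chain ending at the last session."""
    by_id = {s["id"]: s for s in session_log}
    current = session_log[-1]
    while current.get("parent") is not None:
        current = by_id[current["parent"]]
    return current["user"]

sessions = [
    {"id": 1, "user": "alice", "parent": None},  # interactive SSH login
    {"id": 2, "user": "deploy", "parent": 1},    # su to service account
    {"id": 3, "user": "root", "parent": 2},      # sudo to root
]
```

Network logs would show only the root session’s traffic; it is the host-level parent-child linkage that recovers "alice" as the accountable user.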

3. The network attack surface is no longer the only target for cyber attacks

Illicit activities have moved beyond the network attack surface in the cloud. Here are four common attack scenarios that involve configuration and workloads (VMs or containers) in public clouds but will not appear in network logs:

  • User privilege changes: most cyber attacks require a change of privilege to succeed.
  • The launch of a new application or a change to a launch package.
  • Changes in application launch sequences.
  • Changes made to configuration files.

4. When it comes to container traffic, network-based security is blind

Network logs capture network activities from one endpoint (physical or virtual server, VM, user, or generically an “instance”) to another along with many attributes of the communication. Network logs have no visibility inside an instance. In a typical modern micro-services architecture, multiple containers will run inside the same instance and their communication will not show up on any network logs. The same applies to all traffic within a workload. Containerized clouds are where cryptocurrency mining attacks often start, and network-based security has no ability to detect the intrusion.

5. Harmful activity at the storage layer is not detected

In cloud environments, the separation of compute and storage resources into two layers creates new direct paths to the data. If the storage layer is not configured properly, hackers can target APIs and conduct successful attacks without being detected by network-based security. On AWS specifically, S3 bucket misconfigurations are common and have left large volumes of data exposed. Data leaks due to open buckets will not appear in network logs unless you have more granular information that can detect that abnormal activity is taking place.
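A hedged sketch of a storage-layer audit: the bucket records below stand in for what a cloud storage API would return (they are not real boto3 output), and the check simply flags anything public or unencrypted:

```python
# Illustrative storage-layer audit over hypothetical bucket records.
def open_buckets(buckets):
    """Flag buckets that are publicly accessible or unencrypted at rest."""
    return [b["name"] for b in buckets
            if b.get("public_access") or not b.get("encryption_at_rest")]

findings = open_buckets([
    {"name": "prod-backups", "public_access": False, "encryption_at_rest": True},
    {"name": "customer-exports", "public_access": True, "encryption_at_rest": True},
    {"name": "logs-raw", "public_access": False, "encryption_at_rest": False},
])
```

The point is that this class of exposure lives entirely in configuration state — nothing here ever generates a packet for a network sensor to inspect.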

Focusing exclusively on network connections is not enough to secure cloud environments. Servers and endpoints don’t yield any better results as they come and go too fast for an endpoint-only strategy to succeed. So, what can you do? Take a different approach altogether. Collect data at the VM and container level, organize that data into logical units that give security insights, and then analyze the situation in real-time. In other words, go deep vertically when collecting data from workloads, but analyze the information horizontally across your entire cloud. This is how you can focus on the application’s behaviors and not on network 5-tuples or single machines.

About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries.

  • August 19th 2019 at 11:55

1 Million South Korean Credit Card Records Found Online

Over 1 million South Korea-issued Card Present records have been posted for sale on the dark web since the end of May, Gemini Advisory says. 

The security firm could not pinpoint the exact compromised point of purchase (CPP), but believes the records may have been obtained either from a breached company operating several different businesses or from a compromised point-of-sale (POS) integrator. 

Amid an increase in attacks targeting brick-and-mortar and e-commerce businesses in the Asia Pacific (APAC) region, South Korea emerges as the largest victim of Card Present (CP) data theft by a wide margin, Gemini Advisory says.

Although EMV chips have been used in the country since 2015 and compliance has been mandatory since July 2018, CP fraud still occurs frequently, largely due to poor merchant implementation. 

In May 2019, Gemini Advisory found 42,000 compromised South Korea-issued CP records posted for sale on the dark web, with a 448% spike in June, when 230,000 records were observed. In July, there were 890,000 records posted, marking a 2,019% increase from May. 

Overall, more than 1 million compromised South Korea-issued CP records have been posted for sale on the dark web since May 29, 2019. 
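The growth figures quoted above check out; expressed as code:

```python
# Verify the month-over-month growth percentages cited in the report.
def pct_increase(old, new):
    """Percentage increase from old to new, rounded to the nearest whole number."""
    return round((new - old) / old * 100)

june_spike = pct_increase(42_000, 230_000)   # June records vs. May
july_spike = pct_increase(42_000, 890_000)   # July records vs. May
```

Note that both spikes are measured against the May baseline of 42,000 records, not month over month.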

The security firm also found that 3.7% of the compromised records were US-issued cards, with a credit union that primarily serves the US Air Force emerging as one of the most impacted US financial institutions (the Air Force maintains multiple air bases in South Korea). 

“Through an in-depth analysis of the compromised cards, analysts determined that many of them belong to US cardholders visiting South Korea. Since South Korea has received just over 1 million US travelers in the past 12 months, this should account for the high level of US payment records,” Gemini Advisory says. 

The median price per record is $40, significantly higher than the $24 median price of South Korean CP records across the dark web overall, the security firm notes. While 2018 saw a relatively large supply of such records but low demand, 2019 has seen lower supply but growing demand.

“The demand continued to increase while the supply remained stagnant until the recent spike in South Korean records from June until the present. This sudden influx in card supply may be highly priced in an attempt to capitalize on the growing demand,” Gemini Advisory notes. 

The security firm says attempts to explore potential CPPs were not fruitful, as there were too many possible businesses affected by this breach. The most likely scenarios suggest that either a large business was compromised, or that a POS integrator was breached, impacting multiple merchants.

“South Korea’s high CP fraud rates indicate a weakness in the country’s payment security that fraudsters are motivated to exploit. As this global trend towards increasingly targeting non-Western countries continues, Gemini Advisory assesses with a moderate degree of confidence that both the supply and demand for South Korean-issued CP records in the dark web will likely increase,” the security firm concludes.

Related: A Crash-Course in Card Shops

Related: Payment Card Data Stolen From AeroGrow Website

  • August 8th 2019 at 09:54