
Why You Should Opt Out of Sharing Data With Your Mobile Provider

By BrianKrebs

A new breach involving data from nine million AT&T customers is a fresh reminder that your mobile provider likely collects and shares a great deal of information about where you go and what you do with your mobile device — unless and until you affirmatively opt out of this data collection. Here’s a primer on why you might want to do that, and how.

Image: Shutterstock

Telecommunications giant AT&T disclosed this month that a breach at a marketing vendor exposed certain account information for nine million customers. AT&T said the data exposed did not include sensitive information, such as credit card or Social Security numbers, or account passwords, but was limited to “Customer Proprietary Network Information” (CPNI), such as the number of lines on an account.

Certain questions may be coming to mind right now, like “What the heck is CPNI?” And, “If it’s so ‘customer proprietary,’ why is AT&T sharing it with marketers?” Also maybe, “What can I do about it?” Read on for answers to all three questions.

AT&T’s disclosure said the information exposed included customer first name, wireless account number, wireless phone number and email address. In addition, a small percentage of customer records also exposed the rate plan name, past due amounts, monthly payment amounts and minutes used.

CPNI refers to customer-specific “metadata” about the account and account usage, and may include:

- Called phone numbers
- Time of calls
- Length of calls
- Cost and billing of calls
- Service features
- Premium services, such as directory call assistance

According to a succinct CPNI explainer at TechTarget, CPNI is private and protected information that cannot be used for advertising or marketing directly.

“An individual’s CPNI can be shared with other telecommunications providers for network operating reasons,” wrote TechTarget’s Gavin Wright. “So, when the individual first signs up for phone service, this information is automatically shared by the phone provider to partner companies.”

Is your mobile Internet usage covered by CPNI laws? That’s less clear, as the CPNI rules were established before mobile phones and wireless Internet access were common. TechTarget’s CPNI primer explains:

“Under current U.S. law, cellphone use is only protected as CPNI when it is being used as a telephone. During this time, the company is acting as a telecommunications provider requiring CPNI rules. Internet use, websites visited, search history or apps used are not protected CPNI because the company is acting as an information services provider not subject to these laws.”

Hence, the carriers can share and sell this data because they’re not explicitly prohibited from doing so. All three major carriers say they take steps to anonymize the customer data they share, but researchers have shown it is not terribly difficult to de-anonymize supposedly anonymous web-browsing data.

“Your phone, and consequently your mobile provider, know a lot about you,” wrote Jack Morse for Mashable. “The places you go, apps you use, and the websites you visit potentially reveal all kinds of private information — e.g. religious beliefs, health conditions, travel plans, income level, and specific tastes in pornography. This should bother you.”

Happily, all of the U.S. carriers are required to offer customers ways to opt out of having data about how they use their devices shared with marketers. Here’s a look at some of the carrier-specific practices and opt-out options.

AT&T

AT&T’s policy says it shares device or “ad ID” data, combined with demographics (including age range, gender, and ZIP code information) with third parties, which explicitly include advertisers, programmers and networks, social media networks, analytics firms, ad networks and other similar companies that are involved in creating and delivering advertisements.

AT&T said the data exposed on 9 million customers was several years old, and mostly related to device upgrade eligibility. This may sound like the data went to just one of its partners who experienced a breach, but in all likelihood it also went to hundreds of AT&T’s partners.

AT&T’s CPNI opt-out page says it shares CPNI data with several of its affiliates, including WarnerMedia, DirecTV and Cricket Wireless. Until recently, AT&T also shared CPNI data with Xandr, whose privacy policy in turn explains that it shares data with hundreds of other advertising firms. Microsoft bought Xandr from AT&T last year.

T-MOBILE

According to the Electronic Privacy Information Center (EPIC), T-Mobile seems to be the only company out of the big three to extend to all customers the rights conferred by the California Consumer Privacy Act (CCPA).

EPIC says T-Mobile customer data sold to third parties uses another type of unique identifier called a mobile advertising ID, or “MAID.” T-Mobile claims that MAIDs don’t directly identify consumers, but under the CCPA MAIDs are considered “personal information” that can be connected to IP addresses, mobile apps installed or used with the device, any video or content viewing information, and device activity and attributes.

T-Mobile customers can opt out by logging into their account and navigating to the profile page, then to “Privacy and Notifications.” From there, toggle off the options for “Use my data for analytics and reporting” and “Use my data to make ads more relevant to me.”

VERIZON

Verizon’s privacy policy says it does not sell information that personally identifies customers (e.g., name, telephone number or email address), but it does allow third-party advertising companies to collect information about activity on Verizon websites and in Verizon apps, through MAIDs, pixels, web beacons and social network plugins.

According to Wired.com’s tutorial, Verizon users can opt out by logging into their Verizon account through a web browser or the My Verizon mobile app. From there, select the Account tab, then click Account Settings and Privacy Settings on the web. For the mobile app, click the gear icon in the upper right corner and then Manage Privacy Settings.

On the privacy preferences page, web users can choose “Don’t use” under the Custom Experience section. On the My Verizon app, toggle any green sliders to the left.

EPIC notes that all three major carriers say resetting the consumer’s device ID and/or clearing cookies in the browser will similarly reset any opt-out preferences (i.e., the customer will need to opt out again), and that blocking cookies by default may also block the opt-out cookie from being set.

T-Mobile says its opt out is device-specific and/or browser-specific. “In most cases, your opt-out choice will apply only to the specific device or browser on which it was made. You may need to separately opt out from your other devices and browsers.”

Both AT&T and Verizon offer opt-in programs that gather and share far more information, including device location, the phone numbers you call, and which sites you visit using your mobile and/or home Internet connection. AT&T calls this their Enhanced Relevant Advertising Program; Verizon’s is called Custom Experience Plus.

In 2021, multiple media outlets reported that some Verizon customers were being automatically enrolled in Custom Experience Plus — even after those customers had already opted out of the same program under its previous name — “Verizon Selects.”

If none of the above opt out options work for you, at a minimum you should be able to opt out of CPNI sharing by calling your carrier, or by visiting one of their stores.

THE CASE FOR OPTING OUT

Why should you opt out of sharing CPNI data? For starters, some of the nation’s largest wireless carriers don’t have a great track record in terms of protecting the sensitive information that you give them solely for the purposes of becoming a customer — let alone the information they collect about your use of their services after that point.

In January 2023, T-Mobile disclosed that someone stole data on 37 million customer accounts, including customer name, billing address, email, phone number, date of birth, T-Mobile account number and plan details. In August 2021, T-Mobile acknowledged that hackers made off with the names, dates of birth, Social Security numbers and driver’s license/ID information on more than 40 million current, former or prospective customers who applied for credit with the company.

Last summer, a cybercriminal began selling the names, email addresses, phone numbers, SSNs and dates of birth on 23 million Americans. An exhaustive analysis of the data strongly suggested it all belonged to customers of one AT&T company or another. AT&T stopped short of saying the data wasn’t theirs, but said the records did not appear to have come from its systems and may be tied to a previous data incident at another company.

However frequently the carriers may alert consumers about CPNI breaches, it’s probably nowhere near often enough. Currently, the carriers are required to report a consumer CPNI breach only in cases “when a person, without authorization or exceeding authorization, has intentionally gained access to, used or disclosed CPNI.”

But that definition of breach was crafted eons ago, back when the primary way CPNI was exposed was through “pretexting,” such as when the phone company’s employees are tricked into giving away protected customer data.

In January, regulators at the U.S. Federal Communications Commission (FCC) proposed amending the definition of “breach” to include things like inadvertent disclosure — such as when companies expose CPNI data on a poorly-secured server in the cloud. The FCC is accepting public comments on the matter until March 24, 2023.

While it’s true that the leak of CPNI data does not involve sensitive information like Social Security or credit card numbers, one thing AT&T’s breach notice doesn’t mention is that CPNI data — such as balances and payments made — can be abused by fraudsters to make scam emails and text messages more believable when they’re trying to impersonate AT&T and phish AT&T customers.

The other problem with letting companies share or sell your CPNI data is that the wireless carriers can change their privacy policies at any time, and you are assumed to be okay with those changes as long as you keep using their services.

For example, location data from your wireless device is most definitely CPNI, and yet until very recently all of the major carriers sold their customers’ real-time location data to third party data brokers without customer consent.

What was their punishment? In 2020, the FCC proposed fines totaling $208 million against all of the major carriers for selling their customers’ real-time location data. If that sounds like a lot of money, consider that all of the major wireless providers reported tens of billions of dollars in revenue last year (e.g., Verizon’s consumer revenue alone was more than $100 billion last year).

If the United States had federal privacy laws that were at all consumer-friendly and relevant to today’s digital economy, this kind of data collection and sharing would always be opt-in by default. In such a world, the enormously profitable wireless industry would likely be forced to offer clear financial incentives to customers who choose to share this information.

But until that day arrives, understand that the carriers can change their data collection and sharing policies when it suits them. And regardless of whether you actually read any notices about changes to their privacy policies, you will have agreed to those changes as long as you continue using their service.

To Infinity and Beyond, with Cloudflare Cache Reserve

By Troy Hunt

What if I told you... that you could run a website from behind Cloudflare and only have 385 daily requests miss their cache and go through to the origin service?


No biggy, unless... that was out of a total of more than 166M requests in the same period:


Yep, we just hit "five nines" of cache hit ratio on Pwned Passwords: 99.999%. Actually, it was 99.9998%, but we're at the point now where that's just splitting hairs. Let's talk about how we've managed to only have two requests in a million hit the origin, beginning with a bit of history:
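As a quick sanity check on those figures, here's the rough arithmetic using the rounded numbers above:

// Rough check of the hit ratio from the figures above.
const misses = 385;                                            // daily requests that fell through to the origin
const totalRequests = 166_000_000;                             // "more than 166M requests" in the same period
const hitRatio = 1 - misses / totalRequests;                   // ≈ 0.9999977, i.e. ~99.9998%
const missesPerMillion = (misses / totalRequests) * 1_000_000; // ≈ 2.3, roughly two in a million
console.log(hitRatio, missesPerMillion);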

Optimising Caching on Pwned Passwords (with Workers)- @troyhunt - https://t.co/KjBtCwmhmT pic.twitter.com/BSfJbWyxMy

— Cloudflare (@Cloudflare) August 9, 2018

Ah, memories 😊 Back then, Pwned Passwords was serving way fewer requests in a month than what we do in a day now and the cache hit ratio was somewhere around 92%. Put another way, instead of 2 in every million requests hitting the origin it was 85k. And we were happy with that! As the years progressed, the traffic grew and the caching model was optimised so our stats improved:

There it is - Pwned Passwords is now doing north of 2 *billion* requests a month, peaking at 91.59M in a day with a cache-hit ratio of 99.52%. All free, open source and out there for the community to do good with 😊 pic.twitter.com/DSJOjb2CxZ

— Troy Hunt (@troyhunt) May 24, 2022

And that's pretty much where we levelled out, at about the 99-and-a-bit percent mark. We were really happy with that as it was now only 5k requests per million hitting the origin. There was bound to be a number somewhere around that mark due to the transient nature of cache and eviction criteria inevitably meaning a Cloudflare edge node somewhere would need to reach back to the origin website and pull a new copy of the data. But what if Cloudflare never had to do that unless explicitly instructed to do so? I mean, what if it just stayed in their cache unless we actually changed the source file and told them to update their version? Welcome to Cloudflare Cache Reserve:


Ok, so I may have annotated the important bit but that's what it feels like - magic - because you just turn it on and... that's it. You still serve your content the same way, you still need the appropriate cache headers and you still have the same tiered caching as before, but now there's a "cache reserve" sitting between that and your origin. It's backed by R2 which is their persistent data store and you can keep your cached things there for as long as you want. However, per the earlier link, it's not free:


You pay based on how much you store for how long, how much you write and how much you read. Let's put that in real terms. Just as a brief refresher (longer version here), remember that Pwned Passwords is essentially just 16^5 (just over 1 million) text files of about 30kb each for the SHA-1 hashes and a similar number for the NTLM ones (albeit slightly smaller file sizes). Here are the Cache Reserve usage stats for the last 9 days:
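As a quick aside before looking at those stats, the file count and sizes just mentioned put the stored corpus at roughly 30GB per hash type:

// Back-of-the-envelope size of the SHA-1 corpus sitting in Cache Reserve / R2.
const files = 16 ** 5;                                    // 1,048,576 hash prefix files
const avgFileKB = 30;                                     // ~30kb each
const sha1CorpusGB = (files * avgFileKB) / (1024 * 1024); // ≈ 30GB
// The NTLM set is a similar number of slightly smaller files, so a little under that again.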


We can now do some pretty simple maths with that. Working on the assumption of 9 days, here's what we get:


2 bucks a day 😲 But this has taken nearly 16M requests off my origin service over this period of time so I haven't paid for the Azure Function execution (which is cheap) nor the egress bandwidth (which is not cheap). But why are there only 16M read operations over 9 days when earlier we saw 167M requests to the API in a single day? Because if you scroll back up to the "insert magic here" diagram, Cache Reserve is only a fallback position and most requests (i.e. 99.52% of them) are still served from the edge caches.

Note also that there are nearly 1M write operations and there are 2 reasons for this:

  1. Cache Reserve is being seeded with source data as requests come in and miss the edge cache. This means that our cache hit ratio is going to get much, much better yet as not even half of all the potentially cacheable API queries are in Cache Reserve. It also means that the 48c per day cost is going to come way down 🙂
  2. Every time the FBI feeds new passwords into the service, the impacted file is purged from cache (roughly the API call sketched just after this list). This means that there will always be write operations and, of course, read operations as the data flows to the edge cache and makes corresponding hits to the origin service. The prevalence of all this depends on how much data the feds feed in, but it'll never get to zero whilst they're seeding new passwords.
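For reference, that targeted purge is a single call to Cloudflare's cache purge API; a minimal sketch, assuming purge-by-URL is the mechanism used, with the zone ID, token and prefix as placeholders rather than HIBP's real values:

// Hedged sketch: evict one hash prefix file from Cloudflare's cache after it changes.
const zoneId = "<zone id>";                                // placeholder
await fetch(`https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.CF_API_TOKEN}`,   // placeholder token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ files: ["https://api.pwnedpasswords.com/range/21BD1"] }), // example prefix
});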

An untold number of businesses rely on Pwned Passwords as an integral part of their registration, login and password reset flows. Seriously, the number is "untold" because we have no idea who's actually using it, we just know the service got hit three and a quarter billion times in the last 30 days:


Giving consumers of the service confidence that not only is it highly resilient, but also massively fast is essential to adoption. In turn, more adoption helps drive better password practices, fewer account takeovers and more smiles all round 😊

As those remaining hash prefixes populate Cache Reserve, keep an eye on the "cf-cache-status" response header. If you ever see a value of "MISS" then congratulations, you're literally one in a million!
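Checking that header from your own code is a one-liner; a quick sketch using an example prefix:

// Inspect whether a range query was served from Cloudflare's cache.
const res = await fetch("https://api.pwnedpasswords.com/range/21BD1"); // example 5-char SHA-1 prefix
console.log(res.headers.get("cf-cache-status"));  // almost always "HIT"; a "MISS" makes you ~one in a million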

Full disclosure: Cloudflare provides services to HIBP for free and they helped in getting Cache Reserve up and running. However, they had no idea I was writing this blog post and reading it live in its entirety is the first anyone there has seen it. Surprise! 👋

Jenkins Security Alert: New Security Flaws Could Allow Code Execution Attacks

By Ravie Lakshmanan
A pair of severe security vulnerabilities have been disclosed in the Jenkins open source automation server that could lead to code execution on targeted systems. The flaws, tracked as CVE-2023-27898 and CVE-2023-27905, impact the Jenkins server and Update Center, and have been collectively christened CorePlague by cloud security firm Aqua. All versions of Jenkins prior to 2.319.2 are

Hackers Claim They Breached T-Mobile More Than 100 Times in 2022

By BrianKrebs

Image: Shutterstock.com

Three different cybercriminal groups claimed access to internal networks at communications giant T-Mobile in more than 100 separate incidents throughout 2022, new data suggests. In each case, the goal of the attackers was the same: Phish T-Mobile employees for access to internal company tools, and then convert that access into a cybercrime service that could be hired to divert any T-Mobile user’s text messages and phone calls to another device.

The conclusions above are based on an extensive analysis of Telegram chat logs from three distinct cybercrime groups or actors that have been identified by security researchers as particularly active in and effective at “SIM-swapping,” which involves temporarily seizing control over a target’s mobile phone number.

Countless websites and online services use SMS text messages for both password resets and multi-factor authentication. This means that stealing someone’s phone number often can let cybercriminals hijack the target’s entire digital life in short order — including access to any financial, email and social media accounts tied to that phone number.

All three SIM-swapping entities that were tracked for this story remain active in 2023, and they all conduct business in open channels on the instant messaging platform Telegram. KrebsOnSecurity is not naming those channels or groups here because they will simply migrate to more private servers if exposed publicly, and for now those servers remain a useful source of intelligence about their activities.

Each advertises their claimed access to T-Mobile systems in a similar way. At a minimum, every SIM-swapping opportunity is announced with a brief “Tmobile up!” or “Tmo up!” message to channel participants. Other information in the announcements includes the price for a single SIM-swap request, the handle of the person who takes the payment, and information about the targeted subscriber.

The information required from the customer of the SIM-swapping service includes the target’s phone number, and the serial number tied to the new SIM card that will be used to receive text messages and phone calls from the hijacked phone number.

Initially, the goal of this project was to count how many times each entity claimed access to T-Mobile throughout 2022, by cataloging the various “Tmo up!” posts from each day and working backwards from Dec. 31, 2022.

But by the time we got to claims made in the middle of May 2022, completing the rest of the year’s timeline seemed unnecessary. The tally shows that in the last seven-and-a-half months of 2022, these groups collectively made SIM-swapping claims against T-Mobile on 104 separate days — often with multiple groups claiming access on the same days.

The 104 days in the latter half of 2022 in which different known SIM-swapping groups claimed access to T-Mobile employee tools.

KrebsOnSecurity shared a large amount of data gathered for this story with T-Mobile. The company declined to confirm or deny any of these claimed intrusions. But in a written statement, T-Mobile said this type of activity affects the entire wireless industry.

“And we are constantly working to fight against it,” the statement reads. “We have continued to drive enhancements that further protect against unauthorized access, including enhancing multi-factor authentication controls, hardening environments, limiting access to data, apps or services, and more. We are also focused on gathering threat intelligence data, like what you have shared, to help further strengthen these ongoing efforts.”

TMO UP!

While it is true that each of these cybercriminal actors periodically offer SIM-swapping services for other mobile phone providers — including AT&T, Verizon and smaller carriers — those solicitations appear far less frequently in these group chats than T-Mobile swap offers. And when those offers do materialize, they are considerably more expensive.

The prices advertised for a SIM-swap against T-Mobile customers in the latter half of 2022 ranged between USD $1,000 and $1,500, while SIM-swaps offered against AT&T and Verizon customers often cost well more than twice that amount.

To be clear, KrebsOnSecurity is not aware of specific SIM-swapping incidents tied to any of these breach claims. However, the vast majority of advertisements for SIM-swapping claims against T-Mobile tracked in this story had two things in common that set them apart from random SIM-swapping ads on Telegram.

First, they included an offer to use a mutually trusted “middleman” or escrow provider for the transaction (to protect either party from getting scammed). More importantly, the cybercriminal handles that were posting ads for SIM-swapping opportunities from these groups generally did so on a daily or near-daily basis — often teasing their upcoming swap events in the hours before posting a “Tmo up!” message announcement.

In other words, if the crooks offering these SIM-swapping services were ripping off their customers or claiming to have access that they didn’t, this would be almost immediately obvious from the responses of the more seasoned and serious cybercriminals in the same chat channel.

There are plenty of people on Telegram claiming to have SIM-swap access at major telecommunications firms, but a great many such offers are simply four-figure scams, and any pretenders on this front are soon identified and banned (if not worse).

One of the groups that reliably posted “Tmo up!” messages to announce SIM-swap availability against T-Mobile customers also reliably posted “Tmo down!” follow-up messages announcing exactly when their claimed access to T-Mobile employee tools was discovered and revoked by the mobile giant.

A review of the timestamps associated with this group’s incessant “Tmo up” and “Tmo down” posts indicates that while their claimed access to employee tools usually lasted less than an hour, in some cases that access apparently went undiscovered for several hours or even days.

TMO TOOLS

How could these SIM-swapping groups be gaining access to T-Mobile’s network as frequently as they claim? Peppered throughout the daily chit-chat on their Telegram channels are solicitations for people urgently needed to serve as “callers,” or those who can be hired to social engineer employees over the phone into navigating to a phishing website and entering their employee credentials.

Allison Nixon is chief research officer for the New York City-based cybersecurity firm Unit 221B. Nixon said these SIM-swapping groups will typically call employees on their mobile devices, pretend to be someone from the company’s IT department, and then try to get the person on the other end of the line to visit a phishing website that mimics the company’s employee login page.

Nixon argues that many people in the security community tend to discount the threat from voice phishing attacks as somehow “low tech” and “low probability” threats.

“I see it as not low-tech at all, because there are a lot of moving parts to phishing these days,” Nixon said. “You have the caller who has the employee on the line, and the person operating the phish kit who needs to spin it up and down fast enough so that it doesn’t get flagged by security companies. Then they have to get the employee on that phishing site and steal their credentials.”

In addition, she said, often there will be yet another co-conspirator whose job it is to use the stolen credentials and log into employee tools. That person may also need to figure out how to make their device pass “posture checks,” a form of device authentication that some companies use to verify that each login is coming only from employer-issued phones or laptops.

For aspiring criminals with little experience in scam calling, there are plenty of sample call transcripts available on these Telegram chat channels that walk one through how to impersonate an IT technician at the targeted company — and how to respond to pushback or skepticism from the employee. Here’s a snippet from one such tutorial that appeared recently in one of the SIM-swapping channels:

“Hello this is James calling from Metro IT department, how’s your day today?”

(yea im doing good, how r u)

i’m doing great, thank you for asking

i’m calling in regards to a ticket we got last week from you guys, saying you guys were having issues with the network connectivity which also interfered with [Microsoft] Edge, not letting you sign in or disconnecting you randomly. We haven’t received any updates to this ticket ever since it was created so that’s why I’m calling in just to see if there’s still an issue or not….”

TMO DOWN!

The TMO UP data referenced above, combined with comments from the SIM-swappers themselves, indicate that while many of their claimed accesses to T-Mobile tools in the middle of 2022 lasted hours on end, both the frequency and duration of these events began to steadily decrease as the year wore on.

T-Mobile declined to discuss what it may have done to combat these apparent intrusions last year. However, one of the groups began to complain loudly in late October 2022 that T-Mobile must have been doing something that was causing their phished access to employee tools to die very soon after they obtained it.

One group even remarked that they suspected T-Mobile’s security team had begun monitoring their chats.

Indeed, the timestamps associated with one group’s TMO UP/TMO DOWN notices show that their claimed access was often limited to less than 15 minutes throughout November and December of 2022.

Whatever the reason, the calendar graphic above clearly shows that the frequency of claimed access to T-Mobile decreased significantly across all three SIM-swapping groups in the waning weeks of 2022.

SECURITY KEYS

T-Mobile US reported revenues of nearly $80 billion last year. It currently employs more than 71,000 people in the United States, any one of whom can be a target for these phishers.

T-Mobile declined to answer questions about what it may be doing to beef up employee authentication. But Nicholas Weaver, a researcher and lecturer at University of California, Berkeley’s International Computer Science Institute, said T-Mobile and all the major wireless providers should be requiring employees to use physical security keys for that second factor when logging into company resources.

A U2F device made by Yubikey.

“These breaches should not happen,” Weaver said. “Because T-Mobile should have long ago issued all employees security keys and switched to security keys for the second factor. And because security keys provably block this style of attack.”

The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB key and pressing a button on the device. The key works without the need for any special software drivers.

The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.
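To make that origin binding concrete, here's a hedged sketch of the browser side of such a login using the WebAuthn API (the modern web API behind U2F-style keys); the rpId and values are hypothetical, not any carrier's actual implementation:

// Hypothetical second-factor step with a security key via WebAuthn.
// The credential is scoped to the relying party's domain ("rpId"), so a lookalike phishing
// origin can't even request a valid assertion; there is nothing for the phisher to capture.
async function secondFactor(challenge: Uint8Array, credentialId: Uint8Array) {
  return navigator.credentials.get({
    publicKey: {
      challenge,                                   // random bytes issued by the employer's server
      rpId: "example-carrier.com",                 // hypothetical; must match the page's registrable domain
      allowCredentials: [{ type: "public-key", id: credentialId }],
      userVerification: "preferred",
    },
  });
}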

THE ROLE OF MINORS IN SIM-SWAPPING

Nixon said one confounding aspect of SIM-swapping is that these criminal groups tend to recruit teenagers to do their dirty work.

“A huge reason this problem has been allowed to spiral out of control is because children play such a prominent role in this form of breach,” Nixon said.

Nixon said SIM-swapping groups often advertise low-level jobs on places like Roblox and Minecraft, online games that are extremely popular with young adolescent males.

“Statistically speaking, that kind of recruiting is going to produce a lot of people who are underage,” she said. “They recruit children because they’re naive, you can get more out of them, and they have legal protections that other people over 18 don’t have.”

For example, she said, even when underage SIM-swappers are arrested, the offenders tend to go right back to committing the same crimes as soon as they’re released.

In January 2023, T-Mobile disclosed that a “bad actor” stole records on roughly 37 million current customers, including their name, billing address, email, phone number, date of birth, and T-Mobile account number.

In August 2021, T-Mobile acknowledged that hackers made off with the names, dates of birth, Social Security numbers and driver’s license/ID information on more than 40 million current, former or prospective customers who applied for credit with the company. That breach came to light after a hacker began selling the records on a cybercrime forum.

In the shadow of such mega-breaches, any damage from the continuous attacks by these SIM-swapping groups can seem insignificant by comparison. But Nixon says it’s a mistake to dismiss SIM-swapping as a low volume problem.

“Logistically, you may only be able to get a few dozen or a hundred SIM-swaps in a day, but you can pick any customer you want across their entire customer base,” she said. “Just because a targeted account takeover is low volume doesn’t mean it’s low risk. These guys have crews that go and identify people who are high net worth individuals and who have a lot to lose.”

Nixon said another aspect of SIM-swapping that causes cybersecurity defenders to dismiss the threat from these groups is the perception that they are full of low-skilled “script kiddies,” a derisive term used to describe novice hackers who rely mainly on point-and-click hacking tools.

“They underestimate these actors and say this person isn’t technically sophisticated,” she said. “But if you’re rolling around in millions worth of stolen crypto currency, you can buy that sophistication. I know for a fact some of these compromises were at the hands of these ‘script kiddies,’ but they’re not ripping off other people’s scripts so much as hiring people to make scripts for them. And they don’t care what gets the job done, as long as they get to steal the money.”

CISA Issues Warning on Active Exploitation of ZK Java Web Framework Vulnerability

By Ravie Lakshmanan
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added a high-severity flaw affecting the ZK Framework to its Known Exploited Vulnerabilities (KEV) catalog based on evidence of active exploitation. Tracked as CVE-2022-36537 (CVSS score: 7.5), the issue impacts ZK Framework versions 9.6.1, 9.6.0.1, 9.5.1.3, 9.0.1.2, and 8.6.4.1, and allows threat actors to retrieve sensitive

Down the Cloudflare / Stripe / OWASP Rabbit Hole: A Tale of 6 Rabbits Deep 🐰 🐰 🐰 🐰 🐰 🐰

By Troy Hunt

I found myself going down a previously unexplored rabbit hole recently, or more specifically, what I thought was "a" rabbit hole but in actual fact was an ever-expanding series of them that led me to what I refer to in the title of this post as "6 rabbits deep". It's a tale of firewalls, APIs and sifting through layers and layers of different services to sniff out the root cause of something that seemed very benign, but actually turned out to be highly impactful. Let's go find the rabbits!

The Back Story

When you buy an API key on Have I Been Pwned (HIBP), Stripe handles all the payment magic. I love Stripe, it's such an awesome service that abstracts away so much pain and it's dead simple to integrate via their various APIs. It's also dead simple to configure Stripe to send notices back to your own service via webhooks. For example, when an invoice is paid or a customer is updated, Stripe sends information about that event to HIBP and then lists each call on the webhooks dashboard in their portal:


There are a whole range of different events that can be listened to and webhooks fired; here we're seeing just a couple of them that are self-explanatory in name. When an invoice is paid, the callback looks something like this:


HIBP has received this call and updated its own DB such that for a new customer, they can now retrieve an API key or, for an existing customer whose subscription has renewed, the API key validity period has been extended. The same callback is also issued when someone upgrades an API key, for example when going from 10RPM (requests per minute) to 50RPM. It's super important that HIBP gets that callback so it can appropriately upgrade the customer's key and they can immediately begin making more requests. When that call doesn't happen, well, let's go down the first rabbit hole.
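HIBP's actual implementation runs on Azure Functions, but the general shape of a signature-verified Stripe webhook handler looks something like this minimal Node/TypeScript sketch; the route, environment variable names and the key-extension helper are placeholders, not the real thing:

import Stripe from "stripe";
import express from "express";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, { apiVersion: "2022-11-15" });
const app = express();

// Stripe signs every webhook; verify the signature against the raw body before trusting the event.
app.post("/stripe-webhook", express.raw({ type: "application/json" }), (req, res) => {
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers["stripe-signature"] as string,
      process.env.STRIPE_WEBHOOK_SECRET!,
    );
  } catch {
    return res.status(400).send("invalid signature");
  }

  if (event.type === "invoice.paid") {
    const invoice = event.data.object as Stripe.Invoice;
    // extendOrUpgradeApiKey(invoice.customer) would go here (hypothetical helper)
  }

  res.sendStatus(200);
});

app.listen(3000);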

The Failed API Key Upgrade 🐰

This should never happen:


This came in via HIBP's API key support portal and is pretty self-explanatory. I checked the customer's account on Stripe and it did indeed show an active 50RPM subscription, but when drilling down into the associated payment, I found the following:


Ok, so at least I know where things have started to go wrong, but why? Over to the webhooks dashboard and into the failed payments and things look... suboptimal:


Dammit! Fortunately this is only a small single-digit percentage of all callbacks, but every time this fails it's either stopping someone like the guy above from making the requests they've paid for or potentially, causing someone's API key to expire even though they've paid for it. The latter in particular I was really worried about as it would nuke their key and whatever they'd built on top of it would cease to function. Fortunately, because that's such an impactful action I'd built in heaps of buffer for just such an occurrence and I'd gotten onto this issue quickly, but it was disconcerting all the same.

So, what's happening? Well, the response is HTTP 403 "Forbidden" and the body is clearly a Cloudflare challenge page so something at their end is being triggered. Looks like it's time to go down the next rabbit hole.

Cloudflare's Firewall and Logs 🐰 🐰

Desperate just to quickly restore functionality, I dropped into Cloudflare's WAF and allowed all Stripe's outbound IPs used for webhooks to bypass their security controls:


This wasn't ideal, but it only created risk for requests originating from Stripe and it got things up and running again quickly. With time up my sleeve I could now delve deeper and work out precisely what was going on, starting with the logs. Cloudflare has a really extensive set of APIs that can control a heap of features of the service, including pulling back logs (note: this is a feature of their Enterprise plan). I queried out a slice of the logs corresponding to when some of the 403s from Stripe's dashboard occurred and found 2 entries similar to this one:

{"BotScore":1,"BotScoreSrc":"Verified Bot","CacheCacheStatus":"unknown","ClientASN":16509,"ClientCountry":"us","ClientIP":"54.187.205.235","ClientRequestHost":"haveibeenpwned.com","ClientRequestMethod":"POST","ClientRequestReferer":"","ClientRequestURI":"[redacted]","ClientRequestUserAgent":"Stripe/1.0 (+https://stripe.com/docs/webhooks)","EdgeRateLimitAction":"","EdgeResponseStatus":403,"EdgeStartTimestamp":1674073983931000000,"FirewallMatchesActions":["managedChallenge"],"FirewallMatchesRuleIDs":["6179ae15870a4bb7b2d480d4843b323c"],"FirewallMatchesSources":["firewallManaged"],"OriginResponseStatus":0,"WAFAction":"unknown","WorkerSubrequest":false}

The ClientIP in that entry, 54.187.205.235, is one of Stripe's outbound IPs, and the "FirewallMatchesRuleIDs" collection has a value in it. Ergo, something about this request triggered the firewall and caused it to be challenged. I'm sure many of us have gone through the following thought process before:

What did I change?

Did I change anything?

Did they change something?

Except "they" could have been either Cloudflare or Stripe; if it wasn't me (and I was fairly certain it wasn't), was it a Cloudflare change to the rules or a Stripe change to a webhook payload that was now triggering an existing rule? Time to dig deeper again so it's over to the Cloudflare dashboard and down into the WAF events for requests to the webhook callback path:


Yep, something proper broke! Let's drill deeper and look at recent events for that IP:


As you dig deeper through troubleshooting exercises like this, you gradually turn up more and more information that helps piece the entire puzzle together. In this case, it looks like the "Inbound Anomaly Score Exceeded" rule was being triggered. What's that? And why? Time to go down another rabbit hole.

The Cloudflare OWASP Core Ruleset 🐰 🐰 🐰

So, deeper and deeper down the rabbit holes we go, this time into the depths of the requests that triggered the managed rule:


Well that's comprehensive 🙂

There's a lot to unpack here so let's begin with the ruleset that the previously identified "Inbound Anomaly Score Exceeded" rule belongs to, the Cloudflare OWASP Core Ruleset:

The Cloudflare OWASP Core Ruleset is Cloudflare’s implementation of the OWASP ModSecurity Core Rule Set (CRS). Cloudflare routinely monitors for updates from OWASP based on the latest version available from the official code repository.

That link is yet another rabbit hole altogether so let me summarise succinctly here: Cloudflare uses OWASP's rules to identify anomalous traffic based on a customer-defined paranoia level (how strict you want to be) and then applies a score threshold (also customer-defined) at which an action will be taken, for example challenging the request. What I learned as this saga progressed is that the "Inbound Anomaly Score Exceeded" rule is actually a rollup of the rules beneath it. The OWASP score of "26" is the sum of the 6 rules listed beneath it and once it exceeds 25, the superset rule is triggered.
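A toy illustration of that scoring model (this is not Cloudflare's code; the rule IDs and per-rule scores are made up to match the totals described above):

// Each matched OWASP rule contributes a score; the managed "Inbound Anomaly Score Exceeded"
// rule fires once the sum exceeds the customer-defined threshold (25 in this configuration).
type RuleHit = { id: string; score: number };

const threshold = 25;
const hits: RuleHit[] = [
  { id: "rule-1", score: 5 }, { id: "rule-2", score: 5 }, { id: "rule-3", score: 5 },
  { id: "rule-4", score: 5 }, { id: "rule-5", score: 3 }, { id: "rule-6", score: 3 },
];

const anomalyScore = hits.reduce((sum, h) => sum + h.score, 0); // 26 for the request above
if (anomalyScore > threshold) {
  // the superset rule triggers and the request is challenged (or blocked, per the configured action)
}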

Further - and this is the really important bit - Cloudflare routinely updates the rules from OWASP which makes sense because these are ever-evolving in response to new threats. And when did they last upgrade the rules? It looks like they announced it right before I started having issues:


Whilst it's not entirely clear from above when this release was scheduled to occur, I did reach out to Cloudflare support and was advised it had already taken place:

Please note that we did bump the OWASP version, which we are integrating with to 3.3.4 as noted on our scheduled changes.

So maybe it's not Cloudflare's fault or Stripe's fault, but OWASP's fault? In fairness to all, I don't think it's anyone's fault per se and is instead just an unfortunate result of everyone doing their best to keep the bad guys out. Unless... it really is Stripe's fault because there's something in the request payload that was always fishy and is now being caught? But why for only some requests and not others? Next rabbit!

Cloudflare Payload Logging 🐰 🐰 🐰 🐰

Sometimes, people on the internet lose their minds a bit over things they really shouldn't. One of those things, in my experience, is Cloudflare's interception of traffic and it's something I wrote about in detail nearly 7 years ago now in my piece on security absolutism. Cloudflare plays an enormously valuable role in the internet's ecosystem and a substantial part of the value comes from being able to inspect, cache, optimise, and yes, even reject traffic. When you use Cloudflare to protect your website, they're applying rulesets like the aforementioned OWASP ones and in order to do that, they must be able to inspect your traffic! But they don't log it, not all of it, rather just "metadata generated by our products" as they refer to it on their logs page. We saw an example of that earlier on with Stripe's request from their IP showing it triggered a firewall rule, but what we didn't see is the contents of that POST request, the actual payload that triggered the rule. Let's go grab that.

Because the contents of a POST request can contain sensitive information, Cloudflare doesn't log it. Obviously they see it in transit (that's how OWASP's rules can be applied to it), but it's not stored anywhere and even if you want to capture it, they don't want to be able to see it. That's where payload logging (another Enterprise plan feature) comes in and what's really neat about that is every payload must be encrypted with a public key retained by Cloudflare whilst only you retain the private key. The setup looks like this:


Pretty self-explanatory and once done, right under where we previously saw the additional logs we now have the ability to decrypt the payload:


As promised, this requires the private key from earlier:


And now, finally, we have the actual payload that triggered the rule, seen here with my own test data:

[ " },\n \"billing_reason\": \"subscription_update\",\n \"charge\": null,\n \"collection_method\": \"charge_automatically\",\n \"created\": 1674351619,\n \"currency\": \"usd\",\n \"custom_fields\": null,\n \"customer\": \"cus_MkA71FpZ7XXRlt\",\n \"customer_address\": ", " },\n \"customer_email\": \"troy-hunt+1@troyhunt.com\",\n \"customer_name\": \"Troy Hunt 1\",\n \"customer_phone\": null,\n \"customer_shipping\": null,\n \"customer_tax_exempt\": \"none\",\n \"customer_tax_ids\": [\n\n ],\n \"default_payment_method\": null,\n \"default_source\": null,\n \"default_tax_rates\": [\n\n ],\n \"description\": \"You can manage your subscription (i.e. cancel it or regenerate the API key) at any time by verifying your email address here: https://haveibeenpwned.com/API/Key\",\n \"discount\": null,\n \"discounts\": [\n\n ],\n \"due_date\": null,\n \"ending_balance\": -11804,\n \"footer\": null,\n \"from_invoice\": null,\n \"hosted_invoice_url\": \"https://invoice.stripe.com/i/acct_1EdQYpEF14jWlYDw/test_YWNjdF8xRWRRWXBFRjE0aldsWUR3LF9OREo5SlpqUFFvVnFtQnBVcE91YUFXemtkRHFpQWNWLDY0ODkyNDIw02004bEyljdC?s=ap\",\n \"invoice_pdf\": \"https://pay.stripe.com/invoice/acct_1EdQYpEF14jWlYDw/test_YWNjdF8xRWRRWXBFRjE0aldsWUR3LF9OREo5SlpqUFFvVnFtQnBVcE91YUFXemtkRHFpQWNWLDY0ODkyNDIw02004bEyljdC/pdf?s=ap\",\n \"last_finalization_error\": null,\n \"latest_revision\": null,\n \"lines\": ", " ", " ],\n \"discountable\": false,\n \"discounts\": [\n\n ],\n \"invoice_item\": \"ii_1MSsXfEF14jWlYDwB1nfZvFm\",\n \"livemode\": false,\n \"metadata\": ", " },\n \"period\": ", " },\n \"plan\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4eLcJ7JYF02f\",\n \"tiers_mode\": null,\n \"transform_usage\": null,\n \"trial_period_days\": null,\n \"usage_type\": \"licensed\"\n },\n \"price\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4eLcJ7JYF02f\",\n \"recurring\": ", " },\n \"tax_behavior\": \"unspecified\",\n \"tiers_mode\": null,\n \"transform_quantity\": null,\n \"type\": \"recurring\",\n \"unit_amount\": 15000,\n \"unit_amount_decimal\": \"15000\"\n },\n \"proration\": true,\n \"proration_details\": ", " \"il_1MMjfcEF14jWlYDwoe7uhDPF\"\n ]\n }\n },\n \"quantity\": 1,\n \"subscription\": \"sub_1MMjfcEF14jWlYDwi8JWFcxw\",\n \"subscription_item\": \"si_N6xapJ8gSXdp7W\",\n \"tax_amounts\": [\n\n ],\n \"tax_rates\": [\n\n ],\n \"type\": \"invoiceitem\",\n \"unit_amount_excluding_tax\": \"-14304\"\n },\n ", " ],\n \"discountable\": true,\n \"discounts\": [\n\n ],\n \"livemode\": false,\n \"metadata\": ", " },\n \"period\": ", " },\n \"plan\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4lTSl4axd9mt\",\n \"tiers_mode\": null,\n \"transform_usage\": null,\n \"trial_period_days\": null,\n \"usage_type\": \"licensed\"\n },\n \"price\": ", " },\n \"nickname\": null,\n \"product\": \"prod_Mk4lTSl4axd9mt\",\n \"recurring\": ", " },\n \"tax_behavior\": \"unspecified\",\n \"tiers_mode\": null,\n \"transform_quantity\": null,\n \"type\": \"recurring\",\n \"unit_amount\": 2500,\n \"unit_amount_decimal\": \"2500\"\n },\n \"proration\": false,\n \"proration_details\": ", " },\n \"quantity\": 1,\n \"subscription\": \"sub_1MMjfcEF14jWlYDwi8JWFcxw\",\n \"subscription_item\": \"si_NDJ98tQrCcviJf\",\n \"tax_amounts\": [\n\n ],\n \"tax_rates\": [\n\n ],\n \"type\": \"subscription\",\n \"unit_amount_excluding_tax\": \"2500\"\n }\n ],\n \"has_more\": false,\n \"total_count\": 2,\n \"url\": \"/v1/invoices/in_1MSsXfEF14jWlYDwxHKk4ASA/lines\"\n },\n \"livemode\": false,\n \"metadata\": ", " },\n \"next_payment_attempt\": null,\n 
\"number\": \"04FC1917-0008\",\n \"on_behalf_of\": null,\n \"paid\": true,\n \"paid_out_of_band\": false,\n \"payment_intent\": null,\n \"payment_settings\": ", " },\n \"period_end\": 1674351619,\n \"period_start\": 1674351619,\n \"post_payment_credit_notes_amount\": 0,\n \"pre_payment_credit_notes_amount\": 0,\n \"quote\": null,\n \"receipt_number\": null,\n \"rendering_options\": null,\n \"starting_balance\": 0,\n \"statement_descriptor\": null,\n \"status\": \"paid\",\n \"status_transitions\": ", " },\n \"subscription\": \"sub_1MMjfcEF14jWlYDwi8JWFcxw\",\n \"subtotal\": -11804,\n \"subtotal_excluding_tax\": -11804,\n \"tax\": null,\n \"test_clock\": null,\n \"total\": -11804,\n \"total_discount_amounts\": [\n\n ],\n \"total_excluding_tax\": -11804,\n \"total_tax_amounts\": [\n\n ],\n \"transfer_data\": null,\n \"webhooks_delivered_at\": 1674351619\n }\n },\n \"livemode\": false,\n \"pending_webhooks\": 1,\n \"request\": ", " },\n \"type\": \"invoice.paid\"\n}" ]

But enough of what's present in the payload, it's what's absent that especially struck me. No obvious XSS patterns, nor SQL injection or any other suspicious looking strings. The request looked totally benign, so why did it trigger the rule?

I wanted to compare the payload of a blocked request with a similar request that wasn't blocked, but they're only logged at Cloudflare when they trigger a rule. No problem, it's easy to grab the full request from Stripe's webhook history so I found one that passed and one that failed and diff'd them both:


This clearly isn't the full 200 lines, but it's a very similar story over the remainder of the files; tiny differences largely down to dates, IDs, and of course, the customers themselves. No suspicious patterns, no funky characters, nothing visibly abnormal. It's a bit pointless to even mention it because they're near identical, but the payload on the left is the one that passed the firewall whilst the payload on the right was blocked.

Next rabbit hole!

Cloudflare's Internal Rules Engine 🐰 🐰 🐰 🐰 🐰

Completely running out of ideas and options, focus moved to the folks inside Cloudflare who were already aware there was an issue:

We are actively looking into this and will likely release an update to the Cloudflare OWASP ruleset soon

— Michael Tremante (@MichaelTremante) January 20, 2023

What followed was a period of back and forth initially with Cloudflare, then Stripe as well with everyone trying to nut out exactly where things were going wrong. Essentially, the process went like this:

Is Cloudflare inadvertently blocking the requests?

Is the OWASP ruleset raising false positives?

Is Stripe issuing requests that are deemed to be malicious?

And round and round we went. At one time, Cloudflare identified a change in the OWASP ruleset which appeared to have resulted in their implementation inadvertently triggering the WAF. They rolled it back and... the same thing happened. We deferred back to Stripe on the assumption that something must have changed on their end, but they couldn't identify any change that would have any sort of material impact. We were stumped, but we also had an easy fix just one last rabbit hole away...

Fine Tuning the Cloudflare WAF 🐰 🐰 🐰 🐰 🐰 🐰

The joy of a managed firewall is that someone else takes all the rigmarole of looking after it away. I'm going to talk more about that in the summary shortly but clearly, that also creates risk as you're delegating control of traffic flow to someone else. Fortunately, Cloudflare gives you a load of configurability with their managed rules which makes it easy to add custom exceptions:


This meant I could create a simple exception that was much more intelligent than the previous "just let all outbound Stripe IPs in" by filtering down to the specific path those webhooks were flowing into:


And finally, because sequence matters, I dragged that rule right up to the top of the pile so it would cause matching inbound requests to skip all the other rules:

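In Cloudflare's rule expression language, the filter behind that exception looks roughly like the following; the path is a placeholder (the real one is redacted earlier in this post) and only the one Stripe IP from the log above is shown:

// Approximate shape of the exception's filter expression (path and IP list are placeholders).
const skipRuleExpression =
  '(http.request.uri.path eq "/api/stripewebhook" and ip.src in {54.187.205.235})';
// Configured with a "skip remaining rules" action and dragged to the top so it's evaluated first.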

And finally, there were no more rabbits 😊

Lessons Learned

I know what you're thinking - "what was the actual root cause?" - and to be honest, I still don't know. I don't know if it was Cloudflare or OWASP or Stripe or if it even impacted other customers of these services and to be honest, yes, that's a little frustrating. But I learned a bunch of stuff and for that alone, this was a worthwhile exercise I took three big lessons away from:

Firstly, understanding the plumbing of how all these bits work together is super important. I was lucky this wasn't a time critical issue and I had the luxury of learning without being under duress; how rules, payload inspection and exception management all work together is really valuable stuff to understand. And just like that, as if to underscore my first point, I found this right before hitting the publish button on the blog post:


I added a couple more OWASP rules to the exception in Cloudflare (things like a MySQL rule that was adding 5 points), and we were back in business.

Secondly, I look at the managed WAF Cloudflare provides more favourably than I did before simply because I have a better understanding of how comprehensive it is. I want to write code and run apps on the web, that's my focus, and I want someone else to provide that additional layer on top that continuously adapts to block new and emerging threats. I want to understand it (and I now do, at least certainly better than before), but I don't want managing it day in and day out to be my job.

And finally, IMHO, Stripe needs a better mechanism to report on webhook failures:

In live mode you are notified after 3 days of trying. You can also query the events (https://t.co/0mujOPssV0) to create a running list of statuses on web hooks that have been sent and alert on that via your own app.

— Blake Krone (@blakekrone) January 19, 2023

Waiting until stuff breaks really isn't ideal and whilst I'm sure you could plug into the (very extensive) API ecosystem Stripe has, this feels like an easy feature for them to build in. So, Stripe friends, when you read this that's a big "yes" vote from me for some form of anomalous webhook response alerting.

This experience was equal parts frustration and fun and whilst the former is probably obvious, the latter is simply due to having an opportunity to learn something new that's a pretty important part of the service I run. May my frustrated fun story here make your life easier in the future if you face the same problems 😊

Pwned Passwords Adds NTLM Support to the Firehose

By Troy Hunt

I think I've pretty much captured it all in the title of this post but as of about a day ago, Pwned Passwords now has full parity between the SHA-1 hashes that have been there since day 1 and NTLM hashes. We always had both as a downloadable corpus but as of just over a year ago with the introduction of the FBI data feed, we stopped maintaining downloadable behemoths of data.

A little later, we added the downloader to make it easy to pull down the latest and greatest complete data set directly from the same API that so many of you have integrated into your own apps. But because we only had an API for SHA-1 hashes, the downloader couldn't grab the NTLM versions and increasingly, we had 2 corpuses well out of parity.

I don't know exactly why, but just over the last few weeks we've had a marked uptick in requests for an updated NTLM corpus. Obviously there's still a demand to run this against local Active Directory environments and clearly, the more up to date the hashes are the more effective they are at blocking the use of poor passwords.
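For anyone wiring this into an Active Directory audit script, querying the NTLM corpus goes through the same k-anonymity range API; a hedged sketch, assuming the mode=ntlm query parameter is what selects the hash type (check the API docs to confirm):

// Query the range API for an NTLM hash prefix; only the first 5 characters ever leave your machine.
const prefix = "8846F";                                   // example prefix (the well-known NT hash of "password")
const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}?mode=ntlm`);
const body = await res.text();
// Each line is "<hash suffix>:<count>"; match your full hash's suffix locally.
const line = body.split("\n").find((l) => l.startsWith("7EAEE8FB117AD06BDD830B7586C"));
console.log(line?.split(":")[1] ?? "0");                  // prevalence count, or 0 if not found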

So, Chief Pwned Passwords Wrangler Stefán Jökull Sigurðarson got to work and just went ahead and built it all for you. For free. In his spare time. As a community contribution. Seriously, have a look through the public GitHub repos and it's all his work ranging from the API to the Cloudflare Worker to the downloader so if you happen to come across him say, at NDC Oslo in a few months' time, show your appreciation and buy the guy a beer 🍺

Lastly, every time I look at how much this tool is being used, I'm a bit shocked at how big the numbers are getting:


That's well more than double the number of monthly requests from when I wrote the blog post about the FBI and NCA only just over a year ago, and I imagine that will only continue to increase, especially with today's announcement about NTLM hashes. Thank you to everyone that has taken this data and done great things with it, we're grateful that it's been put to good use and has undoubtedly helped an untold number of people to make better password choices 😊

Experts Warn of 'Ice Breaker' Cyberattacks Targeting Gaming and Gambling Industry

By Ravie Lakshmanan
A new attack campaign has been targeting the gaming and gambling sectors since at least September 2022, just as the ICE London 2023 gaming industry trade fair event is scheduled to kick off next week. Israeli cybersecurity company Security Joes is tracking the activity cluster under the name Ice Breaker, stating the intrusions employ clever social engineering tactics to deploy a JavaScript

Pwned or Bot

By Troy Hunt

It's fascinating to see how creative people can get with breached data. Of course there's all the nasty stuff (phishing, identity theft, spam), but there are also some amazingly positive uses for data illegally taken from someone else's system. When I first built Have I Been Pwned (HIBP), my mantra was to "do good things after bad things happen". And arguably, it has, largely by enabling individuals and organisations to learn of their own personal exposure in breaches. However, the use cases go well beyond that and there's one I've been meaning to write about for a while now after hearing about it firsthand. For now, let's just call this approach "Pwned or Bot", and I'll set the scene with some background on another problem: sniping.

Think about Miley Cyrus as Hannah Montana (bear with me, I'm actually going somewhere with this!) putting on shows people would buy tickets to. We're talking loads of tickets as back in the day, her popularity was off the charts with demand well in excess of supply. Which, for enterprising individuals of ill-repute, presented an opportunity:

Ticketmaster, the exclusive ticket seller for the tour, sold out numerous shows within minutes, leaving many Hannah Montana fans out in the cold. Yet, often, moments after the shows went on sale, the secondary market  flourished with tickets to those shows. The tickets, whose face value ranged from $21 to $66, were resold on StubHub for an average of $258, plus StubHub’s 25% commission (10% paid by the buyer, 15% by the seller).

This is called "sniping", where an individual jumps the queue and snaps up products in limited demand for their own personal gain and consequently, to the detriment of others. Tickets to entertainment events is one example of sniping, the same thing happens when other products launch with insufficient supply to meet demand, for example Nike shoes. These can be massively popular and, par for the course of this blog, released in short demand. This creates a marketplace for snipers, some of whom share their tradecraft via videos such as this one:

"BOTTER BOY NOVA" refers to himself as a "Sneaker botter" in the video and demonstrates a tool called "Better Nike Bot" (BnB) which sells for $200 plus a renewal fee of $60 every 6 months. But don't worry, he has a discount code! Seems like hackers aren't the only ones making money out of the misfortune of others.

Have a look at the video and watch how at about the 4:20 mark he talks about using proxies "to prevent Nike from flagging your accounts". He recommends using the same number of proxies as you have accounts, presumably to avoid Nike's (automated) systems picking up on the anomaly of a single IP address signing up multiple times. Proxies themselves are a commercial enterprise but don't worry, BOTTER BOY NOVA has a discount code for them too!

The video continues to demonstrate how to configure the tool to ultimately blast Nike's service with attempts to purchase shoes, but it's at the 8:40 mark that we get to the crux of where I'm going with this:

[Image: screen capture of sample accounts generated with the bot tool]

Using the tool, he's created a whole bunch of accounts in an attempt to maximise his chances of a successful purchase. These are obviously just samples in the screen cap above, but presumably he'd go and register a bunch of new email addresses to use specifically for this purpose.

Now, think of it from Nike's perspective: they've launched a new shoe and are seeing a whole heap of new registrations and purchase attempts. In amongst that lot are many genuine people... and this guy 👆 How can they weed him out such that snipers aren't snapping up the products at the expense of genuine customers? Keeping in mind tools like this are deliberately designed to avoid detection (remember the proxies?), it's a hard challenge to reliably separate the humans from the bots. But there's an indicator that's very easy to cross-check, and that's the occurrence of the email address in previous data breaches. Let me phrase it in simple terms:

We're all so comprehensively pwned that if an email address isn't pwned, there's a good chance it doesn't belong to a real human.

Hence, "Pwned or Bot" and this is precisely the methodology organisations have been using HIBP data for. With caveats:

If an email address hasn't been seen in a data breach before, it may be a newly created one set up especially for the purpose of gaming your system. It may also be legitimate and the owner has simply been lucky enough not to have been pwned, or they may be uniquely subaddressing their email addresses (although this is extremely rare) or even using a masked email address service such as the one 1Password provides through Fastmail. Absence of an email address in HIBP is not evidence of fraud; it's merely one possible explanation.

However, if an email address has been seen in a data breach before, we can say with a high degree of confidence that it did indeed exist at the time of that breach. For example, if it was in the LinkedIn breach of 2012 then you can conclude with great confidence that the address wasn't just set up for gaming your system. Breaches establish history and as unpleasant as they are to be a part of, they do actually serve a useful purpose in this capacity.

Think of breach history not as a binary proposition indicating the legitimacy of an email address, but rather as one way of assessing risk, with "pwned or bot" being just one of many factors. The best illustration I can give is how Stripe defines risk by assessing a multitude of fraud factors. Take this recent payment for HIBP's API key:

[Image: Stripe's risk evaluation of the transaction]

There's a lot going on here and I won't run through it all, the main thing to take away from this is that in a risk evaluation rating scale from 0 to 100, this particular transaction rated a 77 which puts it in the "highest risk" bracket. Why? Let's just pick a few obvious reasons:

  1. The IP address had previously raised early fraud warnings
  2. The email was only ever once previously seen on Stripe, and that was only 3 minutes ago
  3. The customer's name didn't match their email address
  4. Only 76% of transactions from the IP address had previously been authorised
  5. The customer's device had previously had 2 other cards associated with it

Any one of these fraud factors may not have been enough to block the transaction, but combined they made the whole thing look rather fishy. As does this risk factor:

[Image: a further Stripe risk insight for the same transaction]

Applying "Pwned or Bot" to your own risk assessment is dead simple with the HIBP API and hopefully, this approach will help more people do precisely what HIBP is there for in the first place: to help "do good things after bad things happen".

Australian Healthcare Sector Targeted in Latest Gootkit Malware Attacks

By Ravie Lakshmanan
A recent wave of Gootkit malware loader attacks has targeted the Australian healthcare sector by leveraging legitimate tools like VLC Media Player. Gootkit, also called Gootloader, is known to employ search engine optimization (SEO) poisoning tactics (aka spamdexing) for initial access. It typically works by compromising and abusing legitimate infrastructure and seeding those sites with common

FrodoPIR: New Privacy-Focused Database Querying System

By Ravie Lakshmanan
The developers behind the Brave open-source web browser have revealed a new privacy-preserving data querying and retrieval system called FrodoPIR. The idea, the company said, is to use the technology to build out a wide range of use cases such as safe browsing, scanning passwords against breached databases, certificate revocation checks, and streaming, among others. The scheme is called FrodoPIR

Two New Security Flaws Reported in Ghost CMS Blogging Software

By Ravie Lakshmanan
Cybersecurity researchers have detailed two security flaws in the JavaScript-based blogging platform known as Ghost, one of which could be abused to elevate privileges via specially crafted HTTP requests. Ghost is an open source blogging platform that's used in more than 52,600 live websites, most of them located in the U.S., the U.K., Germany, China, France, Canada, and India. Tracked as CVE-

Hacking Using SVG Files to Smuggle QBot Malware onto Windows Systems

By Ravie Lakshmanan
Phishing campaigns involving the Qakbot malware are using Scalable Vector Graphics (SVG) images embedded in HTML email attachments. The new distribution method was spotted by Cisco Talos, which said it identified fraudulent email messages featuring HTML attachments with encoded SVG images that incorporate HTML script tags. HTML smuggling is a technique that relies on using legitimate features of

Researchers Demonstrate How EDR and Antivirus Can Be Weaponized Against Users

By Ravie Lakshmanan
High-severity security vulnerabilities have been disclosed in different endpoint detection and response (EDR) and antivirus (AV) products that could be exploited to turn them into data wipers. "This wiper runs with the permissions of an unprivileged user yet has the ability to wipe almost any file on a system, including system files, and make a computer completely unbootable," SafeBreach Labs

Unwrapping Some of the Holiday Season’s Biggest Scams

By McAfee

Even with the holidays in full swing, scammers won’t let up. In fact, it’s high time for some of their nastiest cons as people travel, donate to charities, and simply try to enjoy their time with friends and family. 

Unfortunate as it is, scammers see this time of year as a tremendous opportunity to profit. While people focus on giving to others, scammers focus on taking, propping up all manner of scams that use the holidays as a disguise. So as people move quickly about their day, perhaps with a touch of holiday stress in the mix, scammers hope to catch them off their guard with scams that wrap themselves in holiday trappings. 

Yet once you know what to look for, they’re relatively easy to spot. The same scams roll out every year, sometimes changing in appearance yet remaining the same in substance. With a sharp eye, you can steer clear of them. 

Watch out for these online scams this holiday season 

1. Shopping scams 

With Black Friday and Cyber Monday in the books, we can look forward to what’s next—a wave of post-holiday sales events that will likewise draw in millions of online shoppers. And just like those other big shopping days, bad actors will roll out a host of scams aimed at unsuspecting shoppers. Shopping scams take on several forms, which makes this a topic unto itself, one that we cover thoroughly in our Black Friday & Cyber Monday shopping scams blog. It’s worth a read if you haven’t done so already, as it digs into the details of these scams and shows how you can avoid them.  

However, the high-level advice for avoiding shopping scams is this: keep your eyes open. Deals that look too good to be true likely are, and shopping with retailers you haven’t heard of before requires a little bit of research to determine if their track record is clean. In the U.S., you can turn to the Better Business Bureau (BBB) for help with a listing of retailers you can search simply by typing in their names. You can also use https://whois.domaintools.com to look up the web address of the shopping site you want to research. There you can see its history and see when it was registered. A site that was registered only recently may be far less reputable than one that’s been registered for some time. 
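If you'd rather script that domain-age check than do it by hand, here's a rough sketch in Python using the third-party python-whois package; registrars vary in what they return, so treat the result as a hint rather than a verdict:

# Sketch: flag shopping domains that were registered only recently.
# Requires the third-party "python-whois" package (pip install python-whois).
from datetime import datetime, timedelta

import whois


def looks_recently_registered(domain: str, days: int = 180) -> bool:
    record = whois.whois(domain)
    created = record.creation_date
    # Some registrars return a list of dates; take the earliest one.
    if isinstance(created, list):
        created = min(created)
    if created is None:
        return True  # no creation date at all is itself a reason for caution
    return datetime.now() - created < timedelta(days=days)


if __name__ == "__main__":
    print(looks_recently_registered("example.com"))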

2. Tech support scams  

Plenty of new tech makes its way into our homes during the holiday season. And some of that tech can be a little challenging to set up. Be careful when you search for help online. Many scammers will establish phony tech support sites that aim to steal funds and credit card information. Go directly to the product manufacturer for help. Often, manufacturers will offer free support as part of the product warranty, so if you see a site advertising support for a fee, that could be a sign of a scam. 

Likewise, scammers will reach out to you themselves. Whether through links in unsolicited emails, pop-up ads on risky sites, or spammy phone calls, these scammers will pose as tech support from reputable brands. From there, they’ll falsely inform you that there’s something urgently wrong with your device and that you need to get it fixed right now—for a fee. Ignore these messages and don’t click on any links or attachments. Again, if you have concerns about your device, contact the manufacturer directly. 

3. Travel scams 

With the holidays comes travel, along with all the online booking and ticketing involved. Scammers will do their part to cash in here as well. Travel scams may include bogus emails that pose as reputable travel sites telling you something’s wrong with your booking. Clicking a link takes you to a similarly bogus site that asks for your credit card information to update the booking—which then passes it along to the scammer so they can rack up charges in your name. Other travel scams involve ads for cut-rate lodging, tours, airfare, and the like, all of which are served up on a phony website that only exists to steal credit card numbers and other personal information. 

Some of these scams can look quite genuine, even though they’re not. They’ll use cleverly disguised web addresses that look legitimate, but aren’t, so don’t click any links. If you receive notice about an issue with your holiday travel, contact the company directly to follow up. Also, be wary of ads with unusually deep discounts or that promise availability in an otherwise busy season or time. These could be scams, so stick with reputable booking sites or with the websites maintained by hotels and travel providers themselves. 

4. Fake charity scams 

Donations to an organization or cause that’s close to someone’s heart make for a great holiday gift, just as they offer you a way to give back during the holiday season. And you guessed it, scammers will take advantage of this too. They’ll set up phony charities and apply tactics that pressure you into giving. As with so many scams out there, any time an email, text, direct message, or site urges you into immediate action—take pause. Research the charity. See how long they’ve been in operation, how they put their funds to work, and who truly benefits from them.  

Likewise, note that some charities pass along more money to their beneficiaries than others. As a general rule of thumb, most reputable organizations keep only 25% or less of their funds for operations, while some less-than-reputable organizations keep up to 95% of funds, leaving only 5% for advancing the cause they advocate. In the U.S., the Federal Trade Commission (FTC) has a site full of resources so that you can make your donation truly count. Resources like Charity Watch and Charity Navigator, along with the BBB’s Wise Giving Alliance, can also help you identify the best charities. 

5. Online betting scams 

The holidays also mean a slate of big-time sporting events, and with the advent of online betting in many regions, scammers want to cash in. This scam works much like shopping scams, where bad actors will set up online betting sites that look legitimate. They’ll take your bet, but if you win, they won’t pay out. Per the U.S. Better Business Bureau (BBB), the scam plays out like this: 

“You place a bet, and, at first, everything seems normal. But as soon as you try to cash out your winnings, you find you can’t withdraw a cent. Scammers will make up various excuses. For example, they may claim technical issues or insist on additional identity verification. In other cases, they may require you to deposit even more money before you can withdraw your winnings. Whatever you do, you’ll never be able to get your money off the site. And any personal information you shared is now in the hands of scam artists.” 

You can avoid these sites rather easily. Stick with the online betting sites that are approved by your regional gambling commission. Even so, be sure to read the fine print on any promo offers that these sites advertise because even legitimate betting sites can freeze accounts and the funds associated with them based on their terms and conditions. 

Further protection from scams 

A complete suite of online protection software, such as McAfee+ Ultimate, can offer layers of extra security. In addition to more private and secure time online with a VPN, identity monitoring, and password management, it includes web browser protection that can block malicious and suspicious links that could lead you down the road to malware or a phishing scam—which antivirus protection can’t do alone. Additionally, we offer $1M identity theft coverage and support from a recovery pro, just in case. 

And because scammers use personal information such as email addresses and cell phone numbers to wage their attacks, other features like our Personal Data Cleanup service can scan high-risk data broker sites for your personal information and then help you remove it, which can reduce spam and phishing attacks, and deny bad actors the information they need to commit identity theft. 

Scammers love a good thing—and will twist it for their own benefit. 

That’s why they enjoy the holidays so much. With all our giving, travel, and charity in play, it’s prime time for their scams. Yet with a little insight into their cons, along with some knowledge as to how they play out, you can avoid them.  

Remember that they’re playing into the hustle and bustle of the season and that they’re counting on you to lower your guard more than you might during other times of the year. Keep an eye open for the signs, do a little research when it’s called for, and stick with reputable stores, charities, and online services. With a thoughtful pause and a second look, you can spare yourself the grief of a scam and fully enjoy your holidays. 

The post Unwrapping Some of the Holiday Season’s Biggest Scams appeared first on McAfee Blog.

Researchers Disclose Critical RCE Vulnerability Affecting Quarkus Java Framework

By Ravie Lakshmanan
A critical security vulnerability has been disclosed in the Quarkus Java framework that could be potentially exploited to achieve remote code execution on affected systems. Tracked as CVE-2022-4116 (CVSS score: 9.8), the shortcoming could be trivially abused by a malicious actor without any privileges. "The vulnerability is found in the Dev UI Config Editor, which is vulnerable to drive-by

Data Breach Misattribution, Acxiom & Live Ramp

By Troy Hunt

If you find your name and home address posted online, how do you know where it came from? Let's assume there's no further context given, it's just your legitimate personal data and it also includes your phone number, email address... and over 400 other fields of data. Where on earth did it come from? Now, imagine it's not just your record, but it's 246 million records. Welcome to my world.

This is a story about a massive corpus of data circulating widely within the hacking community and misattributed to a legitimate organisation. That organisation is Acxiom, and their business hinges on providing their customers with data on their customers. By the very nature of their business, they process large volumes of data that includes a broad set of personal attributes. By pure coincidence, there is nominal commonality between Acxiom’s records and the ones in the 246M corpus I mentioned earlier. But I'm jumping ahead to the conclusion, let's go back to the beginning:

Disclosure and Attribution Debunking

In June last year, I received an email from someone I trust who had sent me data for Have I Been Pwned (HIBP) in the past:

Have you seen Axciom [sic] data? It was just sent to us. Seems to being traded/sold on some forums. Have you received it yet? If not i can upload it for you. It's quite large tho, ~250M Records.

A corpus of data that size is particularly interesting as it impacts such a huge number of people. So, I reviewed the data and concluded... pretty much nothing. Looks legit, smells legit but there was absolutely nothing beyond the word of one person to tie it to Acxiom (and who knows who they got that word from). Burdened by other more immediately actionable data breaches, I filed it away until recently when that name popped up again, this time on a popular hacking forum:

[Image: post on the hacking forum offering the data as "LiveRamp (Formerly Acxiom)"]

It was referred to as "LiveRamp (Formerly Acxiom)" and before I go any further, let's just clarify the problem with that while you're looking at the image above: LiveRamp was previously a subsidiary of Acxiom, but that hasn't been the case since they separated businesses in 2018 so whoever put this together is referring back to a very old state of play. Regardless, those downloading it from the forum were clearly very excited about it. Seeing this for the second time and spreading far more broadly, I decided to reach out to the (alleged) source and ask Acxiom what was going on.

I dread this process - contacting an organisation about a breach - because I usually get either no response whatsoever or a standoffish one. Rarely do I find a receptive organisation willing to fully investigate an alleged incident, but that's exactly what I found on this occasion. Much of the reason why I wanted to write this post is because whilst I hate breached organisations not properly investigating an incident, I also hate seeing misattribution of a breach to an innocent party. That's a particularly sore point for me right now because of this incident just last week:

This is the dumbest infosec story I’ve read in… forever? It is so profoundly incorrect, poorly researched, never verified, rambling and indistinguishable from parody that I literally went looking for the parody reference. I think he’s actually serious! https://t.co/oLyIHxb8D3

— Troy Hunt (@troyhunt) November 15, 2022

I've had various public users of HIBP, commercial users and even governments reach out to ask what's going on because they were concerned about their data. Whilst this incident won't do HIBP any actual harm (and frankly, I'm stunned anyone took that story seriously), I can very easily see how misattribution can be damaging to an organisation, indeed that's a key reason why I invest so much effort into properly investigating these claims before putting anything into HIBP. But that ridiculous example is nothing compared to the amount of traction some misattributions get. Remember how just recently a couple of billion TikTok accounts had been "breached"? This made massive news headlines until...

The thread on the hacking forum with the samples of alleged TikTok data has been deleted and the user banned for “lying about data breaches” https://t.co/9ZKkKvu8JT

— Troy Hunt (@troyhunt) September 5, 2022

"Lying about data breaches". Ugh, criminals are so untrustworthy! This happens all the time and when I'm not sure of the origin of a substantial breach, I often write a blog post like this and on many occasions, the masses help establish the origin. So, here goes:

The Data

Let's jump into the data, starting with 2 of the most obvious things I look for in any new data breach:

  1. The total number of unique email addresses is 51,730,831 (many records don't have this field populated)
  2. The most recent data I can find is from mid-2020 (which also speaks to the inaccuracy of the LiveRamp association)

As to the aforementioned attributes, they total 410 different columns:

To my eye, this data is very generic and looks like a superset of information that may be collected across a large number of people. For example, the sort of data requested when filling out dodgy online competitions. However, unlike many large corpuses of aggregated data I've seen in the past, this one is... neat. For example, here's a little sample of the first 5 columns (redaction of some chars with a dash), note how the names are all uniformly presented:

120321486,4,BE-----,B,TAYLOR
120321487,2,JOY,M,----EY
120321466,1,DOYLE,E,------HAM
120321486,3,L----,,TAYLOR
120321486,2,R---,M,TAYLOR

Sure, this is just uppercasing characters but over and over again, I found data that was just too neat. The addresses. The phone numbers. Everything about it was far too curated to simply be text entered by humans. My suspicion is that it's likely the result of either a very refined collection process or, in the case of addresses, matching via a service that resolves the human-entered address to a normalised form stored centrally.

Perhaps what I was most interested in though was the URL column as that seems to give some indication of where the data might have come from. I queried out the top 100 most common ones and took a look:
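For what it's worth, this sort of profiling doesn't need anything exotic. Here's a rough sketch of the tallying described above using pandas, with made-up column names ("email" and "url") standing in for the real ones:

# Sketch: basic profiling of a large breach corpus with pandas.
# The file name and column names here are placeholders, not the real ones.
import pandas as pd

# Load only the columns of interest to keep memory usage manageable.
df = pd.read_csv("corpus.csv", usecols=["email", "url"], dtype=str)

# How many unique (non-empty) email addresses are there?
unique_emails = df["email"].dropna().str.lower().nunique()
print(f"Unique email addresses: {unique_emails:,}")

# What are the 100 most common values in the URL column?
print(df["url"].value_counts().head(100))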

Eyeballing them, I couldn't help feel that my earlier hunch was on the money - "dodgy online competitions". Not just competitions but a general theme of getting stuff for cheap or more specifically, services that look like they've been built to entice people to part with their personal data.

Take the first one, for example, DIRECTEDUCATIONCENTER.COM. That's a dead domain as of now but check out what it looked like in March last year:

[Image: archived snapshot of DIRECTEDUCATIONCENTER.COM]

"I may be contacted by trusted partners and others". What's "others"? Untrusted partners? 🤷‍♂️

Let's try the next one, originalcruisegiveaway.com, and again, the site is now gone so it's back over to archive.org:

[Image: archived snapshot of originalcruisegiveaway.com]

It's different, but somehow the same. Clicking through to the claim form, it seems the only way you can enter is if you agree to receive comms from all sorts of other parties:

[Image: the claim form requiring agreement to receive communications from other parties]

Ok, one more, this time free-ukstuff.com which is also now a dead site, and not even indexed by archive.org. Next then, is findyourdegreenow.com which is - you're not gonna believe this - a dead site! Here's what it used to look like:

[Image: archived snapshot of findyourdegreenow.com]

And again, it feels the same. Same same, but different.

To try and get a sense of how localised this data was, I queried out all the values in the "state" column. Is this a US-only data set? If that column is anything to go by, yes:

Something didn't add up when I first saw that and after a quick check of the population of each US state, it became immediately obvious: there's no California, the most populous state in the country. Nor Texas, the second most populous state. In fact, with only 35 rows, there's a bunch of US states missing. Why? Who knows; the only thing I can say for sure is that this is a subset of the population with some glaring geographical omissions.

Then there's another curveball - what about the URL quickquid.co.uk, that doesn't look very US-centric. Heading over there redirects to casheuronetukadministration.grantthornton.co.uk which advises that as of last month, "The Administration of CashEuroNet UK, LLC has closed and the Joint Administrators have ceased to act". So something has obviously been wound up, wonder what was there originally? I had to go back a few years to find this:

[Image: archived snapshot of the original quickquid.co.uk site]

To my mind, this is more of the same ilk in terms of a service targeted at people after quick money. But it's clearly all in GBP and on a .co.uk TLD, and this right after I've just said all the states are in the US, so what gives? Back to the source data: filtering the records on that URL shows that, sure enough, everyone has a US address. Grabbing a random selection of IP addresses had them all resolving to the US too, so I have absolutely no idea how this geographically inconsistent set of data came to be.

And that's really the theme across the data set when doing independent analysis - how is this so? What service or process could have pulled the data together in this way? Maybe the people this data actually refers to will have the answers, so let's go and ask them.

Responses From Impacted HIBP Subscribers

We're approaching 4.5M subscribers to HIBP's free notification service now, which makes for a great corpus of people I can reach out to when doing breach verification. I grabbed a handful of addresses from this data set and asked them if they could help out. I sent those that responded positively their full record and asked some questions about the legitimacy of the data and where they thought it might have come from. Here's what they said:


1. The data is mostly accurate.

A few things are off, such as date of birth (could very well be a fake one I've entered before) and details of household members.

There are a lot of columns with single-letter values, which I can't verify without knowing what they mean.

But overall, it's quite accurate.

2. No idea where it came from, sorry. There is a URL in the third-to-last column, but it doesn't seem like a website I would have used before.


I looked through the csv file and couldn't find anything I recognized. I saw the names [redacted], [redacted] and [redacted]- I don't know anyone by those names. I live in Ontario, Canada, but addresses in the file were located in the united states.

Data says I have one child between the ages of 0 and 2, but that's not true - my only son is five. Birth date is wrong - my birthday is [redacted], but the file says [redacted].

There were a few urls in the file and I don't recognize any of them.

Not sure if this last thing is relevant or not. I sometimes get emails intended for other people. I searched my inboxes for the names [redacted] and [redacted]. Nothing came up for [redacted], but I do see an email for [redacted] from [redacted]. I searched through the csv to see if anything matched the data in the email (member number, confirmation number), but nothing matched.

I also noticed that although my email address ([redacted]) is in the csv data, there's also another email address ([redacted]) which is not mine.

I'm not sure if that's helpful or not, but if there's anything more I can do, let me know. :)


As far as name and address they are correct.  number of ppl living at the house has changed.  The other information I can't seem to understand what the information for example under column AQ row 2 it has a U and I don't know what the U is for.  I have noticed that some information is really outdated, so I wouldn't know where the data originated from.


Thank you for sharing, I took a look at the data, let me see if I can answer your questions:

1. While that is my email, the rest of the data actually belongs to an immediate family member. With the exception of a few outdated fields, the data on my family member is correct.

2. I am unfamiliar with Acxiom and am unsure of where this data originated from. I want to note that I have recently been doxxed and have reason to believe data breaches may have been used; however, the data you've provided here was not used in the attacks, to my knowledge.

Please let me know if you have any other questions, or if there is anything else I may do to help.


"Mostly accurate". The feeling I have when reading this is that whoever is responsible for this corpus of data has put it together from multiple sources and quite likely made some assumptions along the way. I can picture how that would happen; imagine trying to match various sources of data based on human-provided text fields in order to "enrich" the collection.

Analysis by Acxiom

This isn't the first time Acxiom has had to deal with misattribution, and they'd seen exactly the same data set passed around before. Think about it from their perspective: every time there's a claim like this they need to treat it as though it could be legitimate, because we've all seen what happens when an organisation brushes off a disclosure attempt (I could literally write a book about this!). Thus it becomes a burdensome process for them as they repeat the same analysis over and over again, each time drawing the same conclusion.

And what was that conclusion? Simply put, the circulating data didn't align with their own. They're in the best position of all of us to draw that conclusion as they have access to both data sets and whilst I suspect some people may retort with "how do you know you can trust them", not only do I not have a good reason to doubt their findings, I also don't have a good reason to attribute it to them. Every reference I've seen to Acxiom has been from whoever is handing the data around; I've been able to find absolutely nothing within the data set itself to tie it back to them. In almost all breaches I've processed, the truth is in the data and there's nothing here that points the finger at them.

I offered Acxiom the opportunity to further clarify their position with a statement which I've included in its entirety here:

“Acxiom has worked to build a reputation over the course of fifty years for having the highest standards around data privacy, data protection and security. In the past, questionable organizations have falsely attached our name to a data file in an attempt to create a deceitful sense of legitimacy for an asset. In every instance, Acxiom conducts an extensive analysis under our cyber incident response and privacy programs. These programs are guided by stakeholders including working with the appropriate authorities to inform them of these crimes.  The forensic review of the case that Troy has looked into, along with our continuous monitoring of security, means we can conclusively attest that the claims are indeed false and that the data, which has been readily available across multiple environments, does not come from Acxiom and is in no way the subject of an Acxiom breach.

Acxiom’s Commitment To Data Protection/ Data Privacy:
We value consumer privacy.   U.S. consumers who would like to know what information Acxiom has collected about them and either delete it or opt out of Acxiom’s marketing products, may visit acxiom.com/privacy for more information.”

Summary

The email addresses from the data set have now been loaded into HIBP and are searchable. One point of note that became evident after loading the data is that 94% of the email addresses had already been pwned. That's a very high number (a quick look through the HIBP Twitter feed shows the count is normally between 40% and 80%), and it suggests that this corpus of data may be at least partially constructed from other data already in circulation.

Because the question will inevitably come up: no, I won't send you your full record, I simply don't have the capacity to operate as a personal data lookup and delivery service. I know it's frustrating finding yourself in a breach like this and not being able to take any action; all you can really do at this point is treat it as another reminder of how our data spreads around the web, often without us having any idea about it.

Full disclosure: I have absolutely no commercial interest in Acxiom, no money has changed hands and I wasn't incentivised in any way, I just want everyone to have a much healthier suspicion when alleging the source of a data breach 🙂

Watch Out for These 3 World Cup Scams

By McAfee

What color jersey will you be sporting this November and December? The World Cup is on its way to television screens around the world, and scores of fans are dreaming of cheering on their team at stadiums throughout Qatar. Meanwhile, cybercriminals are dreaming of stealing the personally identifiable information (PII) of fans seeking last-minute vacation and ticket deals. 

Don’t let the threat of phishers and online scammers dampen your team spirit this World Cup tournament. Here are three common schemes cybercriminals will likely employ and a few tips to help you dribble around their clumsy offense and protect your identity, financial information, and digital privacy. 

1. Fake Contests

Phishers will be out in full force attempting to capitalize on World Cup fever. People wrapped up in the excitement may jump on offers that any other time of the year they would treat with skepticism. For example, in years past, fake contests and travel deals inundated email inboxes across the world. Some companies do indeed run legitimate giveaways, and cybercriminals slip in their phishing attempts among them. 

If you receive an email or text saying that you’re the winner of a ticket giveaway, think back: Did you even enter a contest? If not, treat any “winner” notification with skepticism. It’s very rare for a company to automatically enter people into a drawing. Usually, companies want you to act – subscribe to a newsletter or engage with a social media post, for example – in exchange for your entry into their contest. Also, beware of emails that urge you to respond within a few hours to “claim your prize.” While it’s true that real contest winners must reply promptly, organized companies will likely give you at least a day if not longer to acknowledge receipt. 

2. Travel Scams

Traveling is rarely an inexpensive endeavor. Flights, hotels, rental cars, dining costs, and tourist attraction admission fees add up quickly. In the case of this year’s host country, Qatar, there’s an additional cost for American travelers: visas.  

If you see package travel deals to the World Cup that seem too good to pass up … pass them up. Fake ads for ultra-cheap flights, hotels, and tickets may appear not only in your email inbox but also on your social media feed. Just because it’s an ad doesn’t mean it comes from a legitimate company. Legitimate travel companies will likely have professional-looking websites with clear graphics and clean website copy. Search for the name of the organization online and see what other people have to say about the company. If no search results appear or the website looks sloppy, proceed with caution or do not approach at all. 

Regarding visas, be wary of anyone offering to help you apply for a visa. There are plenty of government-run websites that’ll walk you through the process, which isn’t difficult as long as you leave enough time for processing. Do not send your physical passport to anyone who is not a confirmed government official. 

3. Malicious Streaming Sites

Even fans who’ve given up on watching World Cup matches in person aren’t out of the path of scams. Sites claiming to have crystal clear streams of every game could be malware spreaders in disguise. Malware and ransomware targeting home computers often lurk on sketchy sites. All it takes is a click on one bad link to let a cybercriminal or a virus into your device.  

Your safest route to good-quality live game streams is through the official sites of your local broadcasting company or the official World Cup site. You may have to pay a fee, but in the grand scheme of things, that fee could be a lot less expensive than replacing or repairing an infected device. 

Shore Up Your Defense With McAfee+ 

Here’s an excellent rule to follow with any electronic correspondence: Never send anyone your passwords, routing and account number, passport information, or Social Security Number. A legitimate organization will never ask for your password, and it’s best to communicate any sensitive financial or identifiable information over the phone, not email or text as they can easily fall into the wrong hands. Also, do not wire large sums of money to someone you just met online. 

Don’t let scams ruin your enjoyment of this year’s World Cup! With these tips, you should be able to avoid the most common schemes but to boost your confidence in your online presence, consider signing up for McAfee+. Think of McAfee+ as the ultimate goalkeeper who’ll block any cybercriminals looking to score on you. With identity monitoring, credit lock, unlimited VPN and antivirus, and more, you can surf safely and with peace of mind.  

The post Watch Out for These 3 World Cup Scams appeared first on McAfee Blog.

Critical RCE Flaw Reported in Spotify's Backstage Software Catalog and Developer Platform

By Ravie Lakshmanan
Spotify's Backstage has been discovered as vulnerable to a severe security flaw that could be exploited to gain remote code execution by leveraging a recently disclosed bug in a third-party module. The vulnerability (CVSS score: 9.8), at its core, takes advantage of a critical sandbox escape in vm2, a popular JavaScript sandbox library (CVE-2022-36067 aka Sandbreak), that came to light last

Top Zeus Botnet Suspect “Tank” Arrested in Geneva

By BrianKrebs

Vyacheslav “Tank” Penchukov, the accused 40-year-old Ukrainian leader of a prolific cybercriminal group that stole tens of millions of dollars from small to mid-sized businesses in the United States and Europe, has been arrested in Switzerland, according to multiple sources.

Wanted Ukrainian cybercrime suspect Vyacheslav “Tank” Penchukov (right) was arrested in Geneva, Switzerland. Tank was the day-to-day manager of a cybercriminal group that stole tens of millions of dollars from small to mid-sized businesses.

Penchukov was named in a 2014 indictment by the U.S. Department of Justice as a top figure in the JabberZeus Crew, a small but potent cybercriminal collective from Ukraine and Russia that attacked victim companies with a powerful, custom-made version of the Zeus banking trojan.

The U.S. Federal Bureau of Investigation (FBI) declined to comment for this story. But according to multiple sources, Penchukov was arrested in Geneva, Switzerland roughly three weeks ago as he was traveling to meet up with his wife there.

Penchukov is from Donetsk, a traditionally Russia-leaning region in Eastern Ukraine that was recently annexed by Russia. In his hometown, Penchukov was a well-known deejay (“DJ Slava Rich“) who enjoyed being seen riding around in his high-end BMWs and Porsches. More recently, Penchukov has been investing quite a bit in local businesses.

The JabberZeus crew’s name is derived from the malware they used, which was configured to send them a Jabber instant message each time a new victim entered a one-time password code into a phishing page mimicking their bank. The JabberZeus gang targeted mostly small to mid-sized businesses, and they were an early pioneer of so-called “man-in-the-browser” attacks, malware that can silently siphon any data that victims submit via a web-based form.

Once inside a victim company’s bank accounts, the crooks would modify the firm’s payroll to add dozens of “money mules,” people recruited through work-at-home schemes to handle bank transfers. The mules in turn would forward any stolen payroll deposits — minus their commissions — via wire transfer overseas.

Tank, a.k.a. “DJ Slava Rich,” seen here performing as a DJ in Ukraine in an undated photo from social media.

The JabberZeus malware was custom-made for the crime group by the alleged author of the Zeus trojan — Evgeniy Mikhailovich Bogachev, a top Russian cybercriminal with a $3 million bounty on his head from the FBI. Bogachev is accused of running the Gameover Zeus botnet, a massive crime machine of 500,000 to 1 million infected PCs that was used for large DDoS attacks and for spreading Cryptolocker — a peer-to-peer ransomware threat that was years ahead of its time.

Investigators knew Bogachev and JabberZeus were linked because for many years they were reading the private Jabber chats between and among members of the JabberZeus crew, and Bogachev’s monitored aliases were in semi-regular contact with the group about updates to the malware.

Gary Warner, director of research in computer forensics at the University of Alabama at Birmingham, noted in his blog from 2014 that Tank told co-conspirators in a JabberZeus chat on July 22, 2009 that his daughter, Miloslava, had been born and gave her birth weight.

“A search of Ukrainian birth records only showed one girl named Miloslava with that birth weight born on that day,” Warner wrote. This was enough to positively identify Tank as Penchukov, Warner said.

Ultimately, Penchukov’s political connections helped him evade prosecution by Ukrainian cybercrime investigators for many years. The late son of former Ukrainian President Victor Yanukovych (Victor Yanukovych Jr.) would serve as godfather to Tank’s daughter Miloslava. Through his connections to the Yanukovych family, Tank was able to establish contact with key insiders in top tiers of the Ukrainian government, including law enforcement.

Sources briefed on the investigation into Penchukov said that in 2010 — at a time when the Security Service of Ukraine (SBU) was preparing to serve search warrants on Tank and his crew — Tank received a tip that the SBU was coming to raid his home. That warning gave Tank ample time to destroy important evidence against the group, and to avoid being home when the raids happened. Those sources also said Tank used his contacts to have the investigation into his crew moved to a different unit that was headed by his corrupt SBU contact.

Writing for Technology Review, Patrick Howell O’Neil recounted how SBU agents in 2010 were trailing Tank around the city, watching closely as he moved between nightclubs and his apartment.

“In early October, the Ukrainian surveillance team said they’d lost him,” he wrote. “The Americans were unhappy, and a little surprised. But they were also resigned to what they saw as the realities of working in Ukraine. The country had a notorious corruption problem. The running joke was that it was easy to find the SBU’s anticorruption unit—just look for the parking lot full of BMWs.”

AUTHOR’S NOTE/BACKGROUND

I first encountered Tank and the JabberZeus crew roughly 14 years ago as a reporter for The Washington Post, after a trusted source confided that he’d secretly gained access to the group’s private Jabber conversations.

From reading those discussions each day, it became clear Tank was nominally in charge of the Ukrainian crew, and that he spent much of his time overseeing the activities of the money mule recruiters — which were an integral part of their victim cashout scheme.

It was soon discovered that the phony corporate websites the money mule recruiters used to manage new hires had a security weakness that allowed anyone who signed up at the portal to view messages for every other user. A scraping tool was built to harvest these money mule recruitment messages, and at the height of the JabberZeus gang’s activity in 2010 that scraper was monitoring messages on close to a dozen different money mule recruitment sites, each managing hundreds of “employees.”

Each mule was given busy work or menial tasks for a few days or weeks prior to being asked to handle money transfers. I believe this was an effort to weed out unreliable money mules. After all, those who showed up late for work tended to cost the crooks a lot of money, as the victim’s bank would usually try to reverse any transfers that hadn’t already been withdrawn by the mules.

When it came time to transfer stolen funds, the recruiters would send a message through the fake company website saying something like: “Good morning [mule name here]. Our client — XYZ Corp. — is sending you some money today. Please visit your bank now and withdraw this payment in cash, and then wire the funds in equal payments — minus your commission — to these three individuals in Eastern Europe.”

Only, in every case the company mentioned as the “client” was in fact a small business whose payroll accounts they’d already hacked into.

So, each day for several years my morning routine went as follows: Make a pot of coffee; shuffle over to the computer and view the messages Tank and his co-conspirators had sent to their money mules over the previous 12-24 hours; look up the victim company names in Google; pick up the phone to warn each that they were in the process of being robbed by the Russian Cyber Mob.

My spiel on all of these calls was more or less the same: “You probably have no idea who I am, but here’s all my contact info and what I do. Your payroll accounts have been hacked, and you’re about to lose a great deal of money. You should contact your bank immediately and have them put a hold on any pending transfers before it’s too late. Feel free to call me back afterwards if you want more information about how I know all this, but for now please just call or visit your bank.”

In many instances, my call would come in just minutes or hours before an unauthorized payroll batch was processed by the victim company’s bank, and some of those notifications prevented what otherwise would have been enormous losses — often several times the amount of the organization’s normal weekly payroll. At some point I stopped counting how many tens of thousands of dollars those calls saved victims, but over several years it was probably in the millions.

Just as often, the victim company would suspect that I was somehow involved in the robbery, and soon after alerting them I would receive a call from an FBI agent or from a police officer in the victim’s hometown. Those were always interesting conversations.

Collectively, these notifications to victims led to dozens of stories over several years about small businesses battling their financial institutions to recover their losses. I never wrote about a single victim that wasn’t okay with my calling attention to their plight and to the sophistication of the threat facing other companies.

This incessant meddling on my part very much aggravated Tank, who on more than one occasion expressed mystification as to how I knew so much about their operations and victims. Here’s a snippet from one of their Jabber chats in 2009, after I’d written a story for The Washington Post about their efforts to steal $415,000 from the coffers of Bullitt County, Kentucky. In the chat below, “lucky12345” is the Zeus author Bogachev:

tank: Are you there?
tank: This is what they damn wrote about me.
tank: http://voices.washingtonpost.com/securityfix/2009/07/an_odyssey_of_fraud_part_ii.html#more
tank: I’ll take a quick look at history
tank: Originator: BULLITT COUNTY FISCAL Company: Bullitt County Fiscal Court
tank: Well, you got [it] from that cash-in.
lucky12345: From 200K?
tank: Well, they are not the right amounts and the cash out from that account was shitty.
tank: Levak was written there.
tank: Because now the entire USA knows about Zeus.
tank: 😀
lucky12345: It’s fucked.

On Dec. 13, 2009, one of Tank’s top money mule recruiters — a crook who used the pseudonym “Jim Rogers” — told his boss something I hadn’t shared beyond a few trusted confidants at that point: That The Washington Post had eliminated my job in the process of merging the newspaper’s Web site (where I worked at the time) with the dead tree edition.

jim_rogers: There is a rumor that our favorite (Brian) didn’t get his contract extension at Washington Post. We are giddily awaiting confirmation 🙂 Good news expected exactly by the New Year! Besides us no one reads his column 🙂

tank: Mr. Fucking Brian Fucking Kerbs!

Another member of the JabberZeus crew — Ukrainian-born Maksim “Aqua” Yakubets — also is currently wanted by the FBI, which is offering a $5 million reward for information leading to his arrest and conviction.

Alleged “Evil Corp” bigwig Maksim “Aqua” Yakubets. Image: FBI

Update, Nov. 16, 2022, 7:55 p.m. ET: Multiple media outlets are reporting that Swiss authorities confirmed they arrested a Ukrainian national wanted on cybercrime charges. The arrest occurred in Geneva on Oct. 23, 2022. “The US authorities accuse the prosecuted person of extortion, bank fraud and identity theft, among other things,” reads a statement from the Swiss Federal Office of Justice (FOJ).

“During the hearing on 24 October, 2022, the person did not consent to his extradition to the USA via a simplified proceeding,” the FOJ continued. “After completion of the formal extradition procedure, the FOJ has decided to grant his extradition to the USA on 15 November, 2022. The decision of the FOJ may be appealed at the Swiss Criminal Federal Court, respectively at the Swiss Supreme Court.”

Worok Hackers Abuse Dropbox API to Exfiltrate Data via Backdoor Hidden in Images

By Ravie Lakshmanan
A recently discovered cyber espionage group dubbed Worok has been found hiding malware in seemingly innocuous image files, corroborating a crucial link in the threat actor's infection chain. Czech cybersecurity firm Avast said the purpose of the PNG files is to conceal a payload that's used to facilitate information theft. "What is noteworthy is data collection from victims' machines using

High-Severity Flaw Reported in Critical System Used by Oil and Gas Companies

By Ravie Lakshmanan
Cybersecurity researchers have disclosed details of a new vulnerability in a system used across oil and gas organizations that could be exploited by an attacker to inject and execute arbitrary code. The high-severity issue, tracked as CVE-2022-0902 (CVSS score: 8.1), is a path-traversal vulnerability in ABB Totalflow flow computers and remote controllers. "Attackers can exploit this flaw to gain

Cisco Secure Endpoint Crushed the AV-Comparative EPR Test

By Truman Coburn

The word is out! Cisco Secure Endpoint’s effectiveness is off the charts in protecting your enterprise environment.

This is not just a baseless opinion; the facts are rooted in actual test results from the annual AV-Comparatives EPR Test Report published in October 2022. Not only did Secure Endpoint knock it out of the park in enterprise protection, it also obtained the lowest total cost of ownership (TCO) per agent at $587 over 5 years. No one else was remotely close in this area. More to come on that later.

If you are not familiar with it: “The AV-Comparatives Endpoint Prevention and Response Test is the most comprehensive test of EPR products ever performed. The 10 products in the test were subjected to 50 separate targeted attack scenarios, which used a variety of different techniques.”

These results come from an industry-respected third-party organization that assesses antivirus software, and they confirm what we know and believe here at Cisco: our Secure Endpoint product is the industry’s best of the best.

Leader of the pack

Look for yourself at where we landed. That’s right, Cisco Secure Endpoint smashed this test; we are almost off the quadrant as one of the “Strategic Leaders”.

We ended up here for a combination of reasons, with the top being our efficacy in protecting our customers’ environments in this real-world test that emulates multi-stage attacks similar to MITRE’s ATT&CK evaluations which are conducted as part of this process (click here for an overview of MITRE ATT&CK techniques). Out of all the 50 scenarios tested, Secure Endpoint was the only product that STOPPED 100% of targeted threats toward enterprise users, which prevented further infiltration into the organization.

Lowest Total Cost of Ownership

In addition, this test not only assesses the efficacy of endpoint security products but also analyzes their cost-effectiveness. Following up on my earlier remarks about achieving the lowest cost of ownership, the graph below displays how we stacked up against other industry players in this space including several well-known vendors that chose not to display their names due to poor results.

These results provide a meaningful proof point that Cisco Secure Endpoint is perfectly positioned to secure the enterprise as well as secure the future of hybrid workers.

Enriched with built-in Extended Detection and Response (XDR) capabilities, Cisco Secure Endpoint has allowed our customers to maintain resiliency when faced with outside threats.

As we embark on securing “what’s next” by staying ahead of unforeseen cyber threats of tomorrow, Cisco Secure Endpoint integration with the complete Cisco Secure Solutions portfolio allows you to move forward with the peace of mind that if it’s connected, we can and will protect it.

Secure Endpoint live instant demo

Now that you have seen how effective Secure Endpoint is with live real-world testing, try it for yourself with one of our live instant demos. Click here to access instructions on how to download and install your demo account for a test drive.

Click here to see what analysts, customers, and third-party testing organizations have to say about Cisco Secure Endpoint Security efficacy, easy implementation and overall low total cost of ownership for their organization —and stay ahead of threats.


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

The Have I Been Pwned API Now Has Different Rate Limits and Annual Billing

By Troy Hunt

A couple of weeks ago I wrote about some big changes afoot for Have I Been Pwned (HIBP), namely the introduction of annual billing and new rate limits. Today, it's finally here! These are two of the most eagerly awaited, most requested features on HIBP's UserVoice so it's great to see them finally knocked off after years of waiting. In implementing all this, there are changes to the existing "one size fits all" model so if you're using the HIBP API, please make sure you read this carefully and understand the impact (if any) on you. Here goes:

The Rate Limits and (Some) Pricing is Different

The launch blog post for the authenticated API explained the original rationale behind the $3.50 per month price and most importantly, how I wanted to ensure it didn't pose a barrier:

In choosing the $3.50 figure, I wanted to ensure it was a number that was inconsequential to a legitimate user of the service

As I said in the previous blog post, what I didn't understand at the time was that paradoxically, the low amount was a barrier to many organisations! But equally, it's made the API super accessible to the masses so that price stays. The rate limit, however, needed revisiting and to understand why, let's go back to the beginning:

The "1 request per 1,500ms" rate dated all the way back to 2016 where I'd initially attempted to combat abuse by applying the limit per IP. This was an entirely non-empirical, gut feel, "let's just try and fix the problem right now" decision and it was only very recently I actually started trawling through the data and looking at how the API was being consumed. 1 request every 1,500ms is a maximum of 57,600 requests in a day; here's the number of requests by the top 20 consumers of the service in a recent 24 hour mid-week period:


Keeping in mind that you're never going to achieve the full 57,600 requests in a day as you'd have to time every single one of them perfectly so as not to hit the rate limit, only 1 subscriber even achieved half that potential. In fact, only 9 subscribers achieved even a quarter of the potential with everyone else very quickly falling back to a small fraction of even that. To be fair, I'm conscious that I'm taking a full day of data and talking about requests as if they were evenly distributed across the entire period when there are inevitably use cases where it's more a short burst rather than a prolonged, even distribution. Regardless, what the data is saying is that the default "one size fits all" rate limit is way above and beyond what almost every single subscriber is actually consuming, and by a significant order of magnitude too. In a way, what we ended up with is the little guys subsidising the big guys.

The bottom line is that we're simultaneously adding a bunch of higher rate limits whilst reducing the entry level rate limit. It's easier if you see it all in context so let's just jump straight into the pricing (all in USD):


This is from Stripe's embeddable pricing table I mentioned in the previous post and it's what you see when you first sign up for a key. With the new limits, it's easier to talk about "requests per minute" or RPM, so that's the nomenclature we're sticking with now. That entry level 10RPM plan will work for well in excess of 90% of current subscribers; only a very small percentage of the existing subscriber base exceeds it. (And yes, again, I know these requests are sometimes made in bursts but even still, 10RPM is far in excess of the vast majority of use cases.)

There are economies of scale factored in here. Going from 10RPM to 100RPM isn't a 10x increase in price, it's about a 7x increase. Going to 5 times more requests is only 4 times the price, and so on and so forth. The hope is that this makes it easier for the folks who were previously buying multiple keys to justify scrapping all the kludge used to do that and replacing it with a single key at a higher RPM.

To get to this outcome, we trawled back through heaps of data ranging from the high-level aggregated stats in the earlier chart to the nature of the organisations buying multiple keys (which we can obviously determine based on the email address used). I also chatted with a bunch of API users both during this process and over the preceding years and have a pretty good sense of the use cases. A few trends became immediately clear:

Firstly, use cases that are genuinely personal have a very low rate limit requirement. Checking your own address(es) or those of your family via a custom app, for example. Or one of my favourite uses (and one I definitely use), the Home Assistant integration:


On an ongoing basis, HA makes 1 request every 15 minutes. That's all. Each time we looked at genuine personal use cases, 10RPM was plenty.

Next, we found a bunch of use cases used within internal corporate environments, for example to monitor staff exposure in breaches. Now we're talking larger numbers of requests, but it's also something that's way more efficiently done via the existing domain search feature on the website. It's an on-demand, self-service and totally free feature that's been there for years. I know it's not API-based and there are good reasons for that (see the comment from me on that idea), but there's also the Enterprise route if API access is really that important (more on that later). Other examples included things like scanning customer emails to assess exposure at points where, for example, account takeover was a risk. In each of these cases, we're primarily talking about business entities using the service and I'm comfortable with commercial ventures wearing a greater cost.

And finally, there were the "heavy hitters", the ones with large volumes of keys. One such example using the API en masse provides security services to the big end of town and was funded to the tune of a figure that looks like a phone number. And again, I'm perfectly comfortable with them wearing a cost that's more commensurate to the value as opposed to a figure that was originally arrived at just to keep the bad guys out.

Existing Subscribers are Grandfathered in for 60 Days

Before I talk about the annual pricing, I want to make sure this headline is clear. Nothing changes for existing subscribers until the 6th of Jan next year, which is 60 days from today. On that date, the legacy rate limit of 1 request every 1,500ms will roll to the new 10RPM limit at exactly the same price. For that handful of big users for whom the 10RPM limit will be insufficient, you've got a couple of months to work out the best path forward. I'll be emailing every single active subscriber today to ensure everyone is notified well in advance (there's also an updated Terms of Use which requires a notification email to be sent).

What does this mean in practical terms? If you want annual billing or a higher rate limit, you can go and implement that whenever you're ready (more on that soon). Alternatively, if you just want to stick with 10 RPM then you don't have to do anything, nothing will change. What I do strongly suggest though (and this hasn't changed, it's always been the guidance), is to make sure you're handling HTTP 429 responses gracefully. Regardless of what your rate limit is, if you're consuming the API in a fashion where you're not directly controlling the rate yourself, make sure you handle those responses appropriately.
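To illustrate, here's a minimal sketch of what graceful 429 handling might look like against the v3 breachedaccount endpoint using Python's requests library. The key, the user agent string and the retry count are placeholders for the example rather than anything HIBP prescribes; the point is simply to wait for the period indicated by the retry-after header instead of hammering the API.

    import time
    import requests

    HIBP_API_KEY = "YOUR-HIBP-API-KEY"  # placeholder, substitute your own key

    def breaches_for_account(email, max_attempts=5):
        """Query the HIBP v3 breachedaccount endpoint, backing off on HTTP 429."""
        url = f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}"
        headers = {"hibp-api-key": HIBP_API_KEY, "user-agent": "example-hibp-client"}
        for _ in range(max_attempts):
            resp = requests.get(url, headers=headers, timeout=10)
            if resp.status_code == 429:
                # Wait for however long the API asks, then try again
                time.sleep(int(resp.headers.get("retry-after", "2")))
                continue
            if resp.status_code == 404:
                return []  # the address doesn't appear in any breach
            resp.raise_for_status()
            return resp.json()
        raise RuntimeError("still rate limited after retries")

The same pattern applies regardless of which plan you're on; only how often you'll actually see a 429 changes.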

Billing Can Now Occur Annually

This is the easy one to explain: annual payments are now a thing 😊 As I explained in the previous blog post, frequent payments of small amounts can play havoc with reimbursements in the corporate environment. It sucks, I've been there, but it is what it is. Annual billing alleviates that through a combination of a 12x reduction in the frequency of an expense claim and a larger single sum that's easier to explain to your procurement people than $3.50.

So, what do you charge for annual rather than monthly billing? My initial temptation was just to make it literally 12 times more because I don't have a lot of patience for spivvy marketing guff. However, there's a valid case to be made that a 12x reduction on individual payments warrants a discount as it removes overhead from our end (there's a constant percentage of all payments that are disputed or fail or cause other demands on our time), plus there's an argument to be made along the lines of customer loyalty warranting a discount. There's also just the very simple mathematics of the whole thing, best illustrated by a recent payment in Stripe:


That's 8.5% that disappears on every transaction, largely due to the 30c AUD charge no matter what the price of the transaction is:


The point is that there's merit for all in incentivising annual rather than monthly payments. We decided to look at what a typical annual discount was and time and time again, found the same thing:


Or in other words, a couple of months for free when you sign up for a year. In fact, coincidentally, that's exactly what I just signed up for with Nabu Casa (Home Assistant cloud) after receiving an email saying annual billing was now available 😊


It's never exactly 17%; rather, it's like each example took 17% off 12 months' worth of a normal monthly fee then moved the number to something that looked pretty 🙂 Some examples were less (Pluralsight is 14%) and others were more (the higher tiers of Zendesk are 20%), but ultimately we decided to work to that 17% number and came up with the following:


In keeping with the "pay for 10, get 12" theme, these prices are exactly 10 times the monthly ones. Easy peasy.

Stripe Customer Portal Magic Makes Changing Plans Easy

As I mentioned in the "big changes ahead" blog post, I've been deleting code like crazy in favour of deferring more processing back to Stripe themselves. By using their Customer Portal paradigm, it's now easy to change an existing plan:


The change can be to a different rate limit or to a different renewal cadence:


Stripe automatically prorates everything too, so whilst you can upgrade immediately to a higher RPM or from monthly to annual, you'll only pay for the difference between the previous plan and the new one. Or, you can downgrade and on the next renewal the lower plan will be automatically applied. It's super simple and it's all self-service.
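For anyone curious about the mechanics, generating a Customer Portal link is a single API call with Stripe's Python SDK. The sketch below is a generic illustration of that pattern rather than HIBP's actual code; the customer id, return URL and environment variable are assumptions for the example.

    import os
    import stripe

    stripe.api_key = os.environ["STRIPE_SECRET_KEY"]  # assumes your secret key is in an env var

    def customer_portal_url(stripe_customer_id):
        """Create a Customer Portal session and return the URL to redirect the subscriber to."""
        session = stripe.billing_portal.Session.create(
            customer=stripe_customer_id,                          # e.g. "cus_..." from your own records
            return_url="https://example.com/api-key-dashboard",   # hypothetical page to come back to
        )
        return session.url

Everything the subscriber then does in the portal (plan changes, proration, cancellation) is hosted and handled by Stripe itself.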

Enterprise

For more than 7 years now, a small handful of organisations have used HIBP in a larger scale commercial fashion. Some of them you're familiar with, for example both 1Password and Mozilla do email address searches using k-Anonymity and that's not something that's a self-service "put your card into Stripe" sort of model (in part because k-Anonymity returns a huge number of results for each search). Infosec firms use Enterprise to support customers via domain level API searches. Identity theft companies use it to advise customers when they're exposed in a breach. One firm even uses it to help detect bot signups; it turns out that so many of us are so pwned, if someone signs up for their service and they're not pwned, that's a little bit suspicious (that's just one of many indicators they use).

This is a fundamentally different model, one that involves a close working relationship, lots of legal documents, procurement people, invoicing instead of credit cards and all sorts of other "Enterprisey" things. That still exists and nothing in today's blog post changes that. I mention this now in today's post simply because some of the folks from those organisations with Enterprise subscriptions will read this post and wonder where they sit. Likewise, I suspect those "100+ key" subscribers of the public API really should be on Enterprise and I'll be reaching out to them separately given the rate limit change will have a bigger impact on them than most.
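As an aside, the k-Anonymity searches mentioned above for email addresses are an Enterprise-only arrangement, but the same pattern is exposed publicly for passwords via the Pwned Passwords range API. The sketch below is only there to illustrate the idea: just the first five characters of the hash ever leave your machine, and the match happens locally.

    import hashlib
    import requests

    def pwned_password_count(password):
        """Check a password against the Pwned Passwords k-anonymity range API."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
        resp.raise_for_status()
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # number of times this password appears in known breaches
        return 0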

In Closing

For the vast majority of users who are only at a fraction of the old rate limit, nothing changes other than there now being a key available for 17% less than before on an annual subscription. Meanwhile, for the folks battling corporate bureaucracy around small, frequent payments, this will sort you out and give you choices around rate limits you didn't have before.

There will be some people who fall between the cracks of the use cases outlined above and won't be happy with the changes. I expect that - I know it will happen - but I hope the rationale outlined here demonstrates the volume of thought and consideration that has gone into trying to find the sweet spot for pricing and rate limits. I also expect people will ask about adding other rate limits, for example to fill the gap between, say, 100RPM and 500RPM. We started out with more options, but a combination of that creating the whole paradox of choice problem and deeper analysis of how the API was actually being used led us to simplify things. But who knows over the longer term; feedback is certainly welcome.

Lastly, if you're watching closely, you'll notice a lot more structure going in around the way HIBP is run. Last week I wrote about rolling out Zendesk for support so there's now a formal ticketing system in place. I also explained how Charlotte is playing a very active role in the management of HIBP and in the coming months, you'll see more around other initiatives to make the project more sustainable. I'm thinking of it like this: what must HIBP do to be sustainable in a post-Troy world? Or in other words, how can we get what has increasingly become an essential service for so many to be more robust and more self-sustaining beyond what one person can do as a sole operator devoting spare time to a passion project?

Stay tuned, there's much more to come 🙂

Better Supporting the Have I Been Pwned API with Zendesk

By Troy Hunt

I've been investing a heap of time into Have I Been Pwned (HIBP) lately, ranging from all the usual stuff (namely trawling through masses of data breaches) to all new stuff, in particular expanding and enhancing the public API. The API is actually pretty simple: plug in an email address, get a result, and that's a very clearly documented process. But where things get more nuanced is when people pay money for it because suddenly, there are different expectations. For example, how do you cancel a subscription once it's started? You could read the instructions when signing up for a key, but who remembers what they read months ago? There's also a greater expectation of support for everything from how to construct an API request to what to do when you keep getting 429 responses because you're (allegedly) making too many requests. And yes, some of these queries are, um, "basic", but they're still things people want support with.

In the beginning, all emails from HIBP came from noreply@haveibeenpwned.com because I simply wasn't geared up to provide support. In my naivety, I assumed people would see "noreply" and not reply. Instead, they'd send email to that address, get frustrated when there was no reply (from the "noreply" address...) and seek out my personal contact info. Or they'd lodge a dispute with Stripe because they'd emailed noreply@ asking for their subscription to be cancelled and it wasn't. So, back in September I started looking for a better solution:

I’m thinking of setting up a more formal support process for @haveibeenpwned, especially for folks buying API keys and having queries around billing or implementation. Any suggestions on a service? Something that can triage requests, perhaps also have FAQs. Thoughts?

— Troy Hunt (@troyhunt) September 29, 2022

This was a non-trivial exercise. We've all used support services before, so we have an idea of what to expect from an end user perspective, but it's a different story once you dive into all the management bits behind them. Frankly, I find this sort of thing mind-numbing but fortunately it's a task my amazing wife Charlotte picked up with gusto. She has become increasingly involved in all things troyhunt.com and HIBP lately as she brings order, calm and frankly, much needed sanity into my otherwise crazy, demanding professional life. We also figured that if we did this right, she'd be able to handle a lot of the support queries I previously did myself, so she was always going to play a big part in choosing the support platform.

Largely based on Charlotte's work, we settled on Zendesk and about a week ago, silently pushed out support.haveibeenpwned.com:


There are FAQs that cover a bunch of frequent questions, troubleshooting that addresses common problems and, of course, the ability to submit a request if you still need help. These are all a work in progress, and we'll add a lot more content in response to queries, just so long as they're about the right thing. Speaking of which:

This service is only for users of the public commercial API key, not for general HIBP queries.

Why? Because I constantly get queries like this:

Uh… and why am I sleeping during the day?! pic.twitter.com/BUGTJtgl7t

— Troy Hunt (@troyhunt) November 1, 2022

Is that even a query?! I don't know! But I do know that someone took the time to track down my personal email address this week and send it to me, and it's not the sort of thing we're going to be responding to on Zendesk. Nor are queries along the lines of the following:

I've been pwned, now what?

Or:

How do I remove my data from data breaches?

Or one of my personal favourites:

I demand you delete all my data from the data breaches or you'll get a letter from my lawyer!

This whole data breach landscape is a foreign concept for many people, and I understand there being questions, but Charlotte and I can't simultaneously run a free service and reply to queries like this from the masses. But the queries that come in via Zendesk are something we can manage as it's clearly scoped, there's lots of supporting docs and for the most part, we're dealing with tech professionals who understand this world a bit better than your average punter in the first place.

As I announced in last week's blog post, we're pushing ahead with new rate limits and annual billing for the API key and getting this piece out first was always an important prerequisite. It's all part of gearing up for bigger things ahead for HIBP 😊

OpenSSL patches are out – CRITICAL bug downgraded to HIGH, but patch anyway!

By Paul Ducklin
That bated-breath OpenSSL update is out! It's no longer rated CRITICAL, but we advise you to patch ASAP anyway. Here's why...

Big Changes are Afoot: Expanding and Enhancing the Have I Been Pwned API

By Troy Hunt

Just over 3 years ago now, I sat down at a makeshift desk (ok, so it was a kitchen table) in an Airbnb in Oslo and built the authenticated API for Have I Been Pwned (HIBP). As I explained at the time, the primary goal was to combat abuse of the service and by adding the need to supply a credit card, my theory was that the bad guys would be very reluctant to, well, be bad guys. The theory checked out, and now with the benefit of several years of data, I can confidently say abuse is near non-existent. I just don't see it. Which is awesome 😊

But there were other things I also didn't see, and it's taken a while for me to get around to addressing them. Some of them are fixed now (like right now, already in production), and some of them will be fixed very, very soon. I think it's all pretty cool, let me explain:

Payments Can Be Hard... if You Don't Stripe Right

A little more background will help me explain this better: in the opening sentence of this blog post I mentioned building the original authenticated API out on a kitchen table at an Airbnb in Oslo. By that time, everyone knew I was going through an M&A process with HIBP I called Project Svalbard, which ultimately failed. What most people didn't know at the time was the other very stressful goings on in my life which combined, had me on a crazy rollercoaster ride I had little control over. It was in that environment that I created the authenticated API, complete with the Azure API Management (APIM) component and Stripe integration. It was rough, and I wish I'd done it better. Now, I have.

In the beginning, I pushed as much of the payment processing as possible to the HIBP website. This was due to a combination of me wanting to create a slick UX and frankly, not understanding Stripe's own UI paradigms. It looked like this:


Cards never ended up hitting HIBP directly, rather the site did a dance with Stripe that involved the card data going to them directly from the client side, a token coming back and then that being used for the processing. It worked, but it had numerous problems: no support for things like 3D Secure payments, no support for other payment mechanisms such as Google Pay and Apple Pay and, increasingly, large amounts of plumbing required to tie it all together. For example, there were hundreds of lines of code on my end to process payments, change the default card and show a list of previous receipts. The Stripe APIs are extraordinarily clever, but I couldn't escape writing large troves of my own code to make it work the way I originally designed it.

Two new things from Stripe since I originally wrote the code have opened up a whole new way of doing this:

  1. Customer Portal: This is a fully hosted environment where payments are made, cards and subscriptions are managed, invoices and receipts are retrieved and basically, a huge amount of the work I'd previously hand-built can be managed by them rather than by me
  2. Embeddable Pricing Table: This brings the products and prices defined in Stripe into the UI of third party services (such as HIBP) such that customers can select their product then head off to Stripe and do the purchasing there

Rolling to these services removed a huge amount of code from HIBP with the bulk of what's left being email address verification, API key management and handling callbacks from Stripe when a payment is successful. What all this means is that when you first create a subscription, after verifying your email address, you see these two screens:


That's the embeddable pricing table followed by Stripe's own hosted payment page. I left the browser address bar in the latter to highlight that this is served by Stripe rather than HIBP. I love distancing myself from any sort of card processing and what's more, everything to do with actually taking the payment is now Stripe's problem 😊 If you're interested in the mechanics of this, a successful payment calls a webhook on HIBP with the customer's details which updates their account with a month of API key whilst the screen above redirects them over to the HIBP website where they can grab their key. Easy peasy.
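To make that flow concrete, here's a rough sketch of how such a webhook handler might look in Python with Flask. This is a generic illustration under assumed names (the endpoint path, the extend_api_key helper and the environment variables are all hypothetical), not the code HIBP actually runs.

    import os
    import stripe
    from flask import Flask, request, abort

    stripe.api_key = os.environ["STRIPE_SECRET_KEY"]        # hypothetical env vars
    endpoint_secret = os.environ["STRIPE_WEBHOOK_SECRET"]   # signing secret for this endpoint

    app = Flask(__name__)

    def extend_api_key(email):
        """Hypothetical helper: extend the subscriber's key by one billing period."""
        print(f"extend API key for {email}")

    @app.route("/stripe-webhook", methods=["POST"])
    def stripe_webhook():
        payload = request.get_data()
        signature = request.headers.get("Stripe-Signature", "")
        try:
            # Verify the event really came from Stripe before trusting it
            event = stripe.Webhook.construct_event(payload, signature, endpoint_secret)
        except (ValueError, stripe.error.SignatureVerificationError):
            abort(400)

        if event["type"] == "checkout.session.completed":
            session = event["data"]["object"]
            extend_api_key(session["customer_details"]["email"])

        return "", 200

Rejecting anything that fails signature verification matters because a webhook URL is publicly reachable; only signed events should ever be trusted.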

I silently rolled this out a week ago, watched it being used, made a few little tweaks and then waited until now to write about it. The rollout coincided with a typical email I've received so many times before:

First of all I would like to thank you for the wonderful service that helps people to keep track of their email breaches. I was trying to build a product to provide your services via my website, something similar to Firefox, avast and 100's of other companies doing. We were trying to do it according to the guidelines mentioned in the website. However I am not able to renew my purchase due to payment gateway failures at stripe payment. Requesting you to kindly check the same and advise me on alternate methods for making the payment.

The old model often caused payments to be rejected, especially from subscribers in India. The painful thing for me when trying to help folks is that Stripe would simply report the failed payment as follows:


However, going back to the individual who raised the query above after rolling out this update, things changed very dramatically:


To the title of this section, I simply wasn't "Striping" right. I'm sure there's a way with enough plumbing that it's feasible, but why bother? I cut hundreds of lines of code out just by delegating more of the workload back to them. Further, with ever tightening PCI DSS standards (read Scott's piece, interesting stuff) the less I have to do with cards, the better.

This was a "penny drop" moment for me and it's already made a big difference in a positive way. But there's another penny that dropped for me at the same time: one-off keys were an unnecessary problem.

There Are No More One-Off Keys

It was at the moment I was ripping out those hundreds of lines of code that I wondered: why do I have all the additional kludge to support the paradigm of a one-off key that only lasts a month? Why had I built (and was now maintaining) server side code to handle different types of purchases and UX paradigms to represent one-off versus recurring keys? My gut feel was that most payments formed part of an ongoing subscription but hey, who needs gut feels when you have real data?! So I pulled the numbers:

Only 7% of payments were one-offs, with 93% of payments forming part of ongoing subscriptions.

And so I killed the one-off keys. Kinda, because you can still have a key for only one month, you just purchase a monthly subscription then immediately cancel it via the Stripe Customer Portal:


That's linked to from the API key dashboard on HIBP and it'll take all of 5 seconds to do (also note the ability to change payment method directly on the Stripe site). I've added text to that effect on the HIBP website (you may have spotted that in the earlier screen cap) so in practice, the ability to purchase a one-off key is still there and the main upside of this is that I've just killed a trove of code I no longer have to worry about 🙂 Because this is the internet, I'm sure someone will still be upset, but if you only want a key for a month then that capability still well and truly exists.

All of this so far amounts to doing the same things that were always there but better. Now let's talk about the all new stuff!

Annual Billing and Different Rate Limits are Coming... Very Soon!

The title is self-explanatory and "very soon" is in about 2 weeks from now 😎

Let me illustrate the first part of that title with a message I received recently:

Is there a way to procure a 10 year API key? Our client wants to use the Have I been Pwned plugin for [redacted service name]; however, the $3.50 monthly subscription is too small to go through procurement.

What's that saying about no good deed going unpunished? In my naivety, I made the pricing low with the thinking that it was a good thing, yet here we are with that posing barriers! This message came up over and over again, with folks simply struggling to get their $3.50 reimbursed. I should have seen this coming after years of living the corporate life myself (I have vivid flashbacks of how hard it was to get small sums reimbursed) and filling out an untold number of expense reports. Speaking of which, this was another recurring theme:

Is there a way to pay yearly for HIBP API access vs monthly?  Monthly adds overhead in paperwork.

And again, I get it, this is a painful process. It somehow feels even more painful due to the fact the sum is so low; how much time are people burning trying to justify $3.50 to their boss?! It's painful, and this likely explains why the request for annual payments is the second most requested idea on HIBP's UserVoice. The comments there speak for themselves, and I'm having corporate PTSD flashbacks just reading them again now!

Sticking with the UserVoice theme, the 5th most requested feature is for different pricing on different rate limits. This is mostly self-explanatory but what I wasn't aware of until I went and pulled the stats was just how many people were hacking around the rate limit problem. There are heaps of API accounts like this:

hibp+1@domain.com
hibp+2@domain.com
hibp+3@domain.com
...

Because there can only be one key per email address, organisations are creating heaps of unique sub-addressed emails in order to buy multiple keys. This would have been a manual, laborious process; there's no automated way to do this, quite the contrary with anti-automation controls built into the process. Further, each key has its own rate limit so I imagine they were also building a bunch of plumbing in the back end to then distribute requests across a collection of keys which, yeah, I get it, but man that seems like hard work! When I say "a collection of keys", I'm not just talking about a few of them either; the largest number of active in-use keys by a single organisation is 112. One hundred and twelve! The next largest is 110. I never expected that 🤯 (Incidentally, these orgs and the others obtaining multiple keys are all precisely the kinds I want using the API to do good things.)

Building the mechanics of annual billing and different rate limits is only part of the challenge and most of that is already done, the harder part is pricing it. I'm pulling troves of analytics from APIM at present to better understand the usage patterns, and it's quite interesting to see the data as it relates to requests for the API:


There's no persistent logging of the actual queries themselves, but APIM makes it easy to understand both the volume of queries and how many of them are successful versus failed, namely because they exceed the existing rate limit or were made with an invalid (likely expired) key. So, that's what I need to work out over the next couple of weeks when I'll launch everything and write it up, as always, in detail 🙂

Summary

The HIBP API has become an increasingly important part of all sorts of different tools and systems that use the data to help protect people impacted by data breaches. The changes I've pushed out over the last week help make the service more accessible and easier to manage, but it's the coming changes I'm most excited about. These are the ones that will make life so much easier on so many people integrating the service and, I sincerely hope, will enable them to do things that make a much more profound impact on all of us who've been pwned before.

Go and check out how the whole API key process works, I'd love to hear your feedback 😊

Battle with Bots Prompts Mass Purge of Amazon, Apple Employee Accounts on LinkedIn

By BrianKrebs

On October 10, 2022, there were 576,562 LinkedIn accounts that listed their current employer as Apple Inc. The next day, half of those profiles no longer existed. A similarly dramatic drop in the number of LinkedIn profiles claiming employment at Amazon comes as LinkedIn is struggling to combat a significant uptick in the creation of fake employee accounts that pair AI-generated profile photos with text lifted from legitimate users.

Jay Pinho is a developer who is working on a product that tracks company data, including hiring. Pinho has been using LinkedIn to monitor daily employee headcounts at several dozen large organizations, and last week he noticed that two of them had far fewer people claiming to work for them than they did just 24 hours previously.

Pinho’s screenshot below shows the daily count of employees as displayed on Amazon’s LinkedIn homepage. Pinho said his scraper shows that the number of LinkedIn profiles claiming current roles at Amazon fell from roughly 1.25 million to 838,601 in just one day, a 33 percent drop:

The number of LinkedIn profiles claiming current positions at Amazon fell 33 percent overnight. Image: twitter.com/jaypinho

As stated above, the number of LinkedIn profiles that claimed to work at Apple fell by approximately 50 percent on Oct. 10, according to Pinho’s analysis:

Image: twitter.com/jaypinho

Neither Amazon nor Apple responded to requests for comment. LinkedIn declined to answer questions about the account purges, saying only that the company is constantly working to keep the platform free of fake accounts. In June, LinkedIn acknowledged it was seeing a rise in fraudulent activity happening on the platform.

KrebsOnSecurity hired Menlo Park, Calif.-based SignalHire to check Pinho’s numbers. SignalHire keeps track of active and former profiles on LinkedIn, and during the Oct 9-11 timeframe SignalHire said it saw somewhat smaller but still unprecedented drops in active profiles tied to Amazon and Apple.

“The drop in the percentage of 7-10 percent [of all profiles], as it happened [during] this time, is not something that happened before,” SignalHire’s Anastacia Brown told KrebsOnSecurity.

Brown said the normal daily variation in profile numbers for these companies is plus or minus one percent.

“That’s definitely the first huge drop that happened throughout the time we’ve collected the profiles,” she said.

In late September 2022, KrebsOnSecurity warned about the proliferation of fake LinkedIn profiles for Chief Information Security Officer (CISO) roles at some of the world’s largest corporations. A follow-up story on Oct. 5 showed how the phony profile problem has affected virtually all executive roles at corporations, and how these fake profiles are creating an identity crisis for the business networking site and the companies that rely on it to hire and screen prospective employees.

A day after that second story ran, KrebsOnSecurity heard from a recruiter who noticed the number of LinkedIn profiles that claimed virtually any role in network security had dropped seven percent overnight. LinkedIn declined to comment about that earlier account purge, saying only that, “We’re constantly working at taking down fake accounts.”

A “swarm” of LinkedIn AI-generated bot accounts flagged by a LinkedIn group administrator recently.

It’s unclear whether LinkedIn is responsible for this latest account purge, or if individually affected companies are starting to take action on their own. The timing, however, argues for the former, as the account purges for Apple and Amazon employees tracked by Pinho appeared to happen within the same 24 hour period.

It’s also unclear who or what is behind the recent proliferation of fake executive profiles on LinkedIn. Cybersecurity firm Mandiant (recently acquired by Google) told Bloomberg that hackers working for the North Korean government have been copying resumes and profiles from leading job listing platforms LinkedIn and Indeed, as part of an elaborate scheme to land jobs at cryptocurrency firms.

On this point, Pinho said he noticed an account purge in early September that targeted fake profiles tied to jobs at cryptocurrency exchange Binance. Up until Sept. 3, there were 7,846 profiles claiming current executive roles at Binance. The next day, that number stood at 6,102, a 23 percent drop (by some accounts that 6,102 head count is still wildly inflated).

Fake profiles also may be tied to so-called “pig butchering” scams, wherein people are lured by flirtatious strangers online into investing in cryptocurrency trading platforms that eventually seize any funds when victims try to cash out.

In addition, identity thieves have been known to masquerade on LinkedIn as job recruiters, collecting personal and financial information from people who fall for employment scams.

Nicholas Weaver, a researcher for the International Computer Science Institute at University of California, Berkeley, suggested another explanation for the recent glut of phony LinkedIn profiles: Someone may be setting up a mass network of accounts in order to more fully scrape profile information from the entire platform.

“Even with just a standard LinkedIn account, there’s a pretty good amount of profile information just in the default two-hop networks,” Weaver said. “We don’t know the purpose of these bots, but we know creating bots isn’t free and creating hundreds of thousands of bots would require a lot of resources.”

In response to last week’s story about the explosion of phony accounts on LinkedIn, the company said it was exploring new ways to protect members, such as expanding email domain verification. Under such a scheme, LinkedIn users would be able to publicly attest that their profile is accurate by verifying that they can respond to email at the domain associated with their current employer.

LinkedIn claims that its security systems detect and block approximately 96 percent of fake accounts. And despite the recent purges, LinkedIn may be telling the truth, Weaver said.

“There’s no way you can test for that,” he said. “Because technically, it may be that there were actually 100 million bots trying to sign up at LinkedIn as employees at Amazon.”

Weaver said the apparent mass account purge at LinkedIn underscores the size of the bot problem, and could present a “real and material change” for LinkedIn.

“It may mean the statistics they’ve been reporting about usage and active accounts are off by quite a bit,” Weaver said.

Emotet Botnet Distributing Self-Unlocking Password-Protected RAR Files to Drop Malware

By Ravie Lakshmanan
The notorious Emotet botnet has been linked to a new wave of malspam campaigns that take advantage of password-protected archive files to drop CoinMiner and Quasar RAT on compromised systems. In an attack chain detected by Trustwave SpiderLabs researchers, an invoice-themed ZIP file lure was found to contain a nested self-extracting (SFX) archive, the first archive acting as a conduit to launch

Researchers Detail Critical RCE Flaw Reported in Popular vm2 JavaScript Sandbox

By Ravie Lakshmanan
A now-patched security flaw in the vm2 JavaScript sandbox module could be abused by a remote adversary to break out of security barriers and perform arbitrary operations on the underlying machine. "A threat actor can bypass the sandbox protections to gain remote code execution rights on the host running the sandbox," GitHub said in an advisory published on September 28, 2022.

Swatted: A Shooting Hoax Spree Is Terrorizing Schools Across the US

By Dhruv Mehrotra
Sixteen states collectively suffered more than 90 false reports of school shooters during three weeks in September—and many appear to be connected.

Comm100 Chat Provider Hijacked to Spread Malware in Supply Chain Attack

By Ravie Lakshmanan
A threat actor likely with associations to China has been attributed to a new supply chain attack that involves the use of a trojanized installer for the Comm100 Live Chat application to distribute a JavaScript backdoor. Cybersecurity firm CrowdStrike said the attack made use of a signed Comm100 desktop agent app for Windows that was downloadable from the company's website. The scale of the

What You Do Now To Protect Your Child From Cyberbullying

By Alex Merton-McCann

I can’t tell you how many times over my 25 years of parenting that I’ve just wanted to wrap my boys in cotton wool and protect them from all the tricky stuff that life can throw our way. But unfortunately, that’s never been an option. Whether it’s been friendship issues in the playground, dramas on a messaging app or dealing with broken hearts, it can be really hard watching your kids experience hardship. 

Get Ahead Of The Problem! 

But one thing I have learnt from years of mothering is that if you spend some time getting ahead of a potentially challenging situation then you’ve got a much better chance of minimising it. Or better still preventing it – and this absolutely applies to cyberbullying. 

Is Cyberbullying A Big Problem for Aussie Kids? 

In early 2022, McAfee interviewed over 15,000 parents and 12,000 children worldwide with the goal of finding out how families both connect and protect themselves online. And what they found was astounding: Aussie kids reported the 2nd highest rate of cyberbullying (24%) out of the 10 countries surveyed. American children reported the highest rate. The average for all countries was 17%. Check out my post here with all the details.  

So, to dig deeper into this issue of cyberbullying, McAfee commissioned additional research in August this year to better understand what cyberbullying looks like, where it happens and who the perpetrators are. And the biggest takeaways for Aussie kids: 

  • Name calling is the most common form of cyberbullying 
  • Most cyberbullying happens on social media 
  • Aussie kids have the highest rate of cyberbullying on Snapchat 
  • 56% of Aussie kids know the perpetrator 

You can check out my post here with all the details.  

How To Avoid Your Kids Becoming a Statistic 

So, if you need to grab a cuppa and digest all this, I don’t blame you! It’s a lot. But, as mentioned before, I honestly believe that if we get ahead of the challenges, we have a greater chance of minimising the fallout. So, without further ado – here is my advice on what you can do NOW to minimise the chance of your kids being involved in cyberbullying – either as the victim or the perpetrator. 

1. Talk About Online Respect and Kindness As Soon As They Start Using Devices 

As soon as your kids move on from just watching movies and playing games on their devices, you need to talk about the importance of ‘being nice’ online. A more natural way around this is to extend your parenting advice to include the online world too. For example:  

  • ‘Remember how important it is to be kind to everyone when you are in the playground at kindy – as well as when you are online.’  
  • ‘Always say please and thank you – to your friends in-person and online too.’ 

And don’t forget the importance of role-modelling this too! 

2. Check Your Family Communication Culture 

One of the best things you can do is to create a family culture where honest and genuine two-way communication is a feature of family life. If your kids know they can confide in you, that nothing is off-limits and that you won’t overreact – then they are more likely to open up about a problem before it becomes overwhelming and ‘unsolvable’. 

3. Understand Your Child’s World 

Parents who have a comprehensive understanding of their child’s life will be better able to detect when things aren’t going well. Knowing who your kid’s friends are, who they ‘sit with’ at lunchtime, their favourite music and their boyfriend or girlfriend needs to be a big priority. I also encourage parents to establish relationships with teachers or mentors at school so they can keep their ‘ear to the ground’. When a child’s behaviour and interests change, it can often mean that all isn’t well and that some detective work is required! 

4. Ensure Your Kids Understand What Bullying Is 

Cyberbullying can have a variety of definitions which can often cause confusion. In McAfee’s research, they used the definition by StopBullying.Gov: 

Cyberbullying is bullying that takes place over digital devices like cell phones, computers, and tablets. Cyberbullying can occur through SMS, Text, and apps, or online in social media, forums, or gaming where people can view, participate in, or share content. Cyberbullying includes sending, posting, or sharing negative, harmful, false, or mean content about someone else. It can include sharing personal or private information about someone else causing embarrassment or humiliation. Some cyberbullying crosses the line into unlawful or criminal behaviour.  

McAfee’s definition was then expanded to include specific acts of cyberbullying, such as: 

  • flaming – online arguments that can include personal attacks 
  • outing – disclosing someone’s sexual orientation without their consent  
  • trolling – intentionally trying to instigate a conflict through antagonistic messages 
  • doxing – publishing private or identifying information without someone’s consent  

Along with other acts, including:  

  • name calling  
  • spreading false rumours  
  • sending explicit images or messages  
  • cyberstalking, harassment, and physical threats  
  • exclusion from group chats and conversation 

Now, I appreciate that reading your children several minutes of definitions may not be very helpful. So, instead, keep it simple and amend the above to make it age appropriate for your kids. You may choose to say that it is when someone is being mean online, if your kids are very young. But if you have tweens in the house then I think more details would be important. The goal here is for them to understand at what point they shouldn’t accept bad behaviour online.  

5. Give Them An Action Plan For When They Experience Bad Behaviour Online  

As soon as your kids are actively engaged with others online, they need to have an action plan in case things go awry – probably around 6-7 years of age. In fact, I consider this to be a golden time in parenting – a time when your kids are receptive to your advice and often keen to please. So, this is when you need to help them establish good practices and habits that will hold them in good stead. This is what I would instil: 

  • If someone makes you feel upset when you are online, you need to tell a trusted adult 
  • Save a copy of the interaction, perhaps take a screenshot. Ensure they know how to do this. 
  • Block the sender or delete them from your contacts. 
  • Report the behaviour to the school, the police or the eSafety Commissioner’s Office, if necessary 

Now, of course not all bad behaviour online will be defined as cyberbullying – remember we all see the world through different lenses. However, what’s important here is that your kids ask for help when they experience something that makes them feel uncomfortable. And while we all hope that it is unlikely that you will need to escalate any interactions to the police or the eSafety Commissioner, knowing what the course of action is in case things get out of hand is essential.  

6. Make Empathy A Priority  

There is so much research on the connection between a lack of empathy and bullying behaviours. In her book Unselfie, parenting expert Michelle Borba explains that we are in the midst of an ‘empathy crisis’ which is contributing to bullying behaviour. She believes teens today are far less empathetic than they were 30 years ago. Teaching your kids to ‘walk in someone else’s shoes’, consider how others feel and focus on compassion will go a long way to developing an empathetic lens. You can read more about helping develop empathy in your child here.  

There is no doubt that cyberbullying is one of the biggest parenting challenges of our generation and, unfortunately, it isn’t going to disappear anytime soon. So, get ahead of the problem – teach your kids about kindness from a young age, create an open family communication culture, make empathy a priority in your family and give them an action plan in case things get tricky online. But most importantly, always listen to your gut. If you think things aren’t right with your kids – if they don’t want to go to school, seem emotional after using their devices or their behaviour suddenly changes, then do some digging. My gut has never let me down!     

Take care 

Alex  

The post What You Do Now To Protect Your Child From Cyberbullying appeared first on McAfee Blog.

A Parent’s Guide To The Metaverse – Part Two

By Alex Merton-McCann

Welcome back to part 2 of my Metaverse series. If you are after tips and strategies to help your kids navigate the Metaverse safely then you’re in the right place. In this post I’ll share with you how your kids are likely already accessing the Metaverse, the benefits plus how to ensure they have a safe and positive experience. Now, if you’d like a refresher on exactly what the Metaverse is before we get underway, then check out part one here. 

How Many Kids Are Using The Metaverse? 

If your kids have played Roblox, Fortnite or Minecraft then they have already taken the step into this new virtual frontier. Yes, it’s that easy! But how many kids are playing on these platforms?  

So, if you’ve got a couple of kids, tweens or teens in your house, then chances are they have probably already had a Metaverse experience! Or, if not yet, then it won’t be long… 

Is There Any Difference Between Video Games And The Metaverse? 

There are actually a lot of similarities between online video games and the Metaverse including the use of avatars and the availability of items to purchase eg a horse in a game or an NFT (non-fungible token) in the Metaverse. However, the biggest difference is that the Metaverse is not just about gaming – it is so much more. In the Metaverse, there are no limitations to the number of participants nor on the type of activity – you can attend meetings, concerts, socialise without the gaming aspect, even undertake study! 

What Are The Benefits Of The Metaverse For Our Kids? 

There are so many good things about the Metaverse for our kids, particularly from an educational perspective. As a mum of 4, I am really excited at the possibilities the Metaverse will offer our kids. Imagine being able to experience a country in virtual reality – walk around, see the sights, its geographical features. I have no doubt that would enthuse even the most reluctant learners. And a recent US study confirmed this. It found that taking students on a Virtual Reality field trip to Greenland to learn about climate change resulted in higher interest, enjoyment and retention than in students who simply watched a traditional 2-D video. How good! 

Taking care of my family’s mental health has always been a huge focus of my parenting approach and I am really excited at the great options the Metaverse can offer in the area. As a family, we’ve spent multiple hours using apps like Calm and Headspace to help us meditate and practice mindfulness. But the thought of being able to don a VR headset and be transported to the actual rainforest or the roaring fire that I often listen to, is even more appealing! One of the best parts of the VR experience is that it completely blocks out the ‘real world’ which would make it easier to stay in the flow. Very appealing! 

And while we’re talking benefits, let’s not gloss over the potential role the Metaverse can play in fostering empathy and promoting understanding between communities. There is a growing group of digital creators who are designing Metaverse experiences to do this using Virtual Reality. Homeless Realities is a project from the University of Southern California (USC) where students use virtual reality to tell stories, usually of marginalized communities that have been overlooked by traditional journalism. So powerful! 

How Do We Keep Our Kids Safe? 

As parents, it’s essential that we add the Metaverse to our list of things to get our head around so we can keep our kids safe. Here are my top tips: 

1. Commit To Understanding How It All Works 

While I very much appreciate you reading this post, it’s important that you take action and get involved – particularly if your kids already are. If your kids are using Minecraft, Fortnite or Roblox, sign up and understand for yourself how it all works. If your kids have a VR headset and you’re not sure how it works, ask them for a turn and a lesson. Only by experiencing it for yourself will you truly understand the attraction but also the pitfalls and risks.   

2. Direct Your Kids To Age Appropriate Platforms 

As the Metaverse is still evolving and very much a work in progress, there are minimal protections in place for users. However, the 3 platforms that tend to attract younger players (Roblox, Minecraft and Fortnite) all have parental control features. So, please direct your kids to these platforms – if you can – as you’ll have more control over their online safety. 

Minecraft and Fortnite allow parents to disable chat functions, which means your kids can’t communicate with people they don’t know. Roblox will automatically apply certain safety settings depending on the age group of their account. But regardless of what their platform of choice is, always protect your credit card details!! I know Fortnite will only allow kids to make ‘in game’ purchases if they supply credit card details in the checkout. 

3. Make Online Safety Part Of Your Family’s Dialogue 

If your kids are older, it’s likely, you’ll have far less say over where they spend their time in the Metaverse so that’s when your kids will need to rely on their cyber safety skills to help them make safe decisions. Now, don’t assume that your child’s school has ticked the cyber safety box and it’s all been taken care of. Cybersafety needs to be weaved into your family’s dialogue and spoken about regularly. Even from the age of 5, your kids should know that they shouldn’t talk to strangers online or offline, if they see something that makes them upset online then they need to talk to a parent asap and, that they should never share their name or anything that could identify them online.  

The goal of this is to make safe online behaviour part of their routine so that when they are faced with a challenging situation anywhere online, they automatically know how to respond. And of course, as kids get older, the advice becomes appropriate to their age. 

4. Don’t Forget About Physical Safety Too 

Most kids are busting to get access to a VR headset but please take some time to do your research to work out which headsets are more suitable for your kids and your lounge room! There are 2 basic types: those that require a ‘tethered’ connection to a PC, and standalone models with built-in computing power. The tethered headsets have traditionally delivered a more immersive user experience due to the extra computational power the PC provides, however experts predict it won’t be long before standalone headsets are just as good. The best-selling VR headset, the Oculus Quest 2, can in fact connect wirelessly to your PC, with the option to connect via a cable in case the game or experience needs extra oomph! 

Regardless of which type you choose, it’s important that there is a safe play area in which to use the headset. VR headsets completely remove any visual of the real world, so please remove special vases and keepsakes and ensure the dog isn’t roaming around. 

‘Cybersickness’, aka motion sickness, can be a real issue for some VR users. When you don the headset and are immersed in a different time and space, your body can get very confused. If your brain thinks you are moving (based on what you are seeing through the headset) but in fact you’re standing still, it creates a disconnect that causes enough confusion to make you feel nauseous. If this happens to your kids, consider reducing the time they spend with the headset, having shorter sessions while they get their ‘VR legs’, and checking the VR headset is being worn correctly. 

So, it’s over to you now parents: it’s time to get involved and understand this Metaverse once and for all. Always start with the games and experiences your kids spend their time on but when you’re ready, make sure you check out some of the more adult places such as Decentraland or The Sandbox. Who knows, you might just become a virtual real estate tycoon or set up a business that becomes quite the side hustle! The sky is the limit in the Metaverse! 

Till next time! 

Alex  

The post A Parent’s Guide To The Metaverse – Part Two appeared first on McAfee Blog.

Smartphone Alternatives: Ease Your Way into Your Child’s First Phone

By McAfee

“But everyone else has one.” 

Those are familiar words to a parent, especially if you’re having the first smartphone conversation with your tween or pre-teen. In their mind, everyone else has a smartphone, so they want one too. But does “everyone” really have one? Well, your child isn’t wrong.  

Our recent global study found that 76% of children aged 10 to 14 reported using a smartphone or mobile device, with Brazil leading the way at 95% and the U.S. trailing the global average at 65%.   

In other words, our figures show that children in this age group who use smartphones and mobile devices make up a decisive majority. 

Of course, just because everyone else has a smartphone doesn’t mean that it’s necessarily right for your child and your family. After all, a smartphone brings wide and practically unfettered access to the internet, apps, social media, instant messaging, texting, and gaming, all within nearly constant reach. Put plainly, some tweens and pre-teens simply aren’t ready for that just yet, whether in terms of their maturity, habits, or ability to care for and use such a device responsibly. 

Yet from a parent’s standpoint, a first smartphone holds some major upsides. One of the top reasons parents give a child a smartphone is “to stay in touch,” and that’s understandable. There’s something reassuring about knowing that your child is a call or text away—and that you can keep tabs on their whereabouts with GPS tracking. Likewise, it’s good to know that they can reach you easily too. Arguably, that may be a reason why some parents end up giving their children a smartphone a little sooner than they otherwise would.  

However, you don’t need a smartphone to text, track, and talk with your child. You have alternatives. 

Smartphone alternatives 

One way to think about the first smartphone is that it’s something you ease into. In other words, if the internet is a pool, your child should learn to navigate the shallows with some simpler devices before diving into the deep end with a smartphone.  

Introducing technology and internet usage in steps can build familiarity and confidence for them while giving you control. You can oversee their development, while establishing rules and expectations along the way. Then, when the time is right, they can indeed get their first smartphone. 

But how to go about that? 

It seems a lot of parents have had the same idea and device manufacturers have listened. They’ve come up with smartphone alternatives that give kids the chance to wade into the mobile internet, allowing them to get comfortable with device ownership and safety over time without making the direct leap to a fully featured smartphone. Let’s look at some of those options, along with a few other long-standing alternatives. 

GPS trackers for kids 

These small and ruggedly designed devices can clip to a belt loop, backpack, or simply fit in a pocket, giving you the ability to see your child’s location. In all, it’s quite like the “find my” functionality we have on our smartphones. When it comes to GPS trackers for kids, you’ll find a range of options and form factors, along with different features such as an S.O.S. button, “geofencing” that can send you an alert when your child enters or leaves a specific area (like home or school), and how often it sends an updated location (to regulate battery life).  
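Under the hood, the “geofencing” alerts these trackers offer boil down to a simple distance check: the device reports a GPS fix, and the service compares it against a circle drawn around home or school. Below is a minimal sketch of that check in Python; the coordinates, radius, and function names are purely illustrative and not any vendor’s actual API.

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6_371_000 * asin(sqrt(a))

    def inside_geofence(fix, center, radius_m):
        """True if the tracker's last reported position falls inside the fence."""
        return haversine_m(fix[0], fix[1], center[0], center[1]) <= radius_m

    SCHOOL = (40.7468, -73.9857)       # hypothetical fence center
    last_fix = (40.7512, -73.9900)     # latest position reported by the tracker
    if not inside_geofence(last_fix, SCHOOL, 150):
        print("Alert: the tracker has left the 150 m school geofence")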

Whichever GPS tracker you select, make sure it’s designed specifically for children. So-called “smart tags” designed to locate things like missing keys and wallets are just that—trackers designed to locate things, not children. 

Smart watches for kids 

With GPS tracking and many other communication-friendly features for families, smart watches can give parents the reassurance they’re looking for while giving kids a cool piece of tech that they can enjoy. The field of options is wide, to say the least. Smart watches for kids range anywhere from devices offered by mobile carriers like Verizon, T-Mobile, and Vodafone to others from Apple, Explora, and Tick Talk. Because of that, you’ll want to do a bit of research to determine the right choice for you and your child.  

Typical features include restricted texting and calling, and you’ll find that some devices are more durable and more water resistant than others, while others also offer cameras and simple games. Along those lines, you can select a smart watch that has a “school time” setting so that it doesn’t become a distraction in class. Also, you’ll want to look closely at battery life, as some appear to do a better job of holding a charge than others.  

Smartphones for kids 

Another relatively recent entry on the scene is the smartphone designed specifically for children, which offers a great step toward full-blown smartphone ownership. These devices look, feel, and act like a smartphone, but without web browsing, app stores, or social media. Again, features will vary, yet there are ways kids can store and play music, stream it via Bluetooth to headphones or a speaker, and install apps that you approve of.  

Some are paired with a parental control app that allows you to introduce more and more features over time as your child matures and as you see fit—and that can screen texts from non-approved contacts before they reach your child. Again, a purchase like this one calls for some research, yet names like Gabb Wireless and the Pinwheel phone offer a starting point. 

The flip phone 

The old reliable. Rugged and compact, and typically with a healthy battery life to boot, flip phones do what you need them to—help you and your child keep in touch. They’re still an option, even if your child may balk at the idea of a phone that’s “not as cool as a smartphone.” However, if we’re talking about introducing mobile devices and the mobile internet to our children in steps, the flip phone remains in the mix.  

Some are just phones and nothing else, while other models can offer more functionality like cameras and slide-out keyboards for texting. And in keeping with the theme here, you’ll want to consider your options so you can pick the phone that has the features you want (and don’t want) for your child. 

Ease into that first smartphone 

Despite what your younger tween or pre-teen might think, there’s no rush to get that first smartphone. And you know it too. You have time. Time to take eventual smartphone ownership in steps, with a device that keeps you in touch and that still works great for your child.  

By easing into that first smartphone, you’ll find opportunities where you can monitor and guide their internet usage. You’ll also find plenty of moments to help your child start forming healthy habits around device ownership and care, etiquette, and safety online. In all, this approach can help you build a body of experience that will come in handy when that big day finally comes—first smartphone day. 

The post Smartphone Alternatives: Ease Your Way into Your Child’s First Phone appeared first on McAfee Blog.

JavaScript bugs aplenty in Node.js ecosystem – found automatically

By Paul Ducklin
How to get the better of bugs in all the possible packages in your supply chain?

How 1-Time Passcodes Became a Corporate Liability

By BrianKrebs

Phishers are enjoying remarkable success using text messages to steal remote access credentials and one-time passcodes from employees at some of the world’s largest technology companies and customer support firms. A recent spate of SMS phishing attacks from one cybercriminal group has spawned a flurry of breach disclosures from affected companies, which are all struggling to combat the same lingering security threat: The ability of scammers to interact directly with employees through their mobile devices.

In mid-June 2022, a flood of SMS phishing messages began targeting employees at commercial staffing firms that provide customer support and outsourcing to thousands of companies. The missives asked users to click a link and log in at a phishing page that mimicked their employer’s Okta authentication page. Those who submitted credentials were then prompted to provide the one-time password needed for multi-factor authentication.

The phishers behind this scheme used newly-registered domains that often included the name of the target company, and sent text messages urging employees to click on links to these domains to view information about a pending change in their work schedule.

The phishing sites leveraged a Telegram instant message bot to forward any submitted credentials in real-time, allowing the attackers to use the phished username, password and one-time code to log in as that employee at the real employer website. But because of the way the bot was configured, it was possible for security researchers to capture the information being sent by victims to the public Telegram server.
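For readers curious how that capture can work in practice: a Telegram bot is driven entirely by one API token, so anyone who recovers the token (for example, from the client-side code of a phishing kit) can call the Bot API on the bot’s behalf. The sketch below simply polls the bot’s pending update queue with Python; the token is a placeholder, and whether any victim data is actually visible this way depends on how the particular bot and chat are configured, which is the caveat noted above.

    import requests

    # Placeholder token of the sort a phishing kit might embed; not a real bot.
    BOT_TOKEN = "1234567890:AAExampleTokenOnly"
    API = f"https://api.telegram.org/bot{BOT_TOKEN}"

    # getMe identifies the bot behind the token; getUpdates drains its update queue.
    print(requests.get(f"{API}/getMe", timeout=10).json())

    offset = None
    while True:
        resp = requests.get(
            f"{API}/getUpdates",
            params={"timeout": 30, "offset": offset},
            timeout=40,
        ).json()
        for update in resp.get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message", {})
            # In the 0ktapus case, the bot traffic included phished logins and one-time codes.
            print(msg.get("date"), msg.get("text"))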

This data trove was first reported by security researchers at Singapore-based Group-IB, which dubbed the campaign “0ktapus” for the attackers targeting organizations using identity management tools from Okta.com.

“This case is of interest because despite using low-skill methods it was able to compromise a large number of well-known organizations,” Group-IB wrote. “Furthermore, once the attackers compromised an organization they were quickly able to pivot and launch subsequent supply chain attacks, indicating that the attack was planned carefully in advance.”

It’s not clear how many of these phishing text messages were sent out, but the Telegram bot data reviewed by KrebsOnSecurity shows they generated nearly 10,000 replies over approximately two months of sporadic SMS phishing attacks targeting more than a hundred companies.

A great many responses came from those who were apparently wise to the scheme, as evidenced by the hundreds of hostile replies that included profanity or insults aimed at the phishers: The very first reply recorded in the Telegram bot data came from one such employee, who responded with the username “havefuninjail.”

Still, thousands replied with what appear to be legitimate credentials — many of them including one-time codes needed for multi-factor authentication. On July 20, the attackers turned their sights on internet infrastructure giant Cloudflare.com, and the intercepted credentials show at least three employees fell for the scam.

Image: Cloudflare.com

In a blog post earlier this month, Cloudflare said it detected the account takeovers and that no Cloudflare systems were compromised. Cloudflare said it does not rely on one-time passcodes as a second factor, so there was nothing to provide to the attackers. But Cloudflare said it wanted to call attention to the phishing attacks because they would probably work against most other companies.

“This was a sophisticated attack targeting employees and systems in such a way that we believe most organizations would be likely to be breached,” Cloudflare CEO Matthew Prince wrote. “On July 20, 2022, the Cloudflare Security team received reports of employees receiving legitimate-looking text messages pointing to what appeared to be a Cloudflare Okta login page. The messages began at 2022-07-20 22:50 UTC. Over the course of less than 1 minute, at least 76 employees received text messages on their personal and work phones. Some messages were also sent to the employees family members.”

On three separate occasions, the phishers targeted employees at Twilio.com, a San Francisco-based company that provides services for making and receiving text messages and phone calls. It’s unclear how many Twilio employees received the SMS phishes, but the data suggest at least four Twilio employees responded to a spate of SMS phishing attempts on July 27, Aug. 2, and Aug. 7.

On that last date, Twilio disclosed that on Aug. 4 it became aware of unauthorized access to information related to a limited number of Twilio customer accounts through a sophisticated social engineering attack designed to steal employee credentials.

“This broad based attack against our employee base succeeded in fooling some employees into providing their credentials,” Twilio said. “The attackers then used the stolen credentials to gain access to some of our internal systems, where they were able to access certain customer data.”

That “certain customer data” included information on roughly 1,900 users of the secure messaging app Signal, which relied on Twilio to provide phone number verification services. In its disclosure on the incident, Signal said that with their access to Twilio’s internal tools the attackers were able to re-register those users’ phone numbers to another device.

On Aug. 25, food delivery service DoorDash disclosed that a “sophisticated phishing attack” on a third-party vendor allowed attackers to gain access to some of DoorDash’s internal company tools. DoorDash said intruders stole information on a “small percentage” of users that have since been notified. TechCrunch reported last week that the incident was linked to the same phishing campaign that targeted Twilio.

This phishing gang apparently had great success targeting employees of all the major mobile wireless providers, but most especially T-Mobile. Between July 10 and July 16, dozens of T-Mobile employees fell for the phishing messages and provided their remote access credentials.

“Credential theft continues to be an ongoing issue in our industry as wireless providers are constantly battling bad actors that are focused on finding new ways to pursue illegal activities like this,” T-Mobile said in a statement. “Our tools and teams worked as designed to quickly identify and respond to this large-scale smishing attack earlier this year that targeted many companies. We continue to work to prevent these types of attacks and will continue to evolve and improve our approach.”

This same group saw hundreds of responses from employees at some of the largest customer support and staffing firms, including Teleperformanceusa.com, Sitel.com and Sykes.com. Teleperformance did not respond to requests for comment. KrebsOnSecurity did hear from Christopher Knauer, global chief security officer at Sitel Group, the customer support giant that recently acquired Sykes. Knauer said the attacks leveraged newly-registered domains and asked employees to approve upcoming changes to their work schedules.

Image: Group-IB.

Knauer said the attackers set up the phishing domains just minutes in advance of spamming links to those domains in phony SMS alerts to targeted employees. He said such tactics largely sidestep automated alerts generated by companies that monitor brand names for signs of new phishing domains being registered.

“They were using the domains as soon as they became available,” Knauer said. “The alerting services don’t often let you know until 24 hours after a domain has been registered.”
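Knauer’s point about the 24-hour lag is one reason defenders increasingly watch public certificate transparency (CT) logs alongside newly-registered-domain feeds: most phishing pages obtain a TLS certificate within minutes of going live, and that certificate shows up in CT logs almost immediately. The rough sketch below queries crt.sh’s unofficial JSON interface for fresh certificates whose names contain a brand keyword; the endpoint shape and field names are as commonly documented and should be treated as assumptions, and the keyword is hypothetical.

    import requests
    from datetime import datetime, timedelta, timezone

    BRAND = "examplecorp"   # hypothetical brand keyword to watch for
    CUTOFF = datetime.now(timezone.utc) - timedelta(hours=24)

    # crt.sh's unofficial JSON output: every logged certificate matching the keyword.
    rows = requests.get(
        "https://crt.sh/",
        params={"q": f"%{BRAND}%", "output": "json"},
        timeout=30,
    ).json()

    for row in rows:
        # entry_timestamp records when the certificate hit the CT log.
        seen = datetime.fromisoformat(row["entry_timestamp"]).replace(tzinfo=timezone.utc)
        if seen >= CUTOFF:
            for name in sorted(set(row["name_value"].splitlines())):
                print(seen.isoformat(), name)   # candidate lookalike name for triage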

On July 28 and again on Aug. 7, several employees at email delivery firm Mailchimp provided their remote access credentials to this phishing group. According to an Aug. 12 blog post, the attackers used their access to Mailchimp employee accounts to steal data from 214 customers involved in cryptocurrency and finance.

On Aug. 15, the hosting company DigitalOcean published a blog post saying it had severed ties with MailChimp after its Mailchimp account was compromised. DigitalOcean said the MailChimp incident resulted in a “very small number” of DigitalOcean customers experiencing attempted compromises of their accounts through password resets.

According to interviews with multiple companies hit by the group, the attackers are mostly interested in stealing access to cryptocurrency, and to companies that manage communications with people interested in cryptocurrency investing. In an Aug. 3 blog post from email and SMS marketing firm Klaviyo.com, the company’s CEO recounted how the phishers gained access to the company’s internal tools, and used that to download information on 38 crypto-related accounts.

A flow chart of the attacks by the SMS phishing group known as 0ktapus and ScatterSwine. Image: Amitai Cohen for Wiz.io. twitter.com/amitaico.

The ubiquity of mobile phones became a lifeline for many companies trying to manage their remote employees throughout the Coronavirus pandemic. But these same mobile devices are fast becoming a liability for organizations that use them for phishable forms of multi-factor authentication, such as one-time codes generated by a mobile app or delivered via SMS.

Because as we can see from the success of this phishing group, this type of data extraction is now being massively automated, and employee authentication compromises can quickly lead to security and privacy risks for the employer’s partners or for anyone in their supply chain.

Unfortunately, a great many companies still rely on SMS for employee multi-factor authentication. According to a report this year from Okta, 47 percent of workforce customers deploy SMS and voice factors for multi-factor authentication. That’s down from 53 percent that did so in 2018, Okta found.

Some companies (like Knauer’s Sitel) have taken to requiring that all remote access to internal networks be managed through work-issued laptops and/or mobile devices, which are loaded with custom profiles that can’t be accessed through other devices.

Others are moving away from SMS and one-time code apps and toward requiring employees to use physical FIDO multi-factor authentication devices such as security keys, which can neutralize phishing attacks because any stolen credentials can’t be used unless the phishers also have physical access to the user’s security key or mobile device.
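The phishing resistance comes from the protocol itself: with WebAuthn/FIDO2, the browser records the origin the user was actually on, and the authenticator signs over a hash of the relying party ID, so an assertion generated on a lookalike domain fails verification at the real site. The fragment below is a deliberately simplified sketch of just those two checks (the constants are hypothetical); a real WebAuthn verification also validates the challenge, flags, counter, and signature.

    import hashlib
    import json

    EXPECTED_ORIGIN = "https://sso.example-employer.com"   # hypothetical SSO origin
    EXPECTED_RP_ID = "example-employer.com"                # hypothetical relying party ID

    def origin_checks_pass(client_data_json: bytes, rp_id_hash: bytes) -> bool:
        """Reject assertions produced on any origin other than the real one."""
        client_data = json.loads(client_data_json)
        if client_data.get("origin") != EXPECTED_ORIGIN:
            return False   # the key was exercised on some other (lookalike) site
        return rp_id_hash == hashlib.sha256(EXPECTED_RP_ID.encode()).digest()

    # A phishing page at example-employer.co produces values that fail both checks.
    phished = json.dumps({"type": "webauthn.get", "origin": "https://sso.example-employer.co"}).encode()
    print(origin_checks_pass(phished, hashlib.sha256(b"example-employer.co").digest()))   # False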

This came in handy for Twitter, which announced last year that it was moving all of its employees to using security keys, and/or biometric authentication via their mobile device. The phishers’ Telegram bot reported that on June 16, 2022, five employees at Twitter gave away their work credentials. In response to questions from KrebsOnSecurity, Twitter confirmed several employees were relieved of their employee usernames and passwords, but that its security key requirement prevented the phishers from abusing that information.

Twitter accelerated its plans to improve employee authentication following the July 2020 security incident, wherein several employees were phished and relieved of credentials for Twitter’s internal tools. In that intrusion, the attackers used Twitter’s tools to hijack accounts for some of the world’s most recognizable public figures, executives and celebrities — forcing those accounts to tweet out links to bitcoin scams.

“Security keys can differentiate legitimate sites from malicious ones and block phishing attempts that SMS 2FA or one-time password (OTP) verification codes would not,” Twitter said in an Oct. 2021 post about the change. “To deploy security keys internally at Twitter, we migrated from a variety of phishable 2FA methods to using security keys as our only supported 2FA method on internal systems.”

Update, 6:02 p.m. ET: Clarified that Cloudflare does not rely on TOTP (one-time multi-factor authentication codes) as a second factor for employee authentication.

Hackers Using Fake DDoS Protection Pages to Distribute Malware

By Ravie Lakshmanan
WordPress sites are being hacked to display fraudulent Cloudflare DDoS protection pages that lead to the delivery of malware such as NetSupport RAT and Raccoon Stealer. "A recent surge in JavaScript injections targeting WordPress sites has resulted in fake DDoS prevent prompts which lead victims to download remote access trojan malware," Sucuri's Ben Martin said in a write-up published last week

A Parent’s Guide To The Metaverse – Part One

By Alex Merton-McCann

We’ve all heard about the Metaverse. And there’s no doubt it has certainly captured the attention of the world’s biggest companies: Facebook has changed its name to Meta, Hyundai has partnered up with Roblox to offer virtual test drives, Nike has bought a virtual shoe company and Coca-Cola is selling NFTs (Non-Fungible Tokens – think digital assets) there too.  

But if you are confused about exactly what this all means and most importantly, what the metaverse actually is, then you are not alone. I’m putting together a 2-part series for parents that will help us get a handle on exactly what this new digital frontier promises and what we need to know to keep our kids safe. It will also ensure we don’t feel like dinosaurs! So, let’s get started. 

What is this Metaverse? 

I think the best way of describing the Metaverse is that it’s a network of online 3D virtual worlds that mimic the real world. Once users have chosen their digital avatar, they can meet people, play games, do business, design fashion items, buy real estate, attend events, earn money, rear a pet – in fact, almost anything they can do in the ‘real’ world! And of course, all transactions are via cryptocurrencies. 

If you are an avid Science Fiction reader, then you may have already come across the term in the 1992 novel ‘Snow Crash’ by Neal Stephenson. In the book, Stephenson envisions a virtual reality-based evolution of the internet in which his characters use digital avatars of themselves to explore the online world. Sounds eerily familiar, doesn’t it?  

Still confused? Check out either the book or Steven Spielberg’s movie adaptation of Ernest Cline’s Ready Player One. Set in 2045, the book tells the story of people living in a war-ravaged world on the brink of collapse who turn to the OASIS, a massively multiplayer online simulation game that has its own virtual world and currency. In the OASIS, they engage with each other, shop, play games and are transported to different locations.  

How Do You Access The Metaverse? 

The best and most immersive way to access the metaverse is with a Virtual Reality (VR) headset and your internet connection, of course. VR headsets completely take over users’ vision and replace the outside world with a virtual one. Now, this may be a game or a movie, but VR headsets also have their own set of apps which, once downloaded, allow users to meditate, learn piano, work out at the gym or even attend a live concert in the metaverse!  

Now, access to the Metaverse is not limited to those who own expensive headsets. Anyone with an internet-connected computer or smartphone can also have a metaverse experience. Of course, it won’t be as intense or immersive as the VR headset experience, but it’s still a commonly used route into the metaverse. Some of these ‘worlds’ suggest users can access them using smartphones; however, experienced users don’t think this is a good idea, as phones don’t have the computational power needed to explore the metaverse properly. 

Since some of the most popular metaverse worlds can be accessed using your computer, why not check out Decentraland, The Sandbox, Somnium or even Second Life? In most of these worlds, users don’t have to create an account or spend money to start exploring; however, if you want the full experience, then you’ll need to do so.  

How Much Does It Cost? 

Entering the metaverse doesn’t cost anything, just like going on the internet doesn’t cost anything – apart from your internet connection and hardware, of course! And don’t forget that if you want a truly immersive 3D experience, then you might want to consider investing in a VR headset. 

But, if you do want to access some of the features of the metaverse and invest in some virtual real estate or perhaps buy yourself a Gucci handbag, then you will need to put your hand into your virtual pocket and spend some of your virtual dollars. But the currency you will need depends entirely on the metaverse you are in. 

Decentraland’s currency MANA is considered to be the most commonly used currency in the metaverse and also one of the best to invest in, according to some experts. MANA can be used to buy land, purchase avatars, names, wearables, and other items in the Decentraland marketplace. 

The Sandbox has its own currency, SAND, which is likewise used to buy items from The Sandbox marketplace. It is the second most popular metaverse currency; in general, be prepared to buy the currency of whichever world you choose to spend your time in. 

Now, I totally appreciate that the whole concept of the Metaverse is a lot to get your head around. But if you have a tribe of kids, then chances are they are going to want to be part of it so don’t put it in the too-hard basket. Take some time to get your head around it: do some more reading, talk to your friends about it and check out some of the metaverses that you can access from your PC. Nothing beats experiencing it for yourself! 

In Part 2, I will be sharing my top tips and strategies to help us, parents, successfully guide our kids through the challenges and risks of the metaverse. So watch out for that. 

Till next time – keep researching! 

 

Alex x 

The post A Parent’s Guide To The Metaverse – Part One appeared first on McAfee Blog.

Sounding the Alarm on Emergency Alert System Flaws

By BrianKrebs

The Department of Homeland Security (DHS) is urging states and localities to beef up security around proprietary devices that connect to the Emergency Alert System — a national public warning system used to deliver important emergency information, such as severe weather and AMBER alerts. The DHS warning came in advance of a workshop to be held this weekend at the DEFCON security conference in Las Vegas, where a security researcher is slated to demonstrate multiple weaknesses in the nationwide alert system.

A Digital Alert Systems EAS encoder/decoder that Pyle said he acquired off eBay in 2019. It had the username and password for the system printed on the machine.

The DHS warning was prompted by security researcher Ken Pyle, a partner at security firm Cybir. Pyle said he started acquiring old EAS equipment off of eBay in 2019, and that he quickly identified a number of serious security vulnerabilities in a device that is broadly used by states and localities to encode and decode EAS alert signals.

“I found all kinds of problems back then, and reported it to the DHS, FBI and the manufacturer,” Pyle said in an interview with KrebsOnSecurity. “But nothing ever happened. I decided I wasn’t going to tell anyone about it yet because I wanted to give people time to fix it.”

Pyle said he took up the research again in earnest after an angry mob stormed the U.S. Capitol on Jan. 6, 2021.

“I was sitting there thinking, ‘Holy shit, someone could start a civil war with this thing,’” Pyle recalled. “I went back to see if this was still a problem, and it turns out it’s still a very big problem. So I decided that unless someone actually makes this public and talks about it, clearly nothing is going to be done about it.”

The EAS encoder/decoder devices Pyle acquired were made by Lyndonville, NY-based Digital Alert Systems (formerly Monroe Electronics, Inc.), which issued a security advisory this month saying it released patches in 2019 to fix the flaws reported by Pyle, but that some customers are still running outdated versions of the device’s firmware. That may be because the patches were included in version 4 of the firmware for the EAS devices, and many older models apparently do not support the new software.

“The vulnerabilities identified present a potentially serious risk, and we believe both were addressed in software updates issued beginning Oct 2019,” EAS said in a written statement. “We also provided attribution for the researcher’s responsible disclosure, allowing us to rectify the matters before making any public statements. We are aware that some users have not taken corrective actions and updated their software and should immediately take action to update the latest software version to ensure they are not at risk. Anything lower than version 4.1 should be updated immediately. On July 20, 2022, the researcher referred to other potential issues, and we trust the researcher will provide more detail. We will evaluate and work to issue any necessary mitigations as quickly as possible.”

But Pyle said a great many EAS stakeholders are still ignoring basic advice from the manufacturer, such as changing default passwords and placing the devices behind a firewall, not directly exposing them to the Internet, and restricting access only to trusted hosts and networks.

Pyle, in a selfie that is heavily redacted because the EAS device behind him had its user credentials printed on the lid.

Pyle said the biggest threat to the security of the EAS is that an attacker would only need to compromise a single EAS station to send out alerts locally that can be picked up by other EAS systems and retransmitted across the nation.

“The process for alerts is automated in most cases, hence, obtaining access to a device will allow you to pivot around,” he said. “There’s no centralized control of the EAS because these devices are designed such that someone locally can issue an alert, but there’s no central control over whether I am the one person who can send or whatever. If you are a local operator, you can send out nationwide alerts. That’s how easy it is to do this.”

One of the Digital Alert Systems devices Pyle sourced from an electronics recycler earlier this year was non-functioning, but whoever discarded it neglected to wipe the hard drive embedded in the machine. Pyle soon discovered the device contained the private cryptographic keys and other credentials needed to send alerts through Comcast, the nation’s third-largest cable company.

“I can issue and create my own alert here, which has all the valid checks or whatever for being a real alert station,” Pyle said in an interview earlier this month. “I can create a message that will start propagating through the EAS.”

Comcast told KrebsOnSecurity that “a third-party device used to deliver EAS alerts was lost in transit by a trusted shipping provider between two Comcast locations and subsequently obtained by a cybersecurity researcher.

“We’ve conducted a thorough investigation of this matter and have determined that no customer data, and no sensitive Comcast data, were compromised,” Comcast spokesperson David McGuire said.

The company said it also confirmed that the information included on the device can no longer be used to send false messages to Comcast customers or used to compromise devices within Comcast’s network, including EAS devices.

“We are taking steps to further ensure secure transfer of such devices going forward,” McGuire said. “Separately, we have conducted a thorough audit of all EAS devices on our network and confirmed that they are updated with currently available patches and are therefore not vulnerable to recently reported security issues. We’re grateful for the responsible disclosure and to the security research community for continuing to engage and share information with our teams to make our products and technologies ever more secure. Mr. Pyle informed us promptly of his research and worked with us as we took steps to validate his findings and ensure the security of our systems.”

The user interface for an EAS device.

Unauthorized EAS broadcast alerts have happened enough that there is a chronicle of EAS compromises over at fandom.com. Thankfully, most of these incidents have involved fairly obvious hoaxes.

According to the EAS wiki, in February 2013, hackers broke into the EAS networks in Great Falls, Mt. and Marquette, Mich. to broadcast an alert that zombies had risen from their graves in several counties. In Feb. 2017, an EAS station in Indiana also was hacked, with the intruders playing the same “zombies and dead bodies” audio from the 2013 incidents.

“On February 20 and February 21, 2020, Wave Broadband’s EASyCAP equipment was hacked due to the equipment’s default password not being changed,” the Wiki states. “Four alerts were broadcasted, two of which consisted of a Radiological Hazard Warning and a Required Monthly Test playing parts of the Hip Hop song Hot by artist Young Thug.”

In January 2018, Hawaii sent out an alert to cell phones, televisions and radios, warning everyone in the state that a missile was headed their way. It took 38 minutes for Hawaii to let people know the alert was a misfire, and that a draft alert was inadvertently sent. The news video clip below about the 2018 event in Hawaii does a good job of walking through how the EAS works.

Back to School: Tech Savvy vs. Cyber Savvy

By McAfee

The first day of school is right around the corner. The whole family is gearing up for a return to the routine: waking up to alarm clocks at dawn, rushed mornings, learning all day, and after-school activities and homework all night. 

Even though everyone is in a frenzied state, now is a great time to slow down and discuss important topics that may arise during the school year. Parents and guardians know their children are tech savvy just by watching their thumbs fly across keyboards; however, that doesn’t necessarily mean that they’re cyber savvy. 

To make sure we’re all on the same page, here are our definitions of tech savvy and cyber savvy: 

  • Tech savvy. Digital natives (millennials, Gen Z, and now Generation Alpha) often develop their tech savviness at a young age. For example, using touchscreens, sending electronic correspondence, and troubleshooting simple technical glitches are second nature to them because they’ve been practicing these skills for so long, often every day. 
  • Cyber savvy. Cyber savviness extends beyond knowing how to use connected devices. It means knowing how to use them safely: how to intelligently dodge online hazards, the best ways to protect devices from cybercriminals, how to guard online information, and how to spot the signs that a device or information may be compromised. 

According to McAfee research, children cited that their parents are best suited to teach them about being safe online when compared to their teachers and online resources. Here are common scenarios your child, tween, or teen will likely encounter during the school year, plus some tips and tools you can share to make sure they are safe online. 

Phishing 

It’s now common practice for school systems to communicate with students and their guardians over email, whether that’s through a school-issued email address or a personal one. Your student should know that phishers often impersonate institutions with authority, such as the IRS, banks, and in their case, a school. Put your children on alert to the most common signs of a cybercriminal phishing for valuable personally identifiable information (PII). These signs include: 

  • Typos or poor grammar 
  • Severe consequences for seemingly insignificant reasons 
  • Requests for a response in a very short timeframe 
  • Asking for information the school system should already have or for information they shouldn’t need. For example, schools have a record of their students’ Social Security Numbers, full names, and addresses, but they would never need to know account passwords. 

If your child ever receives a suspicious-looking or -sounding email, they should start an entirely new email chain with the supposed sender and confirm that they sent the message. Do not reply to the suspicious email and don’t click on any links within the message.  

An excellent nugget of wisdom you can impart is the following: Never divulge your Social Security Number over online channels and never give out passwords. If someone needs your SSN for official purposes, they can follow up in a method other than email. And no one ever needs to know your password. 

Social Media Engineering 

With a return to the school year routine comes a flood of back-to-school social media posts and catching up electronically with friends. If your child owns a social media profile (or several!), alert them to the various social media engineering tactics that are common to each platform. Similar to phishing schemes, social media scams are usually “time sensitive” and attempt to inspire strong emotions in readers, whether that’s excitement, fear, sadness, or anger. 

Alert your child that not everything they read on social media is true. Photos can be doctored and stories can be fabricated in order to prompt people to click on links to “donate” or “sign a petition.” You don’t have to discourage your child from taking a stand for causes they believe in; rather, urge them to follow up through official channels. For instance, if they see a social media post about contributing to save the rainforest, instead of donating through the post, contact a well-known organization, such as the World Wildlife Fund and inquire how to make a difference. 

School Device and BYOD Policies 

More and more school systems are entrusting school-issued connected devices to students to use in the classroom and to bring home. Other districts have BYOD (bring your own device) policies where students can use personal family devices for school activities. In either case, device security is key to keeping their information safe and maintaining the integrity of the school system’s network. No family wants to be the weakest link in the school system, or the one responsible for a town-wide education network breach. 

Here are three ways to protect any device connected to the school network: 

  1. Lock screen protection. Biometric security measures (like facial recognition or fingerprint scanning) and passcode-locked devices are key in the case of lost or stolen devices.  
  2. Password managers. It can be a lot to ask of an adult to remember all their passwords. But expecting a young person to memorize unique and complicated passwords to all their accounts could lead to weak, reused passwords or poor password protection methods, such as writing them down. A password manager, like McAfee True Key, makes it so you only have to remember one password ever again! The software protects the rest. 
  3. VPNs. VPNs (virtual private networks) are key to protecting your network when you’re surfing on free public Wi-Fi or on networks where you’re unsure of the extent of their protection. McAfee Secure VPN protects your network with bank-grade encryption, is fast and easy to use, and never tracks your online movements so you can be confident in your security and privacy. 

Gear Up for a Safe School Year 

These conversations are great to start at the dinner table or on long, boring car rides where you’re most likely to get your child’s undivided attention. Don’t focus so much on the fearful consequences or punishment that could result from poor cyberhabits. Instead, emphasize how easy these steps and tools are to use, so it would be silly not to follow or use them. 

The post Back to School: Tech Savvy vs. Cyber Savvy appeared first on McAfee Blog.

The Security Pros and Cons of Using Email Aliases

By BrianKrebs

One way to tame your email inbox is to get in the habit of using unique email aliases when signing up for new accounts online. Adding a “+” character after the username portion of your email address — followed by a notation specific to the site you’re signing up at — lets you create an infinite number of unique email addresses tied to the same account. Aliases can help users detect breaches and fight spam. But not all websites allow aliases, and they can complicate account recovery. Here’s a look at the pros and cons of adopting a unique alias for each website.

What is an email alias? When you sign up at a site that requires an email address, think of a word or phrase that represents that site for you, and then add that prefaced by a “+” sign just to the left of the “@” sign in your email address. For instance, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to my inbox and create a corresponding folder called “Example,” along with a new filter that sends any email addressed to that alias to the Example folder.
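Mechanically there is nothing to it. Here is a small sketch of building a plus-alias and pulling the tag back out of an incoming address; the helper names are just for illustration.

    def alias_for(base_address: str, site: str) -> str:
        """Build a plus-alias such as krebsonsecurity+example@gmail.com."""
        local, domain = base_address.split("@", 1)
        tag = "".join(ch for ch in site.lower() if ch.isalnum())
        return f"{local}+{tag}@{domain}"

    def alias_tag(address: str):
        """Return the +tag of an address, or None if it has no alias."""
        local = address.split("@", 1)[0]
        return local.split("+", 1)[1] if "+" in local else None

    print(alias_for("krebsonsecurity@gmail.com", "example"))    # krebsonsecurity+example@gmail.com
    print(alias_tag("krebsonsecurity+example@gmail.com"))       # example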

Importantly, you don’t ever use this alias anywhere else. That way, if anyone other than example.com starts sending email to it, it is reasonable to assume that example.com either shared your address with others or that it got hacked and relieved of that information. Indeed, security-minded readers have often alerted KrebsOnSecurity about spam to specific aliases that suggested a breach at some website, and usually they were right, even if the company that got hacked didn’t realize it at the time.

Alex Holden, founder of the Milwaukee-based cybersecurity consultancy Hold Security, said many threat actors will scrub their distribution lists of any aliases because there is a perception that these users are more security- and privacy-focused than normal users, and are thus more likely to report spam to their aliased addresses.

Holden said freshly-hacked databases also are often scrubbed of aliases before being sold in the underground, meaning the hackers will simply remove the aliased portion of the email address.

“I can tell you that certain threat groups have rules on ‘+*@’ email address deletion,” Holden said. “We just got the largest credentials cache ever — 1 billion new credentials to us — and most of that data is altered, with aliases removed. Modifying credential data for some threat groups is normal. They spend time trying to understand the database structure and removing any red flags.”

According to the breach tracking site HaveIBeenPwned.com, only about .03 percent of the breached records in circulation today include an alias.

Email aliases are rare enough that seeing just a few email addresses with the same alias in a breached database can make it trivial to identify which company likely got hacked and leaked said database. That’s because the most common aliases are simply the name of the website where the signup takes place, or some abbreviation or shorthand for it.

Hence, for a given database, if there are more than a handful of email addresses that have the same alias, the chances are good that whatever company or website corresponds to that alias has been hacked.
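That inference is easy to automate: pull the tag out of every plus-aliased address in a leaked list and count them, and the tag that dominates points at the likely source. A toy sketch with made-up addresses:

    from collections import Counter
    from email.utils import parseaddr

    def alias_tags(addresses):
        """Yield the lowercased +tag from each address that has one."""
        for raw in addresses:
            local = parseaddr(raw)[1].split("@", 1)[0]
            if "+" in local:
                yield local.split("+", 1)[1].lower()

    # Hypothetical dump: most aliases in a single breach name the same site.
    leaked = [
        "alice+shopco@example.org",
        "bob+shopco@example.net",
        "carol+news@example.com",
        "dave+shopco@example.org",
    ]
    print(Counter(alias_tags(leaked)).most_common(1))   # [('shopco', 3)]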

That might explain the actions of Allekabels, a large Dutch electronics web shop that suffered a data breach in 2021. Allekabels said a former employee had stolen data on 5,000 customers, and that those customers were then informed about the data breach by Allekabels.

But Dutch publication RTL Nieuws said it obtained a copy of the Allekabels user database from a hacker who was selling information on 3.6 million customers at the time, and found that the 5,000 number cited by the retailer corresponded to the number of customers who’d signed up using an alias. In essence, RTL argued, the company had notified only those most likely to notice and complain that their aliased addresses were suddenly receiving spam.

“RTL Nieuws has called more than thirty people from the database to check the leaked data,” the publication explained. “The customers with such a unique email address have all received a message from Allekabels that their data has been leaked – according to Allekabels they all happened to be among the 5000 data that this ex-employee had stolen.”

HaveIBeenPwned founder Troy Hunt arrived at the conclusion that aliases account for about .03 percent of registered email addresses by studying the data leaked in the 2013 breach at Adobe, which affected at least 38 million users. Allekabels’s ratio of aliased users was considerably higher than Adobe’s — .14 percent — but then again, European Internet users tend to be more privacy-conscious.

While overall adoption of email aliases is still quite low, that may be changing. Apple customers who use iCloud to sign up for new accounts online automatically are prompted to use Apple’s Hide My Email feature, which creates the account using a unique email address that automatically forwards to a personal inbox.

What are the downsides to using email aliases, apart from the hassle of setting them up? The biggest downer is that many sites won’t let you use a “+” sign in your email address, even though this functionality is clearly spelled out in the email standard.

Also, if you use aliases, it helps to have a reliable mnemonic to remember the alias used for each account (this is a non-issue if you create a new folder or rule for each alias). That’s because knowing the email address for an account is generally a prerequisite for resetting the account’s password, and if you can’t remember the alias you added way back when you signed up, you may have limited options for recovering access to that account if you at some point forget your password.

What about you, Dear Reader? Do you rely on email aliases? If so, have they been useful? Did I neglect to mention any pros or cons? Feel free to sound off in the comments below.

Scammers Sent Uber to Take Elderly Lady to the Bank

By BrianKrebs

Email scammers sent an Uber to the home of an 80-year-old woman who responded to a well-timed email scam, in a bid to make sure she went to the bank and wired money to the fraudsters.  In this case, the woman figured out she was being scammed before embarking for the bank, but her story is a chilling reminder of how far crooks will go these days to rip people off.

Travis Hardaway is a former music teacher turned app developer from Towson, Md. Hardaway said his mother last month replied to an email she received regarding an appliance installation from BestBuy/GeekSquad. Hardaway said the timing of the scam email couldn’t have been worse: His mom’s dishwasher had just died, and she’d paid to have a new one delivered and installed.

“I think that’s where she got confused, because she thought the email was about her dishwasher installation,” Hardaway told KrebsOnSecurity.

Hardaway said his mom initiated a call to the phone number listed in the phony BestBuy email, and that the scammers told her she owed $160 for the installation, which seemed right at the time. Then the scammers asked her to install remote administration software on her computer so that they could control the machine from afar and assist her in making the payment.

After she logged into her bank and savings accounts with scammers watching her screen, the fraudster on the phone claimed that instead of pulling $160 out of her account, they had accidentally transferred $160,000 to her account. They said they needed her help to make sure the money was “returned.”

“They took control of her screen and said they had accidentally transferred $160,000 into her account,” Hardaway said. “The person on the phone told her he was going to lose his job over this transfer error, that he didn’t know what to do. So they sent her some information about where to wire the money, and asked her to go to the bank. But she told them, ‘I don’t drive,’ and they told her, “No problem, we’re sending an Uber to come help you to the bank.'”

Hardaway said he was out of town when all this happened, and that thankfully his mom eventually grew exasperated and gave up trying to help the scammers.

“They told her they were sending an Uber to pick her up and that it was on its way,” Hardaway said. “I don’t know if the Uber ever got there. But my mom went over to the neighbor’s house and they saw it for what it was — a scam.”

Hardaway said he has since wiped her computer, reinstalled the operating system and changed her passwords. But he says the incident has left his mom rattled.

“She’s really second-guessing herself now,” Hardaway said. “She’s not computer-savvy, and just moved down here from Boston during COVID to be near us, but she’s living by herself and feeling isolated and vulnerable, and stuff like this doesn’t help.”

According to the Federal Bureau of Investigation (FBI), seniors are often targeted because they tend to be trusting and polite. More importantly, they also usually have financial savings, own a home, and have good credit—all of which make them attractive to scammers.

“Additionally, seniors may be less inclined to report fraud because they don’t know how, or they may be too ashamed of having been scammed,” the FBI warned in May. “They might also be concerned that their relatives will lose confidence in their abilities to manage their own financial affairs. And when an elderly victim does report a crime, they may be unable to supply detailed information to investigators.”

In 2021, more than 92,000 victims over the age of 60 reported losses of $1.7 billion to the FBI’s Internet Crime Complaint Center (IC3). The FBI says that represents a 74 percent increase in losses over losses reported in 2020.

The abuse of ride-sharing services to scam the elderly is not exactly new. Authorities in Tampa, Fla. say they’re investigating an incident from December 2021 where fraudsters who’d stolen $700,000 from elderly grandparents used Uber rides to pick up bundles of cash from their victims.

Researchers Warn of Increase in Phishing Attacks Using Decentralized IPFS Network

By Ravie Lakshmanan
The decentralized file system solution known as IPFS is becoming the new "hotbed" for hosting phishing sites, researchers have warned. Cybersecurity firm Trustwave SpiderLabs, which disclosed specifics of the spam campaigns, said it identified no less than 3,000 emails containing IPFS phishing URLs as an attack vector in the last three months. IPFS, short for InterPlanetary File System, is a

Breach Exposes Users of Microleaves Proxy Service

By BrianKrebs

Microleaves, a ten-year-old proxy service that lets customers route their web traffic through millions of Microsoft Windows computers, recently fixed a vulnerability in their website that exposed their entire user database. Microleaves claims its proxy software is installed with user consent, but data exposed in the breach shows the service has a lengthy history of being supplied with new proxies by affiliates incentivized to distribute the software any which way they can — such as by secretly bundling it with other titles.

The Microleaves proxy service, which is in the process of being rebranded to Shifter[.]io.

Launched in 2013, Microleaves is a service that allows customers to route their Internet traffic through PCs in virtually any country or city around the globe. Microleaves works by changing each customer’s Internet Protocol (IP) address every five to ten minutes.

The service, which accepts PayPal, Bitcoin and all major credit cards, is aimed primarily at enterprises engaged in repetitive, automated activity that often results in an IP address being temporarily blocked — such as data scraping, or mass-creating new accounts at some service online.

In response to a report about the data exposure from KrebsOnSecurity, Microleaves said it was grateful for being notified about a “very serious issue regarding our customer information.”

Abhishek Gupta is the PR and marketing manager for Microleaves, which he said is in the process of being rebranded to “Shifter.io.” Gupta said the report qualified as a “medium” severity security issue in Shifter’s brand new bug bounty program (the site makes no mention of a bug bounty), which he said offers up to $2,000 for reporting data exposure issues like the one they just fixed. KrebsOnSecurity declined the offer and requested that Shifter donate the amount to the Electronic Frontier Foundation (EFF), a digital rights group.

From its inception nearly a decade ago, Microleaves has claimed to lease between 20-30 million IPs via its service at any time. Riley Kilmer, co-founder of the proxy-tracking service Spur.us, said that 20-30 million number might be accurate for Shifter if measured across a six-month time frame. Currently, Spur is tracking roughly a quarter-million proxies associated with Microleaves/Shifter each day, with a high rate of churn in IPs.

Early on, this rather large volume of IP addresses led many to speculate that Microleaves was just a botnet which was being resold as a commercial proxy service.

Proxy traffic related to top Microleaves users, as exposed by the website’s API.

The very first discussion thread started by the new user Microleaves on the forum BlackHatWorld in 2013 sought forum members who could help test and grow the proxy network. At the time, the Microleaves user said their proxy network had 150,000 IPs globally, and was growing quickly.

One of BlackHatWorld’s moderators asked the administrator of the forum to review the Microleaves post.

“User states has 150k proxies,” the forum skeptic wrote. “No seller on BHW has 150k working daily proxies none of us do. Which hints at a possible BOTNET. That’s the only way you will get 150k.”

Microleaves has long been classified by antivirus companies as adware or as a “potentially unwanted program” (PUP), the euphemism that antivirus companies use to describe executable files that get installed with ambiguous consent at best, and are often part of a bundle of software tied to some “free” download. Security vendor Kaspersky flags the Microleaves family of software as a trojan horse program that commandeers the user’s Internet connection as a proxy without notifying the user.

“While working, these Trojans pose as Microsoft Windows Update,” Kaspersky wrote.

In a February 2014 post to BlackHatWorld, Microleaves announced that its sister service — reverseproxies[.]com — was now offering an “Auto CAPTCHA Solving Service,” which automates the solving of those squiggly and sometimes frustrating puzzles that many websites use to distinguish bots from real visitors. The CAPTCHA service was offered as an add-on to the Microleaves proxy service, and ranged in price from $20 for a 2-day trial to $320 for solving up to 80 captchas simultaneously.

“We break normal Recaptcha with 60-90% success rate, recaptcha with blobs 30% success, and 500+ other captcha,” Microleaves wrote. “As you know all success rate on recaptcha depends very much on good proxies that are fresh and not spammed!”

WHO IS ACIDUT?

The exposed Microleaves user database shows that the first user created on the service — username “admin” — used the email address alex.iulian@aol.com. A search on that email address in Constella Intelligence, a service that tracks breached data, reveals it was used to create an account at the link shortening service bit.ly under the name Alexandru Florea, and the username “Acidut.” [Full disclosure: Constella is currently an advertiser on this website].

According to the cyber intelligence company Intel 471, a user named Acidut with the email address iulyan87_4u@gmail.com had an active presence on almost a dozen shadowy money-making and cybercrime forums from 2010 to 2017, including BlackHatWorld, Carder[.]pro, Hackforums, OpenSC, and CPAElites.

The user Microleaves (later “Shifter.io”) advertised on BlackHatWorld the sale of 31 million residential IPs for use as proxies, in late 2013. The same account continues to sell subscriptions to Shifter.io.

In a 2011 post on Hackforums, Acidut said they were building a botnet using an “exploit kit,” a set of browser exploits made to be stitched into hacked websites and foist malware on visitors. Acidut claimed their exploit kit was generating 3,000 to 5,000 new bots each day. OpenSC was hacked at one point, and its private messages show Acidut purchased a license from Exmanoize, the handle used by the creator of the Eleonore Exploit Kit.

By November 2013, Acidut was advertising the sale of “26 million SOCKS residential proxies.” In a March 2016 post to CPAElites, Acidut said they had a worthwhile offer for people involved in pay-per-install or “PPI” schemes, which match criminal gangs who pay for malware installs with enterprising hackers looking to sell access to compromised PCs and websites.

Because pay-per-install affiliate schemes rarely impose restrictions on how the software can be installed, such programs can be appealing for cybercriminals who already control large collections of hacked machines and/or compromised websites. Indeed, Acidut went a step further, adding that their program could be quietly and invisibly nested inside of other programs.

“For those of you who are doing PPI I have a global offer that you can bundle to your installer,” Acidut wrote. “I am looking for many installs for an app that will generate website visits. The installer has a silence version which you can use inside your installer. I am looking to buy as many daily installs as possible worldwide, except China.”

Asked about the source of their proxies in 2014, the Microleaves user responded that it was “something related to a PPI network. I can’t say more and I won’t get into details.”

Acidut authored a similar message on the forum BlackHatWorld in 2013, where they encouraged users to contact them on Skype at the username “nevo.julian.” That same Skype contact address was listed prominently on the Microleaves homepage up until about a week ago when KrebsOnSecurity first reached out to the company.

ONLINE[.]IO (NOW MERCIFULLY OFFLINE)

There is a Facebook profile for an Alexandru Iulian Florea from Constanta, Romania, whose username on the social media network is Acidut. Prior to KrebsOnSecurity alerting Shifter of its data breach, the Acidut profile page associated Florea with the websites microleaves.com, shrooms.io, leftclick[.]io, and online[.]io. Mr. Florea did not respond to multiple requests for comment, and his Facebook page no longer mentions these domains.

Leftclick and online[.]io emerged as subsidiaries of Microleaves between 2017 and 2018. According to a help wanted ad posted in 2018 for a developer position at online[.]io, the company’s services were brazenly pitched to investors as “a cybersecurity and privacy tool kit, offering extensive protection using advanced adblocking, anti-tracking systems, malware protection, and revolutionary VPN access based on residential IPs.”

A teaser from Irish Tech News.

“Online[.]io is developing the first fully decentralized peer-to-peer networking technology and revolutionizing the browsing experience by making it faster, ad free, more reliable, secure and non-trackable, thus freeing the Internet from annoying ads, malware, and trackers,” reads the rest of that help wanted ad.

Microleaves CEO Alexandru Florea gave an “interview” to the website Irishtechnews.ie in 2018, in which he explained how Online[.]io (OIO) was going to upend the online advertising and security industries with its initial coin offering (ICO). The word interview is in air quotes because the following statements by Florea deserved some serious pushback by the interviewer.

“Online[.]io solution, developed using the Ethereum blockchain, aims at disrupting the digital advertising market valued at more than $1 trillion USD,” Alexandru enthused. “By staking OIO tokens and implementing our solution, the website operators will be able to access a new non-invasive revenue stream, which capitalizes on time spent by users online.”

“At the same time, internet users who stake OIO tokens will have the opportunity to monetize on the time spent online by themselves and their peers on the World Wide Web,” he continued. “The time spent by users online will lead to ICE tokens being mined, which in turn can be used in the dedicated merchant system or traded on exchanges and consequently changed to fiat.”

Translation: If you install our proxy bot/CAPTCHA-solver/ad software on your computer — or as an exploit kit on your website — we’ll make millions hijacking ads and you will be rewarded with heaps of soon-to-be-worthless shitcoin. Oh, and all your security woes will disappear, too.

It’s unclear how many Internet users and websites willingly agreed to get bombarded with Online[.]io’s annoying ads and search hijackers — and to have their PC turned into a proxy or CAPTCHA-solving zombie for others. But that is exactly what multiple security companies said happened when users encountered online[.]io, which operated using the Microsoft Windows process name of “online-guardian.exe.”

Incredibly, Crunchbase says Online[.]io raised $6 million in funding for an initial coin offering in 2018, based on the plainly ludicrous claims made above. Since then, however, online[.]io seems to have gone…offline, for good.

SUPER TECH VENTURES?

Until this week, Shifter.io’s website also exposed information about its customer base and most active users, as well as how much money each client has paid over the lifetime of their subscription. The data indicates Shifter has earned more than $11.7 million in direct payments, although it’s unclear how far back in time those payment records go, or how complete they are.

The bulk of Shifter customers who spent more than $100,000 at the proxy service appear to be digital advertising companies, including some located in the United States. None of the several Shifter customers approached by KrebsOnSecurity agreed to be interviewed.

Shifter’s Gupta said he’d been with the company for three years, since the new owner took over and rebranded it as Shifter.

“The company has been on the market for a long time, but operated under a different brand called Microleaves, until new ownership and management took over the company started a reorganization process that is still on-going,” Gupta said. “We are fully transparent. Mostly [our customers] work in the data scraping niche, this is why we actually developed more products in this zone and made a big shift towards APIs and integrated solutions in the past year.”

Ah yes, the same APIs and integrated solutions that were found exposed to the Internet and leaking all of Shifter’s customer information.

Gupta said the original founder of Microleaves was a man from India, who later sold the business to Florea. According to Gupta, the Romanian entrepreneur had multiple issues in trying to run the company, and then sold it three years ago to the current owner — Super Tech Ventures, a private equity company based in Taiwan.

“Our CEO is Wang Wei, he has been with the company since 3 years ago,” Gupta said. “Mr. Florea left the company two years ago after ending this transition period.”

Google and other search engines seem to know nothing about a Super Tech Ventures based in Taiwan. Incredibly, Shifter’s own PR person claimed that he, too, was in the dark on this subject.

“I would love to help, but I really don’t know much about the mother company,” Gupta said, essentially walking back his “fully transparent” statement. “I know they are a branch of the bigger group of asian investment firms focused on private equity in multiple industries.”

Adware and proxy software are often bundled together with “free” software utilities online, or with popular software titles that have been pirated and quietly fused with installers tied to various PPI affiliate schemes.

But just as often, these intrusive programs include some type of notice — even when installed as part of a software bundle — that many users simply do not read before clicking “Next” to get on with installing whatever software they’re after. In these cases, selecting the “basic” or “default” settings while installing usually hides any per-program installation prompts, and assumes you agree to all of the bundled programs being installed. It’s always best to opt for the “custom” installation mode, which gives you a better idea of what is actually being installed and lets you control certain aspects of the installation.

Either way, it’s best to start with the assumption that if a software product or online service is “free,” there is likely some component involved that allows the provider of that service to monetize your activity. As KrebsOnSecurity noted at the conclusion of last week’s story on a China-based proxy service called 911, the rule of thumb for transacting online is that if you’re not the paying customer, then you and/or your devices are probably the product that’s being sold to others.

Further reading on proxy services:

July 18, 2022: A Deep Dive Into the Residential Proxy Service ‘911’
June 28, 2022: The Link Between AWM Proxy & the Glupteba Botnet
June 22, 2022: Meet the Administrators of the RSOCKS Proxy Botnet
Sept. 1, 2021: 15-Year-Old Malware Proxy Network VIP72 Goes Dark
Aug. 19, 2019: The Rise of “Bulletproof” Residential Networks

A Retrospective on the 2015 Ashley Madison Breach

By BrianKrebs

It’s been seven years since the online cheating site AshleyMadison.com was hacked and highly sensitive data about its users posted online. The leak led to the public shaming and extortion of many Ashley Madison users, and to at least two suicides. To date, little is publicly known about the perpetrators or the true motivation for the attack. But a recent review of Ashley Madison mentions across Russian cybercrime forums and far-right websites in the months leading up to the hack revealed some previously unreported details that may deserve further scrutiny.

As first reported by KrebsOnSecurity on July 19, 2015, a group calling itself the “Impact Team” released data sampled from millions of users, as well as maps of internal company servers, employee network account information, company bank details and salary information.

The Impact Team said it decided to publish the information because ALM “profits on the pain of others,” and in response to a paid “full delete” service Ashley Madison parent firm Avid Life Media offered that allowed members to completely erase their profile information for a $19 fee.

According to the hackers, although the delete feature promised “removal of site usage history and personally identifiable information from the site,” users’ purchase details — including real name and address — weren’t actually scrubbed.

“Full Delete netted ALM $1.7mm in revenue in 2014. It’s also a complete lie,” the hacking group wrote. “Users almost always pay with credit card; their purchase details are not removed as promised, and include real name and address, which is of course the most important information the users want removed.”

A snippet of the message left behind by the Impact Team.

The Impact Team said ALM had one month to take Ashley Madison offline, along with a sister property called Established Men. The hackers promised that if a month passed and the company did not capitulate, it would release “all customer records, including profiles with all the customers’ secret sexual fantasies and matching credit card transactions, real names and addresses, and employee documents and emails.”

Exactly 30 days later, on Aug. 18, 2015, the Impact Team posted a “Time’s up!” message online, along with links to 60 gigabytes of Ashley Madison user data.

AN URGE TO DESTROY ALM

One aspect of the Ashley Madison breach that’s always bothered me is how the perpetrators largely cast themselves as fighting a crooked company that broke their privacy promises, and how this narrative was sustained at least until the Impact Team decided to leak all of the stolen user account data in August 2015.

Granted, ALM had a lot to answer for. For starters, after the breach it became clear that a great many of the female Ashley Madison profiles were either bots or created once and never used again. Experts combing through the leaked user data determined that fewer than one percent of the female profiles on Ashley Madison had been used on a regular basis, and the rest were used just once — on the day they were created. On top of that, researchers found 84 percent of the profiles were male.

But the Impact Team had to know that ALM would never comply with their demands to dismantle Ashley Madison and Established Men. In 2014, ALM reported revenues of $115 million. There was little chance the company was going to shut down some of its biggest money machines.

Hence, it appears the Impact Team’s goal all along was to create prodigious amounts of drama and tension by announcing the hack of a major cheating website, and then letting that drama play out over the next few months as millions of exposed Ashley Madison users freaked out and became the targets of extortion attacks and public shaming.

Robert Graham, CEO of Errata Security, penned a blog post in 2015 concluding that the moral outrage professed by the Impact Team was pure posturing.

“They appear to be motivated by the immorality of adultery, but in all probability, their motivation is that #1 it’s fun and #2 because they can,” Graham wrote.

Per Thorsheim, a security researcher in Norway, told Wired at the time that he believed the Impact Team was motivated by an urge to destroy ALM with as much aggression as they could muster.

“It’s not just for the fun and ‘because we can,’ nor is it just what I would call ‘moralistic fundamentalism,'” Thorsheim told Wired. “Given that the company had been moving toward an IPO right before the hack went public, the timing of the data leaks was likely no coincidence.”

NEO-NAZIS TARGET ASHLEY MADISON CEO

As the seventh anniversary of the Ashley Madison hack rolled around, KrebsOnSecurity went back and looked for any mentions of Ashley Madison or ALM on cybercrime forums in the months leading up to the Impact Team’s initial announcement of the breach on July 19, 2015. There wasn’t much, except a Russian guy offering to sell payment and contact information on 32 million AshleyMadison users, and a bunch of Nazis upset about a successful Jewish CEO promoting adultery.

Cyber intelligence firm Intel 471 recorded a series of posts by a user with the handle “Brutium” on the Russian-language cybercrime forum Antichat between 2014 and 2016. Brutium routinely advertised the sale of large, hacked databases, and on Jan. 24, 2015, this user posted a thread offering to sell data on 32 million Ashley Madison users:

“Data from July 2015
Total ~32 Million contacts:
full name; email; phone numbers; payment, etc.”

It’s unclear whether the postdated “July 2015” statement was a typo, or if Brutium updated that sales thread at some point. There is also no indication whether anyone purchased the information. Brutium’s profile has since been removed from the Antichat forum.

Flashpoint is a threat intelligence company in New York City that keeps tabs on hundreds of cybercrime forums, as well as extremist and hate websites. A search in Flashpoint for mentions of Ashley Madison or ALM prior to July 19, 2015 shows that in the six months leading up to the hack, Ashley Madison and its then-CEO Noel Biderman became a frequent subject of derision across multiple neo-Nazi websites.

On Jan. 14, 2015, a member of the neo-Nazi forum Stormfront posted a lively thread about Ashley Madison in the general discussion area titled, “Jewish owned dating website promoting adultery.”

On July 3, 2015, Andrew Anglin, the editor of the alt-right publication Daily Stormer, posted excerpts about Biderman from a story titled, “Jewish Hyper-Sexualization of Western Culture,” which referred to Biderman as the “Jewish King of Infidelity.”

On July 10, a mocking montage of Biderman photos with racist captions was posted to the extremist website Vanguard News Network, as part of a thread called “Jews normalize sexual perversion.”

“Biderman himself says he’s a happily married father of two and does not cheat,” reads the story posted by Anglin on the Daily Stormer. “In an interview with the ‘Current Affair’ program in Australia, he admitted that if he found out his own wife was accessing his cheater’s site, ‘I would be devastated.'”

The leaked AshleyMadison data included more than three years’ worth of emails stolen from Biderman. The hackers told Motherboard in 2015 they had 300 GB worth of employee emails, but that they saw no need to dump the inboxes of other company employees.

Several media outlets pounced on salacious exchanges in Biderman’s emails as proof he had carried on multiple affairs. Biderman resigned as CEO on Aug. 28, 2015. The last message in the archive of Biderman’s stolen emails was dated July 7, 2015 — almost two weeks before the Impact Team would announce their hack.

Biderman told KrebsOnSecurity on July 19, 2015 that the company believed the hacker was some type of insider.

“We’re on the doorstep of [confirming] who we believe is the culprit, and unfortunately that may have triggered this mass publication,” Biderman said. “I’ve got their profile right in front of me, all their work credentials. It was definitely a person here that was not an employee but certainly had touched our technical services.”

Certain language in the Impact Team’s manifesto seemed to support this theory, such as the line: “For a company whose main promise is secrecy, it’s like you didn’t even try, like you thought you had never pissed anyone off.”

But despite ALM offering a belated $500,000 reward for information leading to the arrest and conviction of those responsible, to this day no one has been charged in connection with the hack.

Security Experts Warn of Two Primary Client-Side Risks Associated with Data Exfiltration and Loss

By The Hacker News
Two client-side risks dominate the problems with data loss and data exfiltration: improperly placed trackers on websites and web applications and malicious client-side code pulled from third-party repositories like NPM.  Client-side security researchers are finding that improperly placed trackers, while not intentionally malicious, are a growing problem and have clear and significant privacy

7 cybersecurity tips for your summer vacation!

By Paul Ducklin
Here you go - seven thoughtful cybersecurity tips to help you travel safely...

Experian, You Have Some Explaining to Do

By BrianKrebs

Twice in the past month KrebsOnSecurity has heard from readers who had their accounts at big-three credit bureau Experian hacked and updated with a new email address that wasn’t theirs. In both cases the readers used password managers to select strong, unique passwords for their Experian accounts. Research suggests identity thieves were able to hijack the accounts simply by signing up for new accounts at Experian using the victim’s personal information and a different email address.

John Turner is a software engineer based in Salt Lake City. Turner said he created the account at Experian in 2020 to place a security freeze on his credit file, and that he used a password manager to select and store a strong, unique password for his Experian account.

Turner said that in early June 2022 he received an email from Experian saying the email address on his account had been changed. Experian’s password reset process was useless at that point because any password reset links would be sent to the new (impostor’s) email address.

An Experian support person Turner reached via phone after a lengthy hold time asked for his Social Security Number (SSN) and date of birth, as well as his account PIN and answers to his secret questions. But the PIN and secret questions had already been changed by whoever re-signed up as him at Experian.

“I was able to answer the credit report questions successfully, which authenticated me to their system,” Turner said. “At that point, the representative read me the current stored security questions and PIN, and they were definitely not things I would have used.”

Turner said he was able to regain control over his Experian account by creating a new account. But now he’s wondering what else he could do to prevent another account compromise.

“The most frustrating part of this whole thing is that I received multiple ‘here’s your login information’ emails later that I attributed to the original attackers coming back and attempting to use the ‘forgot email/username’ flow, likely using my SSN and DOB, but it didn’t go to their email that they were expecting,” Turner said. “Given that Experian doesn’t support two-factor authentication of any kind — and that I don’t know how they were able to get access to my account in the first place — I’ve felt very helpless ever since.”

Arthur Rishi is a musician and co-executive director of the Boston Landmarks Orchestra. Rishi said he recently discovered his Experian account had been hijacked after receiving an alert from his credit monitoring service (not Experian’s) that someone had tried to open an account in his name at JPMorgan Chase.

Rishi said the alert surprised him because his credit file at Experian was frozen at the time, and Experian did not notify him about any activity on his account. Rishi said Chase agreed to cancel the unauthorized account application, and even rescinded its credit inquiry (each credit pull can ding your credit score slightly).

But he never could get anyone from Experian’s support to answer the phone, despite spending what seemed like eternity trying to progress through the company’s phone-based system. That’s when Rishi decided to see if he could create a new account for himself at Experian.

“I was able to open a new account at Experian starting from scratch, using my SSN, date of birth and answering some really basic questions, like what kind of car did you take out a loan for, or what city did you used to live in,” Rishi said.

Upon completing the sign-up, Rishi noticed that his credit was unfrozen.

Like Turner, Rishi is now worried that identity thieves will just hijack his Experian account once more, and that there is nothing he can do to prevent such a scenario. For now, Rishi has decided to pay Experian $25.99 a month to more closely monitor his account for suspicious activity. Even with the paid Experian service, he found no additional multi-factor authentication options available, although he said Experian did recently send a one-time code to his phone via SMS when he logged on.

“Experian now sometimes does require MFA for me if I use a new browser or have my VPN on,” Rishi said, but he’s not sure if Experian’s free service would have operated differently.

“I get so angry when I think about all this,” he said. “I have no confidence this won’t happen again.”

In a written statement, Experian suggested that what happened to Rishi and Turner was not a normal occurrence, and that its security and identity verification practices extend beyond what is visible to the user.

“We believe these are isolated incidents of fraud using stolen consumer information,” Experian’s statement reads. “Specific to your question, once an Experian account is created, if someone attempts to create a second Experian account, our systems will notify the original email on file.”

“We go beyond reliance on personally identifiable information (PII) or a consumer’s ability to answer knowledge-based authentication questions to access our systems,” the statement continues. “We do not disclose additional processes for obvious security reasons; however, our data and analytical capabilities verify identity elements across multiple data sources and are not visible to the consumer. This is designed to create a more positive experience for our consumers and to provide additional layers of protection. We take consumer privacy and security seriously, and we continually review our security processes to guard against constant and evolving threats posed by fraudsters.”

ANALYSIS

KrebsOnSecurity sought to replicate Turner and Rishi’s experience — to see if Experian would allow me to re-create my account using my personal information but a different email address. The experiment was done from a different computer and Internet address than the one that created the original account years ago.

After providing my Social Security Number (SSN), date of birth, and answering several multiple choice questions whose answers are derived almost entirely from public records, Experian promptly changed the email address associated with my credit file. It did so without first confirming that the new email address could respond to messages, or that the previous email address approved the change.

Experian’s system then sent an automated message to the original email address on file, saying the account’s email address had been changed. The only recourse Experian offered in the alert was to sign in, or send an email to an Experian inbox that replies with the message, “this email address is no longer monitored.”

After that, Experian prompted me to select new secret questions and answers, as well as a new account PIN — effectively erasing the account’s previously chosen PIN and recovery questions. Once I’d changed the PIN and security questions, Experian’s site helpfully reminded me that I have a security freeze on file, and would I like to remove or temporarily lift the security freeze?

To be clear, Experian does have a business unit that sells one-time password services to businesses. While Experian’s system did ask for a mobile number when I signed up a second time, at no time did that number receive a notification from Experian. Also, I could see no option in my account to enable multi-factor authentication for all logins.

How does Experian differ from the practices of Equifax and TransUnion, the other two big consumer credit reporting bureaus? When KrebsOnSecurity tried to re-create an existing account at TransUnion using my Social Security number, TransUnion rejected the application, noting that I already had an account and prompting me to proceed through its lost password flow. The company also appears to send an email to the address on file asking to validate account changes.

Likewise, trying to recreate an existing account at Equifax using personal information tied to my existing account prompts Equifax’s systems to report that I already have an account, and to use their password reset process (which involves sending a verification email to the address on file).

KrebsOnSecurity has long urged readers in the United States to place a security freeze on their files with the three major credit bureaus. With a freeze in place, potential creditors can’t pull your credit file, which makes it very unlikely anyone will be granted new lines of credit in your name. I’ve also advised readers to plant their flag at the three major bureaus, to prevent identity thieves from creating an account for you and assuming control over your identity.

The experiences of Rishi, Turner and this author suggest Experian’s practices currently undermine both of those proactive security measures. Even so, having an active account at Experian may be the only way you find out when crooks have assumed your identity. Because at least then you should receive an email from Experian saying they gave your identity to someone else.

In April 2021, KrebsOnSecurity revealed how identity thieves were exploiting lax authentication on Experian’s PIN retrieval page to unfreeze consumer credit files. In those cases, Experian failed to send any notice via email when a freeze PIN was retrieved, nor did it require the PIN to be sent to an email address already associated with the consumer’s account.

A few days after that April 2021 story, KrebsOnSecurity broke the news that an Experian API was exposing the credit scores of most Americans.

Emory Roan, policy counsel for the Privacy Rights Clearinghouse, said Experian not offering multi-factor authentication for consumer accounts is inexcusable in 2022.

“They compound the problem by gating the recovery process with information that’s likely available or inferable from third party data brokers, or that could have been exposed in previous data breaches,” Roan said. “Experian is one of the largest Consumer Reporting Agencies in the country, trusted as one of the few essential players in a credit system Americans are forced to be part of. For them to not offer consumers some form of (free) MFA is baffling and reflects extremely poorly on Experian.”

Nicholas Weaver, a researcher for the International Computer Science Institute at University of California, Berkeley, said Experian has no real incentive to do things right on the consumer side of its business. That is, he said, unless Experian’s customers — banks and other lenders — choose to vote with their feet because too many people with frozen credit files are having to deal with unauthorized applications for new credit.

“The actual customers of the credit service don’t realize how much worse Experian is, and this isn’t the first time Experian has screwed up horribly,” Weaver said. “Experian is part of a triopoly, and I’m sure this is costing their actual customers money, because if you have a credit freeze that gets lifted and somebody loans against it, it’s the lender who eats that fraud cost.”

And unlike consumers, he said, lenders do have a choice in which of the triopoly handles their credit checks.

“I do think it’s important to point out that their real customers do have a choice, and they should switch to TransUnion and Equifax,” he added.

More greatest hits from Experian:

2017: Experian Site Can Give Anyone Your Credit Freeze PIN
2015: Experian Breach Affects 15 Million Customers
2015: Experian Breach Tied to NY-NJ ID Theft Ring
2015: At Experian, Security Attrition Amid Acquisitions
2015: Experian Hit With Class Action Over ID Theft Service
2014: Experian Lapse Allowed ID Theft Service Access to 200 Million Consumer Records
2013: Experian Sold Consumer Data to ID Theft Service

Update, 10:32 a.m.: Updated the story to clarify that while Experian does sometimes ask users to enter a one-time code sent via SMS to the number on file, there does not appear to be any option to enable this on all logins.

Welcoming the Polish Government to Have I Been Pwned

By Troy Hunt
Welcoming the Polish Government to Have I Been Pwned

Continuing the rollout of Have I Been Pwned (HIBP) to national governments around the world, today I'm very happy to welcome Poland to the service! The Polish CSIRT GOV is now the 34th onboard the service and has free and open access to APIs allowing them to query their government domains.

Seeing the ongoing uptake of governments using HIBP to do useful things in the wake of data breaches is enormously fulfilling and I look forward to welcoming many more national CSIRTs in the future.

Understanding Have I Been Pwned's Use of SHA-1 and k-Anonymity

By Troy Hunt
Understanding Have I Been Pwned's Use of SHA-1 and k-Anonymity

Four and a half years ago now, I rolled out version 2 of HIBP's Pwned Passwords that implemented a really cool k-anonymity model courtesy of the brains at Cloudflare. Later in 2018, I did the same thing with the email address search feature used by Mozilla, 1Password and a handful of other paying subscribers. It works beautifully; it's ridiculously fast, efficient and above all, anonymous. Yet from time to time, I get messages along the lines of this:

Why are you using SHA-1? It's insecure and deprecated.

Or alternatively:

Our [insert title of person who fills out paperwork but has no technical understanding here] says that k-anonymity involves sending you PII.

Both these positions make no sense whatsoever when you peel back the covers and understand what's happening underneath, but I get how on face value these conclusions can be drawn. So, let's settle it here in a more complete fashion than what I can do via short tweets or brief emails.

SHA-1 is Just Fine for k-Anonymity

Let's begin with the actual problem SHA-1 presents. Actually, the multiple problems, the first of which is that it's just way too fast for storing user passwords in an online system. More than a decade ago now, I wrote about how Our Password Hashing Has no Clothes and in that post, showed the massive rate at which consumer-grade hardware can calculate these hashes and consequently "crack" the password. Since that time, Moore's Law has done its thing many times over making the proposition of SHA-1 (or SHA-256 or SHA-512) even worse than before. For a modern day reference of how you should be storing passwords, check out OWASP's Password Storage Cheat Sheet.
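For contrast, here's a minimal sketch of what slow, salted password storage looks like next to a bare SHA-1 digest, using only Python's standard library. The scrypt parameters below are illustrative only; defer to the OWASP cheat sheet for current guidance.

import hashlib
import os

password = b"P@ssw0rd"

# What not to do for stored passwords: one fast, unsalted hash.
weak = hashlib.sha1(password).hexdigest()

# A deliberately slow, salted key derivation (illustrative parameters only).
salt = os.urandom(16)
strong = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

print(weak)                             # crackable at billions of guesses per second on consumer hardware
print(salt.hex() + ":" + strong.hex())  # store the salt plus derived key, never the password itself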

The other problem relates to how SHA-1 is used for integrity checks. Hashing algorithms provide an efficient means of comparing two files and establishing whether their contents are the same due to the deterministic nature of the algorithm (the same input always produces the same output). If a trustworthy source says "the hash of the file is 3713..42" (shown in abbreviated form) then any file with that same hash is assumed to be the same as the one described by the trustworthy source. We use hashes all over the place for precisely this purpose; for example, if I wanted to download Windows 11 Business Editions from my MSDN subscription, I can refer to the hash Microsoft provides on the download page:

Understanding Have I Been Pwned's Use of SHA-1 and k-Anonymity

After download, I can then use a utility such as PowerShell's Get-FileHash to verify that the file I downloaded is indeed the same one listed above. (There's another rabbit hole we can go down about how you trust the hash above, but I'll leave that for another post.)
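The same check works outside PowerShell too; here's a small Python sketch, where the file name and expected value are placeholders for whatever the trusted source publishes:

import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "paste-the-published-hash-here"                      # from the vendor's download page
print(file_sha256("win11_business.iso") == expected.lower())    # hypothetical file name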

We also use hashes when implementing subresource integrity (SRI) on websites to ensure external dependencies haven't been modified. Every time this very blog loads Font Awesome from Cloudflare's CDN, for example, it's verified against the hash in the integrity attribute of the script tag (view source for yourself).
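The integrity value itself is nothing exotic: it's the Base64-encoded digest of the file, prefixed with the algorithm name. A quick sketch, assuming a local copy of the dependency, shows how one could generate it:

import base64
import hashlib

def sri_value(path, algorithm="sha384"):
    """Return a Subresource Integrity string such as 'sha384-<base64 digest>'."""
    with open(path, "rb") as handle:
        digest = hashlib.new(algorithm, handle.read()).digest()
    return algorithm + "-" + base64.b64encode(digest).decode()

print(sri_value("fontawesome.min.js"))   # hypothetical local copy of the CDN file
# The output is what belongs in the integrity attribute of the script or link tag.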

And finally (although not exhaustively - there are many other places we use hashing algorithms in tech), we use hashing algorithms on digital certificate signatures. To pick another example from this blog, the certificate issued by Cloudflare uses SHA-256 as the signature hash algorithm:

Understanding Have I Been Pwned's Use of SHA-1 and k-Anonymity

But ponder this: if a hashing algorithm always produces a fixed length output (in the case of SHA-1, it's 40 hexadecimal characters), then there are a finite number of hashes in the world. In that SHA-1 example, the finite number is 16^40 as there are 16 possible values (0-9 and a-f) and 40 positions for them. But how many different input strings are there in the world? Infinite! So, there must be multiple input strings that produce the same output, and this is what we refer to as a "hash collision". It's possible for this to occur naturally, although it's exceedingly unlikely simply due to the massive number of possibilities 16^40 presents. However, what if you could manufacture a hash collision? I mean what if you could take an existing hash for an existing document and say "I'm going to create my own document that's different but when passed through SHA-1, produces the same hash!"?

Half a decade ago now, Google researchers demonstrated precisely this with their SHAttered attack. Their simple infographic tells the story:

Understanding Have I Been Pwned's Use of SHA-1 and k-Anonymity

And this is the heart of the integrity problem with SHA-1: it's simply past its used by date as an algorithm we can be confident in. That's why the signature hash algorithm of the TLS cert on this blog uses SHA-256 instead, among other examples of where we've eschewed the weaker algorithm in favour of stronger variants.

So, now that you understand the problem with SHA-1, let's look at how it's used in HIBP and why it isn't a problem there. There are actually 2 reasons, and I'll start with a sample of passwords used in Pwned Passwords:

P@ssw0rd
abc123
635,someone@example.com,+61430978216,37 example street
money
qwerty

That middle line isn't a password, it's a parsing problem. Not necessarily my parsing problem, it just turns out that you can't always trust hackers to dump breached data in a clean format 🤷‍♂️ So, instead of providing passwords to people in plain text format, I provide them as SHA-1 hashes:

21BD12DC183F740EE76F27B78EB39C8AD972A757
6367C48DD193D56EA7B0BAAD25B19455E529F5EE
A4DDCDA001E137C72FF8259F36BC67C5F9E083AA
C95259DE1FD719814DAEF8F1DC4BD64F9D885FF0
B1B3773A05C0ED0176787A4F1574FF0075F7521E

4 of those hashes are easily cracked (Google is great at that, just try searching for the first one) and that's just fine; nobody is put at risk by learning that some unidentified party used a common password. The 1 hash that won't yield any search results (until Google indexes this blog post...) is the middle one. The fact that SHA-1 is fast to calculate and has proven hash collision attacks against its integrity doesn't diminish the purpose it serves in protecting badly parsed data.
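If you want to verify the mapping between those two lists for yourself, it only takes a couple of lines:

import hashlib

for candidate in ("P@ssw0rd", "abc123", "money", "qwerty"):
    print(hashlib.sha1(candidate.encode("utf-8")).hexdigest().upper(), candidate)

# These reproduce four of the five hashes above; the fifth is the badly parsed
# record, which stays obfuscated unless you already know the original string.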

The second reason is best explained by walking through the process of how the API is queried. Let's take an example of someone signing up to a website with the following password:

P@ssw0rd

This will pass many password complexity criteria (uppercase, lowercase, number, non-alphanumeric character, 8 chars long) but is clearly terrible. Because they're signing up to a responsible website that checks Pwned Passwords on registration, that website now creates a SHA-1 hash of the provided password:

21BD12DC183F740EE76F27B78EB39C8AD972A757

Let's pause here for a sec: whether it's a hash of a password or a hash of an email address, what we're looking at is a pseudonymous representation of the original data. There's no anonymity of substance achieved here because in the specific case above, you can simply Google the hash and in the case of an email address, you can determine with near certainty (hash collisions aside), if a given plain text email address is the one used to generate the hash.

This, however, is a different story:

21BD1

This is the first 5 characters only of the hash and it's passed to the Pwned Passwords API as follows:

https://api.pwnedpasswords.com/range/21BD1

You can easily run this yourself and see the result but to summarise, the API then responds with 788 lines, including the following 5:

2D6980B9098804E7A83DC5831BFBAF3927F:1
2D8D1B3FAACCA6A3C6A91617B2FA32E2F57:1
2DC183F740EE76F27B78EB39C8AD972A757:83129
2DE4C0087846D223DBBCCF071614590F300:3
2DEA2B1D02714099E4B7A874B4364D518F6:1

What we're looking at here is the hash suffix of every hash that begins with 21BD1 followed by the number of times that password has been seen. Turns out that "P@ssw0rd" ain't a great choice as it's the one in the middle that's been seen over 83k times. The consumer of the Pwned Passwords service knows it's this one because when combined with the prefix, it's a perfect match to the full hash of the password. I'll touch more on the mathematical properties of this in a moment, for now I want to explain the second reason why SHA-1 is used:

SHA-1 makes it very easy to segment the entire corpus of hashes into roughly equally sized chunks that can be queried by prefix. As I already touched on, there are 16^5 different possible hash prefixes, which is specifically 1,048,576 or "roughly a million". Not every hash prefix has 788 associated suffixes, some have more and others less, but if we take that as an average, that explains how the approximately 850M passwords in the service are divided down into a million smaller collections.

Why the first 5 characters? Because if it was the first 4 then each response would be 16 times larger and it would start hurting response times. If it was the first 6 then each response would be 16 times smaller and it would start hurting anonymity. 5 characters was the sweet spot between the two.
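The arithmetic behind that trade-off is easy to check for yourself; the 850M figure below is just the approximate corpus size mentioned above, so treat the numbers as ballpark:

total_passwords = 850_000_000  # approximate Pwned Passwords corpus size, per the text above

for prefix_length in (4, 5, 6):
    buckets = 16 ** prefix_length
    print(prefix_length, buckets, total_passwords // buckets)

# 4 chars ->     65,536 buckets of roughly 13,000 suffixes each (responses 16x bigger)
# 5 chars ->  1,048,576 buckets of roughly    810 suffixes each (the sweet spot)
# 6 chars -> 16,777,216 buckets of roughly     50 suffixes each (smaller buckets, weaker anonymity)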

Why not SHA-256? Instead of 40 characters, each hash would be 64 characters, and whilst I could have achieved the same anonymity properties by still just using the first 5 characters of the hash, each suffix in the response would be an additional 24 characters and multiplying that 788 times over adds multiple kb to each response, even when compressed on the transport layer. It's also a slower hashing algorithm; still totally unsuitable for storing user passwords in an online system, but it can impose a performance hit on the consuming service when doing huge amounts of calculations. And for what? Integrity doesn't matter because there's no value in modifying the source password to forge a colliding hash. You'd further increase the anonymity by 16^24 more possibilities, but then why not use SHA-512, which at 128 characters offers 16^64 times as many possibilities as even SHA-256? Because, as you'll read in the next section, even SHA-1 provides way more practical anonymity than you'll ever need anyway.

In summary, think of the choice of SHA-1 simply being to obfuscate poorly parsed input data to protect inadvertently included info, and as a means of dividing the collection of data down into nice easily segmentable and queryable collections. If your position is "SHA-1 is broken", then you simply don't understand its purpose here.
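To tie the two reasons together before moving on to email addresses, here's a minimal sketch of a range query client in Python. It mirrors the steps described above (hash locally, send only the 5-character prefix, match the suffix in the response) and is an illustration rather than anything HIBP itself ships.

import hashlib
import urllib.request

def pwned_count(password):
    """Return how many times a password appears in Pwned Passwords, via the k-anonymity range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as response:
        body = response.read().decode("utf-8")
    for line in body.splitlines():            # each line is "<35-char suffix>:<count>"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0                                  # not found in the corpus

print(pwned_count("P@ssw0rd"))  # a very large number (over 83k at the time of the example above)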

PII and the Protection Provided by k-Anonymity

Let's turn the discussion more to the privacy aspects of the email address search I mentioned earlier on. The principles are identical to the password search but for one difference in the technical implementation: queries are done on the first 6 characters of a SHA-1 hash, not the first 5. The reason is simple: there are a lot more email addresses in the system than passwords, about 5 billion in total. Querying via the first 6 characters of a SHA-1 hash means there are 16 times more possibilities than with the password search, therefore 16^6 or just over 16M. Let's take this email address:

test@example.com

Which hashes down to this value with SHA-1:

567159D622FFBB50B11B0EFD307BE358624A26EE

And similar to the password search, it's only the prefix that is sent to HIBP when performing a query:

567159

So, putting the privacy hat on, what's the risk when a service sends this data to HIBP? Mathematically, with the next 34 characters unknown, there are 16^34 different possible hashes that this prefix could belong to. Just to really labour the point, given a 6 character SHA-1 hash prefix you could take a 1 in 87,112,285,931,760,200,000,000,000,000,000,000,000,000 guess as to what the full hash is. And then due to the infinite number of potential input strings, multiply that number out to... well... infinity. That's the total number of possible email addresses it could represent. By any definition of the term, those first 6 characters tell you absolutely nothing useful about what email address is being searched for.
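To make that concrete, the client-side hashing and truncation step looks like this (only the local part is shown; the subscriber search API itself isn't reproduced here):

import hashlib

email = "test@example.com"
full_hash = hashlib.sha1(email.encode("utf-8")).hexdigest().upper()
prefix = full_hash[:6]      # "567159": the only thing a k-anonymity query transmits

print(full_hash)            # 567159D622FFBB50B11B0EFD307BE358624A26EE, as above
print(prefix)
print(16 ** 34)             # how many full hashes any 6-character prefix could belong to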

But we're left with a more semantic, possibly philosophical question: is "567159" personally identifiable information? In practice, no, for all intents and purposes it's impossible to tell who this belongs to without the remaining 34 characters and even then, you still need to be able to crack that hash which is most likely only going to happen if you have a dictionary of email addresses to work through in which the given one appears. But it's derived from pseudonymous PII, and this is where the occasional [insert title of person who fills out paperwork but has no technical understanding here] loses their mind.

To explain this in more colloquial terms, it's like saying that the "t" at the beginning of the email address I used above is personally identifying. Really? My own email address begins with a "t", so it must be mine! It's a nonsense argument.

I'll wrap up with a definition and I like NIST's the best, not just because it's clear and concise but because they're a great authoritative source on this sort of thing (it was actually their guidance on prohibiting passwords from previous breach corpuses that led me to create Pwned Passwords in the first place):

Any representation of information that permits the identity of an individual to whom the information applies to be reasonably inferred by either direct or indirect means.

Phone numbers are PII. Physical addresses are PII. IP addresses are PII. The first 6 characters of a SHA-1 hash of someone's email address is not PII.

Summary

None of the misunderstandings I've explained above have dented the adoption of these services. Pwned Passwords is now doing in excess of 2 billion queries a month and has an ongoing feed of new passwords directly from the FBI. The k-anonymity search for email addresses sees over 100M queries a month and is baked into everything from browsers to password managers to identity theft services. The success of these services isn't due to any technical genius on my part (hat-tip again to Cloudflare), but rather to their simple yet effective implementations that (almost) everyone can easily understand 😊

Unpatched Travis CI API Bug Exposes Thousands of Secret User Access Tokens

By Ravie Lakshmanan
An unpatched security issue in the Travis CI API has left tens of thousands of developers' user tokens exposed to potential attacks, effectively allowing threat actors to breach cloud infrastructures, make unauthorized code changes, and initiate supply chain attacks. "More than 770 million logs of free tier users are available, from which you can easily extract tokens, secrets, and other

Researchers Uncover Malware Controlling Thousands of Sites in Parrot TDS Network

By Ravie Lakshmanan
The Parrot traffic direction system (TDS) that came to light earlier this year has had a larger impact than previously thought, according to new research. Sucuri, which has been tracking the same campaign since February 2019 under the name "NDSW/NDSX," said that "the malware was one of the top infections" detected in 2021, accounting for more than 61,000 websites. Parrot TDS was documented in

When Your Smart ID Card Reader Comes With Malware

By BrianKrebs

Millions of U.S. government employees and contractors have been issued a secure smart ID card that enables physical access to buildings and controlled spaces, and provides access to government computer networks and systems at the cardholder’s appropriate security level. But many government employees aren’t issued an approved card reader device that lets them use these cards at home or remotely, and so turn to low-cost readers they find online. What could go wrong? Here’s one example.

A sample Common Access Card (CAC). Image: Cac.mil.

KrebsOnSecurity recently heard from a reader — we’ll call him “Mark” because he wasn’t authorized to speak to the press — who works in IT for a major government defense contractor and was issued a Personal Identity Verification (PIV) government smart card designed for civilian employees. Not having a smart card reader at home and lacking any obvious guidance from his co-workers on how to get one, Mark opted to purchase a $15 reader from Amazon that said it was made to handle U.S. government smart cards.

The USB-based device Mark settled on is the first result that currently comes up when one searches on Amazon.com for “PIV card reader.” The card reader Mark bought was sold by a company called Saicoo, whose sponsored Amazon listing advertises a “DOD Military USB Common Access Card (CAC) Reader” and has more than 11,700 mostly positive ratings.

The Common Access Card (CAC) is the standard identification for active duty uniformed service personnel, selected reserve, DoD civilian employees, and eligible contractor personnel. It is the principal card used to enable physical access to buildings and controlled spaces, and provides access to DoD computer networks and systems.

Mark said when he received the reader and plugged it into his Windows 10 PC, the operating system complained that the device’s hardware drivers weren’t functioning properly. Windows suggested consulting the vendor’s website for newer drivers.

The Saicoo smart card reader that Mark purchased. Image: Amazon.com

So Mark went to the website mentioned on Saicoo’s packaging and found a ZIP file containing drivers for Linux, Mac OS and Windows:

Image: Saicoo

Out of an abundance of caution, Mark submitted Saicoo’s drivers file to Virustotal.com, which simultaneously scans any shared files with more than five dozen antivirus and security products. Virustotal reported that some 43 different security tools detected the Saicoo drivers as malicious. The consensus seems to be that the ZIP file currently harbors a malware threat known as Ramnit, a fairly common but dangerous trojan horse that spreads by appending itself to other files.

Image: Virustotal.com

Ramnit is a well-known and older threat — first surfacing more than a decade ago — but it has evolved over the years and is still employed in more sophisticated data exfiltration attacks. Amazon said in a written statement that it was investigating the reports.

“Seems like a potentially significant national security risk, considering that many end users might have elevated clearance levels who are using PIV cards for secure access,” Mark said.

Mark said he contacted Saicoo about their website serving up malware, and received a response saying the company’s newest hardware did not require any additional drivers. He said Saicoo did not address his concern that the driver package on its website was bundled with malware.

In response to KrebsOnSecurity’s request for comment, Saicoo sent a somewhat less reassuring reply.

“From the details you offered, issue may probably caused by your computer security defense system as it seems not recognized our rarely used driver & detected it as malicious or a virus,” Saicoo’s support team wrote in an email.

“Actually, it’s not carrying any virus as you can trust us, if you have our reader on hand, please just ignore it and continue the installation steps,” the message continued. “When driver installed, this message will vanish out of sight. Don’t worry.”

Saicoo’s response to KrebsOnSecurity.

The trouble with Saicoo’s apparently infected drivers may be little more than a case of a technology company having their site hacked and responding poorly. Will Dormann, a vulnerability analyst at CERT/CC, wrote on Twitter that the executable files (.exe) in the Saicoo drivers ZIP file were not altered by the Ramnit malware — only the included HTML files.

Dormann said it’s bad enough that searching for device drivers is one of the riskiest activities one can undertake online.

“Doing a web search for drivers is a VERY dangerous (in terms of legit/malicious hit ratio) search to perform, based on results of any time I’ve tried to do it,” Dormann added. “Combine that with the apparent due diligence of the vendor outlined here, and well, it ain’t a pretty picture.”

But by all accounts, the potential attack surface here is enormous, as many federal employees clearly will purchase these readers from a myriad of online vendors when the need arises. Saicoo’s product listings, for example, are replete with comments from customers who self-state that they work at a federal agency (and several who reported problems installing drivers).

A thread about Mark’s experience on Twitter generated a strong response from some of my followers, many of whom apparently work for the U.S. government in some capacity and have government-issued CAC or PIV cards.

Two things emerged clearly from that conversation. The first was general confusion about whether the U.S. government has any sort of list of approved vendors. It does. The General Services Administration (GSA), the agency which handles procurement for federal civilian agencies, maintains a list of approved card reader vendors at idmanagement.gov (Saicoo is not on that list). [Thanks to @MetaBiometrics and @shugenja for the link!]

The other theme that ran through the Twitter discussion was the reality that many people find buying off-the-shelf readers more expedient than going through the GSA’s official procurement process, whether it’s because they were never issued one or the reader they were using simply no longer worked or was lost and they needed another one quickly.

“Almost every officer and NCO [non-commissioned officer] I know in the Reserve Component has a CAC reader they bought because they had to get to their DOD email at home and they’ve never been issued a laptop or a CAC reader,” said David Dixon, an Army veteran and author who lives in Northern Virginia. “When your boss tells you to check your email at home and you’re in the National Guard and you live 2 hours from the nearest [non-classified military network installation], what do you think is going to happen?”

Interestingly, anyone asking on Twitter about how to navigate purchasing the right smart card reader and getting it all to work properly is invariably steered toward militarycac.com. The website is maintained by Michael Danberry, a decorated and retired Army veteran who launched the site in 2008 (its text and link-heavy design very much takes one back to that era of the Internet and webpages in general). His site has even been officially recommended by the Army (PDF). Mark shared emails showing Saicoo itself recommends militarycac.com.

Image: Militarycac.com.

“The Army Reserve started using CAC logon in May 2006,” Danberry wrote on his “About” page. “I [once again] became the ‘Go to guy’ for my Army Reserve Center and Minnesota. I thought Why stop there? I could use my website and knowledge of CAC and share it with you.”

Danberry did not respond to requests for an interview — no doubt because he’s busy doing tech support for the federal government. The friendly message on Danberry’s voicemail instructs support-needing callers to leave detailed information about the issue they’re having with CAC/PIV card readers.

Dixon said Danberry has “done more to keep the Army running and connected than all the G6s [Army Chief Information Officers] put together.”

In many ways, Mr. Danberry is the equivalent of that little known software developer whose tiny open-sourced code project ends up becoming widely adopted and eventually folded into the fabric of the Internet.  I wonder if he ever imagined 15 years ago that his website would one day become “critical infrastructure” for Uncle Sam?

Critical cryptographic Java security blunder patched – update now!

By Paul Ducklin
Either know the private key and use it scrupulously in your digital signature calculation.... or just send a bunch of zeros instead.

Cold Wallets, Hot Wallets: The Basics of Storing Your Crypto Securely

By Lily Saleh

If you’re thinking about crypto, one of the first things you’ll want to do is get yourself a good wallet.  

Security tops the list of important things a new cryptocurrency investor needs to think about. Rightfully so. Cryptocurrency is indeed subject to all kinds of fraud, theft, and phishing attacks, just like the credentials and accounts we keep online.  

But here’s the catch. Lost or stolen cryptocurrency is terrifically difficult to recover. By and large, it doesn’t enjoy the same protections and regulations as traditional currency and financial transactions. For example, you can always call your bank or credit card company to report theft or contest a fraudulent charge. Not the case with crypto. With that, you’ll absolutely need a safe place to secure it. Likewise, in the U.S. many banks are FDIC insured, which protects depositors if the bank fails. Again, not so with crypto. 

So, when it comes to cryptocurrency, security is everything. 

What makes crypto so attractive to hackers? 

Cryptocurrency theft offers hackers an immediate payoff. It’s altogether different from, say, hacking the database of a Fortune 500 company. With a data breach, a hacker may round up armloads of personal data and information, yet it takes additional steps for them to translate those stolen records into money. With cryptocurrency theft, the dollars shift from the victim to the crook in milliseconds. It’s like digital pickpocketing. As you can guess, that makes cryptocurrency a big target. 

And that’s where your wallet will come in, a place where you store the digital credentials associated with the cryptocurrency you own. The issue is doing it securely. Let’s take a look at the different wallets out there and then talk about how you can secure them. 

Hot wallets and cold wallets for crypto 

Broadly, there are two general categories of wallets. First, let’s look at what these wallets store. 

A wallet contains public and private “keys” that are used to conduct transactions. The public key often takes the form of an address, one that anyone can see and then use to send cryptocurrency. The private key is exactly that. Highly complex and taking many forms that range from multi-word phrases to strings of code, it’s your unique key that proves your ownership of your cryptocurrency and that allows you to spend and send crypto. Needless to say, never share your private key.  
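As a purely conceptual sketch (this is not how Bitcoin or any real wallet actually derives keys and addresses), the relationship looks something like this: the private key is a large random secret, and the public address is derived from it one-way, so sharing the address reveals nothing about the key.

import hashlib
import secrets

# Conceptual illustration only; real wallets use elliptic-curve keys and
# chain-specific address encodings, not a bare SHA-256 of random bytes.
private_key = secrets.token_bytes(32)                           # keep offline and backed up
public_address = hashlib.sha256(private_key).hexdigest()[:40]   # stand-in for a real derivation

print("Address (safe to share):  ", public_address)
print("Private key (never share):", private_key.hex())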

With that, there are two ways to store your keys—in a hot wallet or a cold wallet. 

Hot Wallets: 

  • These wallets store cryptocurrency on internet-connected devices—often a smartphone, but also on computers and tablets—all of which allow the holder to access and make transactions quickly. 

  • Think of a hot wallet as a checking account, where you keep a smaller amount of money available for day-to-day spending, yet less securely than a cold wallet because it’s online. 

Cold Wallets: 

  • These wallets store cryptocurrency in places not connected to the internet, which can include a hard drive, USB stick, paper wallet (keys printed on paper), or physical coins. 

  • Think of the cold wallet like a savings account, or cold storage if you like. This is where to store large amounts of cryptocurrency more securely because it’s not connected to the internet. 

Hot wallets for cryptocurrency 

As you can see, the benefit of a hot wallet is that you can load it up with cryptocurrency, ready for spending. However, it’s the riskiest place to store cryptocurrency because it’s connected to the internet, making it a target for hacks and attacks.  

In addition to that, a hot wallet is connected to a cryptocurrency exchange, which makes the transfer of cryptocurrencies possible. The issue is that not all cryptocurrency exchanges are created equal, particularly when it comes to security. Some of the lesser-established exchanges may not use strong security protocols, making them likely targets for attack. Even the more established and trusted exchanges have fallen victim to attacks where crooks have walked away with millions or even hundreds of millions of dollars. 

Cold wallets for cryptocurrency 

While the funds in cold wallets are far less liquid, they’re far more secure because they’re not connected to the internet. In this way, cold wallets are more vault-like and suitable for long-term storage of larger sums of funds. But cold wallets place a great deal of responsibility on the holder. They must be stored in a physically secure place, and be backed up, because if you lose that one device or printout that contains your cryptocurrency info, you lose the cryptocurrency altogether. Within the cold wallet category, there are a few different types: 

1. Purpose-built cryptocurrency storage devices 

Several manufacturers make storage devices specifically designed to store cryptocurrency, complete with specific features for security, durability, and compatibility with many (yet not always all) of the different cryptocurrencies on the market. An online search will turn up several options, so doing your homework here will be very important—such as which devices have the best track record for security, which devices are the most reliable overall, and which ones are compatible with the crypto you wish to keep.  

2. Hard drives on a computer or laptop 

Storing cryptocurrency information on a computer or laptop that’s disconnected from the internet (also known as “air-gapped”) is a storage method that’s been in place for some time. However, because computers and laptops are complex devices, they may be less secure than a simpler, purpose-built cryptocurrency device. In short, there are more ways to compromise a computer or laptop with malware, which a determined hacker can use to steal information in some rather surprising ways, such as noise from a compromised computer fan passing information in a sort of Morse code, or electromagnetic signals from a compromised machine that nearby devices can use to skim information.

3. Paper wallets 

Ah, good old paper. Write down a code and keep it secure. Simple, right? In truth, creating a paper wallet can be one of the most involved methods of all the cold storage options out there. Bitcoin offers a step-by-step walkthrough of the process that you can see for yourself. Once done, though, you’ll have a piece of paper with a public address for loading cryptocurrency into your paper cold wallet, along with a private key. One note: Bitcoin and others recommend never reusing a paper cold wallet once it’s connected to a hot wallet. You should go through the process of creating a new cold paper wallet each time.  

4. Physical coins for cryptocurrency 

Physical coins are a special case and are relatively new on the scene. They’re physical coins minted with a tamper-resistant sticker that indicates the actual value of the coin. Like other methods of cold wallet storage, this calls for keeping the coin in a safe place, because it’s pretty much like a wad of cash. And like cash, if it’s stolen, it’s gone for good. Also note that a cryptocurrency holder must work with a third party to mint and deliver the coin, which comes with its own costs and risks.

Securing your cryptocurrency wallet 

With that look at wallets, let’s see what it takes to secure them. It may seem like there’s plenty to do here. That’s because there is, which goes to show just how much responsibility falls on the shoulders of the cryptocurrency holder. Of course, this is your money we’re talking about, so let’s dive into the details. 

1. Back up your wallet

Whatever form your storage takes, back it up. And back it up again. Cryptocurrency holders should make multiple copies just in case one is lost, destroyed, or otherwise inaccessible. For example, one story that’s made the rounds is of an IT engineer in the UK who accidentally threw away an old hard drive containing his cryptocurrency key, one that held 7,500 bitcoins, worth millions of dollars. Redundancy is key. Back up the entire wallet right away and then often after that.
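As a rough illustration of the “back it up and verify it” habit, the sketch below copies a hypothetical wallet backup file to two placeholder destinations and checks each copy against a SHA-256 hash of the original. The file and folder names are assumptions made for the example; in practice you’d point them at your own wallet export and at offline media kept in separate, secure locations.

```python
# A minimal backup-and-verify sketch. The file and folder names are
# hypothetical placeholders, not paths used by any particular wallet.
import hashlib
import shutil
from pathlib import Path

WALLET_FILE = Path("wallet-backup.dat")                      # hypothetical wallet export
DESTINATIONS = [Path("/media/usb1"), Path("/media/usb2")]    # hypothetical offline drives

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hash of a file so copies can be verified."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

original_hash = sha256_of(WALLET_FILE)

for dest in DESTINATIONS:
    copy_path = dest / WALLET_FILE.name
    shutil.copy2(WALLET_FILE, copy_path)          # make the redundant copy
    assert sha256_of(copy_path) == original_hash  # confirm it copied intact
    print(f"Verified backup at {copy_path}")
```

Comparing hashes catches a copy that was silently corrupted, which matters when that file may be the only path back to your funds.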

2. Store your wallet(s) securely

With redundant backups in place, store them in places that are physically secure. It’s not uncommon for crypto holders to use fireproof safes and safe deposit boxes at banks for this purpose, which only highlights the earlier point that a wallet is as good as cash in many ways. 

3. Use online protection software

This will help prevent malware from stealing crypto, whether or not your device is connected to the internet. Comprehensive online protection software will give you plenty of other benefits as well, including identity theft monitoring and strong password management, two things that can help you protect your investments, and yourself, even further. 

4. Update your operating system, apps, and devices

Updates often address security issues, ones that hackers will of course try to exploit. Keep everything current and set automatic updates wherever they are available so that you have the latest and greatest. 

5. Make use of multi-factor authentication (MFA) where possible

Just as your bank and other financial accounts offer MFA, do the same here with your crypto. Some extra security-conscious crypto investors will purchase a device for this specific purpose for yet greater protection, such as a separate phone with texting capability. This keeps their crypto transactions separate from the multitude of other things they do on their everyday smartphone, effectively putting up a wall between these two different digital worlds.  
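Many exchanges and wallet apps also accept codes from an authenticator app. As a rough sketch of how those time-based one-time passwords (TOTP) work behind the scenes, the example below uses the third-party pyotp package; it illustrates the mechanism only and isn’t an integration with any particular exchange or app.

```python
# Illustration of the TOTP mechanism behind many authenticator apps.
# Assumes the third-party pyotp package: pip install pyotp
import pyotp

# The service generates a shared secret once (usually shown as a QR code);
# your authenticator app stores it.
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

# Every ~30 seconds the app derives a fresh 6-digit code from the secret
# and the current time.
code = totp.now()
print("Current one-time code:", code)

# The service runs the same calculation and checks that the codes match.
print("Code accepted:", totp.verify(code))
```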

6. Keep your investments to yourself

Two things fall under this category. One, the less you say about the crypto investments you make, the less word gets around, which can help keep hackers out of the loop, particularly on social media. Two, consider setting up a unique email account that you only use for crypto. The less you associate your crypto accounts with other financial accounts like your banking and online payment apps, the more difficult it is to compromise several accounts in one fell swoop.

7. Watch out for phishing scams

Just like hackers send phishing emails with an eye on accessing your bank accounts, credit cards, and so on, they’ll do much the same to get at your crypto accounts. The target may be different, that being your crypto, but the attack is very much the same. An email will direct you to a hacker’s website using some sort of phony pretense, get-rich-quick scheme, or scare tactic. Once there, the site will ask for your private key information, and the crooks simply steal the funds. And it’s not just email. Hackers have used online ads to phish for victims as well.
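To underline the “check before you click” habit, here’s a small Python sketch that inspects the actual domain in a link before you’d sign in anywhere. The trusted-domain list is a hypothetical placeholder; the point is simply that a look-alike address fails the check even when it contains the real name.

```python
# A simple illustration of a habit worth building: check the actual
# domain in a link before signing in. The domain list here is hypothetical.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-exchange.com", "example-wallet.com"}  # placeholders

def looks_trustworthy(url: str) -> bool:
    """Return True only if the link's hostname is a trusted domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_trustworthy("https://login.example-exchange.com/signin"))    # True
print(looks_trustworthy("https://example-exchange.com.evil.io/signin"))  # False
```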

Crypto: security is on you 

As you can see, these security measures rely almost exclusively on you. If something happens to you, that could make recovering your funds a real problem. Consider reaching out to someone you trust and let them know where you’re storing your wallets and information. That way, you’ll have some assistance ready in the event of an emergency or issue. 

The very things that define cryptocurrency—the anonymity of ownership, the lack of banking institutions, the light or non-existent regulation—all have major security implications. Add in the fact that you’re your own safety net here and it’s easy to see that crypto is something that requires plenty of planning and careful thought before diving in. Getting knowledgeable about security and how you’ll protect your crypto should absolutely top your list before investing.

The post Cold Wallets, Hot Wallets: The Basics of Storing Your Crypto Securely appeared first on McAfee Blog.
