Krebs on Security

Researchers Quietly Cracked Zeppelin Ransomware Keys

By Brian Krebs

Peter is an IT manager for a technology manufacturer that got hit with a Russian ransomware strain called “Zeppelin” in May 2020. He’d been on the job less than six months, and because of the way his predecessor architected things, the company’s data backups also were encrypted by Zeppelin. After two weeks of stalling their extortionists, Peter’s bosses were ready to capitulate and pay the ransom demand. Then came the unlikely call from an FBI agent. “Don’t pay,” the agent said. “We’ve found someone who can crack the encryption.”

Peter, who spoke candidly about the attack on condition of anonymity, said the FBI told him to contact a cybersecurity consulting firm in New Jersey called Unit 221B, and specifically its founder — Lance James. Zeppelin sprang onto the crimeware scene in December 2019, but it wasn’t long before James discovered multiple vulnerabilities in the malware’s encryption routines that allowed him to brute-force the decryption keys in a matter of hours, using nearly 100 cloud computer servers.

In an interview with KrebsOnSecurity, James said Unit 221B was wary of advertising its ability to crack Zeppelin ransomware keys because it didn’t want to tip its hand to Zeppelin’s creators, who were likely to modify their file encryption approach if they detected it was somehow being bypassed.

This is not an idle concern. There are multiple examples of ransomware groups doing just that after security researchers crowed about finding vulnerabilities in their ransomware code.

“The minute you announce you’ve got a decryptor for some ransomware, they change up the code,” James said.

But he said the Zeppelin group appears to have gradually stopped spreading its ransomware over the past year, possibly because Unit 221B’s referrals from the FBI let the firm quietly help nearly two dozen victim organizations recover without paying their extortionists.

In a blog post published today to coincide with a Black Hat talk on their discoveries, James and co-author Joel Lathrop said they were motivated to crack Zeppelin after the ransomware gang started attacking nonprofit and charity organizations.

“What motivated us the most during the leadup to our action was the targeting of homeless shelters, nonprofits and charity organizations,” the two wrote. “These senseless acts of targeting those who are unable to respond are the motivation for this research, analysis, tools, and blog post. A general Unit 221B rule of thumb around our offices is: Don’t [REDACTED] with the homeless or sick! It will simply trigger our ADHD and we will get into that hyper-focus mode that is good if you’re a good guy, but not so great if you are an ***hole.”

The researchers said their break came when they understood that while Zeppelin used three different types of encryption keys to encrypt files, they could undo the whole scheme by factoring or computing just one of them: An ephemeral RSA-512 public key that is randomly generated on each machine it infects.

“If we can recover the RSA-512 Public Key from the registry, we can crack it and get the 256-bit AES Key that encrypts the files!” they wrote. “The challenge was that they delete the [public key] once the files are fully encrypted. Memory analysis gave us about a 5-minute window after files were encrypted to retrieve this public key.”

Unit 221B ultimately built a “Live CD” version of Linux that victims could run on infected systems to extract that RSA-512 key. From there, they would load the keys into a cluster of 800 CPUs donated by hosting giant Digital Ocean that would then start cracking them. The company also used that same donated infrastructure to help victims decrypt their data using the recovered keys.
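
To make the researchers’ description concrete, here is a minimal Python sketch of the final recovery step, assuming the 512-bit RSA modulus pulled from the infected machine has already been factored into its two primes by an external tool (for example, CADO-NFS running across a donated CPU cluster). It uses textbook RSA for illustration only; the function names, public exponent, and key-wrapping layout are assumptions, not Unit 221B’s actual tooling.

```python
# Illustrative sketch only: assumes n = p * q has already been factored offline.
# Zeppelin's real key-wrapping and padding details may differ.

def rsa_private_exponent(p: int, q: int, e: int = 65537) -> int:
    """Rebuild the RSA private exponent d from the two recovered primes."""
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse (requires Python 3.8+)

def unwrap_aes_key(wrapped: bytes, p: int, q: int, e: int = 65537) -> bytes:
    """Decrypt an RSA-wrapped 256-bit AES file key (textbook RSA, no padding checks)."""
    n = p * q
    d = rsa_private_exponent(p, q, e)
    m = pow(int.from_bytes(wrapped, "big"), d, n)  # m = c^d mod n
    return m.to_bytes((m.bit_length() + 7) // 8, "big")[-32:]  # last 32 bytes = AES-256 key
```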

A typical Zeppelin ransomware note.

Jon is another grateful Zeppelin ransomware victim who was aided by Unit 221B’s decryption efforts. Like Peter, Jon asked that his last name and that of his employer be omitted from the story, but he’s in charge of IT for a mid-sized managed service provider that got hit with Zeppelin in July 2020.

The attackers that savaged Jon’s company managed to phish credentials and a multi-factor authentication token for some tools the company used to support customers, and in short order they’d seized control over the servers and backups for a healthcare provider customer.

Jon said his company was reluctant to pay the ransom, in part because it wasn’t clear from the hackers’ demands whether paying would actually yield a key that could unlock all of the affected systems, or whether that key could be trusted to do so safely.

“They want you to unlock your data with their software, but you can’t trust that,” Jon said. “You want to use your own software or someone else who’s trusted to do it.”

In August 2022, the FBI and the Cybersecurity & Infrastructure Security Agency (CISA) issued a joint warning on Zeppelin, saying the FBI had “observed instances where Zeppelin actors executed their malware multiple times within a victim’s network, resulting in the creation of different IDs or file extensions, for each instance of an attack; this results in the victim needing several unique decryption keys.”

The advisory says Zeppelin has attacked “a range of businesses and critical infrastructure organizations, including defense contractors, educational institutions, manufacturers, technology companies, and especially organizations in the healthcare and medical industries. Zeppelin actors have been known to request ransom payments in Bitcoin, with initial amounts ranging from several thousand dollars to over a million dollars.”

The FBI and CISA say the Zeppelin actors gain access to victim networks by exploiting weak Remote Desktop Protocol (RDP) credentials, exploiting SonicWall firewall vulnerabilities, and phishing campaigns. Prior to deploying Zeppelin ransomware, actors spend one to two weeks mapping or enumerating the victim network to identify data enclaves, including cloud storage and network backups, the alert notes.

Jon said he felt so lucky after connecting with James and hearing about Unit 221B’s decryption work that he toyed with the idea of buying a lottery ticket that day.

“This just doesn’t usually happen,” Jon said. “It’s 100 percent like winning the lottery.”

By the time Jon’s company got around to decrypting their data, they were forced by regulators to prove that no patient data had been exfiltrated from their systems. All told, it took his employer two months to fully recover from the attack.

“I definitely feel like I was ill-prepared for this attack,” Jon said. “One of the things I’ve learned from this is the importance of forming your core team and having those people who know what their roles and responsibilities are ahead of time. Also, trying to vet new vendors you’ve never met before and build trust relationships with them is very difficult to do when you have customers down hard now and they’re waiting on you to help them get back up.”

A more technical writeup on Unit 221B’s discoveries (cheekily titled “0XDEAD ZEPPELIN”) is available here.

How 1-Time Passcodes Became a Corporate Liability

By Brian Krebs

Phishers are enjoying remarkable success using text messages to steal remote access credentials and one-time passcodes from employees at some of the world’s largest technology companies and customer support firms. A recent spate of SMS phishing attacks from one cybercriminal group has spawned a flurry of breach disclosures from affected companies, which are all struggling to combat the same lingering security threat: The ability of scammers to interact directly with employees through their mobile devices.

In mid-June 2022, a flood of SMS phishing messages began targeting employees at commercial staffing firms that provide customer support and outsourcing to thousands of companies. The missives asked users to click a link and log in at a phishing page that mimicked their employer’s Okta authentication page. Those who submitted credentials were then prompted to provide the one-time password needed for multi-factor authentication.

The phishers behind this scheme used newly-registered domains that often included the name of the target company, and sent text messages urging employees to click on links to these domains to view information about a pending change in their work schedule.

The phishing sites leveraged a Telegram instant message bot to forward any submitted credentials in real-time, allowing the attackers to use the phished username, password and one-time code to log in as that employee at the real employer website. But because of the way the bot was configured, it was possible for security researchers to capture the information being sent by victims to the public Telegram server.
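
That configuration detail is worth dwelling on, because a bot token hard-coded into a phishing kit cuts both ways: anyone who recovers the token can read the same message stream the attackers do. The sketch below is a hypothetical reconstruction of that idea, not Group-IB’s actual method; the token value is a placeholder and the requests dependency is assumed.

```python
# Hypothetical reconstruction: a Telegram bot token recovered from a phishing kit
# lets a researcher poll the same updates the kit pushes to the attackers.
import requests  # assumes the third-party 'requests' package is installed

BOT_TOKEN = "123456:EXAMPLE-TOKEN"  # placeholder, not a real token

def pull_exfiltrated_updates(offset=None):
    """Fetch queued bot messages via the Bot API's getUpdates method."""
    resp = requests.get(
        f"https://api.telegram.org/bot{BOT_TOKEN}/getUpdates",
        params={"offset": offset, "timeout": 30},
        timeout=60,
    )
    resp.raise_for_status()
    # How far back the history reaches depends on how the bot is configured;
    # each returned update carries the text the phishing page forwarded.
    return resp.json().get("result", [])
```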

This data trove was first reported by security researchers at Singapore-based Group-IB, which dubbed the campaign “0ktapus” because the attackers targeted organizations that use identity management tools from Okta.com.

“This case is of interest because despite using low-skill methods it was able to compromise a large number of well-known organizations,” Group-IB wrote. “Furthermore, once the attackers compromised an organization they were quickly able to pivot and launch subsequent supply chain attacks, indicating that the attack was planned carefully in advance.”

It’s not clear how many of these phishing text messages were sent out, but the Telegram bot data reviewed by KrebsOnSecurity shows they generated nearly 10,000 replies over approximately two months of sporadic SMS phishing attacks targeting more than a hundred companies.

A great many responses came from those who were apparently wise to the scheme, as evidenced by the hundreds of hostile replies that included profanity or insults aimed at the phishers: The very first reply recorded in the Telegram bot data came from one such employee, who responded with the username “havefuninjail.”

Still, thousands replied with what appear to be legitimate credentials — many of them including one-time codes needed for multi-factor authentication. On July 20, the attackers turned their sights on internet infrastructure giant Cloudflare.com, and the intercepted credentials show at least three employees fell for the scam.

Image: Cloudflare.com

In a blog post earlier this month, Cloudflare said it detected the account takeovers and that no Cloudflare systems were compromised. Cloudflare said it does not rely on one-time passcodes as a second factor, so there was nothing to provide to the attackers. But Cloudflare said it wanted to call attention to the phishing attacks because they would probably work against most other companies.

“This was a sophisticated attack targeting employees and systems in such a way that we believe most organizations would be likely to be breached,” Cloudflare CEO Matthew Prince wrote. “On July 20, 2022, the Cloudflare Security team received reports of employees receiving legitimate-looking text messages pointing to what appeared to be a Cloudflare Okta login page. The messages began at 2022-07-20 22:50 UTC. Over the course of less than 1 minute, at least 76 employees received text messages on their personal and work phones. Some messages were also sent to the employees’ family members.”

On three separate occasions, the phishers targeted employees at Twilio.com, a San Francisco-based company that provides services for making and receiving text messages and phone calls. It’s unclear how many Twilio employees received the SMS phishes, but the data suggest at least four Twilio employees responded to a spate of SMS phishing attempts on July 27, Aug. 2, and Aug. 7.

On that last date, Twilio disclosed that on Aug. 4 it became aware of unauthorized access to information related to a limited number of Twilio customer accounts through a sophisticated social engineering attack designed to steal employee credentials.

“This broad based attack against our employee base succeeded in fooling some employees into providing their credentials,” Twilio said. “The attackers then used the stolen credentials to gain access to some of our internal systems, where they were able to access certain customer data.”

That “certain customer data” included information on roughly 1,900 users of the secure messaging app Signal, which relied on Twilio to provide phone number verification services. In its disclosure on the incident, Signal said that with their access to Twilio’s internal tools the attackers were able to re-register those users’ phone numbers to another device.

On Aug. 25, food delivery service DoorDash disclosed that a “sophisticated phishing attack” on a third-party vendor allowed attackers to gain access to some of DoorDash’s internal company tools. DoorDash said intruders stole information on a “small percentage” of users that have since been notified. TechCrunch reported last week that the incident was linked to the same phishing campaign that targeted Twilio.

This phishing gang apparently had great success targeting employees of all the major mobile wireless providers, but most especially T-Mobile. Between July 10 and July 16, dozens of T-Mobile employees fell for the phishing messages and provided their remote access credentials.

“Credential theft continues to be an ongoing issue in our industry as wireless providers are constantly battling bad actors that are focused on finding new ways to pursue illegal activities like this,” T-Mobile said in a statement. “Our tools and teams worked as designed to quickly identify and respond to this large-scale smishing attack earlier this year that targeted many companies. We continue to work to prevent these types of attacks and will continue to evolve and improve our approach.”

This same group saw hundreds of responses from employees at some of the largest customer support and staffing firms, including Teleperformanceusa.com, Sitel.com and Sykes.com. Teleperformance did not respond to requests for comment. KrebsOnSecurity did hear from Christopher Knauer, global chief security officer at Sitel Group, the customer support giant that recently acquired Sykes. Knauer said the attacks leveraged newly-registered domains and asked employees to approve upcoming changes to their work schedules.

Image: Group-IB.

Knauer said the attackers set up the phishing domains just minutes in advance of spamming links to those domains in phony SMS alerts to targeted employees. He said such tactics largely sidestep automated alerts generated by companies that monitor brand names for signs of new phishing domains being registered.

“They were using the domains as soon as they became available,” Knauer said. “The alerting services don’t often let you know until 24 hours after a domain has been registered.”
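
One way defenders can shrink that 24-hour blind spot is to score the age of any domain in an inbound SMS or email link at the moment the message arrives, rather than waiting on brand-monitoring alerts. The sketch below is illustrative only: it assumes the public rdap.org bootstrap service covers the domain’s TLD (coverage varies) and that a one-week threshold suits your environment.

```python
# Illustrative mitigation sketch: flag links whose domains were registered only
# days (or minutes) before the message that contains them.
from datetime import datetime, timezone
import requests  # assumes the third-party 'requests' package is installed

def domain_age_days(domain):
    """Return the domain's age in days from public RDAP data, or None if unavailable."""
    resp = requests.get(f"https://rdap.org/domain/{domain}", timeout=10)
    if resp.status_code != 200:
        return None
    for event in resp.json().get("events", []):
        if event.get("eventAction") == "registration":
            registered = datetime.fromisoformat(event["eventDate"].replace("Z", "+00:00"))
            return (datetime.now(timezone.utc) - registered).total_seconds() / 86400
    return None

def looks_freshly_registered(domain, max_age_days=7.0):
    """True if the domain is young enough to warrant blocking or extra scrutiny."""
    age = domain_age_days(domain)
    return age is not None and age < max_age_days
```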

On July 28 and again on Aug. 7, several employees at email delivery firm Mailchimp provided their remote access credentials to this phishing group. According to an Aug. 12 blog post, the attackers used their access to Mailchimp employee accounts to steal data from 214 customers involved in cryptocurrency and finance.

On Aug. 15, the hosting company DigitalOcean published a blog post saying it had severed ties with MailChimp after its Mailchimp account was compromised. DigitalOcean said the MailChimp incident resulted in a “very small number” of DigitalOcean customers experiencing attempted compromises of their accounts through password resets.

According to interviews with multiple companies hit by the group, the attackers are mostly interested in stealing access to cryptocurrency, and to companies that manage communications with people interested in cryptocurrency investing. In an Aug. 3 blog post from email and SMS marketing firm Klaviyo.com, the company’s CEO recounted how the phishers gained access to the company’s internal tools, and used that to download information on 38 crypto-related accounts.

A flow chart of the attacks by the SMS phishing group known as 0ktapus and ScatterSwine. Image: Amitai Cohen for Wiz.io. twitter.com/amitaico.

The ubiquity of mobile phones became a lifeline for many companies trying to manage their remote employees throughout the Coronavirus pandemic. But these same mobile devices are fast becoming a liability for organizations that use them for phishable forms of multi-factor authentication, such as one-time codes generated by a mobile app or delivered via SMS.

As we can see from the success of this phishing group, this type of data extraction is now being massively automated, and employee authentication compromises can quickly lead to security and privacy risks for the employer’s partners or for anyone in their supply chain.

Unfortunately, a great many companies still rely on SMS for employee multi-factor authentication. According to a report this year from Okta, 47 percent of workforce customers deploy SMS and voice factors for multi-factor authentication. That’s down from 53 percent that did so in 2018, Okta found.

Some companies (like Knauer’s Sitel) have taken to requiring that all remote access to internal networks be managed through work-issued laptops and/or mobile devices, which are loaded with custom profiles that can’t be accessed through other devices.

Others are moving away from SMS and one-time code apps and toward requiring employees to use physical FIDO multi-factor authentication devices such as security keys, which can neutralize phishing attacks because any stolen credentials can’t be used unless the phishers also have physical access to the user’s security key or mobile device.

This came in handy for Twitter, which announced last year that it was moving all of its employees to using security keys, and/or biometric authentication via their mobile device. The phishers’ Telegram bot reported that on June 16, 2022, five employees at Twitter gave away their work credentials. In response to questions from KrebsOnSecurity, Twitter confirmed several employees were relieved of their employee usernames and passwords, but that its security key requirement prevented the phishers from abusing that information.

Twitter accelerated its plans to improve employee authentication following the July 2020 security incident, wherein several employees were phished and relieved of credentials for Twitter’s internal tools. In that intrusion, the attackers used Twitter’s tools to hijack accounts for some of the world’s most recognizable public figures, executives and celebrities — forcing those accounts to tweet out links to bitcoin scams.

“Security keys can differentiate legitimate sites from malicious ones and block phishing attempts that SMS 2FA or one-time password (OTP) verification codes would not,” Twitter said in an Oct. 2021 post about the change. “To deploy security keys internally at Twitter, we migrated from a variety of phishable 2FA methods to using security keys as our only supported 2FA method on internal systems.”
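
The “differentiate legitimate sites from malicious ones” part comes down to origin binding: the browser, not the user, reports which web origin requested the login assertion, and the server rejects anything that does not match. Below is a minimal server-side sketch of that one check, offered as a generic WebAuthn illustration rather than Twitter’s implementation; the origin value is a hypothetical example, and a real relying party would also verify the challenge, signature, and authenticator data.

```python
# Generic illustration of WebAuthn's origin check; production code should use a
# maintained WebAuthn library and verify far more than this single field.
import base64
import json

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying-party origin

def assertion_origin_ok(client_data_json_b64: str) -> bool:
    """Reject assertions minted on any page other than the legitimate login origin."""
    padded = client_data_json_b64 + "=" * (-len(client_data_json_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    # The browser fills in 'origin' itself, so a look-alike phishing domain can
    # never produce a matching value, and credentials phished there are useless.
    return (
        client_data.get("type") == "webauthn.get"
        and client_data.get("origin") == EXPECTED_ORIGIN
    )
```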

Update, 6:02 p.m. ET: Clarified that Cloudflare does not rely on TOTP (one-time multi-factor authentication codes) as a second factor for employee authentication.

Sounding the Alarm on Emergency Alert System Flaws

By Brian Krebs

The Department of Homeland Security (DHS) is urging states and localities to beef up security around proprietary devices that connect to the Emergency Alert System — a national public warning system used to deliver important emergency information, such as severe weather and AMBER alerts. The DHS warning came in advance of a workshop to be held this weekend at the DEFCON security conference in Las Vegas, where a security researcher is slated to demonstrate multiple weaknesses in the nationwide alert system.

A Digital Alert Systems EAS encoder/decoder that Pyle said he acquired off eBay in 2019. It had the username and password for the system printed on the machine.

The DHS warning was prompted by security researcher Ken Pyle, a partner at security firm Cybir. Pyle said he started acquiring old EAS equipment off of eBay in 2019, and that he quickly identified a number of serious security vulnerabilities in a device that is broadly used by states and localities to encode and decode EAS alert signals.

“I found all kinds of problems back then, and reported it to the DHS, FBI and the manufacturer,” Pyle said in an interview with KrebsOnSecurity. “But nothing ever happened. I decided I wasn’t going to tell anyone about it yet because I wanted to give people time to fix it.”

Pyle said he took up the research again in earnest after an angry mob stormed the U.S. Capitol on Jan. 6, 2021.

“I was sitting there thinking, ‘Holy shit, someone could start a civil war with this thing,’” Pyle recalled. “I went back to see if this was still a problem, and it turns out it’s still a very big problem. So I decided that unless someone actually makes this public and talks about it, clearly nothing is going to be done about it.”

The EAS encoder/decoder devices Pyle acquired were made by Lyndonville, NY-based Digital Alert Systems (formerly Monroe Electronics, Inc.), which issued a security advisory this month saying it released patches in 2019 to fix the flaws reported by Pyle, but that some customers are still running outdated versions of the device’s firmware. That may be because the patches were included in version 4 of the firmware for the EAS devices, and many older models apparently do not support the new software.

“The vulnerabilities identified present a potentially serious risk, and we believe both were addressed in software updates issued beginning Oct 2019,” Digital Alert Systems said in a written statement. “We also provided attribution for the researcher’s responsible disclosure, allowing us to rectify the matters before making any public statements. We are aware that some users have not taken corrective actions and updated their software and should immediately take action to update the latest software version to ensure they are not at risk. Anything lower than version 4.1 should be updated immediately. On July 20, 2022, the researcher referred to other potential issues, and we trust the researcher will provide more detail. We will evaluate and work to issue any necessary mitigations as quickly as possible.”

But Pyle said a great many EAS stakeholders are still ignoring basic advice from the manufacturer, such as changing default passwords and placing the devices behind a firewall, not directly exposing them to the Internet, and restricting access only to trusted hosts and networks.

Pyle, in a selfie that is heavily redacted because the EAS device behind him had its user credentials printed on the lid.

Pyle said the biggest threat to the security of the EAS is that an attacker would only need to compromise a single EAS station to send out alerts locally that can be picked up by other EAS systems and retransmitted across the nation.

“The process for alerts is automated in most cases, hence, obtaining access to a device will allow you to pivot around,” he said. “There’s no centralized control of the EAS because these devices are designed such that someone locally can issue an alert, but there’s no central control over whether I am the one person who can send or whatever. If you are a local operator, you can send out nationwide alerts. That’s how easy it is to do this.”

One of the Digital Alert Systems devices Pyle sourced from an electronics recycler earlier this year was non-functioning, but whoever discarded it neglected to wipe the hard drive embedded in the machine. Pyle soon discovered the device contained the private cryptographic keys and other credentials needed to send alerts through Comcast, the nation’s largest cable company.

“I can issue and create my own alert here, which has all the valid checks or whatever for being a real alert station,” Pyle said in an interview earlier this month. “I can create a message that will start propagating through the EAS.”

Comcast told KrebsOnSecurity that “a third-party device used to deliver EAS alerts was lost in transit by a trusted shipping provider between two Comcast locations and subsequently obtained by a cybersecurity researcher.

“We’ve conducted a thorough investigation of this matter and have determined that no customer data, and no sensitive Comcast data, were compromised,” Comcast spokesperson David McGuire said.

The company said it also confirmed that the information included on the device can no longer be used to send false messages to Comcast customers or used to compromise devices within Comcast’s network, including EAS devices.

“We are taking steps to further ensure secure transfer of such devices going forward,” McGuire said. “Separately, we have conducted a thorough audit of all EAS devices on our network and confirmed that they are updated with currently available patches and are therefore not vulnerable to recently reported security issues. We’re grateful for the responsible disclosure and to the security research community for continuing to engage and share information with our teams to make our products and technologies ever more secure. Mr. Pyle informed us promptly of his research and worked with us as we took steps to validate his findings and ensure the security of our systems.”

The user interface for an EAS device.

Unauthorized EAS broadcast alerts have happened enough that there is a chronicle of EAS compromises over at fandom.com. Thankfully, most of these incidents have involved fairly obvious hoaxes.

According to the EAS wiki, in February 2013, hackers broke into the EAS networks in Great Falls, Mt. and Marquette, Mich. to broadcast an alert that zombies had risen from their graves in several counties. In Feb. 2017, an EAS station in Indiana also was hacked, with the intruders playing the same “zombies and dead bodies” audio from the 2013 incidents.

“On February 20 and February 21, 2020, Wave Broadband’s EASyCAP equipment was hacked due to the equipment’s default password not being changed,” the Wiki states. “Four alerts were broadcasted, two of which consisted of a Radiological Hazard Warning and a Required Monthly Test playing parts of the Hip Hop song Hot by artist Young Thug.”

In January 2018, Hawaii sent out an alert to cell phones, televisions and radios, warning everyone in the state that a missile was headed their way. It took 38 minutes for Hawaii to let people know the alert was a misfire, and that a draft alert was inadvertently sent. The news video clip below about the 2018 event in Hawaii does a good job of walking through how the EAS works.

What Counts as “Good Faith Security Research?”

By Brian Krebs

The U.S. Department of Justice (DOJ) recently revised its policy on charging violations of the Computer Fraud and Abuse Act (CFAA), a 1986 law that remains the primary statute by which federal prosecutors pursue cybercrime cases. The new guidelines state that prosecutors should avoid charging security researchers who operate in “good faith” when finding and reporting vulnerabilities. But legal experts continue to advise researchers to proceed with caution, noting the new guidelines can’t be used as a defense in court, nor are they any kind of shield against civil prosecution.

In a statement about the changes, Deputy Attorney General Lisa O. Monaco said the DOJ “has never been interested in prosecuting good-faith computer security research as a crime,” and that the new guidelines “promote cybersecurity by providing clarity for good-faith security researchers who root out vulnerabilities for the common good.”

What constitutes “good faith security research?” The DOJ’s new policy (PDF) borrows language from a Library of Congress rulemaking (PDF) on the Digital Millennium Copyright Act (DMCA), a similarly controversial law that criminalizes production and dissemination of technologies or services designed to circumvent measures that control access to copyrighted works. According to the government, good faith security research means:

“…accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.”

“Security research not conducted in good faith — for example, for the purpose of discovering security holes in devices, machines, or services in order to extort the owners of such devices, machines, or services — might be called ‘research,’ but is not in good faith.”

The new DOJ policy comes in response to a Supreme Court ruling last year in Van Buren v. United States (PDF), a case involving a former police sergeant in Florida who was convicted of CFAA violations after a friend paid him to use police resources to look up information on a private citizen.

But in an opinion authored by Justice Amy Coney Barrett, the Supreme Court held that the CFAA does not apply to a person who obtains electronic information that they are otherwise authorized to access and then misuses that information.

Orin Kerr, a law professor at the University of California, Berkeley, said the DOJ’s updated policy was expected given the Supreme Court ruling in the Van Buren case. Kerr noted that while the new policy says one measure of “good faith” involves researchers taking steps to prevent harm to third parties, what exactly those steps might constitute is another matter.

“The DOJ is making clear they’re not going to prosecute good faith security researchers, but be really careful before you rely on that,” Kerr said. “First, because you could still get sued [civilly, by the party to whom the vulnerability is being reported], but also the line as to what is legitimate security research and what isn’t is still murky.”

Kerr said the new policy also gives CFAA defendants no additional cause for action.

“A lawyer for the defendant can make the pitch that something is good faith security research, but it’s not enforceable,” Kerr said. “Meaning, if the DOJ does bring a CFAA charge, the defendant can’t move to dismiss it on the grounds that it’s good faith security research.”

Kerr added that he can’t think of a CFAA case where this policy would have made a substantive difference.

“I don’t think the DOJ is giving up much, but there’s a lot of hacking that could be covered under good faith security research that they’re saying they won’t prosecute, and it will be interesting to see what happens there,” he said.

The new policy also clarifies other types of potential CFAA violations that are not to be charged. Most of these involve violations of a technology provider’s terms of service, and here the DOJ says such violations “are not themselves sufficient to warrant federal criminal charges.” Some examples include:

-Embellishing an online dating profile contrary to the terms of service of the dating website;
-Creating fictional accounts on hiring, housing, or rental websites;
-Using a pseudonym on a social networking site that prohibits them;
-Checking sports scores or paying bills at work.

ANALYSIS

Kerr’s warning about the dangers that security researchers face from civil prosecution is well-founded. KrebsOnSecurity regularly hears from security researchers seeking advice on how to handle reporting a security vulnerability or data exposure. In most of these cases, the researcher isn’t worried that the government is going to come after them: It’s that they’re going to get sued by the company responsible for the security vulnerability or data leak.

Often these conversations center on the researcher’s desire to weigh the rewards of gaining recognition for their discoveries against the risk of being targeted with costly civil lawsuits. And almost just as often, the source of the researcher’s unease is that they recognize they might have taken their discovery just a tad too far.

Here’s a common example: A researcher finds a vulnerability in a website that allows them to individually retrieve every customer record in a database. But instead of simply polling a few records that could be used as a proof-of-concept and shared with the vulnerable website, the researcher decides to download every single file on the server.
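
One concrete way to stay on the right side of that line is to build the cap and the throttle into the proof-of-concept itself, so restraint is enforced by the code rather than by willpower. The sketch below is purely hypothetical (the endpoint, parameters, and limits are invented) and is not a suggestion that probing any particular system is authorized or lawful.

```python
# Hypothetical sketch of a self-limiting proof-of-concept: sample a handful of
# records slowly, record only what is needed as evidence, and stop.
import time
import requests  # assumes the third-party 'requests' package is installed

BASE_URL = "https://vulnerable.example.com/api/customers"  # invented endpoint
MAX_SAMPLES = 3       # enough to demonstrate the flaw, nothing more
DELAY_SECONDS = 2.0   # avoid load that could be mistaken for an attack

def minimal_poc(record_ids):
    evidence = []
    for record_id in record_ids[:MAX_SAMPLES]:
        resp = requests.get(f"{BASE_URL}/{record_id}", timeout=10)
        # Log only the status code as evidence; do not retain the records themselves.
        evidence.append({"id": record_id, "status": resp.status_code})
        time.sleep(DELAY_SECONDS)
    return evidence
```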

Not infrequently, there is also concern because at some point the researcher suspected that their automated activities might have actually caused stability or uptime issues with certain services they were testing. Here, the researcher is usually concerned about approaching the vulnerable website or vendor because they worry their activities may already have been identified internally as some sort of external cyberattack.

What do I take away from these conversations? Some of the most trusted and feared security researchers in the industry today gained that esteem not by constantly taking things to extremes and skirting the law, but rather by publicly exercising restraint in the use of their powers and knowledge — and by being effective at communicating their findings in a way that maximizes the help and minimizes the potential harm.

If you believe you’ve discovered a security vulnerability or data exposure, try to consider first how you might defend your actions to the vulnerable website or vendor before embarking on any automated or semi-automated activity that the organization might reasonably misconstrue as a cyberattack. In other words, try as best you can to minimize the potential harm to the vulnerable site or vendor in question, and don’t go further than you need to prove your point.
