The Fake Browser Update Scam Gets a Makeover

By Brian Krebs

One of the oldest malware tricks in the book — hacked websites claiming visitors need to update their Web browser before they can view any content — has roared back to life in the past few months. New research shows the attackers behind one such scheme have developed an ingenious way of keeping their malware from being taken down by security experts or law enforcement: By hosting the malicious files on a decentralized, anonymous cryptocurrency blockchain.

A fake warning that the Chrome browser needs to be updated, showing several devices open to Google and an enticing blue button to click in the middle.

In August 2023, security researcher Randy McEoin blogged about a scam he dubbed ClearFake, which uses hacked WordPress sites to serve visitors a page claiming they need to update their browser before they can view the content.

The fake browser alerts are specific to the browser you’re using, so if you’re surfing the Web with Chrome, for example, you’ll get a Chrome update prompt. Those who are fooled into clicking the update button will have a malicious file dropped on their system that tries to install an information stealing trojan.

Earlier this month, researchers at the Tel Aviv-based security firm Guardio said they tracked an updated version of the ClearFake scam that included an important evolution. Previously, the group had stored its malicious update files on Cloudflare, Guardio said.

But when Cloudflare blocked those accounts the attackers began storing their malicious files as cryptocurrency transactions in the Binance Smart Chain (BSC), a technology designed to run decentralized apps and “smart contracts,” or coded agreements that execute actions automatically when certain conditions are met.

Nati Tal, head of security at Guardio Labs, the research unit at Guardio, said the malicious scripts stitched into hacked WordPress sites will create a new smart contract on the BSC Blockchain, starting with a unique, attacker-controlled blockchain address and a set of instructions that defines the contract’s functions and structure. When that contract is queried by a compromised website, it will return an obfuscated and malicious payload.

“These contracts offer innovative ways to build applications and processes,” Tal wrote along with his Guardio colleague Oleg Zaytsev. “Due to the publicly accessible and unchangeable nature of the blockchain, code can be hosted ‘on-chain’ without the ability for a takedown.”

Tal said hosting malicious files on the Binance Smart Chain is ideal for attackers because retrieving the malicious contract is a cost-free operation that was originally designed for the purpose of debugging contract execution issues without any real-world impact.

“So you get a free, untracked, and robust way to get your data (the malicious payload) without leaving traces,” Tal said.
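The retrieval step Tal describes maps to a standard read-only blockchain call (`eth_call` in the JSON-RPC API that BSC shares with Ethereum), which is executed locally by a node and never mined, so it costs nothing and leaves no on-chain transaction. A minimal sketch of how such a request is constructed, using placeholder values for the contract address and function selector (the real ones would come from the injected script an analyst is examining):

```python
import json

# Public BSC JSON-RPC endpoint (illustrative; any full node works).
BSC_RPC_URL = "https://bsc-dataseed.binance.org/"

def build_eth_call(contract_address: str, selector: str) -> dict:
    """Build a read-only eth_call JSON-RPC request.

    eth_call runs against the node's local state and is never mined,
    which is why fetching data this way is free, fast, and leaves no
    trace on the chain itself.
    """
    return {
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [
            {"to": contract_address, "data": selector},
            "latest",  # query against the latest block state
        ],
        "id": 1,
    }

# Placeholder address and 4-byte function selector, purely for illustration.
request_body = build_eth_call(
    "0x0000000000000000000000000000000000000000",
    "0x20965255",
)
print(json.dumps(request_body, indent=2))
```

Posting this body to the RPC endpoint would return whatever the contract's function emits; in the ClearFake case, that return value is the obfuscated payload.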

Attacker-controlled BSC addresses — from funding, contract creation, and ongoing code updates. Image: Guardio

In response to questions from KrebsOnSecurity, the BNB Smart Chain (BSC) said its team is aware of the malware abusing its blockchain, and is actively addressing the issue. The company said all addresses associated with the spread of the malware have been blacklisted, and that its technicians had developed a model to detect future smart contracts that use similar methods to host malicious scripts.

“This model is designed to proactively identify and mitigate potential threats before they can cause harm,” BNB Smart Chain wrote. “The team is committed to ongoing monitoring of addresses that are involved in spreading malware scripts on the BSC. To enhance their efforts, the tech team is working on linking identified addresses that spread malicious scripts to centralized KYC [Know Your Customer] information, when possible.”

Guardio says the crooks behind the BSC malware scheme are using the same malicious code as the attackers that McEoin wrote about in August, and are likely the same group. But a report published today by email security firm Proofpoint says the company is currently tracking at least four distinct threat actor groups that use fake browser updates to distribute malware.

Proofpoint notes that the core group behind the fake browser update scheme has been using this technique to spread malware for the past five years, primarily because the approach still works well.

“Fake browser update lures are effective because threat actors are using an end-user’s security training against them,” Proofpoint’s Dusty Miller wrote. “In security awareness training, users are told to only accept updates or click on links from known and trusted sites, or individuals, and to verify sites are legitimate. The fake browser updates abuse this training because they compromise trusted sites and use JavaScript requests to quietly make checks in the background and overwrite the existing website with a browser update lure. To an end user, it still appears to be the same website they were intending to visit and is now asking them to update their browser.”
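Because the lure is delivered by script injected into an otherwise legitimate page, one basic defensive check a site owner can run is scanning their own rendered HTML for external script tags pointing at hosts they don't recognize. A deliberately simple sketch (real injected loaders are often obfuscated inline, so this only catches the crudest cases; the allowlisted hostnames are hypothetical):

```python
import re
from urllib.parse import urlparse

# Hosts this (hypothetical) site legitimately loads scripts from.
ALLOWED_HOSTS = {"example-wordpress-site.com", "cdn.example-wordpress-site.com"}

# Matches <script ... src="..."> tags in raw HTML.
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def suspicious_script_sources(html: str) -> list[str]:
    """Return external script URLs whose host is not on the allowlist."""
    hits = []
    for url in SCRIPT_SRC.findall(html):
        host = urlparse(url).netloc
        if host and host not in ALLOWED_HOSTS:
            hits.append(url)
    return hits

page = '<script src="https://evil.example/clearfake.js"></script>'
print(suspicious_script_sources(page))  # → ['https://evil.example/clearfake.js']
```

Running a check like this against pages fetched from the live site (rather than the files on disk) matters here, since the injection only appears in what visitors actually receive.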

More than a decade ago, this site published Krebs’s Three Rules for Online Safety, of which Rule #1 was, “If you didn’t go looking for it, don’t install it.” It’s nice to know that this technology-agnostic approach to online safety remains just as relevant today.

Feds Take Down 13 More DDoS-for-Hire Services

By Brian Krebs

The U.S. Federal Bureau of Investigation (FBI) this week seized 13 domain names connected to “booter” services that let paying customers launch crippling distributed denial-of-service (DDoS) attacks. Ten of the domains are reincarnations of DDoS-for-hire services the FBI seized in December 2022, when it charged six U.S. men with computer crimes for allegedly operating booters.

Booter services are advertised through a variety of methods, including Dark Web forums, chat platforms and even youtube.com. They accept payment via PayPal, Google Wallet, and/or cryptocurrencies, and subscriptions can range in price from just a few dollars to several hundred per month. The services are generally priced according to the volume of traffic to be hurled at the target, the duration of each attack, and the number of concurrent attacks allowed.

The websites that saw their homepages replaced with seizure notices from the FBI this week include booter services like cyberstress[.]org and exoticbooter[.]com, which the feds say were used to launch millions of attacks against millions of victims.

“School districts, universities, financial institutions and government websites are among the victims who have been targeted in attacks launched by booter services,” federal prosecutors in Los Angeles said in a statement.

Purveyors of booters or “stressers” claim they are not responsible for how customers use their services, and that they aren’t breaking the law because — like most security tools — these services can be used for good or bad purposes. Most booter sites employ wordy “terms of use” agreements that require customers to agree they will only stress-test their own networks — and that they won’t use the service to attack others.

But the DOJ says these disclaimers usually ignore the fact that most booter services are heavily reliant on constantly scanning the Internet to commandeer misconfigured devices that are critical for maximizing the size and impact of DDoS attacks. What’s more, none of the services seized by the government required users to demonstrate that they own the Internet addresses being stress-tested, something a legitimate testing service would insist upon.
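The ownership check the DOJ describes as the dividing line is straightforward to implement: before any traffic is sent, the service issues a token the customer must publish somewhere only the target's owner controls (a DNS TXT record, or a well-known URL), then verifies it. A sketch under those assumptions, with all names hypothetical:

```python
import hashlib
import hmac
import secrets

# Per-deployment secret; in practice this would be persisted, not regenerated.
SERVER_SECRET = secrets.token_bytes(32)

def issue_token(customer_id: str, target: str) -> str:
    """Issue a token tied to this customer and this specific target.

    The customer must publish it at the target, e.g. in a DNS TXT record
    or at a path like /.well-known/stress-test-token.
    """
    msg = f"{customer_id}:{target}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()

def verify_token(customer_id: str, target: str, published: str) -> bool:
    """Re-derive the expected token and compare in constant time."""
    expected = issue_token(customer_id, target)
    return hmac.compare_digest(expected, published)

tok = issue_token("cust-42", "203.0.113.7")
print(verify_token("cust-42", "203.0.113.7", tok))   # True
print(verify_token("cust-42", "198.51.100.1", tok))  # False: token is bound to one target
```

Binding the token to both the customer and the target address prevents a token issued for one network from being reused to "authorize" attacks on another — which is exactly the check none of the seized services performed.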

This is the third in a series of U.S. and international law enforcement actions targeting booter services. In December 2022, the feds seized four dozen booter domains and charged six U.S. men with computer crimes related to their alleged ownership of the popular DDoS-for-hire services. In December 2018, the feds targeted 15 booter sites, and three booter store defendants who later pleaded guilty.

While the FBI’s repeated seizing of booter domains may seem like an endless game of virtual Whac-a-Mole, continuously taking these services offline imposes high enough costs for the operators that some of them will quit the business altogether, says Richard Clayton, director of Cambridge University’s Cybercrime Centre.

In 2020, Clayton and others published “Cybercrime is Mostly Boring,” an academic study on the quality and types of work needed to build, maintain and defend illicit enterprises that make up a large portion of the cybercrime-as-a-service market. The study found that operating a booter service effectively requires a mind-numbing amount of constant, tedious work that tends to produce high burnout rates for booter service operators — even when the service is operating efficiently and profitably.

For example, running an effective booter service requires a substantial amount of administrative work and maintenance, much of which involves constantly scanning for, commandeering and managing large collections of remote systems that can be used to amplify online attacks, Clayton said. On top of that, building brand recognition and customer loyalty takes time.

“If you’re running a booter and someone keeps taking your domain or hosting away, you have to then go through doing the same boring work all over again,” Clayton told KrebsOnSecurity. “One of the guys the FBI arrested in December [2022] spent six months moaning that he lost his servers, and could people please lend him some money to get it started again.”

In a statement released Wednesday, prosecutors in Los Angeles said four of the six men charged last year for running booter services have since pleaded guilty. However, at least one of the defendants from the 2022 booter bust-up — John M. Dobbs, 32, of Honolulu, HI — has pleaded not guilty and is signaling he intends to take his case to trial.

The FBI seizure notice that replaced the homepages of several booter services this week.

Dobbs is a computer science graduate student who for the past decade openly ran IPStresser[.]com, a popular and powerful attack-for-hire service that he registered with the state of Hawaii using his real name and address. Likewise, the domain was registered in Dobbs’s name and hometown in Pennsylvania. Prosecutors say Dobbs’ service attracted more than two million registered users, and was responsible for launching a staggering 30 million distinct DDoS attacks.

Many accused stresser site operators have pleaded guilty over the years after being hit with federal criminal charges. But the government’s core claim — that operating a booter site is a violation of U.S. computer crime laws — wasn’t properly tested in the courts until September 2021.

That was when a jury handed down a guilty verdict against Matthew Gatrel, a then 32-year-old St. Charles, Ill. man charged in the government’s first 2018 mass booter bust-up. Despite admitting to FBI agents that he ran two booter services (and turning over plenty of incriminating evidence in the process), Gatrel opted to take his case to trial, defended the entire time by court-appointed attorneys.

Gatrel was convicted on all three charges of violating the Computer Fraud and Abuse Act, including conspiracy to commit unauthorized impairment of a protected computer, conspiracy to commit wire fraud, and unauthorized impairment of a protected computer. He was sentenced to two years in prison.

A copy of the FBI’s booter seizure warrant is here (PDF). According to the DOJ, the defendants who pleaded guilty to operating booter sites include:

Jeremiah Sam Evans Miller, aka “John The Dev,” 23, of San Antonio, Texas, who pleaded guilty on April 6 to conspiracy and violating the computer fraud and abuse act related to the operation of a booter service named RoyalStresser[.]com (formerly known as Supremesecurityteam[.]com);

Angel Manuel Colon Jr., aka “Anonghost720” and “Anonghost1337,” 37, of Belleview, Florida, who pleaded guilty on February 13 to conspiracy and violating the computer fraud and abuse act related to the operation of a booter service named SecurityTeam[.]io;

Shamar Shattock, 19, of Margate, Florida, who pleaded guilty on March 22 to conspiracy to violate the computer fraud and abuse act related to the operation of a booter service known as Astrostress[.]com;

Cory Anthony Palmer, 23, of Lauderhill, Florida, who pleaded guilty on February 16 to conspiracy to violate the computer fraud and abuse act related to the operation of a booter service known as Booter[.]sx.

All four defendants are scheduled to be sentenced this summer.

The booter domains seized by the FBI this week include:

cyberstress[.]org
exoticbooter[.]com
layerstress[.]net
orbitalstress[.]xyz
redstresser[.]io
silentstress[.]wtf
sunstresser[.]net
silent[.]to
mythicalstress[.]net
dreams-stresser[.]org
stresserbest[.]io
stresserus[.]io
quantum-stress[.]org
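Domains in threat reporting like the list above are conventionally "defanged" with `[.]` so they can't be clicked or auto-linked. A small helper for converting between the two forms, useful when loading indicators like these into a blocklist:

```python
def refang(indicator: str) -> str:
    """Convert a defanged indicator (cyberstress[.]org) back to a
    normal domain for machine use, e.g. in a DNS blocklist."""
    return indicator.replace("[.]", ".").replace("hxxp", "http")

def defang(domain: str) -> str:
    """Defang a domain so it cannot be clicked or auto-linked."""
    return domain.replace(".", "[.]")

seized = ["cyberstress[.]org", "exoticbooter[.]com", "silent[.]to"]
print([refang(d) for d in seized])
# → ['cyberstress.org', 'exoticbooter.com', 'silent.to']
```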

Glut of Fake LinkedIn Profiles Pits HR Against the Bots

By Brian Krebs

A recent proliferation of phony executive profiles on LinkedIn is creating something of an identity crisis for the business networking site, and for companies that rely on it to hire and screen prospective employees. The fabricated LinkedIn identities — which pair AI-generated profile photos with text lifted from legitimate accounts — are creating major headaches for corporate HR departments and for those managing invite-only LinkedIn groups.

Some of the fake profiles flagged by the co-administrator of a popular sustainability group on LinkedIn.

Last week, KrebsOnSecurity examined a flood of inauthentic LinkedIn profiles all claiming Chief Information Security Officer (CISO) roles at various Fortune 500 companies, including Biogen, Chevron, ExxonMobil, and Hewlett Packard.

Since then, the response from LinkedIn users and readers has made clear that these phony profiles are showing up en masse for virtually all executive roles — but particularly for jobs and industries that are adjacent to recent global events and news trends.

Hamish Taylor runs the Sustainability Professionals group on LinkedIn, which has more than 300,000 members. Together with the group’s co-owner, Taylor said they’ve blocked more than 12,700 suspected fake profiles so far this year, including dozens of recent accounts that Taylor describes as “cynical attempts to exploit Humanitarian Relief and Crisis Relief experts.”

“We receive over 500 fake profile requests to join on a weekly basis,” Taylor said. “It’s hit like hell since about January of this year. Prior to that we did not get the swarms of fakes that we now experience.”

The opening slide for a plea by Taylor’s group to LinkedIn.

Taylor recently posted an entry on LinkedIn titled, “The Fake ID Crisis on LinkedIn,” which lampooned the “60 Least Wanted ‘Crisis Relief Experts’” — fake profiles that claimed to be experts in disaster recovery efforts in the wake of recent hurricanes. The images above and below show just one such swarm of profiles the group flagged as inauthentic. Virtually all of these profiles were removed from LinkedIn after KrebsOnSecurity tweeted about them last week.

Another “swarm” of LinkedIn bot accounts flagged by Taylor’s group.

Mark Miller is the owner of the DevOps group on LinkedIn, and says he deals with fake profiles on a daily basis — often hundreds per day. What Taylor called “swarms” of fake accounts Miller described instead as “waves” of incoming requests from phony accounts.

“When a bot tries to infiltrate the group, it does so in waves,” Miller said. “We’ll see 20-30 requests come in with the same type of information in the profiles.”

After screenshotting the waves of suspected fake profile requests, Miller started sending the images to LinkedIn’s abuse teams, which told him they would review his request but that he may never be notified of any action taken.

Some of the bot profiles identified by Mark Miller that were seeking access to his DevOps LinkedIn group. Miller said these profiles are all listed in the order they appeared.

Miller said that after months of complaining and sharing fake profile information with LinkedIn, the social media network appeared to do something which caused the volume of group membership requests from phony accounts to drop precipitously.

“I wrote our LinkedIn rep and said we were considering closing the group down, the bots were so bad,” Miller said. “I said, ‘You guys should be doing something on the backend to block this.’”

Jason Lathrop is vice president of technology and operations at ISOutsource, a Seattle-based consulting firm with roughly 100 employees. Like Miller, Lathrop’s experience in fighting bot profiles on LinkedIn suggests the social networking giant will eventually respond to complaints about inauthentic accounts. That is, if affected users complain loudly enough (posting about it publicly on LinkedIn seems to help).

Lathrop said that about two months ago his employer noticed waves of new followers, and identified more than 3,000 followers that all shared various elements, such as profile photos or text descriptions.

“Then I noticed that they all claim to work for us at some random title within the organization,” Lathrop said in an interview with KrebsOnSecurity. “When we complained to LinkedIn, they’d tell us these profiles didn’t violate their community guidelines. But like heck they don’t! These people don’t exist, and they’re claiming they work for us!”

Lathrop said that after his company’s third complaint, a LinkedIn representative responded by asking ISOutsource to send a spreadsheet listing every legitimate employee in the company, and their corresponding profile links.

Not long after that, the phony profiles that were not on the company’s list were deleted from LinkedIn. Lathrop said he’s still not sure how the company will get legitimate new hires approved to list ISOutsource as their employer on LinkedIn going forward.

It remains unclear why LinkedIn has been flooded with so many fake profiles lately, or how the phony profile photos are sourced. Random testing of the profile photos shows they resemble but do not match other photos posted online. Several readers pointed out one likely source — the website thispersondoesnotexist.com, which makes using artificial intelligence to create unique headshots a point-and-click exercise.

Cybersecurity firm Mandiant (recently acquired by Google) told Bloomberg that hackers working for the North Korean government have been copying resumes and profiles from leading job listing platforms LinkedIn and Indeed, as part of an elaborate scheme to land jobs at cryptocurrency firms.

Fake profiles also may be tied to so-called “pig butchering” scams, wherein people are lured by flirtatious strangers online into investing in cryptocurrency trading platforms that eventually seize any funds when victims try to cash out.

In addition, identity thieves have been known to masquerade on LinkedIn as job recruiters, collecting personal and financial information from people who fall for employment scams.

But Taylor, the Sustainability Professionals group administrator, said the bots he’s tracked strangely don’t respond to messages, nor do they appear to try to post content.

“Clearly they are not monitored,” Taylor assessed. “Or they’re just created and then left to fester.”

This experience was shared by the DevOps group admin Miller, who said he’s also tried baiting the phony profiles with messages referencing their fakeness. Miller says he’s worried someone is creating a massive social network of bots for some future attack in which the automated accounts may be used to amplify false information online, or at least muddle the truth.

“It’s almost like someone is setting up a huge bot network so that when there’s a big message that needs to go out they can just mass post with all these fake profiles,” Miller said.

In last week’s story on this topic, I suggested LinkedIn could take one simple step that would make it far easier for people to make informed decisions about whether to trust a given profile: Add a “created on” date for every profile. Twitter does this, and it’s enormously helpful for filtering out a great deal of noise and unwanted communications.

Many of our readers on Twitter said LinkedIn needs to give employers more tools — perhaps some kind of application programming interface (API) — that would allow them to quickly remove profiles that falsely claim to be employed at their organizations.

Another reader suggested LinkedIn also could experiment with offering something akin to Twitter’s verified mark to users who chose to validate that they can respond to email at the domain associated with their stated current employer.
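The core of that reader's suggestion is a two-part check: prove the user can receive mail at a given address, then confirm that address's domain actually belongs to the stated employer. The mail round-trip is standard token-in-an-email; the domain comparison is the part worth getting right, since naive string matching is easy to spoof. A sketch with hypothetical names:

```python
def email_matches_employer(email: str, employer_domain: str) -> bool:
    """True if the mailbox domain is the employer's domain or a subdomain.

    Using an exact-match-or-dot-prefix rule (rather than a bare endswith)
    blocks lookalikes such as notexample.com or example.com.evil.net.
    """
    domain = email.rsplit("@", 1)[-1].lower()
    employer_domain = employer_domain.lower()
    return domain == employer_domain or domain.endswith("." + employer_domain)

print(email_matches_employer("jane.doe@corp.example.com", "example.com"))      # True
print(email_matches_employer("jane.doe@example.com.evil.net", "example.com"))  # False
```

A verified mark granted only after both steps pass would give group admins like Taylor and Miller a cheap signal to filter on.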

In response to questions from KrebsOnSecurity, LinkedIn said it was considering the domain verification idea.

“This is an ongoing challenge and we’re constantly improving our systems to stop fakes before they come online,” LinkedIn said in a written statement. “We do stop the vast majority of fraudulent activity we detect in our community – around 96% of fake accounts and around 99.1% of spam and scams. We’re also exploring new ways to protect our members such as expanding email domain verification. Our community is all about authentic people having meaningful conversations and to always increase the legitimacy and quality of our community.”

In a story published Wednesday, Bloomberg noted that LinkedIn has largely so far avoided the scandals about bots that have plagued networks like Facebook and Twitter. But that shine is starting to come off, as more users are forced to waste more of their time fighting off inauthentic accounts.

“What’s clear is that LinkedIn’s cachet as being the social network for serious professionals makes it the perfect platform for lulling members into a false sense of security,” Bloomberg’s Tim Culpan wrote. “Exacerbating the security risk is the vast amount of data that LinkedIn collates and publishes, and which underpins its whole business model but which lacks any robust verification mechanisms.”
