2026-03-26 AI Startup News

LeakBase Admin Arrested in Russia Over Massive Stolen Credential Marketplace

The alleged administrator of the LeakBase cybercrime forum has been arrested by Russian law enforcement authorities, state media reported Thursday. According to TASS and MVD Media, a news website linked to the Russian Interior Ministry, the suspect is a resident of the city of Taganrog. The suspect is said to have been detained for creating and managing a criminal site that has allowed stolen personal databases to be traded since 2021. In addition, technical equipment and other items of evidentiary value were confiscated during a search of the suspect’s residence.

“The platform hosted hundreds of millions of user accounts, bank details, usernames, and passwords, as well as corporate documents obtained through hacking,” said Irina Volk, an official spokesperson for the Russian Ministry of Internal Affairs. “More than 147,000 users registered on the forum could buy and sell this data, as well as use it to commit fraudulent acts against citizens.” LeakBase was dismantled in a law enforcement operation earlier this month. The U.S. Department of Justice (DoJ) said the cybercrime forum was one of the world’s largest hubs for cybercriminals to buy and sell stolen data and cybercrime tools.

This included hundreds of millions of account credentials and financial information such as credit and debit card numbers, banking account and routing information, usernames, and associated passwords that could be abused to conduct account takeover attacks. The platform had over 142,000 members and more than 215,000 messages between members as of December 2025. Visitors to the clearnet site were greeted with a seizure banner that said “All forum content, including users’ accounts, posts, credit details, private messages, and IP logs, has been secured and preserved for evidentiary purposes.” LeakBase is the work of a threat actor who goes by the online aliases Chucky, beakdaz, Chuckies, and Sqlrip. In reports published following the takedown of the forum, KELA and TriTrace Investigations linked Chucky to a 33-year-old individual from Taganrog.

Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.

GlassWorm Malware Uses Solana Dead Drops to Deliver RAT and Steal Browser, Crypto Data

Cybersecurity researchers have flagged a new evolution of the GlassWorm campaign that delivers a multi-stage framework capable of comprehensive data theft and installing a remote access trojan (RAT), which deploys an information-stealing Google Chrome extension masquerading as an offline version of Google Docs. “It logs keystrokes, dumps cookies and session tokens, captures screenshots, and takes commands from a C2 server hidden in a Solana blockchain memo,” Aikido security researcher Ilyas Makari said in a report published last week. GlassWorm is the moniker assigned to a persistent campaign that obtains an initial foothold through rogue packages published across npm, PyPI, GitHub, and the Open VSX marketplace. In addition, the operators are known to compromise the accounts of project maintainers to push poisoned updates.

The attacks are careful enough to avoid infecting systems with a Russian locale and use Solana transactions as a dead drop resolver to fetch the command-and-control (C2) server (“45.32.150[.]251”) and download operating system-specific payloads. The stage two payload is a data-theft framework with credential harvesting, cryptocurrency wallet exfiltration, and system profiling capabilities. The collected data is compressed into a ZIP archive and exfiltrated to an external server (“217.69.3[.]152/wall”). It also incorporates functionality to retrieve and launch the final payload.
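A dead drop resolver works by shipping no hard-coded C2 at all: the payload carries only a pointer to public data (here, a Solana transaction) and decodes the server address from it at runtime. A minimal illustration of the decoding step, under the assumption that the memo is plain base64 text (GlassWorm's actual encoding is not documented here):

```python
import base64

def resolve_dead_drop(memo_b64: str) -> str:
    """Recover a C2 endpoint from an on-chain memo field.

    Storing the address on a public ledger instead of in the binary lets
    operators rotate infrastructure without shipping a new payload, and
    the lookup blends in with ordinary blockchain RPC traffic.
    Hypothetical encoding: the memo is assumed to be base64 text.
    """
    return base64.b64decode(memo_b64).decode("utf-8")
```

Defenders can invert the same idea: given a known wallet, the memo history of its transactions enumerates every C2 address the campaign has used.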

Once the data is transmitted, the attack chain involves fetching two additional components: a .NET binary that is designed to carry out hardware wallet phishing and a Websocket-based JavaScript RAT to siphon web browser data and run arbitrary code. The RAT payload is fetched from “45.32.150[.]251” by using a public Google Calendar event URL as a dead drop resolver. The .NET binary leverages the Windows Management Instrumentation (WMI) infrastructure to detect USB device connections and displays a phishing window when a Ledger or Trezor hardware wallet is plugged in. “The Ledger UI displays a fake configuration error and presents 24 numbered recovery phrase input fields,” Makari noted.

“The Trezor UI displays a fake ‘Firmware validation failed, initiating emergency reboot’ message with the same 24-word input layout. Both windows include a ‘RESTORE WALLET’ button.” The malware not only kills any real Ledger Live processes running on the Windows host, but also re-displays the phishing window if the victim closes it. The end goal of the attack is to capture the wallet recovery phrase and transmit it to the IP address “45.150.34[.]158.” The RAT, on the other hand, uses a Distributed Hash Table (DHT) to retrieve the C2 details. In the event the mechanism returns no value, the malware switches to the Solana-based dead drop.

The RAT then establishes communication with the server to run various commands on the compromised system:

- start_hvnc / stop_hvnc, to deploy a Hidden Virtual Network Computing (HVNC) module for remote desktop access
- start_socks / stop_socks, to launch a WebRTC module and run it as a SOCKS proxy
- reget_log, to steal data from web browsers, such as Google Chrome, Microsoft Edge, Brave, Opera, Opera GX, Vivaldi, and Mozilla Firefox (the component is equipped to bypass Chrome’s app-bound encryption (ABE) protections)
- get_system_info, to send system information
- command, to execute attacker-supplied JavaScript via eval()

The RAT also force-installs a Google Chrome extension named Google Docs Offline on Windows and macOS systems, which then connects to a C2 server and receives commands issued by the operator, allowing it to gather cookies, localStorage, the full Document Object Model (DOM) tree of the active tab, bookmarks, screenshots, keystrokes, clipboard content, up to 5,000 browser history entries, and the installed extensions list. “The extension also performs targeted session surveillance.

It pulls monitored site rules from /api/get-url-for-watch and ships with Bybit (.bybit.com) pre-configured as a target, watching for the secure-token and deviceid cookies,” Aikido said. “On detection, it fires an auth-detected webhook to /api/webhook/auth-detected containing the cookie material and page metadata. The C2 can also supply redirect rules that force active tabs to attacker-controlled URLs.” The discovery coincides with yet another shift in GlassWorm tactics, with the attackers publishing npm packages impersonating the WaterCrawl Model Context Protocol (MCP) server (“@iflow-mcp/watercrawl-watercrawl-mcp”) to distribute malicious payloads. “This is GlassWorm’s first confirmed move into the MCP ecosystem,” Koi security researcher Lotan Sery said.

“And given how fast AI-assisted development is growing – and how much trust MCP servers are given by design – this won’t be the last.” Developers are advised to exercise caution when it comes to installing Open VSX extensions, npm packages, and MCP servers. It’s also recommended to verify publisher names and package histories, and to avoid blindly trusting download counts. Polish cybersecurity company AFINE has published an open-source Python tool called glassworm-hunter to scan developer systems for payloads associated with the campaign. “Glassworm-hunter makes zero network requests during scanning,” researchers Paweł Woyke and Sławomir Zakrzewski said.

“No telemetry. No phone-home. No automatic update checks. It reads local files only. Glassworm-hunter update is the only command that touches the network. It fetches the latest IoC database from our GitHub and saves it locally.”
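The read-only design the researchers describe can be sketched in a few lines of Python; the indicator list below is a hypothetical stand-in built from the IoCs mentioned above, not glassworm-hunter's actual database or logic:

```python
from pathlib import Path

# Hypothetical indicators drawn from the article above; the real tool
# ships a curated, updatable IoC database rather than a hard-coded list.
IOC_STRINGS = [b"45.32.150.251", b"217.69.3.152", b"45.150.34.158"]

def scan_tree(root: str) -> list:
    """Flag files under `root` that contain a known IoC string.

    The only I/O is reading local files -- no network requests, matching
    the zero-telemetry behavior described above.
    """
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        if any(ioc in data for ioc in IOC_STRINGS):
            hits.append(str(path))
    return hits
```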

The Kill Chain Is Obsolete When Your AI Agent Is the Threat

In September 2025, Anthropic disclosed that a state-sponsored threat actor used an AI coding agent to execute an autonomous cyber espionage campaign against 30 global targets. The AI handled 80-90% of tactical operations on its own, performing reconnaissance, writing exploit code, and attempting lateral movement at machine speed. This incident is worrying, but there’s a scenario that should concern security teams even more: an attacker who doesn’t need to run through the kill chain at all, because they’ve compromised an AI agent that already lives inside your environment. One that already has the access, the permissions, and a legitimate reason to move across your systems every day.

A Framework Built for Human Threats

The traditional cyber kill chain assumes attackers have to earn every inch of access. It’s a model developed by Lockheed Martin in 2011 to describe how adversaries move from initial compromise to their ultimate objective, and it’s shaped how security teams think about detection ever since. The logic is simple: attackers need to complete a sequence of steps, and defenders can interrupt the chain at any point. Every stage an attacker has to pass through is another opportunity to catch them.

A typical intrusion moves through distinct stages:

- Initial access (exploiting a vulnerability, etc.)
- Persistence without triggering alerts
- Reconnaissance to understand the environment
- Lateral movement to reach valuable data
- Privilege escalation when access isn’t sufficient
- Exfiltration while avoiding DLP controls

Each stage creates detection opportunities: endpoint security might catch the initial payload, network monitoring might spot unusual lateral movement, identity systems might flag a privilege escalation, and SIEM correlations might tie together anomalous behaviors across systems. The more steps an attacker takes, the more chances there are to trip a wire. This is why advanced threat actors like LUCR-3 and APT29 invest heavily in stealth, spending weeks living off the land and blending into normal traffic. Even then, they leave artifacts: unusual login locations, odd access patterns, slight deviations from baseline behavior.

These artifacts are exactly what modern detection systems are engineered to find. The problem here, though, is that AI agents don’t really follow this playbook.

What an AI Agent Already Has

AI agents operate fundamentally differently from human users. They work across systems, move data between applications, and run continuously.

If compromised, an attacker bypasses the entire kill chain - the agent itself becomes the kill chain. Think about what an AI agent typically has access to. Its activity history is a perfect map of what data exists and where it resides. It probably pulls from Salesforce, pushes to Slack, syncs with Google Drive, and updates ServiceNow as part of its normal workflow.

It was granted broad permissions at deployment, often admin-level access across multiple applications, and it already moves data between systems as part of its job. An attacker who compromises that agent inherits all of it instantly. They get the map, the access, the permissions, and a legitimate reason to move data around. Every stage of the kill chain that security teams have spent years learning to detect?

The agent skips all of them by default.

The Threat Is Already Playing Out

The OpenClaw crisis showed us what this looks like in practice:

- Roughly 12% of skills in its public marketplace were malicious.
- A critical RCE vulnerability allowed one-click compromise.
- Over 21,000 instances were publicly exposed.

But the scarier part was what a compromised agent could access once it was connected to Slack and Google Workspace: messages, files, emails, and documents, with persistent memory across sessions. The main problem is that security tools are designed to detect abnormal behavior. When an attacker rides an AI agent’s existing workflow, everything looks normal. The agent is accessing the systems it always accesses, moving the data it always moves, operating at the times it always operates.

This is the detection gap security teams are facing.

How Reco Closes the Visibility Gap

Defending against compromised AI agents starts with knowing which agents are operating in your environment, what they connect to, and what permissions they hold. Most organizations have no inventory of the AI agents touching their SaaS ecosystem. This is exactly the kind of problem Reco was built to solve.

Discover Every AI Agent in Play

Reco’s Agentic AI Security discovers every AI agent, embedded AI feature, and third-party AI integration across your SaaS environment, including shadow AI tools connected without IT approval.

Figure 1: Reco’s AI Agents Inventory, showing discovered agents and their connections to GitHub.

Map Access Scope and Blast Radius

For each agent, Reco maps which SaaS apps it connects to, what permissions it holds, and what data it can access. Reco’s SaaS-to-SaaS visualization shows exactly how agents integrate across your application ecosystem, surfacing toxic combinations where AI agents bridge systems together through MCP, OAuth, or API integrations, creating permission combinations that no single application owner would authorize.

Figure 2: Reco’s Knowledge Graph surfacing a toxic combination between Slack and Cursor via MCP.

Flag Targets, Enforce Least Privilege

Reco identifies which agents represent your biggest exposure by evaluating permission scope, cross-system access, and data sensitivity. Agents associated with emerging risks are automatically labeled. From there, Reco helps you right-size access through identity and access governance, directly limiting what an attacker can do if an agent is compromised.

Figure 3: Reco’s AI Posture Checks with security scores and IAM compliance findings.

Detect Anomalous Agent Activity

Reco’s threat detection engine applies identity-centric behavioral analysis to AI agents the same way it does to human identities, distinguishing normal automation from suspicious deviations in real time.

Figure 4: A Reco alert flagging an unsanctioned ChatGPT connection to SharePoint.

What This Means for Your Team

The traditional kill chain assumed that attackers had to fight for every inch of access.

AI agents upend that assumption entirely. One compromised agent can give an attacker legitimate access, a perfect map of the environment, broad permissions, and built-in cover for data movement, without a single step that looks like an intrusion. Security teams that are still focused exclusively on detecting human attacker behavior are going to miss this. The attackers will be riding your AI agents’ existing workflows, invisible in the noise of normal operations.

Sooner or later, an AI agent in your environment will be targeted. Visibility is the difference between catching it early and finding out during incident response. Reco gives you that visibility, across your entire SaaS ecosystem, in minutes. Learn more here: Request a Demo: Get Started With Reco.


Russian Hacker Sentenced to 2 Years for TA551 Botnet-Driven Ransomware Attacks

The U.S. Department of Justice (DoJ) said a Russian national has been sentenced to two years in prison for managing a botnet that was used to launch ransomware attacks against U.S. companies. Ilya Angelov, 40, of Tolyatti, Russia, was also fined $100,000.

Angelov, who went by the online aliases “milan” and “okart,” is said to have co-managed a Russia-based cybercriminal group known as TA551 (aka ATK236, G0127, Gold Cabin, Hive0106, Mario Kart, Monster Libra, Shathak, and UNC2420) between 2017 and 2021. “Angelov’s group built a network of compromised computers (a ‘botnet’) through distribution of malware-infected files attached to spam emails,” the DoJ said. “Angelov and his co-manager then monetized this botnet by selling access to individual compromised computers (‘bots’).” According to the sentencing memorandum, the threat group developed programs to distribute spam email and refined malware to bypass security tools. Angelov and his co-manager recruited members and oversaw the various activities.

Chief among its tools was a backdoor through which malicious software could be uploaded to victims’ computers. The main goal of the attacks was to resell the access to other criminal groups, who leveraged it for ransomware extortion schemes. Between August 2018 and December 2019, TA551 provided the BitPaymer ransomware group with access to its botnet, allowing the e-crime gang to infect 72 U.S. corporations.

This resulted in more than $14.17 million in extortion payments. The operators of the IcedID malware also paid Angelov’s group over a million dollars to acquire access to the botnet in late 2019 or early 2020 and distribute ransomware, although the extent of the damage is currently not known. It’s suspected that this partnership blossomed after the disruption of the BitPaymer group. The collaboration lasted until about August 2021, per the U.S. Federal Bureau of Investigation (FBI).

Based on a report published by Google-owned Mandiant in February 2021, phishing emails containing password-protected archives tricked recipients into opening macro-enabled Microsoft Word documents, leading to the deployment of a macro downloader dubbed MOUSEISLAND. The malware acted as a conduit for a secondary payload, codenamed PHOTOLOADER, which ultimately installed IcedID. Both MOUSEISLAND and PHOTOLOADER have been attributed to TA551.

In November 2021, Cybereason revealed that the operators of the TrickBot trojan were teaming up with TA551 to distribute Conti ransomware. That same month, France’s Computer Emergency Response Team (CERT-FR) also disclosed that the Lockean ransomware gang was using distribution services offered by TA551 following the law enforcement takedown of the Emotet botnet at the start of 2021. “Foreigner cybercriminals like this defendant target American citizens and corporations,” U.S. Attorney Jerome F. Gorgon Jr. said in a statement. “Their methods grow in sophistication. But their motive remains the same – to rip-off and harm us.”

The development comes a day after the DoJ announced that another Russian national, 26-year-old Aleksei Olegovich Volkov (aka “chubaka.kor” and “nets”), was sentenced to nearly 7 years in prison after pleading guilty to acting as an initial access broker (IAB) for Yanluowang ransomware attacks targeting eight companies in the U.S. between July 2021 and November 2022.

Device Code Phishing Hits 340+ Microsoft 365 Orgs Across Five Countries via OAuth Abuse

Cybersecurity researchers are calling attention to an active device code phishing campaign that’s targeting Microsoft 365 identities across more than 340 organizations in the U.S., Canada, Australia, New Zealand, and Germany. The activity, per Huntress, was first spotted on February 19, 2026, with subsequent cases appearing at an accelerated pace since then. Notably, the campaign leverages Cloudflare Workers redirects, with captured sessions funneled to infrastructure hosted on a platform-as-a-service (PaaS) offering called Railway, effectively turning it into a credential harvesting engine. Construction, non-profits, real estate, manufacturing, financial services, healthcare, legal, and government are some of the prominent sectors targeted as part of the campaign.

“What also makes this campaign unusual is not just the device code phishing techniques involved, but the variety of techniques observed,” the company said. “Construction bid lures, landing page code generation, DocuSign impersonation, voicemail notifications, and abuse of Microsoft Forms pages are all hitting the same victim pool through the same Railway.com IP infrastructure.” Device code phishing refers to a technique that exploits the OAuth device authorization flow to grant the attacker persistent access tokens, which can then be used to seize control of victim accounts. What’s significant about this attack method is that the tokens remain valid even after the account’s password is reset. At a high level, the attack works as follows:

- The threat actor requests a device code from the identity provider (e.g., Microsoft Entra ID) via the legitimate device code API.
- The service responds with a device code.
- The threat actor creates a persuasive email and sends it to the victim, urging them to visit a sign-in page (“microsoft[.]com/devicelogin”) and enter the device code.
- After the victim enters the provided code, along with their credentials and two-factor authentication (2FA) code, the service creates an access token and a refresh token for the user.

“Once the user has fallen victim to the phish, their authentication generates a set of tokens that now live at the OAuth token API endpoint and can be retrieved by providing the correct device code,” Huntress explained.

“The attacker, of course, knows the device code because it was generated by the initial cURL request to the device code login API.” “And while that code is useless by itself, once the victim has been tricked into authenticating, the resulting tokens now belong to anyone who knows which device code was used in the original request.” The use of device code phishing was first observed by Microsoft and Volexity in February 2025, with subsequent waves documented by Amazon Threat Intelligence and Proofpoint. These attacks have been attributed to multiple Russia-aligned groups tracked as Storm-2372, APT29, UTA0304, UTA0307, and UNK_AcademicFlare. The technique is insidious, not least because it leverages legitimate Microsoft infrastructure to perform the device code authentication flow, thereby giving users no reason to suspect anything could be amiss. In the campaign detected by Huntress, the authentication abuse originates from a small cluster of Railway.com IP addresses, with three of them accounting for roughly 84% of observed events:

- 162.220.234[.]41
- 162.220.234[.]66
- 162.220.232[.]57
- 162.220.232[.]99
- 162.220.232[.]235

The starting point of the attack is a phishing email that wraps malicious URLs within legitimate security vendor redirect services from Cisco, Trend Micro, and Mimecast so as to bypass spam filters and trigger a multi-hop redirect chain featuring a combination of compromised sites, Cloudflare Workers, and Vercel as intermediaries before taking the victim to the final destination.
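The flow described above maps directly onto Microsoft's documented OAuth 2.0 device authorization endpoints. The sketch below shows why the tokens end up with whoever holds the device code; the `post` callable is an injected stand-in for a real HTTP client, and the client ID and scopes are illustrative:

```python
import time

# Microsoft identity platform v2.0 device authorization endpoints.
DEVICE_CODE_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/devicecode"
TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def request_device_code(post, client_id, scope="openid offline_access"):
    """Step 1: obtain a device_code/user_code pair. In the phishing
    scenario, the attacker runs this step and forwards only the
    user_code to the victim."""
    return post(DEVICE_CODE_URL, data={"client_id": client_id, "scope": scope})

def poll_for_tokens(post, client_id, device_code, interval=5, attempts=60):
    """Steps 2-4: poll the token endpoint with the same device_code.
    Once the victim authenticates at microsoft.com/devicelogin, the poll
    returns access and refresh tokens -- to whoever holds the
    device_code, i.e. the attacker rather than the victim."""
    for _ in range(attempts):
        resp = post(TOKEN_URL, data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": client_id,
            "device_code": device_code,
        })
        if "access_token" in resp:
            return resp
        time.sleep(interval)  # respect the provider's polling interval
    return None
```

Nothing in this exchange is anomalous from the identity provider's side, which is why the article stresses hunting on sign-in metadata rather than on the flow itself.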

“The observed landing sites prompt the victim to proceed to the legitimate Microsoft device code authentication endpoint and input a provided code in order to read some files,” Huntress said. “The code is rendered directly on the page when the victim arrives.” “This is an interesting iteration of the tactic, as, normally, the adversary must produce and then provide the code to the victim. By rendering the code directly on the page, likely by some code generation automation, the victim is immediately provided with the code and pretext for the attack.” The landing page also comes with a “Continue to Microsoft” button that, when clicked, spawns a pop-up window rendering the legitimate Microsoft authentication endpoint (“microsoft[.]com/devicelogin”). Almost every device code phishing site has been hosted on a Cloudflare workers[.]dev instance, illustrating how the threat actors are weaponizing the trust associated with the service in enterprise environments to sidestep web content filters.

To combat the threat, users are advised to scan sign-in logs to hunt for Railway IP logins, revoke all refresh tokens for affected users, and block authentication attempts from Railway infrastructure if possible. Huntress has since attributed the Railway attack to a new phishing-as-a-service (PhaaS) platform known as EvilTokens, which made its debut last month on Telegram. Besides advertising tools to send phishing emails and bypass spam filters, the EvilTokens dashboard provides customers with open redirect links to vulnerable domains to obscure the phishing links. “In addition to rapid growth in tool functionality, the EvilTokens team has spun up a full 24/7 support team and a support feedback channel,” the company said.

“They also have customer feedback.” The disclosure comes as Palo Alto Networks Unit 42 also warned of a similar device code phishing campaign, highlighting the attack’s use of anti-bot and anti-analysis techniques to fly under the radar, while exfiltrating browser cookies to the threat actor on page load. The earliest observation of the campaign dates back to February 18, 2026. The phishing page “disables right-click functionality, text selection, and drag operations,” the company said, adding it “blocks keyboard shortcuts for developer tools (F12, Ctrl+Shift+I/C/J) and source viewing (Ctrl+U)” and “detects active developer tools by utilizing a window size heuristic, which subsequently initiates an infinite debugger loop.”
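The sign-in log hunt Huntress recommends can be scripted once logs are exported; the field names below assume Microsoft Graph-style sign-in records (`userPrincipalName`, `ipAddress`), which is an assumption about your export format:

```python
# Railway.com addresses observed in the campaign (listed above).
RAILWAY_IOCS = {
    "162.220.234.41", "162.220.234.66", "162.220.232.57",
    "162.220.232.99", "162.220.232.235",
}

def flag_railway_signins(events) -> list:
    """Return (user, ip) pairs for sign-ins originating from the known
    Railway addresses. `events` is a list of parsed sign-in records
    using Microsoft Graph-style field names."""
    return [
        (e.get("userPrincipalName"), e["ipAddress"])
        for e in events
        if e.get("ipAddress") in RAILWAY_IOCS
    ]
```

Any hit warrants revoking the affected user's refresh tokens, since, as noted above, a password reset alone does not invalidate them.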



FCC Bans New Foreign-Made Routers Over Supply Chain and Cyber Risk Concerns

The U.S. Federal Communications Commission (FCC) said on Monday that it was banning the import of new, foreign-made consumer routers, citing “unacceptable” risks to cyber and national security. The action was designed to safeguard Americans and the underlying communications networks the country relies on, FCC Chairman Brendan Carr said in a post on X. The development means that new models of foreign-produced routers will no longer be eligible for marketing or sale in the U.S.

The move comes in the wake of a national security determination provided by Executive Branch Agencies, Carr added. To that end, all consumer-grade routers manufactured in foreign countries have been added to the Covered List , unless they have been granted a Conditional Approval by the Department of War (DoW) or the Department of Homeland Security (DHS) after determining that they do not pose any risks. As of writing, the approved list only includes drone systems and software-defined radios (SDRs) from SiFly Aviation, Mobilicom, ScoutDI, and Verge Aero. Producers of consumer-grade routers can submit an application for Conditional Approval.

According to BBC News, Starlink Wi-Fi routers are exempt from the policy, as they are made in the U.S. state of Texas. “The Executive Branch determination noted that foreign-produced routers (1) introduce ‘a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense’ and (2) pose ‘a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons,’” the FCC said. The agency said both state and non-state sponsored threat actors have exploited security shortcomings in small and home office routers to break into American households, disrupt networks, facilitate cyber espionage, and enable intellectual property theft. Furthermore, these devices could be conscripted into massive networks with the goal of carrying out password spraying and unauthorized network access, as well as acting as proxies for espionage.

China-nexus adversaries such as Volt Typhoon, Flax Typhoon, and Salt Typhoon have also been observed leveraging botnets comprising foreign-made routers to conduct cyber attacks on critical American communications, energy, transportation, and water infrastructure. “In Salt Typhoon attacks, state-sponsored cyber threat actors leveraged compromised and foreign-produced routers to embed and gain long-term access to certain networks and pivot to others depending on their target,” according to the National Security Determination (NSD). Also highlighted by the U.S. government is a botnet dubbed CovertNetwork-1658 (aka Quad7), which has been used to orchestrate highly evasive password spray attacks.

The activity is assessed to be the work of a Chinese threat actor tracked as Storm-0940. It’s worth noting that the Covered List update does not affect a customer’s continued use of routers that were already purchased. Nor does it impact retailers, who can continue to sell, import, or market router models that were approved previously through the FCC’s equipment authorization process. “Unsecure and foreign-produced routers are prime targets for attackers and have been used in multiple recent cyber attacks to enable hackers to gain access to networks and use them as launching pads to compromise critical infrastructure,” the NSD said.

“The vulnerabilities introduced into American networks and critical infrastructure resulting from foreign-manufactured routers are unacceptable.” Routers have been a lucrative target for cyber attacks, as they serve as the primary conduit for internet access. Compromised routers could allow threat actors to conduct network surveillance, exfiltrate data, and even deliver malware to victims. In 2014, journalist Glenn Greenwald alleged in his book No Place to Hide that the U.S. National Security Agency (NSA) routinely intercepted routers before U.S. manufacturers could export them in order to implant backdoors.

TeamPCP Backdoors LiteLLM Versions 1.82.7–1.82.8 via Trivy CI/CD Compromise

TeamPCP, the threat actor behind the recent compromises of Trivy and KICS, has now compromised a popular Python package named litellm, pushing two malicious versions containing a credential harvester, a Kubernetes lateral movement toolkit, and a persistent backdoor. Multiple security vendors, including Endor Labs and JFrog, revealed that litellm versions 1.82.7 and 1.82.8 were published on March 24, 2026, likely stemming from the package’s use of Trivy in its CI/CD workflow. Both backdoored versions have since been removed from PyPI. “The payload is a three-stage attack: a credential harvester sweeping SSH keys, cloud credentials, Kubernetes secrets, cryptocurrency wallets, and .env files; a Kubernetes lateral movement toolkit deploying privileged pods to every node; and a persistent systemd backdoor (sysmon.service) polling ‘checkmarx[.]zone/raw’ for additional binaries,” Endor Labs researcher Kiran Raj said.

As observed in previous cases, the harvested data is exfiltrated as an encrypted archive (“tpcp.tar.gz”) to a command-and-control domain named “models.litellm[.]cloud” via an HTTPS POST request. In the case of 1.82.7, the malicious code is embedded in the “litellm/proxy/proxy_server.py” file, with the injection performed during or after the wheel build process. The code is engineered to be executed at module import time, such that any process that imports “litellm.proxy.proxy_server” triggers the payload without requiring any user interaction. The next iteration of the package adds a “more aggressive vector” by incorporating a malicious “litellm_init.pth” at the wheel root, causing the logic to be executed automatically on every Python process startup in the environment, not just when litellm is imported.

Another aspect that makes 1.82.8 more dangerous is the fact that the .pth launcher spawns a child Python process via subprocess.Popen, which allows the payload to run in the background. “Python .pth files placed in site-packages are processed automatically by site.py at interpreter startup,” Endor Labs said. “The file contains a single line that imports a subprocess and launches a detached Python process to decode and execute the same Base64 payload.” The payload decodes to an orchestrator that unpacks a credential harvester and a persistence dropper. The harvester also leverages the Kubernetes service account token (if present) to enumerate all nodes in the cluster and deploy a privileged pod to each one of them.

The pod then chroots into the host file system and installs the persistence dropper as a systemd user service on every node. The systemd service is configured to launch a Python script (“~/.config/sysmon/sysmon.py”) – the same name used in the Trivy compromise – that reaches out to “checkmarx[.]zone/raw” every 50 minutes to fetch a URL pointing to the next-stage payload. If the URL contains youtube[.]com, the script aborts execution – a kill switch pattern common to all the incidents observed so far. “This campaign is almost certainly not over,” Endor Labs said.
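For responders, a minimal hunting sketch for the persistence artifacts might look like the following. The sysmon.py path comes from the report; the exact systemd user-unit path is our assumption, since the write-up only says a "sysmon.service" user service is installed.

```python
import os
from pathlib import Path

# Defender-side sketch: check a Linux host for the persistence artifacts
# described in the report, for the current user.
SUSPECT_PATHS = [
    "~/.config/sysmon/sysmon.py",             # dropped script named in the report
    "~/.config/systemd/user/sysmon.service",  # assumed unit location
]

def find_artifacts(paths=SUSPECT_PATHS):
    """Return the suspect paths that actually exist for the current user."""
    return [p for p in paths if Path(os.path.expanduser(p)).exists()]

hits = find_artifacts()
print("suspicious artifacts:", hits if hits else "none found")
```

In a cluster compromise, the same check would need to run on every node, since the report says the privileged pods install the service cluster-wide.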

“TeamPCP has demonstrated a consistent pattern: each compromised environment yields credentials that unlock the next target. The pivot from CI/CD (GitHub Actions runners) to production (PyPI packages running in Kubernetes clusters) is a deliberate escalation.” With the latest development, TeamPCP has waged a relentless supply chain attack campaign that has spanned five ecosystems, GitHub Actions, Docker Hub, npm, Open VSX, and PyPI, to expand its targeting footprint and bring more and more systems under its control. “TeamPCP is escalating a coordinated campaign targeting security tools and open source developer infrastructure, and is now openly taking credit for multiple follow-on attacks across ecosystems,” Socket said. “This is a sustained operation targeting high-leverage points in the software supply chain.” In a message posted on their Telegram channel, TeamPCP said: “These companies were built to protect your supply chains yet they can’t even protect their own, the state of modern security research is a joke, as a result we’re gonna be around for a long time stealing terrabytes [sic] of trade secrets with our new partners.” “The snowball effect from this will be massive, we are already partnering with other teams to perpetuate the chaos, many of your favourite security tools and open-source projects will be targeted in the months to come so stay tuned,” the threat actor added.

Users are advised to perform the following actions to contain the threat:

- Audit all environments for litellm versions 1.82.7 or 1.82.8, and if found, revert to a clean version
- Isolate affected hosts
- Check for the presence of rogue pods in Kubernetes clusters
- Review network logs for egress traffic to “models.litellm[.]cloud” and “checkmarx[.]zone”
- Remove the persistence mechanisms
- Audit CI/CD pipelines for usage of tools like Trivy and KICS during the compromise windows
- Revoke and rotate all exposed credentials

“The open source supply chain is collapsing in on itself,” Gal Nagli, head of threat exposure at Google-owned Wiz, said in a post on X. “Trivy gets compromised → LiteLLM gets compromised → credentials from tens of thousands of environments end up in attacker hands → and those credentials lead to the next compromise. We are stuck in a loop.”

Update: In a post shared on GitHub and Y Combinator’s Hacker News, Berri AI, which maintains litellm, confirmed that the compromise came from a Trivy security scan dependency, urging users to “rotate ALL credentials that were present as environment variables or config files on any system where litellm 1.82.7+ was installed.” The Python Packaging Authority (PyPA) has also issued an advisory warning that the malicious versions harvest sensitive credentials and files, and exfiltrate them to a remote server. “Anyone who has installed and run the project should assume any credentials available to litellm environment may have been exposed, and revoke/rotate them accordingly,” PyPA said.
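The version-audit step can be sketched with the standard library; the classification strings below are ours, not from any vendor tooling.

```python
from importlib import metadata

# Audit sketch: flag the two backdoored litellm releases named above.
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_status():
    """Classify the locally installed litellm version, if any."""
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "litellm is not installed in this environment"
    if installed in COMPROMISED:
        return f"COMPROMISED release {installed}: isolate host, rotate credentials"
    return f"installed release {installed} is not one of the known-bad versions"

print(litellm_status())
```

Note that version checks alone are insufficient here: because the .pth launcher persists outside the package, hosts that ever ran a bad release need artifact and credential review even after upgrading.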

“The affected environment should be isolated and carefully reviewed against any unexpected modifications and network traffic.” In what appears to be a further escalation of the campaign, TeamPCP is said to be collaborating with the notorious extortion group LAPSUS$, with Wiz pointing out that the compromise of litellm represents an ecosystem-wide cascade targeting the modern cloud-native and AI stack. Litellm, per the cloud security vendor, is present in 36% of all cloud environments. “We are seeing a dangerous convergence between supply chain attackers and high-profile extortion groups like LAPSUS$,” Ben Read, a lead researcher at Wiz, said in a statement shared with The Hacker News. “By moving horizontally across the ecosystem – hitting tools like liteLLM that are present in over a third of cloud environments – they are creating a ‘snowball effect.’ This isn’t an isolated incident; it’s a systemic campaign that requires security teams to take action and will likely continue to expand.” GitGuardian said the campaign highlights how incomplete cleanup can turn a breach into a wide-ranging campaign impacting multiple ecosystems, underscoring the need for monitoring public GitHub repositories, early alerting, and auditing non-human identities to contain the blast radius.

“Teams need to detect exposed credentials quickly, but detection is only the start,” GitGuardian researcher Guillaume Valadon said. “They also need to know which machine identities were reachable from that workflow, which of those secrets are still active, what each credential unlocks, and which ones must be rotated first to cut off attacker movement.” The development comes as the leader of the group, who went by the alias “DMT,” called it quits, stating that their work is “largely done” and that “doing this in the midst of a burnout is perpetually making me very mentally unwell which I can’t afford right now.” They also noted, “This doesn’t mean the group is done in any way, they are all very capable, I am just no longer involved.” TeamPCP has since also launched an account on X under the handle @pcpcats. “DMT is retiring but the group will continue on strong, we are here to stay,” the group said in an X post. (The story was updated after publication to reflect the latest developments.) Found this article interesting?

Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.

Tax Search Ads Deliver ScreenConnect Malware Using Huawei Driver to Disable EDR

A large-scale malvertising campaign active since January 2026 has been observed targeting U.S.-based individuals searching for tax-related documents to serve rogue installers for ConnectWise ScreenConnect that drop a tool named HwAudKiller to blind security programs using the bring your own vulnerable driver (BYOVD) technique. “The campaign abuses Google Ads to serve rogue ScreenConnect (ConnectWise Control) installers, ultimately delivering a BYOVD EDR killer that drops a kernel driver to blind security tools before further compromise,” Huntress researcher Anna Pham said in a report published last week. The cybersecurity vendor said it identified over 60 instances of malicious ScreenConnect sessions tied to the campaign. The attack chain stands out for a couple of reasons.

Unlike recent campaigns highlighted by Microsoft that leverage tax-themed lures, the newly flagged activity employs commercial cloaking services to avoid detection by security scanners and abuses a previously undocumented Huawei audio driver to disarm security solutions. The exact objectives of the campaign are currently not clear; however, in at least one instance, the threat actor is said to have leveraged the access to deploy the endpoint detection and response (EDR) killer and then dump credentials from the Local Security Authority Subsystem Service (LSASS) process memory, as well as use tools like NetExec for network reconnaissance and lateral movement. These tactics, per Huntress, align with pre-ransomware or initial access broker behavior, suggesting that the threat actor is looking to either deploy ransomware or monetize the access by selling it to other criminal actors. The attack begins when users search for terms like “W2 tax form” or “W-9 Tax Forms 2026” on search engines like Google, tricking them into clicking on sponsored search results that direct them to bogus sites like “bringetax[.]com/humu/” to trigger the delivery of the ScreenConnect installer.

What’s more, the landing page is protected by a PHP-based Traffic Distribution System (TDS) powered by Adspect, a commercial cloaking service, to ensure that a benign page is served to security scanners and ad review systems, while only real victims see the actual payload. This is achieved by generating a fingerprint of the site visitor and sending it to the Adspect backend, which then determines the appropriate response. In addition to Adspect, the landing page’s “index.php” features a second cloaking layer powered by JustCloakIt (JCI) on the server side. “The two cloaking services are stacked in the same index.php—JCI’s server-side filtering runs first, while Adspect provides client-side JavaScript fingerprinting as a second layer,” Pham explained.

The web pages lead to the distribution of ScreenConnect installers, which are then used to deploy multiple trial instances on the compromised host. The threat actor has also been found to drop additional Remote Monitoring and Management (RMM) tools like FleetDeck Agent for redundancy and to ensure persistent remote access. The ScreenConnect session is leveraged to drop a multi-stage crypter that acts as a conduit for an EDR killer codenamed HwAudKiller that uses the BYOVD technique to terminate processes associated with Microsoft Defender, Kaspersky, and SentinelOne. The vulnerable driver used in the attack is “HWAuidoOs2Ec.sys,” a legitimate, signed Huawei kernel driver designed for laptop audio hardware.

“The driver terminates the target process from kernel mode, bypassing any usermode protections that security products rely on. Because the driver is legitimately signed by Huawei, Windows loads it without complaint despite Driver Signature Enforcement (DSE),” Huntress noted. The crypter, for its part, attempts to evade detection by allocating 2GB of memory, filling it with zeros, and then freeing it, effectively causing antivirus engines and emulators to fail due to the high resource allocation. It’s currently not known who is behind the campaign, but an exposed open directory in the threat actor-controlled infrastructure has revealed a fake Chrome update page containing JavaScript code with Russian-language comments.
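As a rough illustration of that resource-pressure evasion pattern (not the crypter's actual code, which targets Windows emulators), the allocate-zero-free probe can be sketched as below, scaled down to a size that is safe to run.

```python
# Illustrative sketch of the resource-pressure evasion described above: the
# crypter allocates roughly 2 GB of zeroed memory and frees it, betting that
# AV emulators with tight memory budgets give up before the payload runs.
# The demo below uses a deliberately small size so it is safe to execute.

def allocation_probe(size_bytes):
    """Allocate and zero size_bytes, then release the block; True on success."""
    buf = bytearray(size_bytes)      # bytearray is zero-filled on creation
    survived = len(buf) == size_bytes
    del buf                          # release the allocation again
    return survived

# Real samples reportedly use ~2 GB (2 * 1024**3); 1 MB keeps this demo cheap.
print("allocation survived:", allocation_probe(1024 * 1024))
```

On a real host the allocation succeeds and execution proceeds; in a constrained emulator the attempt fails or times out, so the sample never reaches its payload while under analysis.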

This alludes to a Russian-speaking developer in possession of a social engineering toolkit for malware distribution. “This campaign illustrates how commodity tooling has lowered the barrier for sophisticated attacks,” Pham said. “The threat actor didn’t need custom exploits or nation-state capabilities, they combined commercially available cloaking services (Adspect and JustCloakIt), free-tier ScreenConnect instances, an off-the-shelf crypter, and a signed Huawei driver with an exploitable weakness to build an end-to-end kill chain that goes from a Google search to kernel-mode EDR termination.” “A consistent pattern across compromised hosts was the rapid stacking of multiple remote access tools. After the initial rogue ScreenConnect relay was established, the threat actor deployed additional trial ScreenConnect instances on the same endpoint, sometimes two or three within hours, and backup RMM tools like FleetDeck.”


5 Learnings from the First-Ever Gartner Market Guide for Guardian Agents

On February 25, 2026, Gartner published its inaugural Market Guide for Guardian Agents, marking an important milestone for this emerging category. For those unfamiliar with the various Gartner report types, “a Market Guide defines a market and explains what clients can expect it to do in the short term. With the focus on early, more chaotic markets, a Market Guide does not rate or position vendors within the market, but rather more commonly outlines attributes of representative vendors that are providing offerings in the market to give further insight into the market itself.” And if Guardian Agent is an unfamiliar term, Gartner defines it quite simply: “Guardian agents supervise AI agents, helping ensure agent actions align with goals and boundaries.” Enterprise security and identity leaders can request a limited distribution copy of the Gartner Market Guide for Guardian Agents.

Learning 1: Why Guardian Agent technology is important

One need only read the news, in the Wall Street Journal, The Financial Times, Forbes, Bloomberg, the list goes on, to see that AI agents are a thing now. But Team8’s 2025 CISO Village Survey quantified it, finding that:

- Nearly 70% of enterprises already run AI agents (any system that can answer and act) in production.
- Another 23% are planning deployments in 2026.
- Two-thirds are building them in-house.

However, in the market guide, Gartner asserts that this fast enterprise adoption is outpacing traditional governance controls, raising the risk that “as AI agents become more autonomous and embedded in critical workflows, the risks of operational failure and noncompliance escalate.” We concur; having read about the recent cloud provider outages stemming from autonomous AI agent actions, none of this surprises us. What we see across early adoption is that, even more so than traditional service accounts, AI agent deployment creates more identity dark matter: the invisible and unmanaged layer of identity. It includes the local credentials that may be offered for authentication.

The never-expiring tokens that are easily forgotten. The full permissions granted regardless of the user or job. And more. Not only that: as we shared in our piece on “Lazy LLMs,” AI agents are, by design, shortcut seekers, always looking for the most efficient path to return a satisfactory outcome to each prompt.

However, in doing so, they often exploit identity dark matter (orphaned or dormant accounts and loose tokens, usually with local clear-text credentials and excessive privileges) that allows them to reach the “end of job,” regardless of whether they should have been allowed to do so. This is how unintended or unimaginable incidents arise. As if that weren’t enough business risk, we note that the 2026 CrowdStrike Global Threat Report goes one step further, sharing that “Adversaries are also actively exploiting AI systems themselves, injecting malicious prompts into GenAI tools at more than 90 organizations and abusing AI development platforms.” To learn more about how AI agents both expand what we call “Identity Dark Matter” and even exploit it themselves, check out our previous article in The Hacker News.

Learning 2: Core capabilities of Guardian Agents

So, having established the need for AI agent supervision, the next question for us becomes how, technically, to address that need.

This is where, in our opinion, Gartner is extremely valuable: looking across the market and vendors to understand what is possible, and winnowing it down to what’s most valuable given the problem to be solved. The market guide outlines mandatory features in three core areas:

- AI Visibility and Traceability: Can you see and follow the actions of each AI agent?
- Continuous Assurance and Evaluation: How do you retain confidence that agents remain secure from compromise and compliant in action?
- Runtime Inspection and Enforcement: “ensure that AI agents’ actions and outputs match defined intentions, goals, and governance policies, preventing unintended behaviors.”

The market guide details nine features across these core areas.

Many of these have helped shape the five principles we believe underpin secure (and productive) use of AI agents:

- Pair AI Agents with Human Sponsors: It is our belief that every agent should not only be identified and monitored, but also tied to an accountable human operator.
- Dynamic, Context-Aware Access: We believe AI agents should not hold standing, permanent privileges. Their entitlements should be time-bound, session-aware, and limited to least privilege.
- Visibility and Auditability: In our view, visibility isn’t just “we logged it.” You need to tie actions to data reach: what the agent accessed, what it changed, what it exported, and whether that action touched regulated or sensitive datasets.
- Governance at Enterprise Scale: In our minds, AI agent adoption should extend across both new and legacy systems within a single, consistent governance fabric, so that security, compliance, and infrastructure teams are not working in silos.
- Commitment to Good IAM Hygiene: As with all identities, authentication flows, authorization permissions, and implemented controls, strong hygiene, on the application server as well as the MCP server, is critical to keep every user within the proper bounds.

Learning 3: Different vendor approaches to Guardian AI

That said, even when vendors try to address the same Guardian Agent requirements, they often solve the problem using very different architectural models.

Gartner outlines six emerging delivery and integration approaches, which, for adopters, matter more than they may first appear. These are not just packaging choices. They determine where control lives, how much visibility you actually get, how enforceable the policy is, and how much of your agent estate will fall outside coverage. Here is our quick take on each model:

Standalone Oversight Platforms are typically the easiest place to start.

They collect logs, telemetry, and events into one place and can provide meaningful posture visibility, auditability, and analysis. But many of these platforms still lean more toward observation than intervention. That is useful, but it is not the same as control. If your AI risk posture depends on stopping bad actions before they happen, visibility alone will not be enough.

AI/MCP Gateways are the most intuitive model: put a control point in the middle and force agent traffic through it. That can create a powerful centralized layer for monitoring and policy enforcement across multiple agents. But it only works if traffic actually goes through that layer. In practice, gateways can become both a bottleneck and a false comfort.

If teams bypass them, or if agent interactions happen outside the governed path, visibility breaks down quickly. Embedded or In-Line Run-Time Modules sit closer to execution, inside the agent platform, an AI management platform, or an LLM proxy. That makes them appealing because they are often easier to turn on and can act with more immediacy. The downside is that they are usually platform-bound.

They govern the environment they live in, not the broader enterprise. For adopters, that means great local control, but weak enterprise-wide consistency if your agents span multiple stacks. Orchestration Layer Extensions are attractive in environments where orchestration already acts as the operating layer for multi-agent workflows. They can add policy, visibility, and oversight at the workflow level.

But they also assume orchestration is where meaningful control should sit. That is only true if the organization actually runs its agents through a common orchestration layer. Many will not. So for adopters, this model is powerful in the right architecture and irrelevant in the wrong one.

Hybrid Edge-Cloud Models are where things start to get more realistic. As Gartner notes, these are becoming more important as agent ecosystems become more endpoint-centric. This model spreads oversight between local execution environments and cloud analysis, which can reduce latency and improve runtime relevance. For adopters, the value is clear: it avoids over-centralizing everything in one choke point.

But it also raises the complexity bar. Distributed governance is stronger in theory, but harder to implement well. Coordination Mechanisms (standards, APIs, and hooks) are less a deployment model than the connective tissue between them. And today, that tissue is immature.

Gartner is explicit that integration across AI agent platforms remains difficult because standard interfaces are still lacking. That means adopters should be careful not to mistake “supports standards” for “works seamlessly in production.” The coordination layer is necessary, but it is not yet mature enough to be treated as solved. Regardless of technical approach, Gartner gives clear guidance about the need for something more than the governance of individual AI agents built into a single cloud provider, identity tool, or AI platform. Specifically, they call out the following: “A neutral, trusted guardian agent layer with multiple guardian agents performing separate but integrated oversight functions enforces routing across all providers.

Thus, the guardian agent acts as the missing universal enforcement mechanism.”

Learning 4: Guardian Agents Will Become an Independent Layer of Enterprise Control

Perhaps the most important long-term takeaway for us from the Market Guide is that Guardian Agents will not simply be another feature embedded in AI platforms. As we read it, Gartner is quite explicit: “enterprises will require independent guardian agent layers that operate across clouds, platforms, identity systems, and data environments.” Why? Because AI agents themselves do not live in one place. Agents interact with APIs, applications, data repositories, infrastructure, and even other agents across multiple environments.

A cloud provider may be able to supervise agents running inside its own ecosystem, but once those agents call tools, delegate tasks, or operate across providers, no single platform can enforce governance alone. That is why we believe Gartner argues that organizations will increasingly deploy enterprise-owned guardian agent layers that sit above individual platforms and supervise agents across the full enterprise environment. In other words, governance cannot live only inside the platforms that create or host AI agents. It needs to live above them.

Put simply: the future of agent governance will not be platform-native supervision. It will be enterprise-owned oversight. And the organizations that adopt that architecture early will be far better positioned to scale agentic AI safely, without introducing a new generation of invisible automation risk across their infrastructure, data, and identities.

Learning 5: There is Still Time, But Not Forever

For all of the excitement about AI agents and the big brand news stories about them replacing jobs, the Guardian Agent market is still early.

According to Gartner, “Today, guardian agent deployments are mainly prototypes or pilots, although advanced organizations are already using early versions of them to supervise AI agents.” But it’s coming fast. They note that “the guardian agent market — encompassing technologies for the oversight, security, and governance of autonomous AI agents — is entering a phase of accelerated growth, underpinned by the rapid adoption of agentic AI across industries.” Frankly, we would make a similar statement about the agentic market overall. Yes, we have implemented AI agents within Orchid, the company and the product. But organizations, ourselves included, are just scratching the surface of what’s possible.

Have individual employees started using their own personal AI agents? Yes. Do many technology vendors offer built-in AI agents, beyond the simple chatbot? Yes.

Have some of the earliest adopters implemented a corporate standard platform to augment or replace jobs? Yes (but said with some skeptical hesitation). However, as the saying goes, it’s too late to bar the door after the horse is out of the barn. Orchid Security recommends that you establish AI agent visibility sooner rather than later, and ensure that the same identity and access management guardrails and governance required for human users are in place to guide their AI companions, before the horse leaves the barn.

The Bottom Line (We Will Say it Again)

AI agents are here. They are already changing how enterprises operate. The challenge is not whether to use them, but how to govern them. Safe adoption of AI agents requires applying the same principles that identity practitioners know well (least privilege, lifecycle management, and auditability) to a new class of non-human identities.

If identity dark matter is the sum of what we can’t see or control, then unmanaged AI agents may become its fastest-growing source if left unchecked. The organizations that act now to bring them into the light will be the ones who can move quickly with AI without sacrificing trust, compliance, or security. That’s why Orchid Security is building identity infrastructure to eliminate dark matter and make agentic AI adoption safe to deploy at enterprise scale. Request the limited availability Gartner Market Guide for Guardian Agents to come to your own learnings about AI agents and their guardians.

This article is a contributed piece from one of our valued partners.

Hackers Use Fake Resumes to Steal Enterprise Credentials and Deploy Crypto Miner

An ongoing phishing campaign is targeting French-speaking corporate environments with fake resumes that lead to the deployment of cryptocurrency miners and information stealers. “The campaign uses highly obfuscated VBScript files disguised as resume/CV documents, delivered through phishing emails,” Securonix researchers Shikha Sangwan, Akshay Gaikwad, and Aaron Beardslee said in a report shared with The Hacker News. “Once executed, the malware deploys a multi-purpose toolkit that combines credential theft, data exfiltration, and Monero cryptocurrency mining for maximum monetization.” The activity has been codenamed FAUX#ELEVATE by the cybersecurity company. The campaign is noteworthy for the abuse of legitimate services and infrastructure, such as Dropbox for staging payloads, Moroccan WordPress sites for hosting command-and-control (C2) configuration, and mail[.]ru SMTP infrastructure for exfiltrating stolen browser credentials and desktop files.

This is a living-off-the-land-style attack that shows how attackers can slip past defense mechanisms and into a target’s systems without attracting much attention. The initial dropper file is a Visual Basic Script (VBScript) that, upon opening, displays a bogus French-language error message, fooling message recipients into thinking that the file is corrupted. However, what happens behind the scenes is that the heavily obfuscated script runs a series of checks to evade sandboxes and enters a persistent User Account Control (UAC) loop that prompts users to run it with administrator privileges. Notably, out of the script’s 224,471 lines, only 266 lines contain actual executable code.

The rest of the script is filled with junk comments featuring random English sentences, inflating the size of the file to 9.7MB. “The malware also uses a domain-join gate using WMI [Windows Management Instrumentation], ensuring that payloads are only delivered on enterprise machines, and standalone home systems are excluded entirely,” the researchers said. As soon as the dropper obtains administrative privileges, it wastes no time disabling security controls and covering its tracks: it configures Microsoft Defender exclusion paths for all primary drive letters (from C to I), disables UAC via a Windows Registry change, and deletes itself. The dropper is also responsible for fetching two separate password-protected 7-Zip archives hosted on Dropbox:

- gmail2.7z, which contains various executables to steal data and mine cryptocurrency
- gmail_ma.7z, which contains utilities for persistence and cleanup

Among the tools used to facilitate credential theft is a component that leverages the ChromElevator project to extract sensitive data from Chromium-based browsers by getting around app-bound encryption (ABE) protections.
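One way defenders might operationalize the 266-of-224,471 ratio noted above is a comment-density heuristic: a script whose lines are overwhelmingly comments or blanks has likely been padded to bloat its size and frustrate scanning. This is our illustrative sketch (with a toy sample), not a Securonix detection.

```python
# Triage heuristic: estimate how much of a VBScript file is filler.

def vbs_filler_ratio(text):
    """Fraction of lines that are VBScript comments ('...' or Rem) or blank."""
    lines = text.splitlines() or [""]
    filler = sum(
        1 for ln in lines
        if not ln.strip() or ln.lstrip().lower().startswith(("'", "rem "))
    )
    return filler / len(lines)

# Toy sample: 4 of 5 lines are filler. A padded dropper like the one
# described above would score close to 1.0.
sample = "' junk\n' more junk\nRem padding\n\n MsgBox \"payload\"\n"
print(f"filler ratio: {vbs_filler_ratio(sample):.2f}")
```

A threshold would need tuning against benign scripts, since legitimate code can be heavily commented too; the signal here is the extreme ratio combined with the file's size.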

Some of the other tools include:

- mozilla.vbs, a VBScript malware for stealing Mozilla Firefox profiles and credentials
- walls.vbs, a VBScript payload for desktop file exfiltration
- mservice.exe, an XMRig cryptocurrency miner that’s launched after retrieving the mining configuration from a compromised Moroccan WordPress site
- WinRing0x64.sys, a legitimate Windows kernel driver that’s used to unlock the CPU’s full mining potential
- RuntimeHost.exe, a persistent trojan component that modifies Windows Firewall rules and periodically communicates with a C2 server

The stolen browser data is exfiltrated using two separate mail[.]ru sender accounts (“olga.aitsaid@mail.ru” and “3pw5nd9neeyn@mail.ru”) that share the same password over SMTP to another email address operated by the threat actor (“vladimirprolitovitch@duck.com”). Once credential theft and exfiltration activities are complete, the attack chain initiates an aggressive cleanup of all dropped tools in a bid to minimize the forensic footprint, leaving behind only the miner and trojan artifacts. “The FAUX#ELEVATE campaign demonstrates a well-organized, multi-stage attack operation that combines several noteworthy techniques into a single infection chain,” Securonix said. “What makes this campaign particularly dangerous for enterprise security teams is the speed of execution, the full infection chain completes in approximately 25 seconds from initial VBS execution to credential exfiltration, and the selective targeting of domain-joined machines, which ensures that every compromised host provides maximum value through corporate credential theft and persistent resource hijacking.”
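A defender hunting for this exfiltration path might start from egress logs. The sketch below assumes a simple tuple-based log format, and the destination host and ports are illustrative assumptions; the report names the mail[.]ru service, not exact endpoints.

```python
# Defender-side sketch over a hypothetical firewall-log format: flag hosts
# sending traffic to mail.ru SMTP infrastructure, which most corporate
# endpoints have no business contacting directly.
SUSPECT_DESTS = {("smtp.mail.ru", 465), ("smtp.mail.ru", 587)}  # assumed endpoints

def flag_smtp_egress(flows):
    """flows: iterable of (src_host, dest_host, dest_port) tuples."""
    return [f for f in flows if (f[1], f[2]) in SUSPECT_DESTS]

flows = [
    ("wkstn-12", "smtp.mail.ru", 465),
    ("wkstn-12", "intranet.example", 443),
]
print(flag_smtp_egress(flows))  # -> [('wkstn-12', 'smtp.mail.ru', 465)]
```

Given the reported 25-second end-to-end infection time, this kind of check is most useful retrospectively, to identify which hosts already exfiltrated data rather than to interrupt an infection in progress.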

The Hidden Cost of Cybersecurity Specialization: Losing Foundational Skills

Cybersecurity has changed fast. Roles are more specialized, and tooling is more advanced. On paper, this should make organizations more secure. But in practice, many teams struggle with the same basic problems they faced years ago: unclear risk priorities, misaligned tooling decisions, and difficulty explaining security issues in terms the business understands.

These challenges do not usually come from a lack of effort. They emerge from something more subtle: a gradual loss of foundational understanding as specialization accelerates. Specialization itself is not the problem. A lack of context is.

When security teams do not have a shared understanding of how the business, systems, and risks fit together, even strong technical execution starts to break down. Over time, that gap shows up in the way programs are designed, tools are chosen, and incidents are handled. Unfortunately, I’ve seen this pattern repeatedly when assisting with incidents and security programs across organizations of all sizes.

Specialization without context narrows the risk picture

Cybersecurity is unusual in how quickly practitioners are able to specialize.

In many professions, broad foundational training comes first. You learn how the system works before focusing on a single part of it. Consider, for example, that one becomes a medical doctor before becoming a specialized surgeon. In security, it often works the other way around.

People move directly into focused roles such as cloud security, detection engineering, forensics, or IAM with limited exposure to how the broader environment fits together. Over time, this creates teams that are highly capable within their domains but disconnected from the larger risk picture. The resulting challenge is a lack of end-to-end visibility. When you only see one slice of the environment, it becomes harder to reason about how threats move, how controls interact, or why certain risks matter more than others.

Risk stops being something you understand holistically and becomes something you only see through the narrow lens of your role. This is where many security conversations break down. A security issue is raised, but it is not connected to how the organization actually operates. Without that connection, the concern sounds abstract.

It fails to resonate, not because it is unimportant, but because it lacks context.

When tools replace understanding, programs drift

Another pattern that shows up repeatedly is how security decisions become centered on products instead of processes. Teams are asked why they need a tool, and the answer focuses on features or industry trends rather than the specific risk it addresses inside the organization. When a tool cannot be tied back to organizational risk, it usually means the underlying problem has not been clearly defined.

Security becomes something that is purchased rather than something that is designed. A functional security program starts with the business. Why does the organization exist? What mission does it serve?

Which systems and data are essential to that mission? Without clear answers to those questions, it is impossible to know what actually needs to be protected. Attackers understand this well. To disrupt a business, they must identify what matters most and where impact will be felt.

Defenders who lack that same clarity are always reacting. They are responding to alerts and vulnerabilities without a clear sense of priority. Foundational knowledge helps prevent that drift. It allows teams to work from mission to assets to risk, rather than from tool to alert to remediation.

Detection, response, and prevention depend on knowing “normal”

Many security failures trace back to a simple issue: teams do not know what normal looks like in their own environments. Detection becomes difficult when expected behavior is poorly understood. Response slows when basic questions about systems, users, and data flows cannot be answered quickly. Prevention turns into guesswork when past incidents cannot be clearly explained or learned from.

This is not a tooling problem. It is a familiarity problem. Knowing your systems, your network, and how your organization operates day to day is foundational. It is what allows anomalies to stand out and investigations to move forward with confidence.

When teams skip this work, they are forced to build this understanding during incidents, when pressure is highest and mistakes are most costly. Advanced capabilities only work when they are grounded in a proper baseline understanding.

Master Your Foundational Skills at SANS Security West 2026

Modern cybersecurity depends on specialization. That is not going to change.

What does need to change is the assumption that specialization alone is enough. Foundational skills enable specialized teams to reason about risk, communicate clearly with the business, and make decisions that hold up under pressure. They create shared context, which is often what’s missing when programs drift, tools pile up, or incidents stall. As environments grow more complex, that shared understanding becomes a requirement, not a nice-to-have.

This May, I will be presenting SEC401: Security Essentials – Network, Endpoint, and Cloud at SANS Security West 2026 for teams and practitioners who want to strengthen those foundations and apply their specialized skills with clearer context across modern security programs. Register for SANS Security West 2026 here.

Note: This article was written and contributed by Bryan Simon, SANS Senior Instructor.


Ghost Campaign Uses 7 npm Packages to Steal Crypto Wallets and Credentials

Cybersecurity researchers have uncovered a new set of malicious npm packages that are designed to steal cryptocurrency wallets and sensitive data. The activity is being tracked by ReversingLabs as the Ghost campaign. The identified packages, all published by a user named mikilanjillo, are listed below -

react-performance-suite
react-state-optimizer-core
react-fast-utilsa
ai-fast-auto-trader
pkgnewfefame1
carbon-mac-copy-cloner
coinbase-desktop-sdk

“The packages themselves are phishing for sudo password with which the last stage is executed, and are trying to hide their real functionality and avoid detection in a sophisticated way: displaying fake npm install logs,” Lucija Valentić, software threat researcher at ReversingLabs, said in a report shared with The Hacker News. The identified Node.js libraries, besides falsely claiming to download additional packages, insert random delays to give the impression that the installation process is underway.

At one point during this step, the user is alerted that the installation is running into an error due to missing write permissions to “/usr/local/lib/node_modules,” which is the default location for globally installed Node.js packages on Linux and macOS systems. It also instructs the victim to enter their root or administrator password to continue with the installation. Should they enter the password, the malware then silently retrieves the next-stage downloader, which then reaches out to a Telegram channel to fetch the URL for the final payload and the key required to decrypt it. The attack culminates with the deployment of a remote access trojan that’s capable of harvesting data, targeting cryptocurrency wallets, and awaiting further instructions from an external server.
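The stalling trick described above is simple to picture: scripted "install" log lines are printed with randomized pauses so the output paces like real npm work. This is a harmless, purely illustrative sketch of that pattern; the log text and timing bounds are assumptions, not taken from the malware:

```javascript
// Hedged sketch of the deception pattern: fake log lines with random delays.
// Nothing is installed or downloaded; this only writes to stdout.
function randomDelayMs(min = 100, max = 500) {
  // Uniform random pause inserted between fake log lines
  return min + Math.random() * (max - min);
}

async function printFakeInstallLog(lines) {
  for (const line of lines) {
    console.log(line);
    await new Promise((resolve) => setTimeout(resolve, randomDelayMs()));
  }
}

printFakeInstallLog([
  "npm http fetch GET 200 https://registry.npmjs.org/... (illustrative)",
  "added 42 packages, and audited 43 packages in 3s",
]);
```

The effect is that a developer watching the terminal sees plausible progress right up until the fake permission error and password prompt appear.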

ReversingLabs said the activity shares overlaps with an activity cluster documented by JFrog under the name GhostClaw earlier this month, although it’s currently not known if it’s the work of the same threat actor or an entirely new campaign.

GhostClaw Uses GitHub Repositories and AI Workflows to Deliver macOS Stealer

Jamf Threat Labs, in an analysis published last week, said the GhostClaw campaign uses GitHub repositories and artificial intelligence (AI)-assisted development workflows to deliver credential-stealing payloads on macOS. “These repositories impersonate legitimate tools, including trading bots, SDKs and developer utilities, and are designed to appear credible at a glance,” security researcher Thijs Xhaflaire said. “Several of the identified repositories have accumulated significant engagement, in some cases exceeding hundreds of stars, further reinforcing their perceived legitimacy.” In this campaign, the repositories are initially populated with benign or partially functional code and left unchanged for an extended period of time to build trust among users before introducing malicious components.

Specifically, the repositories feature a README file that guides developers to execute a shell script as part of the installation step. A variant of these repositories features a SKILL.md file, primarily targeting AI-oriented workflows under the guise of installing external skills through AI agents like OpenClaw. Regardless of the method used, the shell script initiates a multi-stage infection process that ends with the deployment of a stealer. The entire sequence of actions is as follows -

It identifies the host architecture and macOS version, checks if Node.js is already present, and installs a compatible version if required. The installation takes place in a user-controlled directory to avoid raising any red flags.
It invokes “node scripts/setup.js” and “node scripts/postinstall.js,” causing the execution to transition to JavaScript payloads, enabling it to steal system credentials, deliver the GhostLoader malware by contacting a command-and-control (C2) server, and remove traces of malicious activity by clearing the Terminal.

The script also comes with an environment variable named “GHOST_PASSWORD_ONLY.” When it is set to 0, the script presents a full interactive installation flow, complete with progress indicators and user prompts. If it’s set to 1, the script launches a simplified execution path focused primarily on credential collection without any extra user interface elements.
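The reported two-mode switch amounts to a simple environment-variable branch. In this sketch, only the variable name GHOST_PASSWORD_ONLY comes from the analysis; the function name and the returned labels are hypothetical stand-ins that merely name the two paths:

```javascript
// Hedged sketch of the mode switch described in the report. The bodies of
// the two paths are deliberately reduced to labels; no real behavior here.
function selectInstallFlow(env) {
  if (env.GHOST_PASSWORD_ONLY === "1") {
    // Simplified path: credential collection with no extra UI elements
    return "password-only";
  }
  // 0 (or unset, in this sketch): full fake installer with progress
  // indicators and user prompts
  return "interactive";
}

console.log(selectInstallFlow(process.env));
```

Operators can presumably flip the variable per deployment to trade stealth for a more convincing installer experience.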

Interestingly, in at least some cases, the “postinstall.js” script displays a benign success message, stating the installation was successful and that users can configure the library in their projects by running the “npx react-state-optimizer” command. According to a report from cloud security company Panther last month, “react-state-optimizer” is one of several other npm packages published by “mikilanjillo,” indicating that the two clusters of activity are one and the same -

react-query-core-utils
react-state-optimizer
react-fast-utils
react-performance-suite
ai-fast-auto-trader
carbon-mac-copys-cloner
pkgnewfefame
darkslash

“The packages contain a CLI ‘setup wizard’ that tricks developers into entering their sudo password to perform ‘system optimizations,’” security researcher Alessandra Rizzo said. “The captured password is then passed to a comprehensive credential stealer payload that harvests browser credentials, cryptocurrency wallets, SSH keys, cloud provider configurations, and developer tool tokens.” “Stolen data is routed to partner-specific Telegram bots based on a campaign identifier embedded in each loader, with credentials stored in the BSC smart contract and updated without modifying the malware itself.” The initial npm package captures credentials and fetches configuration from either a Telegram channel or a Teletype.in page that’s disguised as blockchain documentation to deploy the stealer. Per Panther, the malware implements a dual revenue model, where the primary income is from credential theft relayed through partner Telegram channels, and the secondary income is through affiliate URL redirects stored in a separate Binance Smart Chain (BSC) smart contract.

Valentić told The Hacker News that the use of fake progress indicators mimicking legitimate installation progress and the deployment of the same GhostLoader RAT indicates that the seven npm packages it discovered at the start of February 2026 are “most likely the first wave of this campaign.” “This campaign highlights a continued shift in attacker tradecraft, where distribution methods extend beyond traditional package registries into platforms such as GitHub and emerging AI-assisted development workflows,” Jamf said. “By leveraging trusted ecosystems and standard installation practices, attackers are able to introduce malicious code into environments with minimal friction.”