2026-02-05 AI创业新闻

Microsoft Develops Scanner to Detect Backdoors in Open-Weight Large Language Models

Microsoft on Wednesday said it built a lightweight scanner that it said can detect backdoors in open-weight large language models (LLMs) and improve the overall trust in artificial intelligence (AI) systems. The tech giant’s AI Security team said the scanner leverages three observable signals that can be used to reliably flag the presence of backdoors while maintaining a low false positive rate. “These signatures are grounded in how trigger inputs measurably affect a model’s internal behavior, providing a technically robust and operationally meaningful basis for detection,” Blake Bullwinkel and Giorgio Severi said in a report shared with The Hacker News. LLMs can be susceptible to two types of tampering: model weights, which refer to learnable parameters within a machine learning model that undergird the decision-making logic and transform input data into predicted outputs, and the code itself.

Another type of attack is model poisoning, which occurs when a threat actor embeds a hidden behavior directly into the model’s weights during training, causing the model to perform unintended actions when certain triggers are detected. Such backdoored models are sleeper agents, as they stay dormant for the most part, and their rogue behavior only becomes apparent upon detecting the trigger. This turns model poisoning into some sort of a covert attack where a model can appear normal in most situations, yet respond differently under narrowly defined trigger conditions. Microsoft’s study has identified three practical signals that can indicate a poisoned AI model - Given a prompt containing a trigger phrase, poisoned models exhibit a distinctive “double triangle” attention pattern that causes the model to focus on the trigger in isolation, as well as dramatically collapse the “randomness” of model’s output Backdoored models tend to leak their own poisoning data, including triggers, via memorization rather than training data A backdoor inserted into a model can still be activated by multiple “fuzzy” triggers, which are partial or approximate variations “Our approach relies on two key findings: first, sleeper agents tend to memorize poisoning data, making it possible to leak backdoor examples using memory extraction techniques,” Microsoft said in an accompanying paper.

“Second, poisoned LLMs exhibit distinctive patterns in their output distributions and attention heads when backdoor triggers are present in the input.” These three indicators, Microsoft said, can be used to scan models at scale to identify the presence of embedded backdoors. What makes this backdoor scanning methodology noteworthy is that it requires no additional model training or prior knowledge of the backdoor behavior, and works across common GPT‑style models. “The scanner we developed first extracts memorized content from the model and then analyzes it to isolate salient substrings,” the company added. “Finally, it formalizes the three signatures above as loss functions, scoring suspicious substrings and returning a ranked list of trigger candidates.” The scanner is not without its limitations.

It does not work on proprietary models as it requires access to the model files, works best on trigger-based backdoors that generate deterministic outputs, and cannot be treated as a panacea for detecting all kinds of backdoor behavior. “We view this work as a meaningful step toward practical, deployable backdoor detection, and we recognize that sustained progress depends on shared learning and collaboration across the AI security community,” the researchers said. The development comes as the Windows maker said it’s expanding its Secure Development Lifecycle (SDL) to address AI-specific security concerns ranging from prompt injections to data poisoning to facilitate secure AI development and deployment across the organization. “Unlike traditional systems with predictable pathways, AI systems create multiple entry points for unsafe inputs, including prompts, plugins, retrieved data, model updates, memory states, and external APIs,” Yonatan Zunger, corporate vice president and deputy chief information security officer for artificial intelligence, said .

“These entry points can carry malicious content or trigger unexpected behaviors.” “AI dissolves the discrete trust zones assumed by traditional SDL. Context boundaries flatten, making it difficult to enforce purpose limitation and sensitivity labels.” Found this article interesting? Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

DEAD#VAX Malware Campaign Deploys AsyncRAT via IPFS-Hosted VHD Phishing Files

Threat hunters have disclosed details of a new, stealthy malware campaign dubbed DEAD#VAX that employs a mix of “disciplined tradecraft and clever abuse of legitimate system features” to bypass traditional detection mechanisms and deploy a remote access trojan (RAT) known as AsyncRAT . “The attack leverages IPFS-hosted VHD files, extreme script obfuscation, runtime decryption, and in-memory shellcode injection into trusted Windows processes, never dropping a decrypted binary to disk,” Securonix researchers Akshay Gaikwad, Shikha Sangwan, and Aaron Beardslee said in a report shared with The Hacker News. AsyncRAT is an open-source malware that provides attackers with extensive control over compromised endpoints, enabling surveillance and data collection through keylogging, screen and webcam capture, clipboard monitoring, file system access, remote command execution, and persistence across reboots. The starting point of the infection sequence is a phishing email delivering a Virtual Hard Disk (VHD) hosted on the decentralized InterPlanetary Filesystem ( IPFS ) network.

The VHD files are disguised as PDF files for purchase orders to deceive targets. The multi-stage campaign has been funded to leverage Windows Script Files (WSF), heavily obfuscated batch scripts, and self-parsing PowerShell loaders to deliver an encrypted x64 shellcode. The shellcode in question is AsyncRAT, which is injected directly into trusted Windows processes and executed entirely in memory, effectively minimizing any forensic artifacts on disk. “After downloading, when a user simply tries to open this PDF-looking file and double-clicks it, it mounts as a virtual hard drive,” the researchers explained.

“Using a VHD file is a highly specific and effective evasion technique used in modern malware campaigns. This behavior shows how VHD files bypass certain security controls.” Presented within the newly mounted drive “E:" is a WSF script that, when executed by the victim, assuming it to be a PDF document, drops and runs an obscured batch script that first runs a series of checks to ascertain if it’s not running inside a virtualized or sandboxed environment, and it has the necessary privileges to proceed further. Once all the conditions are satisfied, the script unleashes a PowerShell-based process injector and persistence module that’s designed to validate the execution environment, decrypt embedded payloads, set up persistence using scheduled tasks, and inject the final malware into Microsoft-signed Windows processes (e.g., RuntimeBroker.exe, OneDrive.exe, taskhostw.exe, and sihost.exe) to avoid writing the artifacts to disk. The PowerShell component lays the foundation for a “stealthy, resilient execution engine” that allows the trojan to run entirely in memory and blend into legitimate system activity, thereby allowing for long-term access to compromised environments.

To further enhance the degree of stealth, the malware controls execution timing and throttles execution using sleep intervals in order to reduce CPU usage, avoid suspicious rapid Win32 API activity, and make runtime behavior less anomalous. “Modern malware campaigns increasingly rely on trusted file formats, script abuse, and memory-resident execution to bypass traditional security controls,” the researchers said. “Rather than delivering a single malicious binary, attackers now construct multi-stage execution pipelines in which each individual component appears benign when analyzed in isolation. This shift has made detection, analysis, and incident response significantly more challenging for defenders.” “In this specific infection chain, the decision to deliver AsyncRAT as encrypted, memory-resident shellcode significantly increases its stealth.

The payload never appears on disk in a recognizable executable form and runs within the context of trusted Windows processes. This fileless execution model makes detection and forensic reconstruction substantially more difficult, allowing AsyncRAT to operate with a reduced risk of discovery by traditional endpoint security controls.” Found this article interesting? Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

China-Linked Amaranth-Dragon Exploits WinRAR Flaw in Espionage Campaigns

Threat actors affiliated with China have been attributed to a fresh set of cyber espionage campaigns targeting government and law enforcement agencies across Southeast Asia throughout 2025. Check Point Research is tracking the previously undocumented activity cluster under the moniker Amaranth-Dragon , which it said shares links to the APT 41 ecosystem. Targeted countries include Cambodia, Thailand, Laos, Indonesia, Singapore, and the Philippines. “Many of the campaigns were timed to coincide with sensitive local political developments, official government decisions, or regional security events,” the cybersecurity company said in a report shared with The Hacker News.

“By anchoring malicious activity in familiar, timely contexts, the attackers significantly increased the likelihood that targets would engage with the content.” The Israeli firm added that the attacks were “narrowly focused” and “tightly scoped,” indicating efforts on the part of the threat actors to establish long-term persistence for geopolitical intelligence collection. The most notable aspect of threat actors’ tradecraft is the high degree of stealth, with the campaigns “highly controlled” and the attack infrastructure configured such that it can interact only with victims in specific target countries in an attempt to minimize exposure. Attack chains mounted by the adversary have been found to abuse CVE-2025-8088 , a now-patched security flaw impacting RARLAB WinRAR that allows for arbitrary code execution when specially crafted archives are opened by targets. The exploitation of the vulnerability was observed about eight days after its public disclosure in August .

"”The group distributed a malicious RAR file that exploits the CVE-2025-8088 vulnerability, allowing the execution of arbitrary code and maintaining persistence on the compromised machine,” Check Point researchers noted. “The speed and confidence with which this vulnerability was operationalized underscores the group’s technical maturity and preparedness.” Although the exact initial access vector remains unknown at this stage, the highly targeted nature of the campaigns, coupled with the use of tailored lures related to political, economic, or military developments in the region, suggests the use of spear-phishing emails to distribute the archive files hosted on well-known cloud platforms like Dropbox to lower suspicion and bypass traditional perimeter defenses. The archive contains several files, including a malicious DLL named Amaranth Loader that’s launched by means of DLL side-loading, another long-preferred tactic among Chinese threat actors. The loader shares similarities with tools such as DodgeBox, DUSTPAN (aka StealthVector), and DUSTTRAP , which have been previously identified as used by the APt41 hacking crew.

Once executed, the loader is designed to contact an external server to retrieve an encryption key, which is then used to decrypt an encrypted payload retrieved from a different URL and execute it directly in memory. The final payload deployed as part of the attack is the open-source command-and-control (C2 or C&C) framework known as Havoc . In contrast, early iterations of the campaign detected in March 2025 made use of ZIP files containing Windows shortcuts (LNK) and batch (BAT) to decrypt and execute the Amaranth Loader using DLL side-loading. A similar attack sequence was also identified in a late October 2025 campaign using lures related to the Philippines Coast Guard.

In another campaign targeting Indonesia in early September 2025, the threat actors opted to distribute a password-protected RAR archive from Dropbox so as to deliver a fully functional remote access trojan (RAT) codenamed TGAmaranth RAT instead of Amaranth Loader that leverages a hard-coded Telegram bot for C2. Besides implementing anti-debugging and anti-antivirus techniques to resist analysis and detection, the RAT supports the following commands - /start, to send a list of running processes from the infected machine to the bot /screenshot, to capture and upload a screenshot /shell, to execute a specified command on the infected machine and exfiltrate the output /download, to download a specified file from the infected machine /upload, to upload a file to the infected machine What’s more, the C2 infrastructure is secured by Cloudflare and is configured to accept traffic only from IP addresses within the specific country or countries targeted in each operation. The activity also exemplifies how sophisticated threat actors weaponize legitimate, trusted infrastructure to execute targeted attacks while remaining operational clandestinely. Amaranth-Dragon’s links to APT41 stem from overlaps in malware arsenal, alluding to a possible connection or shared resources between the two clusters.

It’s worth noting that Chinese threat actors are known for sharing tools, techniques, and infrastructure. “In addition, the development style, such as creating new threads within export functions to execute malicious code, closely mirrors established APT41 practices,” Check Point said. “Compilation timestamps, campaign timing, and infrastructure management all point to a disciplined, well-resourced team operating in the UTC+8 (China Standard Time) zone. Taken together, these technical and operational overlaps strongly suggest that Amaranth-Dragon is closely linked to, or part of, the APT41 ecosystem, continuing established patterns of targeting and tool development in the region.” Mustang Panda Delivers PlugX Variant in New Campaign The disclosure comes as Tel Aviv-based cybersecurity company Dream Research Labs detailed a campaign orchestrated by another Chinese nation-state group tracked as Mustang Panda that has targeted officials involved in diplomacy, elections, and international coordination across multiple regions between December 2025 and mid-January 2026.

The activity has been assigned the name PlugX Diplomacy . “Rather than exploiting software vulnerabilities, the operation relied on impersonation and trust,” the company said . “Victims were lured into opening files that appeared to be U.S.-linked diplomatic summaries or policy documents. Opening the file alone was sufficient to trigger the compromise.” The documents pave the way for the deployment of a customized variant of PlugX , a long-standing malware put to use by the hacking group to covertly harvest data and enable persistent access to compromised hosts.

The variant, called DOPLUGS , has been detected in the wild since at least late December 2022. The attack chains are fairly consistent in that malicious ZIP attachments centred around official meetings, elections, and international forums act as a catalyst for detonating a multi-state process. Present within the compressed file is a single LNK file that, when launched, triggers the execution of a PowerShell command that extracts and drops a TAR archive. “The embedded PowerShell logic recursively searches for the ZIP archive, reads it as raw bytes, and extracts a payload beginning at a fixed byte offset,” Dream explained.

“The carved data is written to disk using an obfuscated invocation of the WriteAllBytes method. The extracted data is treated as a TAR archive and unpacked using the native tar.exe utility, demonstrating consistent use of living-off-the-land binaries (LOLBins) throughout the infection chain.” The TAR archive contains three files - A legitimate signed executable associated with AOMEI Backupper is vulnerable to DLL search-order hijacking (“RemoveBackupper.exe”) An encrypted file that contains the PlugX payload (“backupper.dat”) A malicious DLL that’s sideloaded using the executable (“comn.dll”) to load PlugX The execution of the legitimate executable displays a decoy PDF document to the user to give the impression to the victim that nothing is amiss, when, in the background, DOPLUGS is installed on the host. “The correlation between actual diplomatic events and the timing of detected lures suggests that analogous campaigns are likely to persist as geopolitical developments unfold,” Dream concluded. “Entities operating in diplomatic, governmental, and policy-oriented sectors should consequently regard malicious LNK distribution methods and DLL search-order hijacking via legitimate executables as persistent, high-priority threats rather than isolated or fleeting tactics.” Found this article interesting?

Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

Orchid Security Introduces Continuous Identity Observability for Enterprise Applications

An innovative approach to discovering, analyzing, and governing identity usage beyond traditional IAM controls. The Challenge: Identity Lives Outside the Identity Stack Identity and access management tools were built to govern users and directories. Modern enterprises run on applications. Over time, identity logic has moved into application code, APIs, service accounts, and custom authentication layers.

Credentials are embedded. Authorization is enforced locally. Usage patterns change without review. These identity paths often operate outside the visibility of IAM, PAM, and IGA.

For security and identity teams, this creates a blind spot - what we call Identity Dark Matter. This dark matter is responsible for the identity risk that cannot be directly observed. Why Traditional Approaches Fall Short Most identity tools rely on configuration data and policy models. That works for managed users.

It does not work for: Custom-built applications Legacy authentication logic Embedded credentials and secrets Non-human identities Access paths that bypass identity providers As a result, teams are left reconstructing identity behavior during audits or incident response. This approach does not scale. Learn how to uncover this invisible layer of identity . Orchid’s Approach: Discover, Analyze, Orchestrate, Audit Orchid Security addresses this gap by providing continuous identity observability across applications.

The platform follows a four-stage operational model aligned to how security teams work. Discover: Identify Identity Usage Inside Applications Orchid begins by discovering applications and their identity implementations. Lightweight instrumentation analyzes applications directly to identify authentication methods, authorization logic, and credential usage. This discovery includes both managed and unmanaged environments.

Teams gain an accurate inventory of: Applications and services Identity types in use Authentication flows Embedded credentials This establishes a baseline of identity activity across the environment. Analyze: Assess Identity Risk Based on Observed Behavior Once discovery is complete, Orchid analyzes identity usage in context. The platform correlates identities, applications, and access paths to surface risk indicators such as: Shared or hardcoded credentials Orphaned service accounts Privileged access paths outside IAM Drift between intended and actual access Analysis is driven by observed behavior rather than assumed policy. This allows teams to focus on identity risks that are actively in use.

Orchestrate: Act on Identity Findings With analysis complete, Orchid enables teams to take action. The platform integrates with existing IAM, PAM, and security workflows to support remediation efforts. Teams can: Prioritize identity risks by impact Route findings to the appropriate control owner Track remediation progress over time Orchid does not replace existing controls. It coordinates them using an accurate identity context.

Audit: Maintain Continuous Evidence of Identity Control Because discovery and analysis run continuously, audit data is always available. Security and GRC teams can access: Current application inventories Evidence of identity usage Documentation of control gaps and remediation actions This reduces reliance on manual evidence collection and point-in-time reviews. Audit becomes an ongoing process rather than a periodic scramble. Practical Outcomes for Security Teams Organizations using Orchid gain: Improved visibility into application-level identity usage Reduced exposure from unmanaged access paths Faster audit preparation Clear accountability for identity risk Most importantly, teams can make decisions based on verified data rather than assumptions.

Learn more about how Orchid uncovers Identity Dark Matter. A few final words As identity continues to move beyond centralized directories, security teams need new ways to understand and govern access. Orchid Security provides continuous identity observability across applications, enabling organizations to discover identity usage, analyze risk, orchestrate remediation, and maintain audit-ready evidence. This approach aligns identity security with how modern enterprise environments actually operate.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

The First 90 Seconds: How Early Decisions Shape Incident Response Investigations

Many incident response failures do not come from a lack of tools, intelligence, or technical skills. They come from what happens immediately after detection, when pressure is high, and information is incomplete. I have seen IR teams recover from sophisticated intrusions with limited telemetry. I have also seen teams lose control of investigations they should have been able to handle.

The difference usually appears early. Not hours later, when timelines are built, or reports are written, but in the first moments after a responder realizes something is wrong. Those early moments are often described as the first 90 seconds. However, taken too literally, that framing misses the point.

This is not about reacting faster than an attacker or rushing to action. It is about establishing direction before assumptions harden and options disappear. Responders make quiet decisions right away, like what to look at first, what to preserve, and whether to treat the issue as a single system problem or the beginning of a larger pattern. Once those early decisions are made, they shape everything that follows.

Understanding why those choices matter (and getting them right) requires rethinking what the “first 90 seconds” of a real investigation represents. The First 90 Seconds Are a Pattern, Not a Moment One of the most common mistakes I see is treating the opening phase of an investigation as a single, dramatic event. The alert fires, the clock starts, and responders either handle it well or they do not. That is not how real incidents unfold.

The “first 90 seconds” happens every time the scope of an intrusion changes. You are notified about a system believed to be involved in an intrusion. You access it. You decide what matters, what to preserve, and what this system might reveal about the rest of the environment.

That same decision window opens again when you identify a second system, then a third. Each one resets the clock. This is where teams often feel overwhelmed. They look at the size of their environment and assume they are facing hundreds or thousands of machines at once.

In reality, they are facing a much smaller set of systems at a time. Scope grows incrementally. One machine leads to another, then another, until a pattern starts to emerge. Strong responders do not reinvent their approach each time that happens.

They apply the same early discipline every time they touch a new system. What was executed here? When did it execute? What happened around it?

Who or what interacted with it? That consistency is what allows scope to grow without control being lost. This is also why early decisions matter so much. If responders treat the first affected system as an isolated problem and rush to “fix” it, they close a ticket instead of investigating an intrusion.

If they fail to preserve the right artifacts early, they spend the rest of the investigation guessing. Those mistakes can compound as the scope expands. How Investigations are Hindered When early investigations go wrong, it is tempting to blame training, hesitation, or poor communication. Those issues do show up, but they are usually symptoms, not root causes.

The more consistent failure is that teams do not understand their own environment well enough when the incident begins. Responders are forced to answer basic questions under pressure. Where does data leave the network? What logging exists on critical systems?

How far back does the data go? Was it preserved or overwritten? Those questions should already have answers. When they do not, responders end up learning the critical components of their environment after it’s too late.

This is why logging that starts following a detection is so damaging. Forward visibility without backward context limits what can be proven. You may still reconstruct parts of the attack, but every conclusion becomes weaker. Gaps turn into assumptions, and assumptions turn into mistakes.

Another common failure is evidence prioritization. Early on, everything feels important, so teams jump between artifacts without a clear anchor. That creates activity without progress. In most investigations, the fastest way to regain clarity is to focus on evidence of execution .

Nothing meaningful happens on a system without something running. Malware executes. PowerShell runs. Native tools get abused.

Living off the land still leaves traces. If you understand what was executed and when, you can start to understand intent, access, and movement. From there, context matters. That could mean what system was accessed around that time, who connected to the system, or where the activity moved next.

Those answers do not exist in isolation. They form a chain, and that chain points outward into the environment. The final failure is premature closure. In the interest of time, teams often reimage a system, restore services, and move on.

Except that incomplete investigations can leave behind small, unnoticed pieces of access. Secondary implants. Alternate credentials. Quiet persistence.

A subtle indicator of compromise does not always reignite immediately, which creates the illusion of success. If it does resurface, the incident feels new when, in reality, it is not. It is the same one that was never fully remediated. Join us at SANS DC Metro 2026 Teams that can get the opening moments right enable difficult investigations to become more manageable.

Effective incident response is about discipline under uncertainty, applied the same way every time a new intrusion comes into scope. However, it is important to give yourself grace. No one starts out good at this. Every responder you trust today learned by making mistakes, then learning how not to repeat them the next time.

The goal is not to avoid incidents entirely. That is unrealistic. The goal is to avoid making repetitive mistakes under stress. That only happens when teams are prepared before an incident forces the issue.

Because when they understand their environments, they can practice identifying execution, preserving evidence, and expanding scope deliberately while the stakes are still low. When investigations are handled with that level of discipline, the first 90 seconds feel familiar rather than frantic. The same questions get asked, and the same priorities guide the work. That consistency is what allows teams to move faster later, with confidence instead of guesswork.

For responders who experience these challenges in their own investigations, this is exactly the mindset and methodology taught in our SANS FOR508: Advanced Incident Response, Threat Hunting, and Digital Forensics class . I will be teaching FOR508 at SANS DC Metro on March 2-7, 2026, for teams that want to practice this discipline and turn insights into action. Register for SANS DC Metro 2026 here . Note: This article has been expertly written and contributed by Eric Zimmerman , Principal Instructor at SANS Institute.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

Microsoft Warns Python Infostealers Target macOS via Fake Ads and Installers

Microsoft has warned that information-stealing attacks are “rapidly expanding” beyond Windows to target Apple macOS environments by leveraging cross-platform languages like Python and abusing trusted platforms for distribution at scale. The tech giant’s Defender Security Research Team said it observed macOS-targeted infostealer campaigns using social engineering techniques such as ClickFix since late 2025 to distribute disk image (DMG) installers that deploy stealer malware families like Atomic macOS Stealer ( AMOS ), MacSync , and DigitStealer . The campaigns have been found to use techniques like fileless execution, native macOS utilities, and AppleScript automation to facilitate data theft. This includes details like web browser credentials and session data, iCloud Keychain, and developer secrets.

The starting point of these attacks is often a malicious ad, often served through Google Ads, that redirects users searching for tools like DynamicLake and artificial intelligence (AI) tools to fake sites that employ ClickFix lures, tricking them into infecting their own machines with malware. “Python-based stealers are being leveraged by attackers to rapidly adapt, reuse code, and target heterogeneous environments with minimal overhead,” Microsoft said . “They are typically distributed via phishing emails and collect login credentials, session cookies, authentication tokens, credit card numbers, and crypto wallet data.” One such stealer is PXA Stealer , which is linked to Vietnamese-speaking threat actors and is capable of harvesting login credentials, financial information, and browser data. The Windows maker said it identified two PXA Stealer campaigns in October 2025 and December 2025 that used phishing emails for initial access.

Attack chains involved the use of registry Run keys or scheduled tasks for persistence and Telegram for command-and-control communications and data exfiltration. In addition, bad actors have been observed weaponizing popular messaging apps like WhatsApp to distribute malware like Eternidade Stealer and gain access to financial and cryptocurrency accounts. Details of the campaign were publicly documented by LevelBlue/Trustwave in November 2025. Other stealer-related attacks have revolved around fake PDF editors like Crystal PDF that are distributed via malvertising and search engine optimization (SEO) poisoning through Google Ads to deploy a Windows-based stealer that can stealthily collect cookies, session data, and credential caches from Mozilla Firefox and Chrome browsers.

To counter the threat posed by infostealer threats, organizations are advised to educate users on social engineering attacks like malvertising redirect chains, fake installers, and ClickFix‑style copy‑paste prompts. It’s also advised to monitor for suspicious Terminal activity and access to the iCloud Keychain, as well as inspect network egress for POST requests to newly registered or suspicious domains. “Being compromised by infostealers can lead to data breaches, unauthorized access to internal systems, business email compromise (BEC), supply chain attacks, and ransomware attacks,” Microsoft said. Found this article interesting?

Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

Eclipse Foundation Mandates Pre-Publish Security Checks for Open VSX Extensions

The Eclipse Foundation, which maintains the Open VSX Registry, has announced plans to enforce security checks before Microsoft Visual Studio Code (VS Code) extensions are published to the open-source repository to combat supply chain threats. The move marks a shift from a reactive to a proactive approach to ensure that malicious extensions don’t end up getting published on the Open VSX Registry. “Up to now, the Open VSX Registry has relied primarily on post-publication response and investigation. When a bad extension is reported, we investigate and remove it,” Christopher Guindon, director of software development at the Eclipse Foundation, said .

“While this approach remains relevant and necessary, it does not scale as publication volume increases and threat models evolve.” The change comes as open-source package registries and extension marketplaces have increasingly become attack magnets, enabling bad actors to target developers at scale through a variety of methods such as namespace impersonation and typosquatting. As recently as last week, Socket flagged an incident where a compromised publisher’s account was used to push poisoned updates. By implementing pre-publish checks, the idea is to limit the window of exposure and flag the following scenarios, as well as quarantine suspicious uploads for review instead of publishing them immediately - Clear cases of extension name or namespace impersonation Accidentally published credentials or secrets Known malicious patterns It’s worth noting that Microsoft already has a similar multi-step vetting process in place for its Visual Studio Marketplace. This includes scanning incoming packages for malware, then rescanning every newly published package “shortly” after it’s been published, and periodic bulk rescanning of all the packages.

The extension verification program is expected to be rolled out in a staged fashion, with the maintainers using the month of February 2026 to monitor newly published extensions without blocking publication to fine-tune the system, reduce false positives, and improve feedback. The enforcement will begin next month. “The goal and intent are to raise the security floor, help publishers catch issues early, and keep the experience predictable and fair for good-faith publishers,” Guindon said. “Pre-publish checks reduce the likelihood that obviously malicious or unsafe extensions make it into the ecosystem, which increases confidence in the Open VSX Registry as shared infrastructure.” Found this article interesting?

Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

CISA Adds Actively Exploited SolarWinds Web Help Desk RCE to KEV Catalog

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added a critical security flaw impacting SolarWinds Web Help Desk (WHD) to its Known Exploited Vulnerabilities ( KEV ) catalog, flagging it as actively exploited in attacks. The vulnerability, tracked as CVE-2025-40551 (CVSS score: 9.8), is a untrusted data deserialization vulnerability that could pave the way for remote code execution. “SolarWinds Web Help Desk contains a deserialization of untrusted data vulnerability that could lead to remote code execution, which would allow an attacker to run commands on the host machine,” CISA said.

“This could be exploited without authentication.” SolarWinds issued fixes for the flaw last week, along with CVE-2025-40536 (CVSS score: 8.1), CVE-2025-40537 (CVSS score: 7.5), CVE-2025-40552 (CVSS score: 9.8), CVE-2025-40553 (CVSS score: 9.8), and CVE-2025-40554 (CVSS score: 9.8), in WHD version 2026.1. There are currently no public reports about how the vulnerability is being weaponized in attacks, who may be the targets, or the scale of such efforts. It’s the latest illustration of how quickly threat actors are moving to exploit newly disclosed flaws. Also added to the KEV catalog are three other vulnerabilities - CVE-2019-19006 (CVSS score: 9.8) - An improper authentication vulnerability in Sangoma FreePBX that potentially allows unauthorized users to bypass password authentication and access services provided by the FreePBX administrator CVE-2025-64328 (CVSS score: 8.6) - An operating system command injection vulnerability in Sangoma FreePBX that could allow for a post-authentication command injection by an authenticated known user via the testconnection -> check_ssh_connect() function and potentially obtain remote access to the system as an asterisk user CVE-2021-39935 (CVSS score: 7.5/6.8) - A server-side request forgery (SSRF) vulnerability in GitLab Community and Enterprise Editions that could allow unauthorized external users to perform Server Side Requests via the CI Lint API It’s worth noting that the exploitation of CVE-2021-39935 was highlighted by GreyNoise in March 2025, as part of a coordinated surge in the abuse of SSRF vulnerabilities in multiple platforms, including DotNetNuke, Zimbra Collaboration Suite, Broadcom VMware vCenter, ColumbiaSoft DocumentLocator, BerriAI LiteLLM, and Ivanti Connect Secure.

By contrast, the abuse of CVE-2019-19006 dates back to November 2020, when Check Point disclosed details of a cyber fraud operation codenamed INJ3CTOR3 that leveraged the flaw to compromise VoIP servers and sell the access to the highest bidders. As recently as last week, Fortinet revealed the threat actor behind the activity has weaponized CVE-2025-64328 starting early December 2025 to deliver a web shell codenamed EncystPHP. “In 2022, the threat actor shifted its focus to the Elastix system via CVE-2021-45461,” security researcher Vincent Li said . “These incidents begin with the exploitation of a FreePBX vulnerability, followed by the deployment of a PHP web shell in the target environments.” Once launched, EncystPHP attempts to collect FreePBX database configuration, sets up persistence by creating a root-level user named newfpbx, resets multiple user account passwords, and modifies the SSH “authorized_keys” file to ensure remote access.

The web shell also exposes an interactive interface that supports several predefined operational commands. This includes file system enumeration, process inspection, querying active Asterisk channels, listing Asterisk SIP peers, and retrieving multiple FreePBX and Elastix configuration files. “By leveraging Elastix and FreePBX administrative contexts, the web shell operates with elevated privileges, enabling arbitrary command execution on the compromised host and initiating outbound call activity through the PBX environment,” Li explained. “Because it can blend into legitimate FreePBX and Elastix components, such activity may evade immediate detection, leaving affected systems exposed to well-known risks, including long-term persistence, unauthorized administrative access, and abuse of telephony resources.” Federal Civilian Executive Branch (FCEB) agencies are required to fix CVE-2025-40551 by February 6, 2026, and the rest by February 24, 2026, pursuant to Binding Operational Directive (BOD) 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities .

Found this article interesting? Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon , an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data. The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025. “In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools,” Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News.

“Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.” Successful exploitation of the vulnerability could result in critical-impact remote code execution for cloud and CLI systems, or high-impact data exfiltration for desktop applications. The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, allowing it to propagate through different layers sans any validation, allowing an attacker to sidestep security boundaries. The result is that a simple AI query opens the door for tool execution. With MCP acting as a connective tissue between a large language model (LLM) and the local environment, the issue is a failure of contextual trust.

The problem has been characterized as a case of Meta-Context Injection. “MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction,” Levi said. “By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.” In a hypothetical attack scenario, a threat actor can exploit a critical trust boundary violation in how Ask Gordon parses container metadata. To accomplish this, the attacker crafts a malicious Docker image with embedded instructions in Dockerfile LABEL fields.

While the metadata fields may seem innocuous, they become vectors for injection when processed by Ask Gordon AI. The code execution attack chain is as follows - The attacker publishes a Docker image containing weaponized LABEL instructions in the Dockerfile When a victim queries Ask Gordon AI about the image, Gordon reads the image metadata, including all LABEL fields, taking advantage of Ask Gordon’s inability to differentiate between legitimate metadata descriptions and embedded malicious instructions Ask Gordon to forward the parsed instructions to the MCP gateway, a middleware layer that sits between AI agents and MCP servers. MCP Gateway interprets it as a standard request from a trusted source and invokes the specified MCP tools without any additional validation MCP tool executes the command with the victim’s Docker privileges, achieving code execution The data exfiltration vulnerability weaponizes the same prompt injection flaw but takes aim at Ask Gordon’s Docker Desktop implementation to capture sensitive internal data about the victim’s environment using MCP tools by taking advantage of the assistant’s read-only permissions. The gathered information can include details about installed tools, container details, Docker configuration, mounted directories, and network topology.

It’s worth noting that Ask Gordon version 4.50.0 also resolves a prompt injection vulnerability discovered by Pillar Security that could have allowed attackers to hijack the assistant and exfiltrate sensitive data by tampering with the Docker Hub repository metadata with malicious instructions. “The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat,” Levi said. “It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model.” Found this article interesting?

Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

[Webinar] The Smarter SOC Blueprint: Learn What to Build, Buy, and Automate

Most security teams today are buried under tools. Too many dashboards. Too much noise. Not enough real progress.

Every vendor promises “complete coverage” or “AI-powered automation,” but inside most SOCs, teams are still overwhelmed, stretched thin, and unsure which tools are truly pulling their weight. The result? Bloated stacks, missed signals, and mounting pressure to do more with less. This live session, “ Breaking Down the Modern SOC: What to Build vs Buy vs Automate ,” with Kumar Saurabh (CEO, AirMDR) and Francis Odum (CEO, SACR) , clears the fog.

No jargon. Just real answers to the question every security leader faces: What should we build, what should we buy, and what should we automate? Secure your spot for the live session ➜ You’ll see what a healthy modern SOC looks like today—how top-performing teams decide where to build, when to buy, and how to automate without losing control. The session goes beyond theory: expect a real customer case study, a side-by-side look at common SOC models, and a practical checklist you can use right away to simplify operations and improve results.

If your SOC feels overloaded, underfunded, or always one step behind, this session is your reset point. You’ll leave with clarity, not buzzwords—a grounded view of how to strengthen your SOC with the people, tools, and budget you already have. Budgets are shrinking. Threats are scaling.

The noise is deafening. It’s time to pause, rethink, and rebuild smarter. Register for the Webinar ➜ Register Free Now — and learn how to simplify your SOC, cut the clutter, and make every decision count. Found this article interesting?

This article is a contributed piece from one of our valued partners. Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

Hackers Exploit Metro4Shell RCE Flaw in React Native CLI npm Package

Threat actors have been observed exploiting a critical security flaw impacting the Metro Development Server in the popular “@react-native-community/cli” npm package. Cybersecurity company VulnCheck said it first observed exploitation of CVE-2025-11953 (aka Metro4Shell) on December 21, 2025. With a CVSS score of 9.8, the vulnerability allows remote unauthenticated attackers to execute arbitrary operating system commands on the underlying host. Details of the flaw were first documented by JFrog in November 2025.

Despite more than a month after initial exploitation in the wild, the “activity has yet to see broad public acknowledgment,” it added. In the attack detected against its honeypot network, the threat actors have weaponized the flaw to deliver a Base64-encoded PowerShell script that, once parsed, is configured to perform a series of actions, including Microsoft Defender Antivirus exclusions for the current working directory and the temporary folder (“C:\Users<Username>\AppData\Local\Temp”). The PowerShell script also establishes a raw TCP connection to an attacker-controlled host and port (“8.218.43[.]248:60124”) and sends a request to retrieve data, write it to a file in the temporary directory, and execute it. The downloaded binary is based in Rust, and features anti-analysis checks to hinder static inspection.

The attacks have been found to originate from the following IP addresses - 5.109.182[.]231 223.6.249[.]141 134.209.69[.]155 Describing the activity as neither experimental nor exploratory, VulnCheck said the delivered payloads were “consistent across multiple weeks of exploitation, indicating operational use rather than vulnerability probing or proof-of-concept testing.” “CVE-2025-11953 is not remarkable because it exists. It is remarkable because it reinforces a pattern defenders continue to relearn. Development infrastructure becomes production infrastructure the moment it is reachable, regardless of intent.” Found this article interesting? Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.

When Cloud Outages Ripple Across the Internet

Recent major cloud service outages have been hard to miss. High-profile incidents affecting providers such as AWS, Azure, and Cloudflare have disrupted large parts of the internet, taking down websites and services that many other systems depend on. The resulting ripple effects have halted applications and workflows that many organizations rely on every day. For consumers, these outages are often experienced as an inconvenience, such as being unable to order food, stream content, or access online services.

For businesses, however, the impact is far more severe. When an airline’s booking system goes offline, lost availability translates directly into lost revenue, reputational damage, and operational disruption. These incidents highlight that cloud outages affect far more than compute or networking. One of the most critical and impactful areas is identity.

When authentication and authorization are disrupted, the result is not just downtime; it is a core operational and security incident. Cloud Infrastructure, a Shared Point of Failure Cloud providers are not identity systems. But modern identity architectures are deeply dependent on cloud-hosted infrastructure and shared services. Even when an authentication service itself remains functional, failures elsewhere in the dependency chain can render identity flows unusable.

Most organizations rely on cloud infrastructure for critical identity-related components, such as: Datastores holding identity attributes and directory information Policy and authorization data Load balancers, control planes, and DNS These shared dependencies introduce risk in the system. A failure in any one of them can block authentication or authorization entirely, even if the identity provider is technically still running. The result is a hidden single point of failure that many organizations, unfortunately, only discover during an outage. Identity, the Gatekeeper for Everything Authentication and authorization aren’t isolated functions used only during login - they are continuous gatekeepers for every system, API, and service.

Modern security models, specifically Zero Trust, are built on the principle of “never trust, always verify” . That verification depends entirely on the availability of identity systems. This applies equally to human users and machine identities . Applications authenticate constantly.

APIs authorize every request. Services obtain tokens to call other services. When identity systems are unavailable, nothing works. Because of this, identity outages directly threaten business continuity.

They should trigger the highest level of incident response, with proactive monitoring and alerting across all dependent services. Treating identity downtime as a secondary or purely technical issue significantly underestimates its impact. The Hidden Complexity of Authentication Flows Authentication involves far more than verifying a username and password, or a passkey, as organizations increasingly move toward passwordless models. A single authentication event typically triggers a complex chain of operations behind the scenes.

Identity systems are commonly: Resolve user attributes from directories or databases Store session state Issue access tokens containing scopes, claims, and attributes Perform fine-grained authorization decisions using policy engines Authorization checks may occur both during token issuance and at runtime when APIs are accessed. In many cases, APIs must authenticate themselves and obtain tokens before calling other services. Each of these steps depends on the underlying infrastructure. Datastores, policy engines, token stores, and external services all become part of the authentication flow.

A failure in any one of these components can fully block access, impacting users, applications, and business processes. Why Traditional High Availability Isn’t Enough High availability is widely implemented and absolutely necessary, but it is often insufficient for identity systems. Most high-availability designs focus on regional failover: a primary deployment in one region with a secondary in another. If one region fails, traffic shifts to the backup.

This approach breaks down when failures affect shared or global services. If identity systems in multiple regions depend on the same cloud control plane, DNS provider, or managed database service, regional failover provides little protection. In these scenarios, the backup system fails for the same reasons as the primary. The result is an identity architecture that appears resilient on paper but collapses under large-scale cloud or platform-wide outages.

Designing Resilience for Identity Systems True resilience must be deliberately designed. For identity systems, this often means reducing dependency on a single provider or failure domain. Approaches may include multi-cloud strategies or controlled on-premises alternatives that remain accessible even when cloud services are degraded. Equally important is planning for degraded operation.

Fully denying access during an outage has the highest possible business impact. Allowing limited access, based on cached attributes, precomputed authorization decisions, or reduced functionality, can dramatically reduce operational and reputational damage. Not all identity-related data needs the same level of availability. Some attributes or authorization sources may be less fault-tolerant than others, and that may be acceptable.

What matters is making these trade-offs deliberately, based on business risk rather than architectural convenience. Identity systems must be engineered to fail gracefully. When infrastructure outages are inevitable, access control should degrade predictably, not completely collapse. Ready to get started with a robust identity management solution?

Try the Curity Identity Server for free . Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.