2026-03-04 AI Startup News

Fake Tech Support Spam Deploys Customized Havoc C2 Across Organizations

Threat hunters have called attention to a new campaign in which bad actors masqueraded as IT support to deliver the Havoc command-and-control (C2) framework as a precursor to data exfiltration or ransomware attacks. The intrusions, identified by Huntress last month across five partner organizations, involved the threat actors using email spam as lures, followed by a phone call from a fake IT desk that activates a layered malware delivery pipeline. “In one organization, the adversary moved from initial access to nine additional endpoints over the course of eleven hours, deploying a mix of custom Havoc Demon payloads and legitimate RMM tools for persistence, with the speed of lateral movement strongly suggesting the end goal was data exfiltration, ransomware, or both,” researchers Michael Tigges, Anna Pham, and Bryan Masters said. It’s worth noting that the modus operandi is consistent with email bombing and Microsoft Teams phishing attacks orchestrated by threat actors associated with the Black Basta ransomware operation in the past.

While the cybercrime group appears to have gone silent following a public leak of its internal chat logs last year, the continued use of the group’s playbook suggests two possible scenarios: either former Black Basta affiliates have moved on to other ransomware operations and are using them to mount fresh attacks, or rival threat actors have adopted the same strategy to conduct social engineering and obtain initial access. The attack chain begins with a spam campaign aiming to overwhelm a target’s inbox with junk emails. In the next step, the threat actors, masquerading as IT support, contact the recipients and trick them into granting remote access to their machines either via a Quick Assist session or by installing tools like AnyDesk to help remediate the problem.

With the access in place, the adversary wastes no time launching the web browser and navigating to a fake landing page hosted on Amazon Web Services (AWS) that impersonates Microsoft and instructs the victim to enter their email address to access Outlook’s anti-spam rules update system and update the spam rules. Clicking a button to “Update rules configuration” on the counterfeit page triggers the execution of a script that displays an overlay asking the user to enter their password. “This mechanism serves two purposes: it allows the threat actor (TA) to harvest credentials, which, when combined with the required email address, provides access to the control panel; concurrently, it adds a layer of authenticity to the interaction, convincing the user the process is genuine,” Huntress said. The attack also hinges on downloading the supposed anti-spam patch, which, in turn, leads to the execution of a legitimate binary named “ADNotificationManager.exe” (or “DLPUserAgent.exe” and “Werfault.exe”) to sideload a malicious DLL.

The DLL payload implements defense evasion and executes the Havoc shellcode payload by spawning a thread containing the Demon agent. At least one of the identified DLLs (“vcruntime140_1.dll”) incorporates additional tricks to sidestep detection by security software using control flow obfuscation, timing-based delay loops, and techniques like Hell’s Gate and Halo’s Gate to hook ntdll.dll functions and bypass endpoint detection and response (EDR) solutions. “Following the successful deployment of the Havoc Demon on the beachhead host, the threat actors began lateral movement across the victim environment,” the researchers said. “While the initial social engineering and malware delivery demonstrated some interesting techniques, the hands-on-keyboard activity that followed was comparatively straightforward.” This includes creating scheduled tasks to launch the Havoc Demon payload every time the infected endpoints are rebooted, providing the threat actors with persistent remote access.
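Defenders can hunt for this kind of scheduled-task persistence. The following Python sketch is illustrative only (not Huntress's detection logic): it flags task definitions whose action launches a binary from a user-writable directory, with the sample records and path list invented for demonstration:

```python
# Illustrative sketch: flag scheduled tasks that launch binaries from
# user-writable locations, a common trait of sideloading persistence.
# The task records and suspicious-path list below are invented for
# demonstration, not indicators from the campaign described above.

SUSPICIOUS_ROOTS = ("c:\\users\\", "c:\\programdata\\", "c:\\windows\\temp\\")

def flag_suspicious_tasks(tasks):
    """Return names of tasks whose action executes from a user-writable path."""
    hits = []
    for task in tasks:
        action = task.get("action", "").lower()
        if any(action.startswith(root) for root in SUSPICIOUS_ROOTS):
            hits.append(task["name"])
    return hits

sample_tasks = [
    {"name": "GoogleUpdateTask", "action": r"C:\Program Files\Google\Update\GoogleUpdate.exe"},
    {"name": "SystemHealthCheck", "action": r"C:\Users\Public\ADNotificationManager.exe"},
]

print(flag_suspicious_tasks(sample_tasks))  # -> ['SystemHealthCheck']
```

In practice this check would run against exported task XML or EDR telemetry rather than a hand-built list, but the triage logic is the same.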

That said, the threat actor has been found to deploy legitimate remote monitoring and management (RMM) tools like Level RMM and XEOX on some compromised hosts instead of Havoc, thus diversifying their persistence mechanisms. Some important takeaways from these attacks are that threat actors are more than happy to impersonate IT staff and call personal phone numbers if it improves the success rate, that techniques like defense evasion that were once limited to attacks on large firms or state-sponsored campaigns are becoming increasingly common, and that commodity malware is customized to bypass pattern-based signatures. Also of note is how swiftly and aggressively the attacks progress from initial compromise to lateral movement, as well as the numerous methods used to maintain persistence. “What begins as a phone call from ‘IT support’ ends with a fully instrumented network compromise – modified Havoc Demons deployed across endpoints, legitimate RMM tools repurposed as backup persistence,” Huntress concluded.

“This campaign is a case study in how modern adversaries layer sophistication at every stage: social engineering to get in the door, DLL sideloading to stay invisible, and diversified persistence to survive remediation.”

Found this article interesting? Follow us on Google News, Twitter, and LinkedIn to read more exclusive content we post.

Building a High-Impact Tier 1: The 3 Steps CISOs Must Follow

Every CISO knows the uncomfortable truth about their Security Operations Center: the people most responsible for catching threats in real time are the people with the least experience. Tier 1 analysts sit at the front line of detection, and yet they are also the most vulnerable to the cognitive and organizational pressures that quietly erode SOC performance over time.

The Paradox at the Gate: Why Tier 1 Carries the Weight but Lacks the Armor

Tier 1 is the layer that processes the highest volume of alerts, performs initial triage, and determines what gets escalated. But it is built on a foundation that is structurally fragile.

Entry-level analysts, high turnover rates, and relentless alert queues create conditions where even well-designed detection rules fail to translate into timely, accurate responses. The paradox is this: Tier 1 performance defines SOC performance, yet Tier 1 is often the least supported, least empowered, and most cognitively overloaded layer. Tier 1 analysts face a daily avalanche of alerts. Over time, this leads to:

- Alert fatigue: constant exposure to high volumes reduces sensitivity to real danger.
- Decision fatigue: repeated micro-decisions degrade judgment quality.
- Cognitive overload: too many dashboards, too little context.
- False-positive conditioning: when 90% of alerts are benign, skepticism becomes automatic.
- Burnout and turnover: institutional memory evaporates.

For CISOs, these are not HR problems. They are a business risk.

When Tier 1 hesitates, misses, or delays escalation: dwell time increases, incident costs rise, detection quality degrades, and executive confidence in security drops. If Tier 1 is weak, the entire SOC becomes reactive rather than predictive.

The Core Engine Room: Monitoring and Triage as Business-Critical Workflows

Tier 1 owns two foundational SOC processes: monitoring and alert triage. Monitoring is the continuous process of ingesting signals from across the environment — endpoints, networks, cloud infrastructure, identity systems — and applying detection logic to surface events of potential concern.

Triage is what happens next: the structured, human-driven process of evaluating those events, assigning severity, ruling out false positives, and determining whether escalation is warranted. Basically, these are routine tasks. Watch telemetry. Sort alerts into true positive/false positive/needs escalation.

But they are also revenue-protection mechanisms, since they determine MTTR, MTTD, and resource allocation efficiency. When these workflows are inefficient: Tier 2 and Tier 3 drown in noise, incident response begins late, business disruption expands, operational costs increase, and regulatory exposure grows.

Intelligence as Oxygen: The Foundation of Tier 1 Effectiveness

Tier 1 cannot operate effectively in a vacuum, and raw alerts without context are just digital shadows. Actionable threat intelligence turns data into decisions.

For a Tier 1 analyst asking, “Is this connected to an active campaign targeting our sector?”, it provides IOC validation, campaign context, TTP mapping, infrastructure associations, and malware family attribution. Tier 1 analysts need threat intelligence more urgently than anyone else in the SOC, precisely because they make the most time-sensitive decisions with the least contextual background. Integrating actionable feeds and lookup enrichment into SOC workflows speeds detection and improves operational resilience.

Step 1: Detect What Others Miss. Powering Monitoring with Live Threat Intelligence Feeds

The first step toward a high-impact Tier 1 is upgrading the intelligence foundation of monitoring itself. Most SOC environments rely on detection rules built from static signatures or behavioral heuristics — logic that was accurate when written but degrades as adversaries adapt. Actionable threat intelligence feeds continuously inject fresh, verified indicators of compromise directly into the detection infrastructure. Rather than flagging anomalies and waiting for an analyst to research them, a feed-enriched monitoring layer flags activity that has already been confirmed as malicious through real-world analysis.
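Conceptually, feed-enriched monitoring reduces to checking observed events against a continuously refreshed indicator set. A toy Python sketch, with invented placeholder indicators and events (not real feed data):

```python
# Sketch of feed-enriched matching: events whose destination appears in a
# threat-intel indicator set are flagged as confirmed-malicious rather than
# merely anomalous. Indicators and events are invented placeholders.

feed_indicators = {
    "203.0.113.7",         # documentation-range IP (RFC 5737), placeholder
    "bad-domain.example",  # placeholder domain
}

def match_events(events, indicators):
    """Return the events whose destination matches a known-bad indicator."""
    return [e for e in events if e["dest"] in indicators]

events = [
    {"src": "10.0.0.5", "dest": "203.0.113.7"},
    {"src": "10.0.0.9", "dest": "intranet.example"},
]

hits = match_events(events, feed_indicators)
print(len(hits))  # -> 1
```

Real deployments push the indicator set into the SIEM or firewall rather than matching in application code, but the decision the feed enables is exactly this membership test.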

Detections become based on behavioral ground truth, not statistical deviation. The operational effect on early detection is substantial. It compresses the window of exposure and dramatically reduces the cost of eventual containment. ANY.RUN’s Threat Intelligence Feeds aggregate indicators (malicious IPs, URLs, domains) drawn from a continuously operating malware analysis sandbox that processes real-world threats in real time.

This means the data reflects active threat activity observed through dynamic execution analysis, not historical reporting or third-party aggregation alone. Adversaries who modify their malware to evade static signatures cannot easily evade behavioral observation.

TI Feeds: data, benefits, integrations

Delivered in STIX and MISP formats, TI Feeds integrate directly with SIEMs, firewalls, DNS resolvers, and endpoint detection systems. Each indicator carries contextual metadata, such as malware families and behavioral tags, so that a detection is not just a flag but an explanation.
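To make the STIX delivery concrete, here is a minimal Python sketch of pulling atomic indicator values out of a STIX 2.1 bundle so they can be loaded into a blocklist; the bundle content is a hand-written placeholder, not actual feed output:

```python
import json
import re

# Sketch: extract quoted literals from indicator patterns in a STIX 2.1
# bundle so they can be loaded into a blocklist. The bundle below is a
# hand-written placeholder, not real feed content.

bundle = json.loads("""
{
  "type": "bundle",
  "id": "bundle--00000000-0000-0000-0000-000000000000",
  "objects": [
    {"type": "indicator",
     "pattern": "[ipv4-addr:value = '203.0.113.7']",
     "labels": ["malicious-activity"]},
    {"type": "malware", "name": "placeholder-family"}
  ]
}
""")

def extract_iocs(stix_bundle):
    """Collect the quoted values from every indicator object's pattern."""
    iocs = []
    for obj in stix_bundle.get("objects", []):
        if obj.get("type") == "indicator":
            iocs.extend(re.findall(r"'([^']+)'", obj.get("pattern", "")))
    return iocs

print(extract_iocs(bundle))  # -> ['203.0.113.7']
```

A production consumer would use a proper STIX library and pattern parser rather than a regex, but the shape of the task is the same: indicator objects in, atomic values out.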

For the business, intelligence-powered monitoring reduces MTTD, improves detection precision, and generates a measurable return on the broader security stack investment by ensuring that what gets detected is what actually matters.

Step 2: From Flag to Finding. Enriching Every Alert with the Context Analysts Actually Need

Before an analyst can enrich an alert, they often face a more immediate problem: a suspicious file or link has surfaced, and its nature is genuinely unknown. This is where the ANY.RUN Interactive Sandbox becomes a direct triage asset.

Rather than relying on static reputation checks alone, analysts can submit the artifact to the sandbox and observe its actual behavior in a live execution environment — watching in real time as the file makes network connections, modifies the registry, drops additional payloads, or attempts to evade detection. Within minutes, the sandbox produces a verdict grounded in what the sample actually does, not just what it looks like.

View sandbox analysis of a suspicious .exe file: sandbox detonation detects ScreenConnect malware.

But detection is only the beginning of a Tier 1 analyst’s job. Once an alert surfaces, the analyst must determine whether it represents a genuine threat, understand what it means, and decide what to do with it — all under time pressure and against a queue of competing alerts.

Without enrichment, this determination relies on analyst experience and manual research, both of which are in short supply at Tier 1. The quality and speed of enrichment determine the quality and speed of triage. Deep enrichment, grounded in behavioral analysis, allows analysts to reason about the actual risk of a detection rather than guessing at it. ANY.RUN’s Threat Intelligence Lookup delivers this depth on demand.
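What that depth buys the analyst is a defensible decision rule. Here is a toy sketch of enrichment-driven triage, where the verdict values, field names, and thresholds are illustrative assumptions rather than any vendor's schema:

```python
# Sketch of how lookup context can drive a triage decision. The verdict
# values, field names, and thresholds are illustrative assumptions, not
# any vendor's actual response schema.

def triage(lookup_result):
    """Map an enrichment result onto a coarse triage outcome."""
    verdict = lookup_result.get("verdict")
    if verdict == "malicious":
        return "escalate"
    if verdict == "suspicious" and lookup_result.get("related_iocs", 0) > 5:
        return "escalate"  # weak verdict, but rich infrastructure linkage
    if verdict == "benign":
        return "close"
    return "investigate"

result = {"verdict": "malicious", "family": "placeholder", "related_iocs": 12}
print(triage(result))  # -> escalate
```

The point is not the specific thresholds but that each outcome is backed by evidence the analyst can cite in the escalation note.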

Analysts can query any indicator — domain, IP, file hash, URL — and receive immediate context drawn from the sandbox’s analysis repository: full behavioral reports showing how the artifact executed, associated malware families and threat categories, network indicators observed during analysis, and connections to broader malicious infrastructure. A lookup is fast enough to fit into the triage workflow rather than interrupting it. For example, the query domainName:”priutt-title.com” returns a “Malicious” verdict along with additional IOCs. A single lookup shows that a suspicious domain spotted in network traffic is most probably malicious, currently engaged in campaigns targeting IT, finance, and educational organizations worldwide, and linked to further indicators that can be used for detection tuning. This changes how T1 operates across several dimensions: analysts make faster, more confident decisions because they have evidence rather than inference.

Escalation notes improve because analysts can articulate what they found and why it matters, reducing back-and-forth with Tier 2 and accelerating the handoff. False positives are closed with greater certainty, improving the precision of the escalation pipeline. For business objectives, enriched triage supports several priorities simultaneously: It accelerates MTTD and MTTR, which are key metrics for both security program effectiveness and regulatory compliance. It improves the quality of incident documentation for post-incident review, insurance claims, and regulatory reporting.

It reduces analyst burnout by replacing frustrating ambiguity with actionable clarity. Finally, it ensures that the SOC’s output reflects genuine analysis rather than overwhelmed guesswork.

Step 3: Security That Compounds. Integrating ANY.RUN into Your Existing Stack

Individual capabilities — however strong — deliver limited value when they operate in isolation. The third and most strategically significant step is integration: connecting ANY.RUN’s Threat Intelligence Feeds, Lookup, and Sandbox into the existing security infrastructure so that intelligence flows automatically across every layer of the environment. This is where investment in T1 intelligence capabilities translates into organization-wide risk reduction. SIEMs that ingest TI Feeds generate higher-precision alerts, because the detection layer is operating from verified behavioral indicators rather than generic rules. Firewalls and DNS resolvers that consume the same feeds block malicious infrastructure at the perimeter, reducing the volume of threats that reach endpoints and analysts in the first place.

EDR systems enriched with sandbox-derived behavioral signatures detect malware that evades signature-based approaches. The entire stack becomes more coherent because it shares a common intelligence foundation. ANY.RUN supports this integration architecture through standard formats and APIs designed for compatibility with the security products already in deployment. STIX and MISP feed delivery integrates with leading SIEM and SOAR solutions.

The TI Lookup API enables direct enrichment from within analyst workflows (ticketing systems, investigation dashboards, custom scripts) without requiring analysts to leave their primary interface. The sandbox itself can receive samples programmatically, enabling automated analysis pipelines that feed results back into detection and response systems.

ANY.RUN integration capabilities

For T1 teams, the day-to-day effect of integration is a reduction in the manual effort that currently consumes analyst time. Indicators enriched automatically before triage, feeds that update detection logic without human intervention, escalation data that populates from sandbox analysis rather than manual documentation — these changes shift analyst effort from information gathering to genuine investigation.
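The automated pipeline described above (programmatic submission, verdict retrieval, results fed back into detection) can be sketched as a simple control flow. The client object here is an in-memory stand-in; a real integration would call the vendor's documented API:

```python
# Sketch of an automated analysis pipeline: submit a sample, wait for a
# verdict, and route the result back into detection. The client object is
# an in-memory stand-in; a real integration would call the vendor's
# documented API endpoints.

def run_pipeline(client, sample):
    task_id = client.submit(sample)             # hand the sample to the sandbox
    verdict = client.wait_for_verdict(task_id)  # block until analysis completes
    if verdict == "malicious":
        client.push_indicators(task_id)         # feed extracted IOCs back to detection
        return "blocked"
    return "allowed"

class FakeClient:
    """In-memory stand-in used only to demonstrate the control flow."""
    def submit(self, sample):
        return "task-1"
    def wait_for_verdict(self, task_id):
        return "malicious"
    def push_indicators(self, task_id):
        self.pushed = True

client = FakeClient()
print(run_pipeline(client, b"sample-bytes"))  # -> blocked
```

Separating the control flow from the transport like this also makes the pipeline testable without touching a live sandbox.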

T1 becomes faster without becoming larger. For CISOs, the business case for integration centers on compounding returns. Each point of integration multiplies the value of the intelligence investment: a feed consumed by five security controls delivers five times the coverage of a feed consumed by one. This coherence also strengthens the organization’s posture in conversations with the board, insurers, and regulators.

An integrated, intelligence-driven security architecture demonstrates not just that controls exist, but that they are actively informed by current threat activity, a substantively different claim than checkbox compliance. Integrating dynamic malware analysis, fresh intelligence feeds, and contextual search improves detection quality and business outcomes.

Three Steps, One Outcome: A Tier 1 That Actually Protects the Business

The path to a high-impact Tier 1 is not hiring more analysts or writing more detection rules. It lies in addressing the structural shortcomings that make T1 fragile: monitoring that cannot reflect current threats, triage that lacks the context to be decisive, and intelligence capabilities that remain disconnected from the stack they should be informing. ANY.RUN’s Threat Intelligence Feeds, Lookup, and Interactive Sandbox form a closed loop — from behavioral analysis to detection to investigation — that addresses each of the steps to top performance without adding operational complexity.

The Sandbox generates ground truth. The Feeds operationalize it across the detection layer. The Lookup makes the same analytical depth available on demand for every analyst, regardless of experience. CISOs who prioritize this investment are not just improving SOC metrics.

They are changing the equation for every threat actor who targets their organization. A Tier 1 team that detects early, triages with confidence, and escalates accurately is one of the highest-leverage risk reduction assets a security program can build. Combine live TI Feeds with indicator enrichment to transform monitoring into high-confidence detection.

This article is a contributed piece from one of our valued partners.

Open-Source CyberStrikeAI Deployed in AI-Driven FortiGate Attacks Across 55 Countries

The threat actor behind the recently disclosed artificial intelligence (AI)-assisted campaign targeting Fortinet FortiGate appliances leveraged an open-source, AI-native security testing platform called CyberStrikeAI to execute the attacks. The new findings come from Team Cymru, which detected its use following an analysis of the IP address (“212.11.64[.]250”) that was used by the suspected Russian-speaking threat actor to conduct automated mass scanning for vulnerable appliances. CyberStrikeAI is an “open-source artificial intelligence (AI) offensive security tool (OST) developed by a China-based developer who we assess has some ties to the Chinese government,” security researcher Will Thomas (aka @BushidoToken) said. Details of the AI-powered activity came to light last month when Amazon Threat Intelligence said it detected the unknown attacker systematically targeting FortiGate devices using generative AI services like Anthropic Claude and DeepSeek, compromising over 600 appliances in 55 countries.

According to the description in its GitHub repository, CyberStrikeAI is built in Go and integrates more than 100 security tools to enable vulnerability discovery, attack-chain analysis, knowledge retrieval, and result visualization. It’s maintained by a Chinese developer who goes by the online alias Ed1s0nZ. Team Cymru said it observed 21 unique IP addresses running CyberStrikeAI between January 20 and February 26, 2026, with servers primarily hosted in China, Singapore, and Hong Kong. Additional servers related to the tool have been detected in the U.S., Japan, and Switzerland.

The Ed1s0nZ account, besides hosting CyberStrikeAI, has published several other tools that demonstrate their interest in exploitation and in jailbreaking AI models:

- watermark-tool, which adds invisible digital watermarks to documents.
- banana_blackmail, a Golang-based ransomware.
- PrivHunterAI, a Golang-based tool that uses Kimi, DeepSeek, and GPT models to detect privilege escalation vulnerabilities.
- ChatGPTJailbreak, which contains a README.md file with prompts to jailbreak OpenAI ChatGPT by tricking it into entering a Do Anything Now (DAN) mode or asking it to act as ChatGPT with Developer Mode enabled.
- InfiltrateX, a Golang-based scanner for detecting privilege escalation vulnerabilities.
- VigilantEye, a Golang-based tool that monitors the disclosure of sensitive information, such as phone numbers and ID card numbers, in databases. It’s configured to send an alert via a WeChat Work bot if a potential data breach is detected.

“Further, Ed1s0nZ’s GitHub activities indicate they interact with organisations that support potentially Chinese government state-sponsored cyber operations,” Thomas said. “This includes Chinese private sector firms that have known ties to the Chinese Ministry of State Security (MSS).” One such company the developer has interacted with is Knownsec 404, a Chinese security vendor that suffered a major leak of more than 12,000 internal documents late last year, exposing the firm’s employee data, government clientele, hacking tools, large volumes of stolen data such as South Korean call logs and information related to Taiwan’s critical infrastructure organizations, and the inner workings of ongoing cyber operations targeting other countries.

“Ostensibly, KnownSec appeared to be just another security company, but this is only a half truth,” DomainTools noted in an analysis published this January, describing it as a “state-aligned cyber contractor” capable of supporting Chinese national security, intelligence, and military objectives. “In reality, […] it has a shadow organization that works for the PLA, MSS, and the organs of the Chinese security state. This leak exposes a company that operates far beyond the role of a typical cybersecurity vendor. Tools like ZoomEye and the Critical Infrastructure Target Library give China a global reconnaissance system that catalogs millions of foreign IPs, domains, and organizations mapped by sector, geography, and strategic value.” Ed1s0nZ has also been observed making active modifications to a README.md file located in an eponymous repository, removing references to them having been honored with the Level 2 Contribution Award to the China National Vulnerability Database of Information Security (CNNVD).

The developer has also claimed that “everything shared here is purely for research and learning.” According to research published by Bitsight last month, China maintains two different vulnerability databases: CNNVD and the Chinese National Vulnerability Database (CNVD). While CNNVD is overseen by the Ministry of State Security, CNVD is controlled by CNCERT. Previous findings from Recorded Future have revealed that CNNVD takes longer to publish vulnerabilities with higher CVSS scores than vulnerabilities with lower ones. “The developer’s recent attempt to scrub references to the CNNVD from their GitHub profile points to an active effort to obscure these state ties, likely to protect the tool’s operational viability as its popularity grows,” Thomas said.

“The adoption of CyberStrikeAI is poised to accelerate, representing a concerning evolution in the proliferation of AI-augmented offensive security tools.”

AI Agents: The Next Wave of Identity Dark Matter - Powerful, Invisible, and Unmanaged

The Rise of MCPs in the Enterprise

The Model Context Protocol (MCP) is quickly becoming a practical way to push LLMs from “chat” into real work. By providing structured access to applications, APIs, and data, MCP enables prompt-driven AI agents that can retrieve information, take action, and automate end-to-end business workflows across the enterprise. This is already showing up in production through horizontal assistants like Microsoft Copilot, ServiceNow and Zendesk bots, and Salesforce Agentforce, with custom and vertical agents moving fast behind them.
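For readers unfamiliar with the protocol, an MCP-style tool is essentially a named capability with a typed input schema that an agent can invoke. The Python sketch below conveys the idea with simplified fields; consult the Model Context Protocol specification for the actual wire format:

```python
# Rough sketch of what an MCP-style tool declaration conveys: a named
# capability with a typed input schema that an agent can invoke. Fields are
# simplified; see the Model Context Protocol spec for the real wire format.

ticket_lookup_tool = {
    "name": "lookup_ticket",
    "description": "Fetch a support ticket by ID",
    "inputSchema": {
        "type": "object",
        "properties": {"ticket_id": {"type": "string"}},
        "required": ["ticket_id"],
    },
}

def validate_call(tool, arguments):
    """Check that a proposed call supplies every required argument."""
    required = tool["inputSchema"].get("required", [])
    return all(key in arguments for key in required)

print(validate_call(ticket_lookup_tool, {"ticket_id": "T-42"}))  # -> True
```

Every such declaration is also an access grant: whatever the tool can reach, the agent can reach, which is exactly why governance matters.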

This echoes the recent Gartner “Market Guide for Guardian Agents” report, where analysts note that the rapid enterprise adoption of these AI agents is significantly outpacing the maturity of the governance and policy controls required to manage them. We believe the primary disconnect is that these AI “colleagues” don’t look like humans:

- They don’t join or leave through HR.
- They don’t submit access requests.
- They don’t retire accounts when projects end.

They’re often invisible to traditional IAM, and that’s how they become identity dark matter: real identity risk outside the governance fabric. And agentic systems don’t just use access; they hunt for the path of least resistance.

They’re optimized to finish the job with minimal friction: fewer approvals, fewer prompts, fewer blockers. In identity terms, that means they’ll gravitate toward whatever already works: in-app local accounts, stale service identities, long-lived tokens, API keys, and bypass auth paths. And if it works, it gets reused. Team8’s 2025 CISO Village Survey found:

- Nearly 70% of enterprises already run AI agents (any system that can answer and act) in production.
- Another 23% are planning deployments in 2026.
- Two-thirds are building them in-house.

MCP adoption isn’t a question of if; it’s a question of how fast and how wisely. It’s already here, and it’s only accelerating. Complicating this further is the reality of hybrid environments.

Based on the Gartner research, organizations face significant hurdles in managing these non-human identities because native platform controls and vendor safeguards generally do not extend beyond their own cloud or platform borders. Without an independent oversight mechanism, cross-cloud agent interactions remain entirely ungoverned. The real question is whether your AI agents become trusted teammates or unmanaged identity dark matter.

How Identity Dark Matter Gets Abused by Agent-AI

Agent AI consists of autonomous AI agents that can plan and execute multi-step tasks with minimal human input, making it a powerful assistant but also a major cyber risk.

Interestingly, leading industry analysts seem to expect that the vast majority of unauthorized agent actions will stem from internal enterprise policy violations, such as misguided AI behavior or information oversharing, rather than malicious external attacks. The typical abuse pattern we see is similar, driven by agent automation and shortcut-seeking:

- Enumerate what exists: the agent crawls apps and integrations, lists users and tokens, and discovers “alternate” auth paths.
- Try what’s easy first: local accounts, legacy creds, long-lived tokens, anything that avoids a fresh approval.
- Lock onto “good enough” access: even low privilege is enough to pivot (read configuration files, pull logs, discover secrets, map organization structure).
- Upgrade quietly: find over-scoped tokens, stale entitlements, or dormant-but-privileged identities and escalate with minimal noise.
- Operate at machine speed: thousands of small actions occur across many systems, too fast and too wide for humans to spot early.

The real risk here is the scale of impact: one neglected identity becomes a reusable shortcut across the estate.

The Dark Matter Risks

In addition to abusing identity dark matter, left unchecked, MCP agents (AI agents that use the MCP protocol to connect to apps, A2A, APIs, and data sources) introduce their own hidden exposures.

Orchid uncovers these exposures every day:

- Over-permissioned access: agents get “god mode” so they don’t fail, and then that privilege becomes the default operating state.
- Untracked usage: agents can execute sensitive workflows through tools where logs are partial, inconsistent, or not correlated back to a sponsor.
- Static credentials: hardcoded tokens don’t just “live forever”; they become shared infrastructure across agents, pipelines, and environments.
- Regulatory blind spots: auditors ask, “Who approved access, who used it, and what data was touched?” Dark matter makes those answers slow, or impossible.
- Privilege drift: agents accumulate access over time because removing permissions is scarier than granting them, until an attacker inherits the drift.

We believe addressing these blind spots aligns with Gartner’s observation that modern AI governance requires identity and access management to tightly converge with information governance. This ensures organizations can dynamically classify data sensitivity and monitor real-time agent behavior instead of relying solely on static credentials. AI agents aren’t just users without badges.

They’re dark matter identities: powerful, invisible, and outside the reach of today’s IAM. And the uncomfortable part: even well-intentioned agents will exploit dark matter. They don’t understand your org chart or your governance intent; they understand what works. If an orphaned account or over-scoped token is the fastest path to completion, it becomes the “efficient” choice.
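A concrete counter to this dynamic is routinely sweeping the estate for the very shortcuts agents gravitate toward, such as credentials that have outlived their rotation policy. A minimal Python sketch, where the inventory records and the 90-day window are invented assumptions:

```python
from datetime import datetime, timedelta, timezone

# Sketch of a hygiene sweep that removes the shortcuts agents gravitate
# toward: flag credentials that have not been rotated within a policy
# window. The inventory records and 90-day window are invented assumptions.

MAX_AGE = timedelta(days=90)

def stale_credentials(inventory, now):
    """Return IDs of credentials whose last rotation exceeds the policy window."""
    return [c["id"] for c in inventory if now - c["last_rotated"] > MAX_AGE]

now = datetime(2026, 3, 4, tzinfo=timezone.utc)
inventory = [
    {"id": "svc-backup-token", "last_rotated": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"id": "ci-deploy-key",    "last_rotated": datetime(2026, 2, 20, tzinfo=timezone.utc)},
]

print(stale_credentials(inventory, now))  # -> ['svc-backup-token']
```

A credential that never shows up in this sweep cannot become an agent's "efficient" shortcut.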

Principles for Safe MCP Adoption

To avoid repeating the mistakes of the past (orphaned or overprivileged accounts, shadow IT, unmanaged keys, and invisible activity), organizations need to adapt and apply core identity principles to AI agents. Gartner introduced the concept of specialized “guardian” systems: supervisory AI solutions that continuously evaluate, monitor, and enforce boundaries on working agents. We recommend organizations follow five core principles as they deploy MCP-based agentic solutions.

Pair AI Agents with Human Sponsors: Every agent should be tied to an accountable human operator.

If the human changes roles or leaves, the agent’s access should change with them. We agree with Gartner on the necessity of ownership mapping, ensuring full lineage from creation to deployment is tracked to both the machine and its human owner.

Dynamic, Context-Aware Access: AI agents should not hold standing, permanent privileges. Their entitlements should be time-bound, session-aware, and limited to least privilege.
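The time-bound, least-privilege model above can be sketched as grants that carry an explicit scope and expiry, with both re-checked on every use. The scope names and TTL below are illustrative assumptions:

```python
import time

# Sketch of time-bound, least-privilege entitlements: each grant carries an
# explicit scope and expiry, and every check re-validates both. The scope
# names and TTL are illustrative assumptions.

def grant(agent_id, scope, ttl_seconds, now=None):
    """Issue a short-lived entitlement for exactly one scope."""
    now = now if now is not None else time.time()
    return {"agent": agent_id, "scope": scope, "expires_at": now + ttl_seconds}

def is_allowed(entitlement, scope, now=None):
    """An action is allowed only if the scope matches and the grant is live."""
    now = now if now is not None else time.time()
    return entitlement["scope"] == scope and now < entitlement["expires_at"]

g = grant("agent-7", "tickets:read", ttl_seconds=900, now=1000.0)
print(is_allowed(g, "tickets:read", now=1500.0))   # -> True (within TTL)
print(is_allowed(g, "tickets:write", now=1500.0))  # -> False (wrong scope)
print(is_allowed(g, "tickets:read", now=2000.0))   # -> False (expired)
```

Because the grant expires on its own, forgetting to revoke it does not leave a standing privilege behind.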

Visibility and Auditability: Gartner has been increasingly calling for organizations to maintain a centralized AI agent catalog that inventories all official, shadow, and third-party agents, alongside comprehensive posture management and tamper-evident audit trails. In our view, every action an AI agent takes should be logged, correlated back to its human sponsor, and made available for review. This ensures accountability and prepares organizations for future compliance scrutiny. Visibility isn’t just “we logged it.” You need to tie actions to data reach: what the agent accessed, what it changed, what it exported, and whether that action touched regulated or sensitive datasets.

Otherwise, you can’t distinguish “useful automation” from “silent data movement.”

Governance at Enterprise Scale: MCP adoption should extend across both new and legacy systems within a single, consistent governance fabric, so that security, compliance, and infrastructure teams are not working in silos. This is also where Gartner emphasizes the importance of an enterprise-owned supervisory layer, one that ensures consistent controls and reduces the risk of vendor lock-in as MCP adoption expands.

Commitment to Good IAM Hygiene: As with all identities, authentication flows, authorization permissions, and implemented controls, strong hygiene (on the application server as well as the MCP server) is critical to keep every user within the proper bounds.
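The visibility-and-auditability principle above can be made concrete with a structured record that ties each agent action to its human sponsor and to the sensitivity of the data touched. The field names below are assumptions, not a standard schema:

```python
# Sketch of an audit record that ties each agent action to its human
# sponsor and to the sensitivity of the data touched, so "useful
# automation" can be separated from "silent data movement". Field names
# are assumptions, not a standard schema.

audit_log = []

def record_action(agent, sponsor, action, dataset, sensitivity):
    """Append a structured, sponsor-correlated entry to the audit trail."""
    entry = {"agent": agent, "sponsor": sponsor, "action": action,
             "dataset": dataset, "sensitivity": sensitivity}
    audit_log.append(entry)
    return entry

def sensitive_exports(log):
    """Surface the actions a reviewer should look at first."""
    return [e for e in log
            if e["action"] == "export" and e["sensitivity"] == "regulated"]

record_action("agent-7", "j.doe", "read",   "tickets",   "internal")
record_action("agent-7", "j.doe", "export", "customers", "regulated")

print(len(sensitive_exports(audit_log)))  # -> 1
```

Because every entry carries a sponsor, the auditor's questions ("who approved access, who used it, and what data was touched?") become a query instead of an investigation.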

The Bigger Picture

AI agents pose a unique challenge beyond mere integration. They represent a shift in how work is delegated and executed inside enterprises. Left unmanaged, they will follow the same trajectory as other hidden identities: in-app local accounts, stale service identities, long-lived tokens, API keys, and bypass-auth paths that have become identity dark matter over time. And because LLM-driven agents are optimized for efficiency (least friction, fewest steps), they will naturally gravitate to those ungoverned identities as the fastest path to success.

If an orphaned local admin or an over-scoped token “just works,” the agent will use it, and reuse it. The opportunity is to get ahead of this curve. By treating AI agents as first-class identities from day one (discoverable, governable, and auditable), organizations can harness their potential without creating blind spots. Enterprises that do this will not only reduce their immediate attack surface but also position themselves for the regulatory and operational expectations that are sure to follow.

In practice, most Agent AI incidents won’t start with a zero-day. They’ll start with an identity shortcut that someone forgot to clean up, then get amplified by automation until it looks like a systemic breach.

The Bottom Line

AI agents are here. They are already changing how enterprises operate.

The challenge is not whether to use them, but how to govern them. Safe MCP adoption requires applying the same principles that identity practitioners know well (least privilege, lifecycle management, and auditability) to a new class of non-human identities that follow this protocol. If identity dark matter is the sum of what we can’t see or control, then unmanaged AI agents may become its fastest-growing source. The organizations that act now to bring them into the light will be the ones that can move quickly with AI without sacrificing trust, compliance, or security.

That’s why Orchid Security is building identity infrastructure to eliminate dark matter and make Agent AI adoption safe to deploy at enterprise scale. This article is a contributed piece from one of our valued partners. Found this article interesting? Follow us on Google News, Twitter, and LinkedIn to read more exclusive content we post.

Starkiller Phishing Suite Uses AitM Reverse Proxy to Bypass Multi-Factor Authentication

Cybersecurity researchers have disclosed details of a new phishing suite called Starkiller that proxies legitimate login pages to bypass multi-factor authentication (MFA) protections. It’s advertised as a cybercrime platform by a threat group calling itself Jinkusu, granting customers access to a dashboard that lets them select a brand to impersonate or enter a brand’s real URL. It also lets users choose custom keywords like “login,” “verify,” “security,” or “account,” and integrates URL shorteners such as TinyURL to obscure the destination URL. “It launches a headless Chrome instance – a browser that operates without a visible window – inside a Docker container, loads the brand’s real website, and acts as a reverse proxy between the target and the legitimate site,” Abnormal researchers Callie Baron and Piotr Wojtyla said.

“Recipients are served genuine page content directly through the attacker’s infrastructure, ensuring the phishing page is never out of date. And because Starkiller proxies the real site live, there are no template files for security vendors to fingerprint or blocklist.” This login page proxying technique obviates the need for attackers to update their phishing page templates periodically as the real pages they’re impersonating get updated. Put differently, the container acts as an AitM reverse proxy, forwarding the end user’s inputs entered on the spoofed live page to the legitimate site and returning the site’s responses. Under the hood, every keystroke, form submission, and session token is routed through attacker-controlled infrastructure and is captured for account takeover.
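One practical consequence for defenders: because the proxied content is genuine, page inspection alone won’t flag it, and the address-bar origin becomes the most reliable signal. A minimal sketch of that check, assuming a hypothetical single-brand allowlist (the host names here are illustrative, not a real detection rule):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts where this brand's login page legitimately lives.
LEGIT_HOSTS = {"login.microsoftonline.com"}

def origin_mismatch(url: str) -> bool:
    """True when a login page is served from a host outside the brand's own
    domain -- exactly what happens when an AitM proxy relays genuine content
    through attacker-controlled infrastructure."""
    host = urlparse(url).hostname or ""
    return host not in LEGIT_HOSTS

assert not origin_mismatch("https://login.microsoftonline.com/common/oauth2")
assert origin_mismatch("https://login.micros0ftonline-verify.example/session")
```

The same idea underpins phishing-resistant authenticators like FIDO2, which bind credentials to the origin so a proxied lookalike host can never replay them.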

“The platform streamlines phishing operations by centralizing infrastructure management, phishing page deployment, and session monitoring within a single control panel,” Abnormal said. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.” The development comes as Datadog revealed that the 1Phish kit had evolved from a basic credential harvester in September 2025 into a multi-stage phishing kit targeting 1Password users. The updated version of the kit incorporates a pre-phishing fingerprint and validation layer, support for capturing one-time passcodes (OTPs) and recovery codes, and browser fingerprinting logic to filter out bots. “This progression reflects deliberate iteration rather than simple template reuse,” security researcher Martin McCloskey said.

“Each version builds upon the previous one, introducing controls designed to increase conversion rates, reduce automated analysis, and support secondary authentication harvesting.” The findings show that turnkey solutions like Starkiller and 1Phish are increasingly turning phishing into SaaS-style workflows, further lowering the skill barrier necessary to pull off such attacks at scale. They also coincide with a sophisticated phishing campaign targeting North American businesses and professionals by abusing the OAuth 2.0 device authorization grant flow to sidestep multi-factor authentication (MFA) and compromise Microsoft 365 accounts. To achieve this, the attacker registers an OAuth application with Microsoft and generates a unique device code, which is then delivered to the victim via a targeted phishing email. “The victim is directed to the legitimate Microsoft domain (microsoft.com/devicelogin) portal to enter an attacker-supplied device code,” researchers Jeewan Singh Jalal, Prabhakaran Ravichandhiran, and Anand Bodke said.
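For context, the device authorization grant being abused is an ordinary, documented OAuth 2.0 flow (RFC 8628): it begins with a POST to the provider’s device-code endpoint, which returns a user code for someone to enter at the verification page. The sketch below builds (but does not send) the first leg of that request; the endpoint is Microsoft’s documented one, while the client_id is a placeholder and the exact parameters should be treated as illustrative.

```python
from urllib.parse import urlencode

# Documented first leg of the device authorization grant: POST here to
# receive a device_code (held by the client) and a user_code (shown to a user).
DEVICE_CODE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/devicecode"

body = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",  # hypothetical app ID
    "scope": "openid profile offline_access",
})
# In the abuse described above, the attacker performs this request themselves,
# then emails the returned user_code to the victim, who enters it at the
# legitimate microsoft.com/devicelogin page -- authenticating the attacker's app.
print(body)
```

The asymmetry is the whole trick: the victim only ever sees a genuine Microsoft page, while the access token is issued to whoever initiated the device-code request.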

“This action authenticates the victim and issues a valid OAuth access token to the attacker’s application. The real-time theft of these tokens grants the attacker persistent access to the victim’s Microsoft 365 accounts and corporate data.” In recent months, phishing campaigns have also targeted financial institutions, specifically U.S.-based banks and credit unions, to harvest credentials. The campaign is said to have taken place over two distinct phases: an initial wave beginning in late June 2025 and a more sophisticated set of attacks beginning in mid-November 2025. “The actors began registering [.]co[.]com domains spoofing financial institution websites, presenting credible impersonations of real financial institutions,” BlueVoyant researchers Shira Reuveny and Joshua Green said.

“These [.]co[.]com domains serve as the initial entry point in a refined multi-stage chain.” The domain, when visited from a clickable link in a phishing email, is designed to load a fraudulent Cloudflare CAPTCHA page that mimics the targeted institution. The CAPTCHA is non-functional and creates a deliberate delay before a Base64-encoded script redirects users to the credential harvesting page. In an effort to evade detection and prevent automated scanners from flagging the malicious content, directly accessing the [.]co[.]com domains triggers a redirect to a malformed “www[.]www” URL. “The adversary’s deployment of a more advanced multi-layered evasion chain – incorporating referrer validation, cookie-based access controls, intentional delays, and code obfuscation – effectively creates a more resilient infrastructure that presents barriers for automated security tools and manual analysis,” BlueVoyant said.
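For analysts triaging pages like this, the Base64-encoded redirect stage is usually the quickest pivot: decoding the blob exposes the harvesting destination. A minimal sketch using a fabricated example blob (the campaign’s actual script is not reproduced here, and the URL is a placeholder):

```python
import base64
import re

# Fabricated stand-in for the kind of Base64-encoded redirect script a fake
# CAPTCHA page might run after its deliberate delay.
blob = base64.b64encode(b"window.location='https://harvest.example/login';").decode()

# Decode the blob and pull out any embedded URLs for blocklisting/pivoting.
decoded = base64.b64decode(blob).decode()
urls = re.findall(r"https?://[^'\"]+", decoded)
assert urls == ["https://harvest.example/login"]
```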



Microsoft Warns OAuth Redirect Abuse Delivers Malware to Government Targets

Microsoft on Monday warned of campaigns that pair phishing emails with OAuth URL redirection mechanisms to bypass conventional phishing defenses implemented in email clients and browsers. The activity, the company said, targets government and public-sector organizations with the end goal of redirecting victims to attacker-controlled infrastructure without stealing their tokens. It described the phishing attacks as an identity-based threat that takes advantage of OAuth’s standard, by-design behavior rather than exploiting software vulnerabilities or stealing credentials. “OAuth includes a legitimate feature that allows identity providers to redirect users to a specific landing page under certain conditions, typically in error scenarios or other defined flows,” the Microsoft Defender Security Research Team said.

“Attackers can abuse this native functionality by crafting URLs with popular identity providers, such as Entra ID or Google Workspace, that use manipulated parameters or associated malicious applications to redirect users to attacker-controlled landing pages. This technique enables the creation of URLs that appear benign but ultimately lead to malicious destinations.” The starting point of the attack is a malicious application created by the threat actor in a tenant under their control. The application is configured with a redirect URL pointing to a rogue domain that hosts malware. The attackers then distribute an OAuth phishing link that instructs the recipients to authenticate to the malicious application by using an intentionally invalid scope.
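Defensively, such links can be screened before delivery by parsing the OAuth authorize URL and checking where its redirect_uri actually points: the identity provider’s domain in the link is genuine, but the redirect target is not. A sketch, assuming a hypothetical tenant allowlist (hosts and URLs below are illustrative):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical allowlist of redirect hosts this organization expects.
ALLOWED_REDIRECT_HOSTS = {"app.example.com"}

def risky_redirect(authorize_url: str) -> bool:
    """Flag OAuth links whose redirect_uri points outside expected hosts."""
    params = parse_qs(urlparse(authorize_url).query)
    for uri in params.get("redirect_uri", []):
        if (urlparse(uri).hostname or "") not in ALLOWED_REDIRECT_HOSTS:
            return True
    return False

benign = ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
          "?client_id=x&redirect_uri=https://app.example.com/cb")
evil = ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
        "?client_id=x&redirect_uri=https://payload.attacker.example/drop")

assert not risky_redirect(benign)
assert risky_redirect(evil)
```

Real campaigns may encode or nest these parameters, so this is a first-pass heuristic rather than a complete detection.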

The result of this redirection is that users inadvertently download and infect their own devices with malware. The malicious payloads are distributed in the form of ZIP archives, which, when unpacked, result in PowerShell execution, DLL side-loading, and pre-ransom or hands-on-keyboard activity, Microsoft said. The ZIP file contains a Windows shortcut (LNK) that executes a PowerShell command as soon as it’s opened. The PowerShell payload is used to conduct host reconnaissance by running discovery commands.

The LNK file extracts from the ZIP archive an MSI installer, which then drops a decoy document to mislead the victim, while a malicious DLL (“crashhandler.dll”) is sideloaded using the legitimate “steam_monitor.exe” binary. The DLL proceeds to decrypt another file named “crashlog.dat” and executes the final payload in memory, allowing it to establish an outbound connection to an external command-and-control (C2) server. Microsoft said the emails use e-signature requests, Teams recordings, social security, financial, and political themes as lures to trick users into clicking the link. The emails are said to have been sent via mass-sending tools and custom solutions developed in Python and Node.js.

The links are either directly included in the email body or placed within a PDF document. “To increase credibility, actors passed the target email address through the state parameter using various encoding techniques, allowing it to be automatically populated on the phishing page,” Microsoft said. “The state parameter is intended to be randomly generated and used to correlate request and response values, but in these cases it was repurposed to carry encoded email addresses.” While some of the campaigns have been found to leverage the technique to deliver malware, others send users to pages hosted on phishing frameworks such as EvilProxy, which act as an adversary-in-the-middle (AitM) kit to intercept credentials and session cookies. Microsoft has since removed several malicious OAuth applications that were identified as part of the investigation.
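The state-parameter abuse is easy to illustrate: the value is supposed to be an opaque random nonce, so a state that decodes cleanly to an email address is itself an indicator. A small sketch with a made-up address (one of the "various encoding techniques" could be as simple as URL-safe Base64; the exact encodings used in the campaign are not specified here):

```python
import base64

# The state parameter should be an opaque nonce; here it is repurposed (as in
# the campaign above) to smuggle the target's address so the phishing page can
# pre-fill it. Decoding the value reveals the abuse.
email = "victim@example.com"                              # made-up target
state = base64.urlsafe_b64encode(email.encode()).decode()

recovered = base64.urlsafe_b64decode(state).decode()
assert recovered == email and "@" in recovered            # clearly not a random nonce
```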

Organizations are advised to limit user consent, periodically review application permissions, and remove unused or overprivileged apps.

Google Confirms CVE-2026-21385 in Qualcomm Android Component Exploited

Google on Monday disclosed that a high-severity security flaw impacting an open-source Qualcomm component used in Android devices has been exploited in the wild. The vulnerability in question is CVE-2026-21385 (CVSS score: 7.8), a buffer over-read in the Graphics component. “Memory corruption when adding user-supplied data without checking available buffer space,” Qualcomm said in an advisory, describing the root cause as an integer overflow. The chipmaker said the flaw was reported to it through Google’s Android Security team on December 18, 2025.

Customers were notified of the security defect on February 2, 2026. There are currently no details on how the vulnerability is being exploited in the wild. However, Google acknowledged in its monthly Android security bulletin that “there are indications that CVE-2026-21385 may be under limited, targeted exploitation.” Google’s March 2026 update contains patches for a total of 129 vulnerabilities, including a critical flaw in the System component (CVE-2026-0006) that could lead to remote code execution without requiring any additional privileges or user interaction. In contrast, Google addressed one Android vulnerability in January 2026 and none last month.

Also patched by Google are multiple critical-rated bugs: a privilege escalation bug in Framework (CVE-2026-0047), a denial-of-service (DoS) in System (CVE-2025-48631), and seven privilege escalation flaws in Kernel components (CVE-2024-43859, CVE-2026-0037, CVE-2026-0038, CVE-2026-0027, CVE-2026-0028, CVE-2026-0030, and CVE-2026-0031). The Android security bulletin includes two patch levels – 2026-03-01 and 2026-03-05 – to give Android partners the flexibility to address common vulnerabilities on different devices more quickly. The second patch level includes fixes for Kernel components, as well as those from Arm, Imagination Technologies, MediaTek, Qualcomm, and Unisoc.

SloppyLemming Targets Pakistan and Bangladesh Governments Using Dual Malware Chains

The threat activity cluster known as SloppyLemming has been attributed to a fresh set of attacks targeting government entities and critical infrastructure operators in Pakistan and Bangladesh. The activity, per Arctic Wolf, took place between January 2025 and January 2026. It involves the use of two distinct attack chains to deliver malware families tracked as BurrowShell and a Rust-based keylogger. “The use of the Rust programming language represents a notable evolution in SloppyLemming’s tooling, as prior reporting documented the actor using only traditional compiled languages and borrowed adversary simulation frameworks such as Cobalt Strike, Havoc, and the custom NekroWire RAT,” the cybersecurity company said in a report shared with The Hacker News.

SloppyLemming is the moniker assigned to a threat actor that’s known to target government, law enforcement, energy, telecommunications, and technology entities in Pakistan, Sri Lanka, Bangladesh, and China since at least 2022. It’s also tracked under the names Outrider Tiger and Fishing Elephant. Prior campaigns mounted by the hacking crew have leveraged malware families like Ares RAT and WarHawk, which are often attributed to SideCopy and SideWinder, respectively. Arctic Wolf’s analysis of the latest attacks has uncovered the use of spear-phishing emails to deliver PDF lures and macro-enabled Excel documents to kick-start the infection chains.

It described the threat actor as operating with moderate capability. The PDF decoys contain URLs designed to lead victims to ClickOnce application manifests, which then deploy a legitimate Microsoft .NET runtime executable (“NGenTask.exe”) and a malicious loader (“mscorsvc.dll”). The loader is launched using DLL side-loading to decrypt and execute a custom x64 shellcode implant codenamed BurrowShell. “BurrowShell is a full-featured backdoor providing the threat actor with file system manipulation, screenshot capture capabilities, remote shell execution, and SOCKS proxy capabilities for network tunneling,” Arctic Wolf said.

“The implant masquerades its command-and-control (C2) traffic as Windows Update service communications and employs RC4 encryption with a 32-character key for payload protection.” The second attack chain employs Excel documents containing malicious macros to drop the keylogger malware, while also incorporating features to conduct port scanning and network enumeration. Further investigation of the threat actor’s infrastructure has identified 112 Cloudflare Workers domains registered during the one-year time period, marking an eight-fold jump from the 13 domains flagged by Cloudflare in September 2024. The campaign’s links to SloppyLemming are based on continued exploitation of Cloudflare Workers infrastructure with government-themed typo-squatting patterns, deployment of the Havoc C2 framework, DLL side-loading techniques, and victimology patterns. It’s worth noting that some aspects of the threat actor’s tradecraft, including the use of ClickOnce-enabled execution, overlap with a recent SideWinder campaign documented by Trellix in October 2025.
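RC4 itself is a well-known stream cipher, which is why 32-character-key RC4 payloads are straightforward for analysts to unpack once the key is recovered from the loader. A textbook implementation for reference (this illustrates the cipher class the report names only; BurrowShell’s actual routine and key are not public, and the key below is a placeholder):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (KSA + PRGA). RC4 is symmetric, so the same call both
    encrypts and decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"0123456789abcdef0123456789abcdef"    # placeholder 32-character key
ct = rc4(key, b"payload")
assert rc4(key, ct) == b"payload"            # symmetric: decryption round-trips
```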

“In particular, the targeting of Pakistani nuclear regulatory bodies, defense logistics organizations, and telecommunications infrastructure – alongside Bangladeshi energy utilities and financial institutions – aligns with intelligence collection priorities consistent with regional strategic competition in South Asia,” Arctic Wolf said. “The deployment of dual payloads – the in-memory shellcode BurrowShell for C2 and SOCKS proxy operations, and a Rust-based keylogger for information stealing – suggests the threat actor maintains flexibility to deploy appropriate tools based on target value and operational requirements.”

New Chrome Vulnerability Let Malicious Extensions Escalate Privileges via Gemini Panel

Cybersecurity researchers have disclosed details of a now-patched security flaw in Google Chrome that could have permitted attackers to escalate privileges and gain access to local files on the system. The vulnerability, tracked as CVE-2026-0628 (CVSS score: 8.8), has been described as a case of insufficient policy enforcement in the WebView tag. It was patched by Google in early January 2026 in version 143.0.7499.192/.193 for Windows/Mac and 143.0.7499.192 for Linux. “Insufficient policy enforcement in WebView tag in Google Chrome prior to 143.0.7499.192 allowed an attacker who convinced a user to install a malicious extension to inject scripts or HTML into a privileged page via a crafted Chrome extension,” according to a description on the NIST National Vulnerability Database (NVD).

Palo Alto Networks Unit 42 researcher Gal Weizman, who discovered and reported the flaw on November 23, 2025, said the issue could have permitted malicious extensions with basic permissions to seize control of the new Gemini Live panel in Chrome. The side panel, which leverages a new “chrome://glic” URL that uses a WebView component to load the “gemini.google[.]com” web app, can be launched by clicking the Gemini icon located in the Chrome toolbar’s top right corner. Google added Gemini integration to Chrome in September 2025. The flaw could have been abused to achieve privilege escalation, enabling an attacker to access the victim’s camera and microphone without permission, take screenshots of any website, and access local files.

The issue has been codenamed Glic Jack, short for Gemini Live in Chrome hijack. The findings highlight an emerging attack vector arising from baking artificial intelligence (AI) and agentic capabilities directly into web browsers to facilitate real-time content summarization, translation, and automated task execution, as the same capabilities could be abused to perform privileged actions. The problem, at its core, is the need to grant these AI agents privileged access to the browsing environment to perform multi-step operations; this becomes a double-edged sword when an attacker embeds hidden prompts in a malicious web page and a victim is tricked into accessing it via social engineering or some other means. The prompt could instruct the AI assistant to perform actions that would otherwise be blocked by the browser, leading to data exfiltration or code execution.

Even worse, the web page could manipulate the agent into storing the instructions in memory, causing them to persist across sessions. Besides the expanded attack surface, Unit 42 said the integration of an AI side panel in agentic browsers brings back classic browser security risks. “By placing this new component within the high-privilege context of the browser, developers could inadvertently create new logical flaws and implementation weaknesses,” Weizman said. “This could include vulnerabilities related to cross-site scripting (XSS), privilege escalation, and side-channel attacks that can be exploited by less-privileged websites or browser extensions.” While browser extensions operate based on a defined set of permissions, successful exploitation of CVE-2026-0628 undermines the browser security model and allows an attacker to run arbitrary code at “gemini.google[.]com/app” via the browser panel and gain access to sensitive data.

“An extension with access to a basic permission set through the declarativeNetRequest API allowed permissions that could have enabled an attacker to inject JavaScript code into the new Gemini panel,” Weizman added. “When the Gemini app is loaded within this new panel component, Chrome hooks it with access to powerful capabilities.” It’s worth noting that the declarativeNetRequest API allows extensions to intercept and change properties of HTTPS web requests and responses. It’s used by ad-blocking extensions to stop issuing requests to load ads on web pages. “Chromium’s interpretation for what went wrong here is that WebView components (with which ‘chrome://glic’ embeds Gemini’s web app) were forgotten from being rejected when considering [declarativeNetRequest] rule appliance,” Weizman said on X.

In other words, all it takes for an attacker is to trick an unsuspecting user into installing a specially crafted extension, which could then inject arbitrary JavaScript code into the Gemini side panel to interact with the file system, take screenshots, access the camera, and turn on the microphone – all features necessary for the AI assistant to perform its tasks. “This difference in what type of component loads the Gemini app is the line between by-design behavior and a security flaw,” Unit 42 said. “An extension influencing a website is expected. However, an extension influencing a component that is baked into the browser is a serious security risk.”

Google Develops Merkle Tree Certificates to Enable Quantum-Resistant HTTPS in Chrome

Google has announced a new program in its Chrome browser to ensure that HTTPS certificates are secure against the future risk posed by quantum computers. “To ensure the scalability and efficiency of the ecosystem, Chrome has no immediate plan to add traditional X.509 certificates containing post-quantum cryptography to the Chrome Root Store,” the Chrome Secure Web and Networking Team said. “Instead, Chrome, in collaboration with other partners, is developing an evolution of HTTPS certificates based on Merkle Tree Certificates (MTCs), currently in development in the PLANTS working group.” As Cloudflare explains, MTC is a proposal for the next generation of the Public Key Infrastructure (PKI) used to secure the internet that aims to reduce the number of public keys and signatures in the TLS handshake to the bare minimum required. Under this model, a Certification Authority (CA) signs a single ‘Tree Head’ representing potentially millions of certificates, and the ‘certificate’ sent to the browser is a lightweight proof of inclusion in that tree, Google said.
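The “lightweight proof of inclusion” works like any Merkle tree membership proof: the verifier hashes the leaf, then folds in a logarithmic number of sibling hashes until it either reproduces the signed tree head or fails. A simplified sketch follows (a toy SHA-256 tree over four placeholder “certificates”; the real MTC encoding and tree construction differ):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Build a Merkle root over a power-of-two number of leaves (sketch)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf, proof, root):
    """Fold sibling hashes up to the root; 'left' marks which side each sibling is on."""
    node = h(leaf)
    for sibling, left in proof:
        node = h(sibling + node) if left else h(node + sibling)
    return node == root

certs = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]   # placeholder certificates
root = merkle_root(certs)
# Proof for cert-c: sibling leaf cert-d (on the right), then hash(a,b) (on the left).
proof = [(h(b"cert-d"), False), (h(h(b"cert-a") + h(b"cert-b")), True)]
assert verify_inclusion(b"cert-c", proof, root)
```

The bandwidth win is that the proof grows only logarithmically with the number of certificates, so one CA signature over the tree head can vouch for millions of leaves.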

In other words, MTCs facilitate the adoption of post-quantum algorithms without having to incur the additional bandwidth associated with classical X.509 certificate chains. The approach, the company added, decouples the security strength of the corresponding cryptographic algorithm from the size of the data transmitted to the user. “By shrinking the authentication data in a TLS handshake to the absolute minimum, MTCs aim to keep the post-quantum web as fast and seamless as today’s internet, maintaining high performance even as we adopt stronger security,” Google said. The tech giant said it’s already experimenting with MTCs on real internet traffic and that it plans to gradually expand the rollout in three distinct phases by the third quarter of 2027 -

Phase 1 (In progress) - Google is conducting a feasibility study in collaboration with Cloudflare to evaluate the performance and security of TLS connections relying on MTCs.

Phase 2 (Q1 2027) - Google plans to invite Certificate Transparency (CT) Log operators with at least one “usable” log in Chrome before February 1, 2026, to participate in the initial bootstrapping of public MTCs.

Phase 3 (Q3 2027) - Google will finalize the requirements for onboarding additional CAs into the new Chrome Quantum-resistant Root Store (CQRS) and corresponding Root Program that only supports MTCs.

“We view the adoption of MTCs and a quantum-resistant root store as a critical opportunity to ensure the robustness of the foundation of today’s ecosystem,” Google said. “By designing for the specific demands of a modern, agile internet, we can accelerate the adoption of post-quantum resilience for all web users.”


⚡ Weekly Recap: SD-WAN 0-Day, Critical CVEs, Telegram Probe, Smart TV Proxy SDK and More

This week isn’t about one big event; it’s about where things are moving. Network systems, cloud setups, AI tools, and common apps are all being pushed in different ways. Small gaps in access control, exposed keys, and normal features are being used as entry points.

The pattern becomes clear only when you see everything together: faster scans, smarter misuse of trusted services, and steady targeting of high-value sectors. Each story adds context. Reading them all gives a fuller picture of how today’s threat landscape is evolving.

⚡ Threat of the Week

Cisco SD-WAN Zero-Day Exploited — A newly disclosed maximum-severity security flaw in Cisco Catalyst SD-WAN Controller (formerly vSmart) and Catalyst SD-WAN Manager (formerly vManage) has come under active exploitation in the wild as part of malicious activity that dates back to 2023. The vulnerability, tracked as CVE-2026-20127 (CVSS score: 10.0), allows an unauthenticated remote attacker to bypass authentication and obtain administrative privileges on an affected system by sending a crafted request. Cisco credited the Australian Signals Directorate’s Australian Cyber Security Centre (ASD-ACSC) for reporting the vulnerability. The networking equipment major is tracking the exploitation and subsequent post-compromise activity under the moniker UAT-8616, describing the cluster as a “highly sophisticated cyber threat actor.”

🔔 Top News

Anthropic Accuses 3 Chinese Firms of Distillation Attacks — Anthropic accused three Chinese AI firms of engaging in concerted “industrial-scale” distillation attack campaigns aimed at extracting information from its model, making it the latest American tech firm to level such claims after OpenAI issued similar complaints. DeepSeek, Moonshot AI, and MiniMax are said to have flooded Claude with large volumes of specially crafted prompts to elicit responses to train their own proprietary models. Last month, OpenAI submitted an open letter to U.S. legislators, claiming to have observed activity “indicative of ongoing attempts by DeepSeek to distill frontier models of OpenAI and other U.S. frontier labs, including through new, obfuscated methods.” The disclosure renewed a debate over training data sources and distillation techniques, with some criticizing the company for training its own systems using copyrighted material without permission. “Anthropic is guilty of stealing training data at a massive scale and has had to pay multibillion-dollar settlements for their theft,” xAI CEO Elon Musk said.

Google Disrupts UNC2814 GRIDTIDE Campaign — Google disclosed that it worked with industry partners to disrupt the infrastructure of a suspected China-nexus cyber espionage group tracked as UNC2814 that breached at least 53 organizations across 42 countries.

The tech giant described UNC2814 as a prolific, elusive actor that has a history of targeting international governments and global telecommunications organizations across Africa, Asia, and the Americas. Central to the hacking group’s operations is a novel backdoor dubbed GRIDTIDE that abuses Google Sheets API as a communication channel to disguise C2 traffic and facilitate the transfer of raw data and shell commands. Chinese cyber espionage groups have consistently prioritized the telecommunication sector as a target precisely because of the access their networks provide to sensitive data and lawful intercept infrastructure. Thousands of Public Google Cloud API Keys Exposed with Gemini Access — New research has found that Google Cloud API keys, typically designated as project identifiers for billing purposes, could be abused to authenticate to sensitive Gemini endpoints and access private data.

The problem occurs when users enable the Gemini API on a Google Cloud project (i.e., the Generative Language API), causing the existing API keys in that project, including those accessible via the website JavaScript code, to gain surreptitious access to Gemini endpoints without any warning or notice. With a valid key, an attacker can access uploaded files, cached data, and even rack up LLM usage charges, Truffle Security said. The issue has since been plugged by Google.

UAT-10027 Targets U.S. Education and Healthcare Sectors — A previously undocumented threat activity cluster known as UAT-10027 has been attributed to an ongoing malicious campaign targeting education and healthcare sectors in the U.S. since at least December 2025. The end goal of the attacks is to deliver a never-before-seen backdoor codenamed Dohdoor. “Dohdoor utilizes the DNS-over-HTTPS (DoH) technique for command-and-control (C2) communications and has the ability to download and execute other payload binaries reflectively,” Cisco Talos said.
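Part of what makes DoH-based C2 hard to spot is that the lookup never touches the local resolver: it is an ordinary HTTPS request to a trusted public endpoint. The sketch below builds (but does not send) a query URL for Google’s documented DNS-over-HTTPS JSON API, with a placeholder domain; Dohdoor’s own resolver and record types are not specified here.

```python
from urllib.parse import urlencode

# Google's public DNS JSON API is a documented DoH endpoint. A malware
# implant using this channel would look like ordinary HTTPS traffic to a
# trusted resolver, bypassing classic DNS monitoring entirely.
def doh_query_url(name: str, rtype: str = "TXT") -> str:
    return "https://dns.google/resolve?" + urlencode({"name": name, "type": rtype})

url = doh_query_url("c2.example.com")   # placeholder C2 domain
assert url == "https://dns.google/resolve?name=c2.example.com&type=TXT"
```

Detection therefore tends to rely on egress policy (blocking or proxying known DoH resolvers) rather than on inspecting DNS traffic itself.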

Analysis of the campaign has revealed no evidence of data exfiltration to date. Although no final payloads have been observed other than what appears to be a Cobalt Strike Beacon used to backdoor the victim’s environment, it’s believed that UAT-10027’s actions are likely driven by financial gain based on the victimology pattern.

Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration — Security vulnerabilities in Anthropic Claude Code could have allowed attackers to remotely execute code on users’ machines and steal API keys by injecting malicious configurations into repositories, and then waiting for an unsuspecting developer to clone and open an untrustworthy project. The vulnerabilities were addressed between September 2025 and January 2026.

“The ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository,” Check Point said. “The integration of AI into development workflows brings tremendous productivity benefits, but also introduces new attack surfaces that weren’t present in traditional tools.”

🔥 Trending CVEs

New vulnerabilities surface daily, and attackers move fast. Reviewing and patching early keeps your systems resilient. Here are this week’s most critical flaws to check first — CVE-2025-40538, CVE-2025-40539, CVE-2025-40540, CVE-2025-40541 (SolarWinds Serv-U), CVE-2026-20127, CVE-2026-20122, CVE-2026-20126, CVE-2026-20128 (Cisco Catalyst SD-WAN), CVE-2026-25755 (jsPDF), CVE-2025-12543 (HPE Telco Service Activator), CVE-2026-22719, CVE-2026-22720, CVE-2026-22721 (Broadcom VMware Aria Operations), CVE-2026-3061, CVE-2026-3062, CVE-2026-3063 (Google Chrome), CVE-2025-10010 (CryptoPro Secure Disk for BitLocker), CVE-2025-13942, CVE-2025-13943, CVE-2026-1459 (Zyxel), CVE-2025-71210, CVE-2025-71211 (Trend Micro Apex One), CVE-2026-0542 (ServiceNow AI Platform), CVE-2026-24061 (telnetd), CVE-2026-21902 (Juniper Networks Junos OS), CVE-2025-29631, CVE-2025-1242 (Gardyn Home Kit), CVE-2025-15576 (FreeBSD), CVE-2026-26365 (Akamai), CVE-2026-27739 (Angular), and SVE-2025-50109 (Samsung Tizen OS).

🎥 Cybersecurity Webinars Automating Real-World Security Testing to Prove What Actually Works → This webinar explains why one-time security assessments are no longer enough and shows how organizations can automate continuous, real-world testing of their defenses to uncover gaps and measure how well controls hold up against actual attack techniques. When AI Agents Become Your New Attack Surface → This webinar explains that as AI tools turn into autonomous agents that can browse, call APIs, and access internal systems, the security risk expands beyond the model to the entire environment they operate in, requiring stricter access controls, monitoring, and system-level safeguards rather than model testing alone. Quantum Is Coming: Preparing for the End of Today's Encryption → This webinar explains how future quantum computers could break today's encryption, why "harvest now, decrypt later" attacks are a real risk, and what practical steps organizations can take now to begin shifting to post-quantum cryptography. 📰 Around the Cyber World UNC6384 Drops New PlugX Variant — IIJ-SECT and LAB52 have detailed new activity from the Chinese cyber espionage group UNC6384.

The attacks follow a known modus operandi of using STATICPLUGIN, a digitally signed downloader, to deliver updated versions of PlugX using DLL side-loading. The malicious payloads are distributed via phishing emails with meeting invitation lures or through fake software updates. OpenAI Takes Action Against ChatGPT Accounts Used for Harmful Purposes — OpenAI said it took down ChatGPT accounts used for influence operations, phishing, and malware development. This included a possible Chinese intelligence operation in which an individual associated with Chinese law enforcement used the AI tool for covert influence operations against domestic and foreign adversaries.

The company also acted against clusters conducting reconnaissance about U.S. persons and federal building locations, online romance scams, and Russian influence operations across Africa by generating social media posts and long-form commentary articles. "Unusually, this scam network combined manual ChatGPT prompting and an automated AI chatbot to try to entrap its targets," OpenAI said about the scam operation running out of Cambodia. Some of these scams targeted people seeking romance in Indonesia.

Other scams used ChatGPT to create content that purported to come from fictitious law firms, as well as impersonate real attorneys and U.S. law enforcement as part of a recovery scam targeting fraud victims. AI-Induced Lateral Movement — New research from Orca Security has highlighted how AI can become a "third dimension" in the world of lateral movement, after network and identity, allowing attackers to expand their reach. "By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry out significant security incidents," Orca said.

"LLMs don't truly understand the difference between data and instructions, and when tool output is fed back into the model, it can be interpreted as something to act on. Which opens a window to AI-induced Lateral Movement (AILM) activities." Russia Launches Probe into Telegram CEO — Russian authorities launched a criminal investigation into Telegram founder and CEO Pavel Durov. He is reportedly charged with promoting and facilitating terrorist activity on the messaging platform by failing to respond to law enforcement takedown requests. Russian officials have accused Durov of choosing a "path of violence and permissiveness" by not cooperating with its law enforcement agencies, according to the Rossiyskaya Gazeta.

The move comes after Russia began restricting access to Telegram in the country in favor of MAX. Last month, Durov called it an "attempt to force its citizens to switch to a state-controlled app built for surveillance and political censorship." Hacked Prayer App Sends Surrender Messages — According to reports from The Wall Street Journal and WIRED, unidentified hackers seized control of an Iranian prayer app during a joint U.S.-Israeli attack to send messages urging the Iranian military to lay down their weapons and promising amnesty if they surrendered. The messages were sent in the form of push notifications to the BadeSaba Calendar app. It's currently not clear who is behind the hack.

The app has been downloaded more than 5 million times from the Google Play Store. Following the U.S.-Israel war on Iran, the government shut down all internet access in the country. Smart TVs Turned Into AI Content Scrapers — Several smart TV app makers are deploying a new SDK named Bright SDK that lets users see fewer ads but also stealthily turns their TV into a node in a global proxy network that crawls and scrapes the web. Bright Data, the company behind the SDK, claims to operate more than 150 million residential proxy IP addresses spanning 195 countries.

Multiple Stealer Malware Families Detected — Multiple information stealer families have been detected in the wild. This includes Arkanix, CharlieKirk GRABBER, ComSuon, DarkCloud, MawaStealer, and MioLab (NovaStealer). Kaspersky's analysis of Arkanix has revealed that it was likely developed as an LLM-assisted experiment, shrinking development time and costs. While Arkanix was promoted on underground forums in October 2025, the malware-as-a-service (MaaS) appears to have been taken down towards the end of 2025.

The findings demonstrate continued demand for off-the-shelf stealer malware, creating an ecosystem that enables other threat actors to purchase stealer logs for obtaining initial access to targets. "Raw Infostealer logs are meticulously filtered by corporate domain, packaged, and sold to initial access brokers and attackers specifically looking for frictionless entry points into high-value corporate networks," Hudson Rock said. The development has been complemented by underground networks turning into cybercrime marketplaces, complete with reputation systems, escrow, and specialist vendors, Varonis added. "One operator runs infostealers across thousands of machines.

Another extracts and sorts the credentials. A third sells curated access," security researcher Daniel Kelley said. "A fourth deploys the ransomware. Each person focuses on what they do best, and the ecosystem has become ruthlessly efficient." Chilean National Extradited to U.S. to Face Financial Fraud Crimes — Alex Rodrigo Valenzuela Monje (aka VAL4K), a 24-year-old Chilean national, has been extradited to the U.S. over his alleged role in running a cybercrime operation that involved the trafficking of payment card data. The defendant is accused of trafficking stolen credit card numbers and information for over 26,500 credit cards. "From at least May 2021 to August 2023, Valenzuela Monje operated an illegal online card shop, selling dumps of unauthorized access devices through Telegram channels," the U.S. Justice Department said. "He allegedly operated the channels known as MacacoCC Collective and Novato Carding, offering payment card data for virtually all U.S. payment cards." New FUNNULL Infrastructure Discovered — QiAnXin has flagged new infrastructure associated with FUNNULL, a Philippines-based content delivery network (CDN) sanctioned last year by the U.S. Treasury for facilitating cyber scam operations.

"Previously, their main method was to poison existing public CDN services; now they have evolved to independently develop complete server-side attack suites (RingH23), actively infiltrating CDN nodes, demonstrating a significant improvement in control and technical sophistication," QiAnXin XLab said. Two independent supply chain infection channels have been identified: the compromise of maccms.la to distribute a malicious PHP backdoor through its update channel, and the compromise of the GoEdge CDN management node to implant an infection module and deploy the proprietary RingH23 attack suite to all edge nodes via SSH remote commands. The campaign has compromised 10,748 unique IP addresses, predominantly hosting video streaming sites. Spike in Scans for SonicWall Devices — GreyNoise said it detected a spike in scans for SonicWall devices originating from the infrastructure of a known proxy provider.

The activity started on February 22, 2026, and scanned for exposed SonicWall SSL VPNs. A total of 84,142 scanning sessions targeting SonicWall SonicOS infrastructure were observed between February 22 and February 25, 2026. The scanning came from 4,305 unique IP addresses across 20 autonomous systems. "Ninety-two percent of sessions probed a single API endpoint to determine whether SSL VPN is enabled — the prerequisite check before credential attacks," GreyNoise said.

“A commercial proxy service delivered 32% of campaign volume through 4,102 rotating exit IPs in two surgical bursts totaling 16 hours.” Google Removes 115 Android Apps Tied to Ad Fraud — A new ad fraud operation dubbed Genisys involved hijacking Android devices to run malicious activity in the background. The activity leveraged a set of 115 apps that stealthily opened websites inside hidden browser windows to generate ad display revenue for their creators. More than 500 domains were generated using AI tools to serve the ads. “They appear as generic blogs, news-style sites, and informational properties produced at scale, built not to attract real audiences but to receive and monetize fraudulent traffic,” Integral Ads said.

The apps have since been removed by Google. The findings build on another mobile ad fraud scheme called Arcade in which mobile apps generated hidden in-app browser activity to load websites in the background and convert mobile-origin activity into web traffic. Zerobot Exploits Flaws in n8n and Tenda Routers — A Mirai-based IoT botnet named Zerobot has been observed exploiting vulnerabilities in the n8n AI automation platform (CVE-2025-68613) and Tenda routers (CVE-2025-7544) to expand its reach. The activity was first detected in January 2026.

"Targeting of the n8n vulnerability is particularly interesting: Botnets typically exploit Internet of Things (IoT) devices, such as security cameras, DVRs, and routers, but n8n falls into an entirely different category," Akamai said. "Although this isn't entirely new behavior for botnets, this sort of targeting presents a greater danger to organizations by exposing more critical infrastructure to compromise as the n8n exploit could enable lateral movement for a threat actor." Various ClickFix Campaigns Spotted — Threat hunters disclosed multiple ClickFix campaigns, including one leading to a hands-on-keyboard attack that deployed the Termite ransomware. The attack has been attributed to a group known as Velvet Tempest (DEV-0504). Another ClickFix campaign, codenamed OCRFix, used websites impersonating the Tesseract OCR tool as a launchpad for delivering malware that uses EtherHiding to retrieve the C2 server, send system information, and await further instructions.

A third campaign has been found employing fake GitHub repositories impersonating software companies and leveraging ClickFix to social-engineer victims into installing infostealers, such as SHub Stealer v2.0. GTFire Phishing Scheme Detailed — A phishing campaign dubbed GTFire is abusing Google Firebase to host phishing pages and Google Translate to disguise the malicious URLs and bypass email and web security filters. "By chaining these services together, the attackers create phishing links that appear benign, leverage Google's reputation, and dynamically redirect victims to brand‑impersonating login pages," Group-IB said. "Once credentials are submitted and harvested, victims are often redirected back to the legitimate website of the targeted organization, reducing suspicion and delaying incident response." The campaign is estimated to have harvested thousands of stolen credentials associated with more than a thousand organizations, spanning over a hundred countries and hundreds of industries.

The threat actor behind the operation has been active since at least January 1, 2022. Mexico, the U.S., Spain, India, and Argentina are among the prominent targets. C77L Ransomware Targets Russia — A ransomware operation called C77L has been tied to at least 40 attacks on Russian and Belarusian enterprises since March 2025. The group is assessed to be operating out of Iran.

Initial access to target networks is accomplished via weak passwords for publicly available RDP and VPN endpoints. "The targets of attacks are Windows systems due to their overwhelming predominance in the IT infrastructures of medium and small businesses," F6 said. RESURGE Malware Can Be Dormant on Infected Ivanti Devices — The U.S. Cybersecurity and Infrastructure Security Agency (CISA) updated its original alert for RESURGE, a piece of malware deployed as part of exploitation activity targeting a now-patched security flaw in Ivanti Connect Secure (ICS) appliances.

The agency said "RESURGE has sophisticated network-level evasion and authentication techniques, leveraging advanced cryptographic methods and forged TLS certificates to facilitate covert communications," adding "RESURGE can remain latent on systems until a remote actor attempts to connect to the compromised device." 30 Members of The Com Arrested — A coordinated law enforcement operation led by Europol detained 30 individuals connected to an underground online community known as The Com. The operation, launched in January 2025, has been codenamed Project Compass. An additional 179 members were also identified as part of the investigation. The Com is the name assigned to a loose-knit cybercrime collective that has been linked to online doxxing, harassment, threats of violence, extortion, sexual exploitation, phishing, SIM swapping, ransomware, and other digital crimes.

Europol described The Com as a decentralized extremist network. U.K. Government Cuts Cyber Attack Fix Times by 87% — The U.K. government has claimed it has reduced its backlog of critical vulnerabilities by 75% and cut cyber attack fix times by 87%.

Serious security weaknesses in public sector websites are fixed six times faster, cutting the average time from nearly two months to just over a week, the U.K. government said in an update published on 26 February. Poland Dismantles Organized Crime Group — Poland’s Central Bureau for Combating Cybercrime (CBZC) dismantled an organized group that used phishing to take control of Facebook accounts and extract BLIK payment codes from victims. Eleven members of an organized criminal group operating in Poland and Germany between May 2022 and May 2024 were identified.

Six suspects have been placed in pretrial detention as part of the investigation, and over 100,000 credentials were seized. The group used "phishing techniques to obtain login details for Facebook accounts, and then gained access to them and used instant messaging to extort BLIK codes from other users of the portal," CBZC said. Hacker Exploits Claude to Target Mexican Government Sites — An unknown hacker abused Anthropic's Claude chatbot to carry out attacks against Mexican government agencies, according to a report by Gambit Security. "Within a month of the initial compromise, ten government bodies and one financial institution were affected, approximately 195 million identities exposed, and roughly 150GB of data exfiltrated: tax records, civil registry files, voter data," the company said.

“The attacker even built an automated system that forges official government tax certificates using live data. It was orchestrated by an individual actor directing AI to operate as a nation-state-level team of operators and analysts.” The operation ran on more than 1,000 prompts and regularly passed information to OpenAI’s GPT-4.1 for analysis. The breach began in late December 2025 and continued for about a month. Anthropic has since disrupted the activity and banned all of the accounts involved.

The attacks haven't been attributed to a specific group. 🔧 Cybersecurity Tools Titus → It is an open-source tool from Praetorian that scans code, files, repositories, and traffic to find leaked credentials like API keys and tokens. It uses hundreds of pattern rules and can check whether a detected secret is actually active. You can run it as a command-line tool, use it inside other tools as a Go library, or use it as an extension in Burp Suite or a browser to uncover credential leaks in different workflows.
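Pattern-based secret detection of the kind Titus performs can be sketched in a few lines of Python. The two rules below are illustrative assumptions, not Titus's actual patterns; real scanners ship hundreds of curated rules and add entropy checks and live validation of each candidate secret:

```python
import re

# Illustrative detector rules (assumed, not Titus's real rule set).
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_token": re.compile(
        r"\b(?:api|token|secret)[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9/+=_-]{20,})['\"]",
        re.I,
    ),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs for every candidate secret."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "abcd1234efgh5678ijkl"'
for rule, hit in scan_text(sample):
    print(rule, "->", hit)
```

A real tool would walk files and git history instead of a string, and would optionally probe each finding against the issuing service to report whether the secret is still live.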

Sirius → It is an open-source vulnerability scanning platform on GitHub that automates network and system security checks to find weaknesses and risks in infrastructure. It combines community-driven security data with automated tests, runs within containers, and gives operators a unified view of vulnerabilities to prioritize remediation. Disclaimer: These tools are provided for research and educational use only. They are not security-audited and may cause harm if misused.

Review the code, test in controlled environments, and comply with all applicable laws and policies. Conclusion Viewed one by one, these incidents seem contained. Seen together, they show how risk now flows across connected systems that organizations rely on daily. Infrastructure, AI platforms, cloud services, and third-party tools are deeply intertwined, and strain in one area often exposes another.

The takeaway is clarity, not alarm. Adversaries are improving efficiency, scaling access, and operating inside normal processes. Reading through each report helps map that shift and understand how the broader environment is changing. Found this article interesting?

Follow us on Google News, Twitter, and LinkedIn to read more exclusive content we post.

How to Protect Your SaaS from Bot Attacks with SafeLine WAF

Most SaaS teams remember the day their user traffic started growing fast. Few notice the day bots started targeting them. On paper, everything looks great: more sign-ups, more sessions, more API calls. But in reality, something feels off: Sign-ups increase, but users aren’t activating.

Server costs rise faster than revenue. Logs are filled with repeated requests from strange user agents. If this sounds familiar, it’s not just a sign of popularity. Your app is under constant automated attack, even if no ransom emails have arrived.

Your load balancer sees traffic. Your product team sees “growth”. Your database sees pain. This is where a WAF like SafeLine fits in.

SafeLine is a self-hosted web application firewall (WAF) that sits in front of your app and inspects every HTTP request before it reaches your code. It does not just look for broken packets or known bad IPs. It watches how traffic behaves: what it sends, how fast, in what patterns, and against which endpoints.

How SafeLine Works

In this article, we'll show what real attacks look like for a SaaS product, how bots exploit business logic, and how SafeLine can protect your app without adding extra work for your team.

The Attacks SaaS Products Actually See

When people say "web attacks", many think only about SQL injection or XSS. Those still exist, and SafeLine blocks them with a built-in Semantic Analysis Engine, which reads HTTP requests like a security engineer: instead of just hunting keywords, it understands context, decoding payloads, spotting odd field types, and recognizing attack intent across SQL, JS, NoSQL, and modern frameworks. It blocks sophisticated bots and zero-days with 99.45% accuracy, with no constant rule tweaking needed.

Malicious Requests Blocked by SafeLine
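To make the "inspect every request before it reaches your code" idea concrete, here is a deliberately tiny WSGI middleware sketch in Python. The marker list and class are invented for illustration; a real WAF like SafeLine performs semantic analysis rather than naive substring matching:

```python
def app(environ, start_response):
    # Stand-in for your real application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# Toy markers; a real engine understands context instead of matching strings.
SUSPICIOUS = ("union select", "<script", "../")

class InspectingProxy:
    """Reject a request if its path or query string contains a known-bad
    marker, before the wrapped application ever sees it."""

    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __call__(self, environ, start_response):
        probe = (environ.get("PATH_INFO", "") + "?" +
                 environ.get("QUERY_STRING", "")).lower()
        if any(marker in probe for marker in SUSPICIOUS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked"]
        return self.wrapped(environ, start_response)

proxy = InspectingProxy(app)
```

The point of the pattern is architectural: the filter wraps the app, so the app needs no changes, which is the same property a reverse-proxy WAF gives you at the network level.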
But for SaaS, the most painful attacks are not always the most "technical". They are the ones that bend your business rules. Common examples:
- Fake sign-ups: Automated sign-up scripts farm free trials, burn invitation codes, or harvest discount coupons.
- Credential stuffing: Bots try leaked username/password pairs against your login endpoint until something works.
- API scraping: Competitors or generic scrapers walk your API, page by page, copying your content or pricing.
- Abusive automation: One user (or botnet) triggers heavy background jobs, export tasks, or webhook storms that you pay for.
- Bot traffic spikes: Sudden waves of scripted requests hit the same endpoints, not big enough to be a classic DDoS, but enough to slow everything down.

The tricky part is that all these requests look "normal" at the HTTP level. They are:
- Well-formed
- Often over HTTPS
- Using your documented API
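As a rough illustration of how credential stuffing can still be surfaced from traffic that looks individually normal, the sketch below flags an IP that accumulates too many failed logins in a sliding time window. The class name and thresholds are assumptions for the example, not SafeLine internals:

```python
from collections import defaultdict, deque
import time

class StuffingDetector:
    """Flag an IP that produces many failed logins in a short window —
    the classic credential-stuffing signature. Thresholds are illustrative;
    a WAF tunes these per endpoint."""

    def __init__(self, max_failures=10, window_seconds=60):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # ip -> timestamps of failures

    def record_failure(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.failures[ip]
        q.append(now)
        # Drop failures that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_failures  # True -> treat IP as hostile

detector = StuffingDetector(max_failures=3, window_seconds=60)
flags = [detector.record_failure("203.0.113.9", now=t) for t in range(5)]
print(flags)  # only the later attempts trip the threshold
```

Distributed stuffing from thousands of proxy IPs defeats a single-IP counter, which is why real products also correlate on device fingerprints, endpoints, and timing patterns.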
Why a Self‑Hosted WAF Makes Sense for SaaS

There are many cloud WAF products, and they work well for a lot of teams. But SaaS products have some special concerns:
- Data control: You may not want every request and response to flow through another company's cloud.
- Latency and routing: Extra external hops can matter for global users.
- Debugging: When a cloud WAF blocks something, you often see a vague message, not full context.

SafeLine takes a different path: it is self‑hosted and runs as a reverse proxy in front of your app. You keep full control over logs and traffic.

You see exactly why a request was blocked, in your own dashboards. For SaaS teams, that means you can:
- Meet stricter customer or compliance demands about where data flows.
- Tune rules without opening a support ticket.
- Treat your WAF configuration as part of your normal infrastructure, not a black‑box service.

How SafeLine Sees and Stops Bot Traffic

Bots are not one thing. Some are clumsy scripts; some are almost indistinguishable from real users. SafeLine uses several layers to deal with them.

1. Understanding traffic, not just signatures

SafeLine combines rule‑based checks with semantic analysis of requests. In practice, that means it looks at:
- Parameters and payloads (for injection attempts, strange encodings, exploit patterns).
- URL structures and access paths (for scanners, crawlers, and exploit kits).
- Frequency and distribution of calls (for login abuse, scraping, and subtle flood attacks).

This is what allows it to:
- Block classic web attacks with a low false positive rate.
- Detect odd patterns that do not match any single "signature" but clearly are not normal user behavior.

2. Anti‑Bot challenges

Some bots can only be stopped by forcing them to prove they are not machines.

SafeLine includes an Anti‑Bot Challenge feature: when it detects suspicious traffic, it can present a challenge that real browsers handle but bots fail. Key points:
- Normal human users barely notice it.
- Basic crawlers, scripts, and abuse tools get blocked or slowed down sharply.
- You decide where to enable it: sign‑up, login, pricing pages, or specific APIs.
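One common way to build such a challenge is a signed cookie that only clients executing the challenge script ever present: the WAF serves a tiny script that sets the cookie, and clients that never run JavaScript never obtain a valid one. This is an assumed design for illustration, not SafeLine's actual mechanism:

```python
import hashlib
import hmac
import secrets
import time

# Server-side secret; rotating it invalidates all outstanding challenges.
SERVER_KEY = secrets.token_bytes(32)

def make_cookie(client_ip, now=None):
    """Issue a timestamped, HMAC-signed challenge cookie for this client."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SERVER_KEY, f"{client_ip}|{ts}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{ts}.{sig}"

def check_cookie(client_ip, cookie, max_age=3600, now=None):
    """Accept only a cookie we signed for this IP, within max_age seconds."""
    now = now if now is not None else time.time()
    try:
        ts, sig = cookie.split(".")
    except ValueError:
        return False
    good = hmac.new(SERVER_KEY, f"{client_ip}|{ts}".encode(),
                    hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig) and now - int(ts) < max_age

cookie = make_cookie("198.51.100.7", now=1000)
print(check_cookie("198.51.100.7", cookie, now=1200))    # genuine browser
print(check_cookie("198.51.100.7", "garbage", now=1200)  # naive bot
      )
```

Binding the signature to the client IP and a timestamp limits replay; commercial anti-bot systems layer on browser fingerprinting and behavioral signals as well.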

3. Rate limiting as a safety net

For SaaS, "too much of a good thing" is a real problem. One overly eager integration, one faulty script, or one attack can exhaust resources. SafeLine's rate limiting lets you:
- Limit how many requests an IP or token can make to specific endpoints per second, minute, or hour.
- Protect login, sign‑up, and expensive APIs from brute force and floods.
- Keep your application stable even under abnormal spikes.

This is essential for:
- Protecting free tiers from abuse.
- Keeping "unlimited API calls" from turning into "unlimited cloud bills".
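As a sketch of per-key limiting, here is a classic token-bucket limiter in Python: each client key holds up to `capacity` tokens, refilled at `rate` per second, and a request is allowed only when a token is available. The names and numbers are illustrative; SafeLine exposes this as dashboard configuration rather than code:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter keyed by client (IP, API token, etc.)."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.state = {}             # key -> (tokens, last_refill_time)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.state.get(key, (self.capacity, now))
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.state[key] = (tokens - 1, now)
            return True
        self.state[key] = (tokens, now)
        return False

limiter = TokenBucket(rate=1, capacity=2)  # ~1 request/second, burst of 2
results = [limiter.allow("10.0.0.5", now=0.0) for _ in range(3)]
print(results)  # the burst is allowed, the excess request is rejected
```

The bucket shape matters for SaaS: it tolerates legitimate bursts (a dashboard loading) while still capping sustained abuse (a scraper hammering one endpoint).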

4. Identity and access controls

Some parts of your SaaS should never be public:
- Internal dashboards
- Early beta features
- Region‑specific admin tools

SafeLine provides an authentication challenge feature. When enabled, visitors must enter a password you set before they can continue. This is a simple way to:
- Hide internal or staging environments from scanners and bots.
- Reduce the blast radius of misconfigured or forgotten routes.

A Simple Story: A SaaS Team vs. Bot Abuse

There is a small B2B SaaS product:
- Fewer than 10 people on the team.
- Nginx fronting a set of REST APIs.
- Free trials, public sign‑up, and open API docs.

At first, the numbers look good. Then:
- Fake sign‑ups climb to 150–200 per day.
- CPU peaks hit 70% because of login attempts and abuse traffic.
- The database grows faster than paying users.

When they add SafeLine:
- They deploy it in front of Nginx, as a self‑hosted WAF.
- They enable bot detection, rate limits on sign‑up and login, and basic abuse rules for new accounts.

Within one week:
- Fake registrations fall below 10 per day.
- CPU stabilizes around 40%.
- Conversion starts to recover, because real users face fewer obstacles.

The interesting part is not the numbers. It is what the team did not have to do:
- They did not design complex in‑app throttling.
- They did not maintain custom bot‑blocking code.
- They did not argue for months about whether they could send traffic to an external inspection service.

SafeLine quietly took the first wave of abuse, and the product team focused again on features and customers.

How SafeLine Fits into a SaaS Stack

From an architecture point of view, SafeLine behaves like a reverse proxy: external traffic → SafeLine → your Nginx / app servers.

This makes it easier to adopt without rewriting your product. You can:
- Put SafeLine in front of your main web app and API gateway.
- Slowly route more domains and services through it as you gain confidence.

The SafeLine dashboard then becomes your "security console":
- You see attack logs: which IP tried what, which rule triggered, what payload was blocked.
- You see trends: increased scans, new kinds of payloads, or growing bot patterns.
- You can adjust rules and protections in a few clicks.

Deployment and Ease of Use

SafeLine WAF is designed for SaaS operators who may not have dedicated security teams. A deployment typically takes less than 10 minutes.

Below is the one-click deployment command:

bash -c "$(curl -fsSLk https://waf.chaitin.com/release/latest/manager.sh)" -- --en

See the official documentation for detailed instructions: https://docs.waf.chaitin.com/en/GetStarted/Deploy

More importantly, SafeLine provides a free edition for all users worldwide, so once installed it is ready to use right out of the box at no extra cost. A paid license is required only for advanced features. After installation, you'll see a clean interface with a simple, intuitive configuration experience.

Protect your first app by following this official tutorial: https://docs.waf.chaitin.com/en/GetStarted/AddApplication. Once configured, the WAF operates autonomously while providing detailed visibility into threats and mitigation actions. Looking Ahead: Continuous Security The threat landscape is constantly evolving. Bots are becoming smarter, attacks are increasingly targeted, and SaaS platforms continue to grow in complexity.

To stay ahead, companies must:
- Monitor traffic behavior continuously
- Adapt rate-limiting and bot detection rules dynamically
- Regularly audit logs for unusual activity
- Ensure sensitive endpoints have layered protections

SafeLine's approach aligns with these needs, providing a flexible, data-driven security layer that grows with your SaaS business. For those interested in exploring the technology firsthand, visit the SafeLine GitHub Repository or try the Live Demo. Or simply install it and try it for free. Found this article interesting?

This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter, and LinkedIn to read more exclusive content we post.