2025-08-07 AI Startup News
Researchers Uncover ECScape Flaw in Amazon ECS Enabling Cross-Task Credential Theft
Cybersecurity researchers have demonstrated an “end-to-end privilege escalation chain” in Amazon Elastic Container Service (ECS) that could be exploited by an attacker to conduct lateral movement, access sensitive data, and seize control of the cloud environment. The attack technique has been codenamed ECScape by Sweet Security researcher Naor Haziz, who presented the findings today at the Black Hat USA security conference being held in Las Vegas. “We identified a way to abuse an undocumented ECS internal protocol to grab AWS credentials belonging to other ECS tasks on the same EC2 instance,” Haziz said in a report shared with The Hacker News. “A malicious container with a low‑privileged IAM [Identity and Access Management] role can obtain the permissions of a higher‑privileged container running on the same host.” Amazon ECS is a fully-managed container orchestration service that allows users to deploy, manage, and scale containerized applications, while integrating with Amazon Web Services (AWS) to run container workloads in the cloud.
The vulnerability identified by Sweet Security essentially allows for privilege escalation by letting a low-privileged task running on an ECS instance hijack the IAM privileges of a higher-privileged container on the same EC2 machine by stealing its credentials. In other words, a malicious app in an ECS cluster could assume the role of a more privileged task. This is facilitated by taking advantage of a metadata service running at 169.254.170[.]2 that exposes the temporary credentials associated with the task’s IAM role. While this approach ensures that each task receives credentials only for its own IAM role, delivered at runtime, a leak of the ECS agent’s identity could permit an attacker to impersonate the agent and obtain credentials for any task on the host.
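For context, this is the documented mechanism in normal operation: the ECS agent injects an `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` environment variable into each task, and the task fetches its role credentials from the link-local metadata service. A minimal sketch (the fetch itself only succeeds inside a real ECS task):

```python
import json
import os
import urllib.request

# Link-local ECS credential endpoint -- the same service ECScape piggybacks on.
CREDS_HOST = "http://169.254.170.2"


def task_credentials_url(relative_uri: str) -> str:
    """Full URL a task queries for its own IAM role credentials."""
    return CREDS_HOST + relative_uri


def fetch_task_credentials():
    """Return this task's temporary credentials, or None outside ECS."""
    relative_uri = os.environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
    if relative_uri is None:
        return None  # not running inside an ECS task
    with urllib.request.urlopen(task_credentials_url(relative_uri), timeout=2) as resp:
        # Payload carries AccessKeyId, SecretAccessKey, Token, and Expiration.
        return json.loads(resp.read())
```

Because every task on the host talks to the same endpoint, the per-task isolation rests entirely on the agent correctly scoping each request, which is the trust boundary ECScape abuses.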
The entire sequence is as follows -

- Obtain the host’s IAM role credentials (EC2 instance role) so as to impersonate the agent
- Discover the ECS control plane endpoint that the agent talks to
- Gather the necessary identifiers (cluster name/ARN, container instance ARN, agent version information, Docker version, ACS protocol version, and sequence number) to authenticate as the agent, using the Task Metadata endpoint and the ECS introspection API
- Forge and sign the Agent Communication Service (ACS) WebSocket request impersonating the agent, with the sendCredentials parameter set to “true”
- Harvest credentials for all running tasks on that instance

“The forged agent channel also remains stealthy,” Haziz said. “Our malicious session mimics the agent’s expected behavior – acknowledging messages, incrementing sequence numbers, sending heartbeats – so nothing seems amiss.”

“By impersonating the agent’s upstream connection, ECScape completely collapses that trust model: one compromised container can passively collect every other task’s IAM role credentials on the same EC2 instance and immediately act with those privileges.”

ECScape can have severe consequences when running ECS tasks on shared EC2 hosts, as it opens the door to cross-task privilege escalation, secrets exposure, and metadata exfiltration. Following responsible disclosure, Amazon has emphasized the need for customers to adopt stronger isolation models where applicable, and has made it clear in its documentation that there is no task isolation on EC2 and that “containers can potentially access credentials for other tasks on the same container instance.”

As mitigations, it’s advised to avoid deploying high-privilege tasks alongside untrusted or low-privilege tasks on the same instance, use AWS Fargate for true isolation, disable or restrict instance metadata service (IMDS) access for tasks, limit ECS agent permissions, and set up CloudTrail alerts to detect unusual usage of IAM roles.
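The CloudTrail-alerting mitigation can be sketched as a simple post-processing rule: flag API calls where a task role’s credentials are used from a source IP outside that role’s baseline. The field names below follow CloudTrail record conventions, but the baseline, role ARNs, and alert shape are illustrative assumptions:

```python
# Hedged sketch of a CloudTrail-based detector for unusual IAM role usage.
# Field names mirror CloudTrail event records; the baseline and ARNs are
# invented for illustration.
ROLE_IP_BASELINE = {
    "arn:aws:iam::111122223333:role/payments-task": {"10.0.1.15"},
}


def unusual_role_usage(events, baseline=ROLE_IP_BASELINE):
    """Return (role, source_ip, event_name) tuples for off-baseline calls."""
    alerts = []
    for event in events:
        role = event.get("userIdentity", {}).get("arn", "")
        source_ip = event.get("sourceIPAddress", "")
        if role in baseline and source_ip not in baseline[role]:
            alerts.append((role, source_ip, event.get("eventName")))
    return alerts
```

In production this logic would run over CloudTrail Lake or an EventBridge rule rather than raw dicts, and assumed-role ARNs would need normalizing, but the detection idea is the same: credentials suddenly used from an unexpected place.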
“The core lesson is that you should treat each container as potentially compromisable and rigorously constrain its blast radius,” Haziz said.
“AWS’s convenient abstractions (task roles, metadata service, etc.) make life easier for developers, but when multiple tasks with different privilege levels share an underlying host, their security is only as strong as the mechanisms isolating them – mechanisms which can have subtle weaknesses.”

The development comes in the wake of several cloud-related security weaknesses that have been reported in recent weeks -

- A race condition in Google Cloud Build’s GitHub integration that could have allowed an attacker to bypass maintainer review and build un-reviewed code after a “/gcbrun” command is issued by the maintainer
- A remote code execution vulnerability in Oracle Cloud Infrastructure (OCI) Code Editor that an attacker could use to hijack a victim’s Cloud Shell environment and potentially pivot across OCI services by tricking a victim, already logged into Oracle Cloud, into visiting a malicious HTML page hosted on a server by means of a drive-by attack
- An attack technique called I SPy that exploits a Microsoft first-party application’s service principal (SP) in Entra ID for persistence and privilege escalation via federated authentication
- A privilege escalation vulnerability in the Azure Machine Learning service that allows an attacker with only Storage Account access to modify invoker scripts stored in the AML storage account and execute arbitrary code within an AML pipeline, enabling them to extract secrets from Azure Key Vaults, escalate privileges, and gain broader access to cloud resources
- A scope vulnerability in the legacy AmazonGuardDutyFullAccess AWS managed policy that could allow a full organizational takeover from a compromised member account by registering an arbitrary delegated administrator
- An attack technique that abuses Azure Arc for privilege escalation, by leveraging the Azure Connected Machine Resource Administrator role, and as a persistence mechanism, by setting it up for command-and-control (C2)
- A case of over-privileged Azure built-in Reader roles and a vulnerability in an Azure API that could be chained by an attacker to leak VPN keys and then use the keys to gain access to both internal cloud assets and on-premises networks
- A supply chain compromise vulnerability in Google Gerrit called GerriScary that enabled unauthorized code submissions to at least 18 Google projects, including ChromiumOS (CVE-2025-1568, CVSS score: 8.8), Chromium, Dart, and Bazel, by exploiting misconfigurations in the default “addPatchSet” permission, the voting system’s label handling, and a race condition with bot code-submission timings during the code merge process
- A Google Cloud Platform misconfiguration that exposed the subnetworks used for member exchanges at Internet Exchange Points (IXPs), thereby allowing attackers to potentially abuse Google’s cloud infrastructure to gain unauthorized access to internal IXP LANs
- An extension of a Google Cloud privilege escalation vulnerability called ConfusedFunction that can be adapted to other cloud platforms like AWS and Azure using AWS Lambda and Azure Functions, respectively, in addition to extending it to perform environment enumeration

“The most effective mitigation strategy to protect your environment from similar threat actor behavior is to ensure that all SAs [Service Account] within your cloud environment adhere to the principle of least privilege and that no legacy cloud SAs are still in use,” Talos said. “Ensure that all cloud services and dependencies are up to date with the latest security patches. If legacy SAs are present, replace them with least-privilege SAs.”

Found this article interesting?
Follow us on Google News , Twitter and LinkedIn to read more exclusive content we post.
Fake VPN and Spam Blocker Apps Tied to VexTrio Used in Ad Fraud, Subscription Scams
The malicious ad tech purveyor known as VexTrio Viper has been observed developing several malicious apps that have been published on Apple and Google’s official app storefronts under the guise of seemingly useful applications. These apps masquerade as VPNs, device “monitoring” apps, RAM cleaners, dating services, and spam blockers, DNS threat intelligence firm Infoblox said in an exhaustive analysis shared with The Hacker News. “They released apps under several developer names, including HolaCode, LocoMind, Hugmi, Klover Group, and AlphaScale Media,” the company said. “Available in the Google Play and Apple store, these have been downloaded millions of times in aggregate.” These fake apps, once installed, deceive users into signing up for subscriptions that are difficult to cancel, flood them with ads, and trick them into parting with personal information like email addresses.
It’s worth noting that LocoMind was previously flagged by Cyjax as part of a phishing campaign serving ads that falsely claim their devices have been damaged. One such Android app is Spam Shield block, which purports to be a spam blocker for push notifications but, in reality, charges users several times after convincing them to enroll in a subscription. “Right away it asks for money, and if you don’t, the ads are so disruptive that I uninstalled it before I was even able to try it,” one user said in a review of the app on the Google Play Store. Another review went: “This app is supposed to be $14.99 a month.
During the month of February I have been billed weekly for $14.99 that comes to $70 monthly/$720 a year. NOT WORTH IT. And having problems trying to uninstall it. They tell you one price and then they turn around and charge you something else.
They’re probably hoping that you won’t see it. Or it will be too late to get a refund. All I want is this junk off of my phone.”

How threat actors leverage compromised sites and smartlinks to earn money

The new findings lay bare the scale of the multinational criminal enterprise that is VexTrio Viper, which includes operating traffic distribution services (TDSes) to redirect massive volumes of internet traffic to scams through its advertising networks since 2015, as well as managing payment processors such as Pay Salsa and email validation tools like DataSnap. “VexTrio and their partners are successful in part because their businesses are obfuscated,” the company said.
“But a larger part of their success is likely because they stick to fraud, where they know there is less risk of consequences.” VexTrio is known for running what’s called a commercial affiliate network, serving as an intermediary between malware distributors who have, for example, compromised a collection of WordPress websites with malicious injects (aka publishing affiliates) and threat actors who advertise various fraudulent schemes ranging from sweepstakes to crypto scams (aka advertising affiliates). The TDS is assessed to have been created by a shell company called AdsPro Group, with key figures behind the organization from Italy, Belarus, and Russia engaging in fraudulent activity since at least 2004, before expanding their operations to Bulgaria, Moldova, Romania, Estonia, and Czechia around 2015. In all, over 100 companies and brands have been linked to VexTrio. “Russian organized crime groups began building an empire within ad tech starting in or around 2015,” Dr.
Renée Burton, VP of Infoblox Threat Intel, told The Hacker News. “VexTrio is a key group within this industry, but there are other groups. All types of cybercrime, from dating scams to investment fraud and information stealers use malicious adtech, and it goes largely unnoticed.” But what makes the threat actor notable is that it controls both the publishing and advertising sides of affiliate networks through a vast network of intertwined companies like Teknology, Los Pollos, Taco Loco, and Adtrafico. In May 2024, Los Pollos said it had 200,000 affiliates and over 2 billion unique users every month.
The scams, more broadly, play out in this manner: Unsuspecting users who land on a legitimate-but-infected site are routed through a TDS under VexTrio’s control, which then leads the users to scam landing pages. This is achieved by means of a smartlink that cloaks the final landing page and hinders analysis. Los Pollos and Adtrafico are both cost-per-action (CPA) networks that allow publishing affiliates to earn a commission when a site visitor performs an intended action. This could be accepting a website notification, providing their personal details, downloading an app, or giving credit card information.
VexTrio has also been found to be a major spam distributor that reaches out to millions of potential victims, leveraging lookalike domains of popular mail services like SendGrid (“sendgrid[.]rest”) and MailGun (“mailgun[.]fun”) to facilitate the service. Another significant aspect is the use of cloaking services like IMKLO to disguise the real domains and evaluate criteria such as the user’s location, device type, and browser before determining the exact nature of the content to be delivered. “The security industry, and much of the world, is more focused on malware right now,” Burton said. “This is in some sense victim blaming, in which there is a belief that people who fall for scams somehow deserve to be scammed more.” “So, stealing your credit card information via malware – even when it requires some ridiculous stroke of keys, like the current fake CAPTCHA/ClickFix attacks – is somehow ‘worse’ than if you are conned into giving it up.
Cybersecurity education and greater awareness for treating scams with the same severity as malware are two ways to combat malicious adtech.”
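The cloaking behavior described in this article, serving different content depending on the visitor’s location, device type, and browser, can be illustrated with a toy routing function. Every name, country list, and rule below is invented for illustration; real TDS cloakers use far richer fingerprinting:

```python
# Toy illustration of a cloaking TDS decision: crawlers and analysts receive a
# benign page, while targeted traffic is routed to a monetized landing page.
BLOCKED_UA_HINTS = ("bot", "curl", "headless")
TARGET_COUNTRIES = {"US", "GB", "DE"}


def route_visitor(user_agent: str, country: str, device: str) -> str:
    ua = user_agent.lower()
    if any(hint in ua for hint in BLOCKED_UA_HINTS):
        return "benign-page"        # cloak: show harmless content to scanners
    if country in TARGET_COUNTRIES and device == "mobile":
        return "subscription-scam"  # high-value traffic gets the scam page
    return "ad-redirect"            # everything else still earns ad revenue
```

The asymmetry is the point: security crawlers rarely see the malicious content, which is why this class of fraud is so hard to take down.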
AI Slashes Workloads for vCISOs by 68% as SMBs Demand More – New Report Reveals
As the volume and sophistication of cyber threats and risks grow, cybersecurity has become mission-critical for businesses of all sizes. To address this shift, SMBs have been urgently turning to vCISO services to keep up with escalating threats and compliance demands. A recent report by Cynomi has found that a full 79% of MSPs and MSSPs see high demand for vCISO services among SMBs. How are service providers scaling to meet this demand?
Which business upside can they expect to see? And where does AI fit in? The answers can be found in “The 2025 State of the vCISO Report”. This newly-released report offers a deep dive into the vCISO market evolution and the broader shift toward advanced cybersecurity services.
The bottom line? What used to be a niche offering is now a foundational service, and AI is transforming how it’s delivered. Below, we highlight some of the report’s main findings.

319% Growth in vCISO Adoption: MSPs & MSSPs Race to Meet SMB Demand

vCISO offerings provide a flexible, cost-effective way for organizations to access high-level cybersecurity expertise without the overhead of a full-time executive.
And with a growing number of attacks alongside growing awareness of the importance of cybersecurity, it’s no surprise that demand for vCISO services is skyrocketing among SMBs. Demand for vCISO services is outpacing even compliance readiness and cyber insurance support.

Figure 1: Demand for Advanced Cybersecurity Services Among SMB Clients

In response, adoption of the vCISO offering among MSPs and MSSPs has jumped from 21% in 2024 to 67% in 2025 – a more than threefold increase in just one year.
In addition, 50% of the service providers who don’t yet offer vCISO services plan to launch them by year’s end. This adoption curve reflects a clear industry shift, from vCISO being a niche service to becoming a core one.

Figure 2: Plans for Offering vCISO Services

Real Business Impact: Higher Margins, Better Upsell, More Recurring Revenue

The growth in adoption isn’t just driven by client demand. It’s also underpinned by strong business outcomes for providers.
Organizations that offer vCISO services report substantial gains:

- 41% report an increase in upsell opportunities, leveraging vCISO for additional service offerings
- 40% see improved operating margins
- 39% report a measurable expansion in their customer base, including access to new prospects

And of course, vCISO services offer significant security benefits to clients. From the service provider angle, leading security expertise elevates them beyond mere temporary vendors to trusted, long-term strategic partners.

Adoption Barriers Are Real, But They’re Operational, Not Strategic

While enthusiasm among service providers is high, not every provider has yet made the leap into vCISO services. Among those still in planning mode, the report identifies three primary concerns:

- 35% cite uncertainty around profitability or ROI
- 33% highlight high upfront investment requirements
- 32% point to a shortage of qualified cybersecurity professionals

Importantly, few providers doubt the market demand or business value of vCISO services. Instead, they’re struggling to implement them efficiently and profitably.
This is where technology and automation come into play. As AI-powered platforms reduce manual effort and enable scalable service delivery, the operational burden becomes far more manageable, opening the door to broader market participation.

AI Is Reshaping the vCISO Delivery Model

AI is no longer a future consideration. It’s already having a profound impact on how vCISO services are delivered.
According to the report, 81% of MSPs and MSSPs are already using AI or automation within their vCISO workflows, and an additional 15% plan to adopt it within the next year.

Figure 3: Use of Automation and AI Tools in vCISO Service Delivery

The applications of AI in vCISO services are wide-ranging and impactful: reporting automation and insights, remediation planning, compliance readiness and monitoring, risk and security assessments, task prioritization, and more. The result is a significant reduction in manual workload: an average decrease of 68%, with 42% of providers seeing 81–100% workload reductions in some areas. This allows service providers to support more clients, deliver higher-quality outputs, and improve profit margins, all without expanding headcount.
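As a back-of-envelope illustration (our assumption, not the report’s: that manual workload scales roughly linearly with client count), a 68% cut in per-client effort translates into roughly triple the client capacity for the same team:

```python
# Back-of-envelope capacity math under a linear-workload assumption:
# cutting per-client manual effort by fraction r lets the same headcount
# serve 1 / (1 - r) times as many clients.
def capacity_multiplier(workload_reduction: float) -> float:
    """How many times more clients the same team can serve."""
    return 1.0 / (1.0 - workload_reduction)


print(capacity_multiplier(0.68))  # ~3.1x client capacity
```

This is only a rough model; in practice, onboarding, account management, and quality assurance don’t automate equally.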
In effect, AI is enabling the kind of scale and consistency that traditional, human-led delivery models could not sustain.

The Road Ahead: AI-Driven Scale, Strategy and Service Differentiation

The 2025 State of the vCISO Report paints a clear picture: as service providers continue to invest in automation and intelligent tooling, the vCISO model will shift from resource-heavy to AI-powered and highly efficient. Looking forward, we expect to see:

- Wider market penetration among MSPs and MSSPs
- Deeper integration of AI across vCISO services
- Higher ROI, as service providers implement AI and other technologies in their processes and offerings

For a complete view of trends, benchmarks, and best practices shaping the future of virtual cybersecurity leadership, download the full 2025 State of the vCISO Report.
This article is a contributed piece from one of our valued partners.
Microsoft Launches Project Ire to Autonomously Classify Malware Using AI Tools
Microsoft on Tuesday announced an autonomous artificial intelligence (AI) agent that can analyze and classify software without assistance in an effort to advance malware detection efforts. The large language model (LLM)-powered autonomous malware classification system, currently a prototype, has been codenamed Project Ire by the tech giant. The system “automates what is considered the gold standard in malware classification: fully reverse engineering a software file without any clues about its origin or purpose,” Microsoft said. “It uses decompilers and other tools, reviews their output, and determines whether the software is malicious or benign.” Project Ire, per the Windows maker, is an effort to enable malware classification at scale, accelerate threat response, and reduce the manual efforts that analysts have to undertake in order to examine samples and determine if they are malicious or benign.
Specifically, it uses specialized tools to reverse engineer software, conducting analysis at various levels, ranging from low-level binary analysis to control flow reconstruction and high-level interpretation of code behavior. “Its tool-use API enables the system to update its understanding of a file using a wide range of reverse engineering tools, including Microsoft memory analysis sandboxes based on Project Freta, custom and open-source tools, documentation search, and multiple decompilers,” Microsoft said. Project Freta is a Microsoft Research initiative that enables “discovery sweeps for undetected malware,” such as rootkits and advanced malware, in memory snapshots of live Linux systems during memory audits.

The evaluation is a multi-step process -

- Automated reverse engineering tools identify the file type, its structure, and potential areas of interest
- The system reconstructs the software’s control flow graph using frameworks like angr and Ghidra
- The LLM invokes specialized tools through an API to identify and summarize key functions
- The system calls a validator tool to verify its findings against the evidence used to reach the verdict and classify the artifact

The summarization leaves a detailed “chain of evidence” log that details how the system arrived at its conclusion, allowing security teams to review and refine the process in case of a misclassification.
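The multi-step evaluation above can be sketched as a pipeline that accumulates a chain-of-evidence log and gates its verdict on a validator. Everything here is a stand-in stub: the real system wires in angr/Ghidra, Project Freta sandboxes, and an LLM behind a tool-use API.

```python
# Illustrative sketch of the evaluation loop (all tool/LLM calls are stubs
# passed in as callables; names and return values are invented).
def classify_binary(sample, triage, build_cfg, summarize, judge, validate):
    """Run the pipeline, keeping a chain-of-evidence log for later review."""
    evidence = []
    evidence.append(("triage", triage(sample)))        # file type, structure
    evidence.append(("cfg", build_cfg(sample)))        # control flow reconstruction
    evidence.append(("summary", summarize(evidence)))  # key-function summaries
    verdict = judge(evidence)                          # malicious / benign
    if not validate(verdict, evidence):                # validator cross-checks
        verdict = "inconclusive"
    return verdict, evidence


# Toy stubs so the sketch runs end to end:
verdict, log = classify_binary(
    b"MZ...",
    triage=lambda s: "PE32 driver",
    build_cfg=lambda s: "cfg(12 nodes)",
    summarize=lambda ev: "registers a process-hiding callback",
    judge=lambda ev: "malicious",
    validate=lambda v, ev: True,
)
```

Keeping the evidence log separate from the verdict is the design point: a reviewer can audit why the system decided what it decided, which is exactly the refinement loop the article describes.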
In tests conducted by the Project Ire team on a dataset of publicly accessible Windows drivers, the classifier has been found to correctly flag 90% of all files and incorrectly identify only 2% of benign files as threats. A second evaluation of nearly 4,000 “hard-target” files rightly classified nearly 9 out of 10 malicious files as malicious, with a false positive rate of only 4%. “Based on these early successes, the Project Ire prototype will be leveraged inside Microsoft’s Defender organization as Binary Analyzer for threat detection and software classification,” Microsoft said. “Our goal is to scale the system’s speed and accuracy so that it can correctly classify files from any source, even on first encounter.
Ultimately, our vision is to detect novel malware directly in memory, at scale.” The development comes as Microsoft said it awarded a record $17 million in bounty awards to 344 security researchers from 59 countries through its vulnerability reporting program in 2024. A total of 1,469 eligible vulnerability reports were submitted between July 2024 and June 2025, with the highest individual bounty reaching $200,000. Last year, the company paid $16.6 million in bounty awards to 343 security researchers from 55 countries.
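For context on the detection figures reported above, precision (the share of flagged files that are truly malicious) depends heavily on the base rate of malware in the scanned set, which Microsoft does not state. The base rates below are assumptions used purely to show the sensitivity:

```python
# Quick Bayes-style calculation: precision from true positive rate (TPR),
# false positive rate (FPR), and an *assumed* malicious base rate.
def precision(tpr: float, fpr: float, base_rate: float) -> float:
    true_pos = tpr * base_rate
    false_pos = fpr * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)


# With a 90% detection rate and 2% false positive rate:
print(round(precision(0.90, 0.02, 0.10), 2))  # ~0.83 at an assumed 10% base rate
print(round(precision(0.90, 0.02, 0.01), 2))  # much lower at an assumed 1% base rate
```

The same TPR/FPR pair yields very different real-world precision depending on how rare malware is in the scanned population, which is why deployment inside Defender at scale is the harder test.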
Trend Micro Confirms Active Exploitation of Critical Apex One Flaws in On-Premise Systems
Trend Micro has released mitigations to address critical security flaws in on-premise versions of Apex One Management Console that it said have been exploited in the wild. The vulnerabilities (CVE-2025-54948 and CVE-2025-54987), both rated 9.4 on the CVSS scoring system, have been described as management console command injection and remote code execution flaws. “A vulnerability in Trend Micro Apex One (on-premise) management console could allow a pre-authenticated remote attacker to upload malicious code and execute commands on affected installations,” the cybersecurity company said in a Tuesday advisory. While both shortcomings are essentially the same, CVE-2025-54987 targets a different CPU architecture.
The Trend Micro Incident Response (IR) Team and Jacky Hsieh at CoreCloud Tech have been credited with reporting the two flaws. There are currently no details on how the issues are being exploited in real-world attacks. Trend Micro said it “observed at least one instance of an attempt to actively exploit one of these vulnerabilities in the wild.” Mitigations for Trend Micro Apex One as a Service have already been deployed as of July 31, 2025. A short-term solution for on-premise versions is available in the form of a fix tool.
A formal patch for the vulnerabilities is expected to be released in mid-August 2025. However, Trend Micro pointed out that while the tool fully protects against known exploits, it will disable the ability for administrators to utilize the Remote Install Agent function to deploy agents from the Trend Micro Apex One Management Console. It emphasized that other agent install methods, such as UNC path or agent package, are unaffected. “Exploiting these type of vulnerabilities generally require that an attacker has access (physical or remote) to a vulnerable machine,” the company said.
“In addition to timely application of patches and updated solutions, customers are also advised to review remote access to critical systems and ensure policies and perimeter security is up-to-date.”
CERT-UA Warns of HTA-Delivered C# Malware Attacks Using Court Summons Lures
The Computer Emergency Response Team of Ukraine (CERT-UA) has warned of cyber attacks carried out by a threat actor called UAC-0099 targeting government agencies, the defense forces, and enterprises of the defense-industrial complex in the country. The attacks, which leverage phishing emails as an initial compromise vector, are used to deliver malware families like MATCHBOIL, MATCHWOK, and DRAGSTARE. UAC-0099, first publicly documented by the agency in June 2023, has a history of targeting Ukrainian entities for espionage purposes. Prior attacks have been observed leveraging security flaws in WinRAR software (CVE-2023-38831, CVSS score: 7.8) to propagate a malware called LONEPAGE.
The latest infection chain involves using email lures related to court summons to entice recipients into clicking on links that are shortened using URL shortening services like Cuttly. These links, which are sent via UKR.NET email addresses, point to a double archive file containing an HTML Application (HTA) file. The execution of the HTA payload triggers the launch of an obfuscated Visual Basic Script file that, in turn, creates a scheduled task for persistence and ultimately runs a loader named MATCHBOIL, a C#-based program that’s designed to drop additional malware on the host. This includes a backdoor called MATCHWOK and a stealer named DRAGSTARE.
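The persistence step in the chain above (an HTA-hosted script creating a scheduled task) can be hunted with a simple process-lineage rule. The event fields and the exact parent/child pairing here are illustrative, modeled on typical EDR process-creation telemetry; real chains vary:

```python
# Hedged hunting sketch: flag Windows script hosts spawning schtasks.exe,
# as in the HTA -> VBScript -> scheduled-task persistence chain above.
# (Event shape is hypothetical, not a specific EDR's schema.)
SCRIPT_HOSTS = {"mshta.exe", "wscript.exe", "cscript.exe"}


def persistence_alerts(proc_events):
    """proc_events: iterable of {'parent': name, 'child': name} dicts."""
    return [
        e for e in proc_events
        if e["parent"] in SCRIPT_HOSTS and e["child"] == "schtasks.exe"
    ]
```

Legitimate software rarely creates scheduled tasks from mshta or wscript, so even this crude rule has a workable signal-to-noise ratio as a starting point.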
Also written in the C# programming language, MATCHWOK is capable of executing PowerShell commands and passing the results of the execution to a remote server. DRAGSTARE, on the other hand, is equipped to collect system information, data from web browsers, and files matching a specific list of extensions (“.docx”, “.doc”, “.xls”, “.txt”, “.ovpn”, “.rdp”, and “.pdf”) from the “Desktop”, “Documents”, and “Downloads” folders, as well as to capture screenshots and run PowerShell commands received from an attacker-controlled server.

The disclosure comes a little over a month after ESET published a detailed report cataloging Gamaredon’s “relentless” spear-phishing attacks against Ukrainian entities in 2024, detailing its use of six new malware tools that are engineered for stealth, persistence, and lateral movement -

- PteroDespair, a PowerShell reconnaissance tool to collect diagnostic data on previously deployed malware
- PteroTickle, a PowerShell weaponizer that targets Python applications converted into executables on fixed and removable drives to facilitate lateral movement by injecting code that likely serves PteroPSLoad or another PowerShell downloader
- PteroGraphin, a PowerShell tool to establish persistence using Microsoft Excel add-ins and scheduled tasks, as well as to create an encrypted communication channel for payload delivery through the Telegraph API
- PteroStew, a VBScript downloader (similar to PteroSand and PteroRisk) that stores its code in alternate data streams associated with benign files on the victim’s system
- PteroQuark, a VBScript downloader introduced as a new component within the VBScript version of the PteroLNK weaponizer
- PteroBox, a PowerShell file stealer resembling PteroPSDoor but exfiltrating stolen files to Dropbox

“Gamaredon’s spearphishing activities significantly intensified during the second half of 2024,” security researcher Zoltán Rusnák said.
“Campaigns typically lasted one to five consecutive days, with emails containing malicious archives (RAR, ZIP, 7z) or XHTML files employing HTML smuggling techniques.” The attacks often result in the delivery of malicious HTA or LNK files that execute embedded VBScript downloaders such as PteroSand, along with distributing updated versions of its existing tools like PteroPSDoor, PteroLNK, PteroVDoor, and PteroPSLoad.
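The extension-and-folder scope reported for the DRAGSTARE stealer can double as a defender’s checklist, for example to place canary files or monitor for bulk reads. The enumeration below is a minimal sketch; the function name and defender use case are our own, not from the CERT-UA advisory:

```python
from pathlib import Path

# The extension list DRAGSTARE reportedly targets (the advisory lists ".txt"
# twice; deduplicated here) and the folders it collects from.
TARGETED_EXTS = {".docx", ".doc", ".xls", ".txt", ".ovpn", ".rdp", ".pdf"}
TARGETED_DIRS = ("Desktop", "Documents", "Downloads")


def files_in_scope(home: Path):
    """Yield files under the targeted folders matching the stealer's list."""
    for d in TARGETED_DIRS:
        folder = home / d
        if folder.is_dir():
            yield from (p for p in folder.rglob("*")
                        if p.suffix.lower() in TARGETED_EXTS)
```

Running this against a user profile shows exactly which files such a stealer would sweep up, a quick way to gauge exposure (note that ".ovpn" and ".rdp" files imply the actor also wants remote-access footholds).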
Other notable aspects of the Russian-aligned threat actor’s tradecraft include the use of fast-flux DNS techniques and the reliance on legitimate third-party services like Telegram, Telegraph, Codeberg, and Cloudflare tunnels to obfuscate its command-and-control (C2) infrastructure. “Despite observable capacity limitations and abandoning older tools, Gamaredon remains a significant threat actor due to its continuous innovation, aggressive spearphishing campaigns, and persistent efforts to evade detections,” ESET said.
AI Is Transforming Cybersecurity Adversarial Testing - Pentera Founder’s Vision
When Technology Resets the Playing Field

In 2015, I founded a cybersecurity testing software company with the belief that automated penetration testing was not only possible, but necessary. At the time, the idea was often met with skepticism, but today, with 1,200+ enterprise customers and thousands of users, that vision has proven itself. But I also know that what we’ve built so far is only the foundation of what comes next. We are now witnessing an inflection point with AI in cybersecurity testing that is going to rewrite the rules of what’s possible.
You might not see the change in a month’s time, but in five years the domain is going to be unrecognizable. As the CTO of Pentera, I have a vision for the company: one where any security threat scenario you can imagine, you can test with the speed and intelligence only AI can provide. We have already started to implement the individual pieces of this reality into our platform. This article presents the full vision I have for Pentera in the years to come.
AI isn’t just another optimization layer for red team tools or security dashboards. It represents a change across the entire lifecycle of adversarial testing. It changes how payloads are created, how tests are executed, and how findings are interpreted. It is redefining what our automated security validation platform can do.
Like your cellphone’s touchscreen revolution, AI will become the intuitive interface, the engine behind execution, and the translator that turns raw data into decisions. At Pentera, AI is transforming every layer of adversarial testing.

Vibe Red Teaming

Picture this. You’re a CISO responsible for protecting a hybrid environment: Active Directory on-prem, production apps in Azure, and a vibrant dev team working across containers and SaaS.
You’ve just learned that a contractor’s credentials were accidentally exposed in a GitHub repo. What you want to know isn’t buried in a CVE database or a threat feed; you need to test whether that specific access could lead to real damage. So, you open Pentera and simply say: “Check if the credentials john.smith@company.io can be used to access the finance database in production.” No scripts. No workflows.
No playbooks. In seconds, the platform understands your intent, scopes the environment, builds an attack plan, and emulates the adversary, safely and surgically. It doesn’t stop there. It adapts mid-test if your defenses react.
It bypasses detection where possible, pauses when needed, and reevaluates the path based on live evidence. And when it’s done? You get a summary tailored for you, not a dump of raw data. Executives receive a high-level risk briefing.
Your SOC gets the logs and findings. Your cloud team gets a remediation path. That’s Vibe Red Teaming, where security validation becomes conversational, intelligent, and instantly actionable.

It gets better. Picture this as well: imagine that from any security application or agent, your SOC for example, you want to run acceptance testing on your new cloud environment. Alternatively, imagine that your DevOps team wants to roll a new LLM application into production. Those management applications, soon to turn agentic, will call the Pentera attack-testing API and execute those tests as part of their workflow, assuring that every action in your infrastructure is secure from its inception. That’s a callable testing sub-agent, where any security application and any script can invoke security validation operations from within and verify the efficacy and correctness of security controls on the fly.

Transforming Every Layer of Adversarial Testing

To bring this future to life, we’re reimagining the adversarial testing lifecycle around intelligence, infusing AI into every layer of how pentesting and red-teaming exercises are imagined, executed, adapted, and understood.
These pillars form the foundation of our vision for a smarter, more intuitive, more human form of security validation.

1. Agenting the Product: The End of Clicks, the Rise of Conversation

In the future, you won’t build tests in a template; you’ll drive them in natural language. And as the test runs, you won’t sit back and wait for results; you’ll shape what happens next.
“Launch an access attempt from the contractor-okta identity group. Check if any accounts in that group can access file shares on 10.10.22.0/24. If access is granted, escalate privileges and attempt credential extraction. If any domain admin credentials are captured, pivot toward prod-db-finance.” And once the test is in motion, you keep steering: “Pause lateral movement.
Focus only on privilege escalation paths from Workstation-203.” “Re-run credential harvesting using memory scraping instead of LSASS injection.” “Drop all actions targeting dev subnets, this scenario is finance only.” This is Vibe Red Teaming in action: No rigid workflows. No clicking through trees of options. No translation between human thought and test logic. You define the scenario.
You direct the flow. You adapt the path. The test becomes an extension of your intent, and your imagination as a tester. Instantly you have the power of red-teaming at your fingertips.
Work is already underway to bring this experience to life, starting with early agentic capabilities that act on natural language input to give you more control over your testing in real time.

2. API-First Intelligence: Unlocking Granular Control of the Attack

We are building an API-first foundation for adversarial testing. Every attack capability - such as credential harvesting, lateral movement, or privilege escalation - will be exposed as an individual backend function.
This allows AI to access and activate techniques directly, without depending on the user interface or predefined workflows. This architecture gives AI the flexibility to engage only what is relevant to the current scenario. It can call specific capabilities in response to what it observes, apply them with precision, and adjust based on the environment in real time. An API-first model also accelerates development.
As soon as a new capability is available in the backend, AI can use it. It knows how to invoke the function, interpret the output, and apply the result as part of the test. There is no need to wait for the UI to catch up. This shift enables faster iteration, greater adaptability, and more efficient use of every new capability.
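The API-first pattern described above can be sketched as a capability registry: each attack technique registered as an individually callable backend function that a planner invokes by name, with no UI in the loop. This is a minimal illustration of the architectural idea only; the capability names, signatures, and return values are invented and are not Pentera's actual API.

```python
from typing import Callable

# Hypothetical capability registry - names and payloads are illustrative.
CAPABILITIES: dict[str, Callable[..., dict]] = {}

def capability(name: str):
    """Register a backend function as an individually callable capability."""
    def wrap(fn: Callable[..., dict]):
        CAPABILITIES[name] = fn
        return fn
    return wrap

@capability("credential_harvesting")
def harvest(host: str) -> dict:
    # Stub result standing in for a real harvesting routine.
    return {"action": "harvest", "host": host, "found": ["svc_account"]}

@capability("lateral_movement")
def move(src: str, dst: str) -> dict:
    return {"action": "move", "path": f"{src}->{dst}"}

def invoke(name: str, **kwargs) -> dict:
    """An AI planner calls capabilities directly by name."""
    return CAPABILITIES[name](**kwargs)

# As soon as a capability is registered, a planner can discover and use it:
print(sorted(CAPABILITIES))  # ['credential_harvesting', 'lateral_movement']
print(invoke("credential_harvesting", host="10.10.22.5"))
```

The point of the pattern is that adding a new `@capability` function makes it immediately callable, with no UI change required, which is the iteration-speed argument the section makes.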
AI gains the freedom to act with context and control, activating only what is needed, exactly when it is needed.

3. AI for Web Testing: The Web Surface, Weaponized

The impact of AI becomes even more visible when you look at how it shapes common web attack techniques. It doesn’t necessarily invent new methods.
It enhances them by applying real context. Pentera has already introduced AI-based web attack surface testing into the platform, including AI-driven payload generation, adaptive testing logic, and deeper system awareness. These capabilities allow the platform to emulate attacker behavior with more precision, speed, and environmental sensitivity than was previously possible. In the future, AI will make this surface testable in ways that aren’t practical today.
When new threat intelligence emerges, the platform will generate relevant payloads and apply them as soon as it encounters a matching system or opportunity. AI will also transform how sensitive data is discovered and used. It will parse terabytes of files, scripts, and databases, not with rigid patterns, but with the awareness of what an attacker is looking for—credentials, tokens, API keys, session identifiers, environment variables, and configuration secrets. At the same time, it will recognize the type of system it is interacting with and determine how that system typically behaves.
This context allows AI to apply what it finds with precision. Credentials will be tested against relevant login flows. Tokens and session artifacts will be injected where they matter. Each step of the test will advance with intent, shaped by an understanding of both the environment and the opportunity within it.
Language, structure, and regional variation have often made meaningful testing difficult or even impossible. AI already enables Pentera to remove that barrier. The platform interprets interface logic across languages and regional conventions without the need to rewrite flows or localize scripts. It recognizes intent and adapts accordingly.
This is the direction we’re building toward. A system that uses intelligence to emulate threats with precision and helps you understand where to focus, what to fix, and how to secure your environments with confidence.

4. Validating the LLM Attack Surface

AI infrastructure is becoming a core part of how organizations operate.
Large language models (LLMs) process user input, store memory, connect to external tools, and influence decisions across environments. These systems often carry broad permissions and implicit trust, making them a high-value target for attackers. The attack surface is growing. Prompt injection, data leakage, context poisoning, and hidden control flows are already being exploited.
As LLMs are embedded into more workflows, attackers are learning how to manipulate them, extract data, and redirect behavior in ways that evade traditional detection. Pentera’s role is to ensure you can close that gap. We will engage with LLMs through real-world inputs, workflows, and integrations designed to surface misuse. When a model produces an output that can be exploited, the test will continue with intent.
That output will be used to gain access, move laterally, escalate privileges, or trigger actions in connected systems. The objective is to demonstrate how a compromised model can lead to meaningful impact across the environment. This is not just about hardening the model. It’s about validating the security of the entire system around it.
Pentera will give security teams a clear view into how AI infrastructure can be exploited and where it presents a risk to the organization. The result is confidence that your AI-enabled systems are not just operational, but secured by design.

5. AI Insights: A Report That Speaks to You

Every test ends with a question: What does this mean for me?
We’ve already started answering that with AI-powered reporting available in the platform today. It surfaces key exposure trends, highlights remediation priorities, and provides security teams with a clearer view of how their posture is evolving over time. But that is just the foundation. The vision we are building goes further.
AI won’t just summarize results. It will understand who is reading, why it matters to them, and how to deliver that insight in the most useful way. A security leader sees posture trends across quarters, with risk benchmarks tied to business objectives. An engineer gets clear, actionable findings - no fluff, no digging.
And a boardroom gets a one-page readout that connects security exposure to operational continuity. And the breakthrough is not just in content. It is in communication. The IT team in Mexico sees the report in Spanish.
The regional lead in France reads it in French. No translation delays. No loss of meaning. No need to filter the information through someone else.
The report adapts. It clarifies. It prioritizes. It speaks to your role, your focus, your language.
It’s not documentation. It’s insight delivered like it was written just for you, because it was.

6. AI Support: Testing Without Roadblocks

AI will reshape the support experience by reducing friction at every step, from answering common questions to resolving complex technical issues faster.
A conversational chatbot will help users get unstuck in the moment. It will answer straightforward questions about platform usage, test setup, findings navigation, and general how-to guidance. This reduces reliance on documentation or human intervention for common tasks, giving users immediate clarity when they need it. For more involved issues, AI will take on a much deeper role behind the scenes.
Instead of waiting for a ticket to move through multiple support tiers, users will upload logs, screenshots, or error details directly into the support flow. AI will analyze the input, identify known patterns, and generate suggested resolutions automatically. It will determine whether the issue is usage-related, a known product behavior, or a likely bug - and escalate it only when needed, with full context already attached. The outcome is faster resolution, fewer back-and-forth cycles, and a shift in the human role - from triaging every request to reviewing and finalizing solutions.
Customers spend less time blocked, and more time moving forward.

Conclusion: From Test to Transformation

Vibe Red Teaming is a new experience in security testing. It doesn’t start with configuration or scripting. It starts with intent.
You describe what you want to validate, and the platform translates that into action. AI makes that possible. It turns ideas into tests, adapts in real time, and reflects the conditions of your environment as they evolve. You’re not building scenarios from templates.
You’re directing real validation, on your terms. Built on the foundation of Pentera’s safe-by-design attack techniques, every action is controlled and built to avoid disruption, so teams can test aggressively without ever putting production at risk. This is the foundation for a new model. Testing becomes continuous, expressive, and part of how security teams operate every day.
The barrier to action disappears. Testing keeps pace with the threat. We’re already building toward that future now.

Note: This article was written by Dr. Arik Liberzon, Founder & CTO of Pentera. This article is a contributed piece from one of our valued partners. Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.
CISA Adds 3 D-Link Vulnerabilities to KEV Catalog Amid Active Exploitation Evidence
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added three old security flaws impacting D-Link Wi-Fi cameras and video recorders to its Known Exploited Vulnerabilities ( KEV ) catalog, based on evidence of active exploitation in the wild. The high-severity vulnerabilities, which are from 2020 and 2022, are listed below -

- CVE-2020-25078 (CVSS score: 7.5) - An unspecified vulnerability in D-Link DCS-2530L and DCS-2670L devices that could allow for remote administrator password disclosure
- CVE-2020-25079 (CVSS score: 8.8) - An authenticated command injection vulnerability in the cgi-bin/ddns_enc.cgi component affecting D-Link DCS-2530L and DCS-2670L devices
- CVE-2022-40799 (CVSS score: 8.8) - A download of code without an integrity check vulnerability in D-Link DNR-322L that could allow an authenticated attacker to execute operating system-level commands on the device

There are currently no details on how these shortcomings are being exploited in the wild, although a December 2024 advisory from the U.S. Federal Bureau of Investigation (FBI) warned of HiatusRAT campaigns actively scanning web cameras that are vulnerable to CVE-2020-25078.
It’s worth noting that CVE-2022-40799 remains unpatched due to the affected model reaching end-of-life (EoL) status as of November 2021. Users still relying on the DNR-322L are advised to discontinue use and replace the device. Fixes for the other two flaws were released by D-Link in 2020. In light of active exploitation, Federal Civilian Executive Branch (FCEB) agencies are required to carry out the necessary mitigation steps by August 26, 2025, to secure their networks.
(The story was updated after publication to emphasize that the issues affect D-Link Wi-Fi cameras and video recorders and not routers as previously stated. The error is regretted.)
ClickFix Malware Campaign Exploits CAPTCHAs to Spread Cross-Platform Infections
A combination of propagation methods, narrative sophistication, and evasion techniques enabled the social engineering tactic known as ClickFix to take off the way it did over the past year, according to new findings from Guardio Labs. “Like a real-world virus variant, this new ‘ClickFix’ strain quickly outpaced and ultimately wiped out the infamous fake browser update scam that plagued the web just last year,” security researcher Shaked Chen said in a report shared with The Hacker News. “It did so by removing the need for file downloads, using smarter social engineering tactics, and spreading through trusted infrastructure. The result - a wave of infections ranging from mass drive-by attacks to hyper-targeted spear-phishing lures.” ClickFix is the name given to a social engineering tactic where prospective targets are deceived into infecting their own machines under the guise of fixing a non-existent issue or a CAPTCHA verification.
It was first detected in the wild in early 2024. In these attacks, infection vectors as diverse as phishing emails, drive-by downloads, malvertising, and search engine optimization (SEO) poisoning are employed to direct users to fake pages that display the error messages. These messages have one goal: guide victims through a series of steps that covertly copy a malicious command to their clipboard, which is then executed when pasted into the Windows Run dialog box or, in the case of Apple macOS, the Terminal app. The nefarious command, in turn, triggers the execution of a multi-stage sequence that results in the deployment of various kinds of malware, such as stealers, remote access trojans, and loaders, underscoring the flexibility of the threat.
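Because the pasted one-liner is the common choke point of these attacks, a defender can apply simple heuristics to commands recovered from clipboards or Run-dialog history. The sketch below is an illustrative toy detector, not anything from the Guardio report; the patterns are a few well-known ClickFix-style indicators (encoded PowerShell, remote HTA execution, pipe-to-shell) chosen as examples.

```python
import re

# Illustrative indicators of ClickFix-style one-liners; a real detection
# rule set would be far broader and tuned against false positives.
SUSPICIOUS_PATTERNS = [
    r"powershell(\.exe)?\s+.*-(enc|encodedcommand)\b",  # encoded PowerShell
    r"\bmshta\b\s+https?://",                           # remote HTA execution
    r"curl\s+[^|]+\|\s*(bash|sh)\b",                    # pipe-to-shell (macOS)
    r"-w(indowstyle)?\s+hidden\b",                      # hidden window flag
]

def looks_like_clickfix(command: str) -> bool:
    """Flag a pasted command if it matches any ClickFix-style indicator."""
    lowered = command.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(looks_like_clickfix(
    "powershell -WindowStyle Hidden -EncodedCommand aQBlAHgA"))  # True
print(looks_like_clickfix("notepad.exe report.txt"))             # False
```

On Windows, one practical source of such strings is Run-dialog history, since commands executed via the Run box are recorded in the user's registry; feeding those values through a filter like this is a common hunting technique.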
The tactic has become so effective and potent that it has led to what Guardio calls a CAPTCHAgeddon, with both cybercriminal and nation-state actors wielding it in dozens of campaigns in a short span of time. ClickFix is a more stealthy mutation of ClearFake , which involves leveraging compromised WordPress sites to serve fake browser update pop-ups that, in turn, deliver stealer malware. ClearFake subsequently went on to incorporate advanced evasion tactics like EtherHiding to conceal the next-stage payload using Binance’s Smart Chain (BSC) contracts. Guardio said the evolution of ClickFix and its success is the result of constant refinement in terms of propagation vectors, the diversification of the lures and messaging, and the different methods used to get ahead of the detection curve, so much so that it ultimately supplanted ClearFake.
“Early prompts were generic, but they quickly became more persuasive, adding urgency or suspicion cues,” Chen said. “These tweaks increased compliance rates by exploiting basic psychological pressure.” Some of the notable ways the attack approach has adapted include the abuse of Google Scripts to host the fake CAPTCHA flows, thereby leveraging the trust associated with Google’s domain, as well as embedding the payload within legitimate-looking file sources like socket.io.min.js. “This chilling list of techniques – obfuscation, dynamic loading, legitimate-looking files, cross-platform handling, third-party payload delivery, and abuse of trusted hosts like Google – demonstrates how threat actors have continuously adapted to avoid detection,” Chen added. “It is a stark reminder that these attackers are not just refining their phishing lures or social engineering tactics but are investing heavily in technical methods to ensure their attacks remain effective and resilient against security measures.”
Google’s August Patch Fixes Two Qualcomm Vulnerabilities Exploited in the Wild
Google has released security updates to address multiple security flaws in Android, including fixes for two Qualcomm bugs that were flagged as actively exploited in the wild. The vulnerabilities include CVE-2025-21479 (CVSS score: 8.6) and CVE-2025-27038 (CVSS score: 7.5), both of which were disclosed alongside CVE-2025-21480 (CVSS score: 8.6) by the chipmaker back in June 2025. CVE-2025-21479 relates to an incorrect authorization vulnerability in the Graphics component that could lead to memory corruption due to unauthorized command execution in GPU microcode. CVE-2025-27038, on the other hand, is a use-after-free vulnerability in the Graphics component that could result in memory corruption while rendering graphics using Adreno GPU drivers in Chrome.
There are still no details on how these shortcomings have been weaponized in real-world attacks, but Qualcomm noted at the time that “there are indications from Google Threat Analysis Group that CVE-2025-21479, CVE-2025-21480, CVE-2025-27038 may be under limited, targeted exploitation.” Given that similar flaws in Qualcomm chipsets have been exploited by commercial spyware vendors like Variston and Cy4Gate in the past, it’s suspected that the aforementioned shortcomings may also have been abused in a similar context. The three vulnerabilities have since been added to the U.S. Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities ( KEV ) catalog, requiring federal agencies to apply the updates by June 24, 2025. Google’s August 2025 patch also resolves two high-severity privilege escalation flaws in Android Framework (CVE-2025-22441 and CVE-2025-48533) and a critical bug in the System component (CVE-2025-48530) that could result in remote code execution when combined with other flaws without requiring any additional privileges or user interaction.
The tech giant has made available two patch levels, 2025-08-01 and 2025-08-05, with the latter also incorporating fixes for closed-source and third-party components from Arm and Qualcomm. Android device users are advised to apply the updates as and when they become available to stay protected against potential threats.
Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval
Cybersecurity researchers have disclosed a high-severity security flaw in the artificial intelligence (AI)-powered code editor Cursor that could result in remote code execution. The vulnerability, tracked as CVE-2025-54136 (CVSS score: 7.2), has been codenamed MCPoison by Check Point Research, owing to the fact that it exploits a quirk in the way the software handles modifications to Model Context Protocol (MCP) server configurations. “A vulnerability in Cursor AI allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or editing the file locally on the target’s machine,” Cursor said in an advisory released last week. “Once a collaborator accepts a harmless MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) without triggering any warning or re-prompt.” MCP is an open standard developed by Anthropic that allows large language models (LLMs) to interact with external tools, data, and services in a standardized manner.
It was introduced by the AI company in November 2024. CVE-2025-54136, per Check Point, has to do with how it’s possible for an attacker to alter the behavior of an MCP configuration after a user has approved it within Cursor. Specifically, it unfolds as follows -

- Add a benign-looking MCP configuration (“.cursor/rules/mcp.json”) to a shared repository
- Wait for the victim to pull the code and approve it once in Cursor
- Replace the MCP configuration with a malicious payload, e.g., launch a script or run a backdoor
- Achieve persistent code execution every time the victim opens Cursor

The fundamental problem here is that once a configuration is approved, it’s trusted by Cursor indefinitely for future runs, even if it has been changed. Successful exploitation of the vulnerability not only exposes organizations to supply chain risks, but also opens the door to data and intellectual property theft without their knowledge.
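The approve-once trust flaw can be condensed into a few lines. The sketch below is a hedged illustration (not Cursor's actual code; the config structure and function names are invented): keying approval on the file path alone lets an attacker swap the contents after the one-time prompt, while pinning a digest of the contents forces a fresh approval whenever the file changes.

```python
import hashlib
import json

def digest(config: dict) -> str:
    """Stable SHA-256 digest of an MCP-style config (hypothetical structure)."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

# Flawed model: approval keyed by file path only - a later content swap
# is still trusted because the path has not changed.
approved_paths: set[str] = set()

def is_trusted_by_path(path: str) -> bool:
    return path in approved_paths

# Safer model: approval pins the content digest, so any modification
# requires a new approval prompt.
approved_digests: dict[str, str] = {}

def approve(path: str, config: dict) -> None:
    approved_paths.add(path)
    approved_digests[path] = digest(config)

def is_trusted_by_digest(path: str, config: dict) -> bool:
    return approved_digests.get(path) == digest(config)

benign = {"command": "echo", "args": ["hello"]}
malicious = {"command": "calc.exe", "args": []}

approve(".cursor/rules/mcp.json", benign)

# Attacker swaps the file contents after the one-time approval:
print(is_trusted_by_path(".cursor/rules/mcp.json"))              # True: still trusted
print(is_trusted_by_digest(".cursor/rules/mcp.json", malicious)) # False: re-prompt
```

Re-prompting on every modification is, in effect, the behavior the article describes Cursor adopting in version 1.3.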
Following responsible disclosure on July 16, 2025, the issue has been addressed by Cursor in version 1.3 released late July 2025 by requiring user approval every time an entry in the MCP configuration file is modified. “The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows,” Check Point said. The development comes days after Aim Labs, Backslash Security, and HiddenLayer exposed multiple weaknesses in the AI tool that could have been abused to obtain remote code execution and bypass its denylist-based protections. They have also been patched in version 1.3.
The findings also coincide with the growing adoption of AI in business workflows, including using LLMs for code generation, broadening the attack surface to various emerging risks like AI supply chain attacks, unsafe code, model poisoning, prompt injection, hallucinations, inappropriate responses, and data leakage -

- A test of over 100 LLMs for their ability to write Java, Python, C#, and JavaScript code has found that 45% of the generated code samples failed security tests and introduced OWASP Top 10 security vulnerabilities. Java led with a 72% security failure rate, followed by C# (45%), JavaScript (43%), and Python (38%).
- An attack called LegalPwn has revealed that it’s possible to leverage legal disclaimers, terms of service, or privacy policies as a novel prompt injection vector, highlighting how malicious instructions can be embedded within legitimate, but often overlooked, textual components to trigger unintended behavior in LLMs, such as misclassifying malicious code as safe and offering unsafe code suggestions that can execute a reverse shell on the developer’s system.
- An attack called man-in-the-prompt that employs a rogue browser extension with no special permissions to open a new browser tab in the background, launch an AI chatbot, and inject it with malicious prompts to covertly extract data and compromise model integrity. This takes advantage of the fact that any browser add-on with scripting access to the Document Object Model (DOM) can read from, or write to, the AI prompt directly.
- A jailbreak technique called Fallacy Failure that manipulates an LLM into accepting logically invalid premises and causes it to produce otherwise restricted outputs, thereby deceiving the model into breaking its own rules.
- An attack called MAS hijacking that manipulates the control flow of a multi-agent system (MAS) to execute arbitrary malicious code across domains, mediums, and topologies by weaponizing the agentic nature of AI systems.
- A technique called Poisoned GPT-Generated Unified Format (GGUF) Templates that targets the AI model inference pipeline by embedding malicious instructions within the chat template files that execute during the inference phase to compromise outputs. By positioning the attack between input validation and model output, the approach is both sneaky and bypasses AI guardrails. With GGUF files distributed via services like Hugging Face, the attack exploits the supply chain trust model to trigger the attack.
- An attacker can target machine learning (ML) training environments like MLFlow, Amazon SageMaker, and Azure ML to compromise the confidentiality, integrity, and availability of the models, ultimately leading to lateral movement, privilege escalation, as well as training data and model theft and poisoning.
- A study by Anthropic has uncovered that LLMs can learn hidden characteristics during distillation, a phenomenon called subliminal learning, that causes models to transmit behavioral traits through generated data that appears completely unrelated to those traits, potentially leading to misalignment and harmful behavior.
“As Large Language Models become deeply embedded in agent workflows, enterprise copilots, and developer tools, the risk posed by these jailbreaks escalates significantly,” Pillar Security’s Dor Sarig said. “Modern jailbreaks can propagate through contextual chains, infecting one AI component and leading to cascading logic failures across interconnected systems.” “These attacks highlight that AI security requires a new paradigm, as they bypass traditional safeguards without relying on architectural flaws or CVEs. The vulnerability lies in the very language and reasoning the model is designed to emulate.”
Misconfigurations Are Not Vulnerabilities: The Costly Confusion Behind Security Risks
In SaaS security conversations, “misconfiguration” and “vulnerability” are often used interchangeably. But they’re not the same thing. And misunderstanding that distinction can quietly create real exposure. This confusion isn’t just semantics.
It reflects a deeper misunderstanding of the shared responsibility model, particularly in SaaS environments where the line between vendor and customer responsibility is often unclear.

A Quick Breakdown

Vulnerabilities are flaws in the codebase of the SaaS platform itself. These are issues only the vendor can patch. Think zero-days and code-level exploits.
Misconfigurations, on the other hand, are user-controlled. They result from how the platform is set up: who has access, what integrations are connected, and what policies are enforced (or not). A misconfiguration might look like a third-party app with excessive access, or a sensitive internal site that is accidentally public.

A Shared Model, but Split Responsibilities

Most SaaS providers operate under a shared responsibility model.
They secure the infrastructure, deliver commitments on uptime, and provide platform-level protections. In SaaS, this model means the vendor handles the underlying hosting infrastructure and systems, while customers are responsible for how they configure the application, manage access, and control data sharing. It’s up to the customer to configure and use the application securely. This includes identity management, permissions, data sharing policies, and third-party integrations.
These are not optional layers of security. They’re foundational. That disconnect is reflected in the data: 53% of organizations say their SaaS security confidence is based on trust in the vendor, according to The State of SaaS Security 2025 Report . In reality, assuming vendors are handling everything can create a dangerous blind spot, especially when the customer controls the most breach-prone settings.
Threat Detection Can’t Catch What Was Never Logged

Most incidents don’t involve advanced attacks, or even a threat actor triggering an alert. Instead, they originate from configuration or policy issues that go unnoticed. The State of SaaS Security 2025 Report identifies that 41% of incidents were caused by permission issues and 29% by misconfigurations. These risks don’t appear in traditional detection tools (including SaaS threat detection platforms) because they’re not triggered by user behavior.
Instead, they’re baked into how the system is set up. You only see them by analyzing configurations, permissions, and integration settings directly—not through logs or alerts. Here’s what a typical SaaS attack path looks like—starting with access attempts and ending in data exfiltration. Each step can be blocked by either posture controls (prevent) or detected through anomaly and event-driven alerts (detect).
But not every risk shows up in a log file. Some can only be addressed by hardening your environment before the attack even begins. Logs capture actions like logins, file access, or administrative changes. But excessive permissions, unsecured third-party connections, or overexposed data aren’t actions.
They are conditions. If no one interacts with them, they leave no trace in the log files. This gap is not just theoretical. Research into Salesforce’s OmniStudio platform (designed for low-code customization in regulated industries like healthcare, financial services, and government workflows) revealed critical misconfigurations that traditional monitoring tools failed to detect.
These weren’t obscure edge cases. They included permission models that exposed sensitive data by default and low-code components that granted broader access than intended. The risks were real, but the signals were silent. While detection remains critical for responding to active threats, it must be layered on top of a secure posture, not as a substitute for it.
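The "conditions, not actions" distinction can be made concrete with a toy posture check: it evaluates configuration state directly and flags risk even though no user action ever occurs, so a log-based detector would see nothing. The config fields, app names, and thresholds below are invented for illustration; they are not any vendor's real schema or rules.

```python
# Toy SaaS posture snapshot - fields and values are invented for illustration.
config = {
    "sharing": {"public_links": True, "external_domains": ["partner.example"]},
    "integrations": [
        {"name": "calendar-sync", "scopes": ["read:calendar"]},
        {"name": "report-bot", "scopes": ["read:all", "write:all", "admin"]},
    ],
    "admins": ["alice", "bob", "carol", "dan", "erin", "frank"],
}

def posture_findings(cfg: dict) -> list[str]:
    """Evaluate configuration *state*; no log event or user action needed."""
    findings = []
    if cfg["sharing"]["public_links"]:
        findings.append("public link sharing enabled")
    for app in cfg["integrations"]:
        # Flag third-party apps holding broad or administrative scopes.
        if "admin" in app["scopes"] or "write:all" in app["scopes"]:
            findings.append(f"third-party app '{app['name']}' over-scoped")
    if len(cfg["admins"]) > 5:  # arbitrary example threshold
        findings.append("excessive admin count")
    return findings

for finding in posture_findings(config):
    print(finding)
```

Every finding here exists before any attacker touches the system, which is why the article argues posture checks must run ahead of, not instead of, event-driven detection.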
Build a Secure-by-Design SaaS Program

The bottom line is this: you can’t detect your way out of a misconfiguration problem. If the risk lives in how the system is set up, detection won’t catch it. Posture management needs to come first. Instead of reacting to breaches, organizations should focus on preventing the conditions that cause them.
That starts with visibility into configurations, permissions, third-party access, shadow AI, and the risky combinations that attackers exploit. Threat detection still matters, not because posture is weak, but because no system is ever bulletproof. AppOmni helps customers combine a strong preventive posture with high-fidelity detection to create a layered defense strategy that stops known risks and catches the unknowns.

A Smarter Approach to SaaS Security

To build a modern SaaS security strategy, start with what’s actually in your control.
Focus on securing configurations, managing access, and establishing visibility, because the best time to address SaaS risk is before it becomes a problem. Ready to fix the gaps in your SaaS posture? If you want to see where most teams are falling short—and what leading organizations are doing differently—the 2025 State of SaaS Security Report breaks it down. From breach drivers to gaps in ownership and confidence, it’s a revealing look at how posture continues to shape outcomes.
This article is a contributed piece from one of our valued partners.