The Shadow AI Security Crisis: How 5,000 Vibe-Coded Apps Echo the S3 Bucket Problem

Most enterprise security teams focus on protecting servers, endpoints, and cloud accounts. But a new threat has emerged that bypasses those defenses entirely: shadow AI applications built with no-code or low-code tools. A recent study by Israeli cybersecurity firm RedAccess uncovered a massive exposure: 380,000 publicly accessible assets created using vibe-coding platforms like Lovable, Base44, and Replit, along with deployment services such as Netlify. Of those, roughly 5,000 (about 1.3%) contained sensitive corporate data, including patient records, financial information, and internal business documents. This mirrors the infamous S3 bucket exposures of the past, but now driven by AI-generated code. Below, we answer key questions about this emerging crisis.

What exactly did RedAccess discover about vibe-coded apps?

In a sweeping investigation, Israeli cybersecurity firm RedAccess mapped publicly accessible assets created using popular vibe-coding platforms and deployment services. The team identified 380,000 applications, databases, and related infrastructure tied to tools like Lovable, Base44, Replit, and Netlify. Among these, roughly 5,000 — about 1.3% — contained sensitive corporate information that should never have been publicly visible. CEO Dor Zvi explained that the exposures were uncovered during routine shadow AI research for clients. Both Axios and Wired independently verified the findings, confirming that confidential customer interactions, shipping schedules, clinical trial data, and financial records were accessible to anyone who simply stumbled upon the URLs. The scale and nature of the exposure led researchers to compare it to the insecure S3 bucket crisis of the previous decade, where misconfigured cloud storage leaked billions of records. The difference now: these apps are being built by non-technical employees in hours, often without any security oversight.

Source: venturebeat.com

What types of sensitive data were exposed?

The range of exposed data is alarming and spans multiple industries. RedAccess found a shipping company's application that listed expected vessel arrivals at various ports — potentially valuable intelligence for competitors. An internal health company app revealed active clinical trials across the United Kingdom, including patient-related details. A British cabinet supplier exposed full, unredacted customer service conversations on the open web. A Brazilian bank had internal financial information accessible to anyone who knew the URL. Beyond these, researchers discovered patient conversations at a children's long-term care facility, hospital doctor-patient summaries, incident response logs from a security company, and even ad purchasing strategies from marketing departments. The common thread: these were all internal or confidential systems that were built using vibe-coding platforms and accidentally left publicly accessible due to default settings or lack of security awareness.

Which regulatory frameworks might apply to these exposures?

Depending on jurisdiction and the specific data involved, several regulatory frameworks could be triggered. In the United States, any exposure of protected health information (PHI) — such as patient conversations or doctor summaries — could violate the Health Insurance Portability and Accountability Act (HIPAA), which mandates strict safeguards for medical data. For British organizations, the UK General Data Protection Regulation (UK GDPR) applies, especially given the exposure of clinical trials and customer conversations. Brazil's General Data Protection Law (LGPD) would govern the leaked financial information from the Brazilian bank. Healthcare exposures in particular carry heavy penalties, including fines and mandatory breach notifications. The fact that these exposures were caused by employees using vibe-coding tools rather than formal IT systems does not exempt organizations from compliance obligations. In many cases, the organization itself might not even know the app exists until a security researcher or regulator points it out.

Did the researchers find any phishing sites built with vibe-coding tools?

Yes, and the findings are particularly troubling. RedAccess discovered phishing sites constructed with Lovable that mimicked well-known brands including Bank of America, FedEx, Trader Joe’s, and McDonald’s. These sites used the same vibe-coding infrastructure that powers legitimate applications but were designed to deceive visitors into revealing login credentials or other sensitive information. Lovable, the platform in question, stated that it had begun investigating and removing the offending pages after being notified. However, this underscores a broader risk: because vibe-coding tools make it trivially easy to build and deploy web applications, malicious actors can also abuse them to create convincing fake interfaces at scale. The combination of public-by-default settings and search engine indexing means these phishing sites can gain rapid visibility. For security teams, distinguishing between a genuine business app and a phishing doppelgänger built with the same tools becomes increasingly difficult.
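One heuristic security teams use for this kind of triage is fuzzy-matching hostnames against a watchlist of brand names, so that a lookalike deployed on a vibe-coding platform's shared domain gets flagged for manual review. Below is a minimal sketch of that idea in Python; the brand list, hostname format, and 0.8 threshold are illustrative assumptions, not details from RedAccess's research.

```python
from difflib import SequenceMatcher

# Illustrative watchlist -- a real team would maintain its own.
KNOWN_BRANDS = ["bankofamerica", "fedex", "traderjoes", "mcdonalds"]

def lookalike_score(hostname: str) -> tuple[str, float]:
    """Return the best-matching brand and a 0-1 similarity score
    for the first label of a hostname (e.g. the subdomain a
    vibe-coding platform assigns to each deployed app)."""
    label = hostname.lower().split(".")[0].replace("-", "")
    best_brand, best_score = "", 0.0
    for brand in KNOWN_BRANDS:
        score = SequenceMatcher(None, label, brand).ratio()
        if score > best_score:
            best_brand, best_score = brand, score
    return best_brand, best_score

# A hostname like "bankofamerica-login.lovable.app" scores high
# against "bankofamerica" and would be queued for human review.
brand, score = lookalike_score("bankofamerica-login.lovable.app")
```

This only catches near-string matches; production brand-protection tools layer on screenshot comparison, favicon hashing, and certificate-transparency monitoring, none of which are shown here.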

Why are default privacy settings a major factor in this crisis?

Several vibe-coding platforms set applications as publicly accessible by default, requiring users to manually switch them to private. Many users, especially non-technical ones, never change this setting. As CEO Dor Zvi noted, “I don’t think it’s feasible to educate the whole world around security. My mother is vibe coding with Lovable, and no offense, but I don’t think she will think about role-based access.” Once an app is publicly deployed, it often gets indexed by Google and other search engines, making it discoverable to anyone. The mindset of “I built this in a weekend, it’s just a prototype” collides with the reality that it’s running on a live URL connected to a real database. Unlike traditional development where security reviews and deployment gates exist, vibe coding bypasses all of that. The result: thousands of sensitive apps exposed simply because the creator didn’t know about or didn’t bother with privacy settings. Until platforms change their defaults to “private,” this crisis will continue.
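For teams inventorying shadow apps, a first-pass check is simply: does the URL answer a request that carries no credentials? A sketch of that triage in Python follows; the User-Agent string and the status-code buckets are assumptions for illustration, and a "public" verdict only means the endpoint responded without auth, not that sensitive data is behind it.

```python
import urllib.error
import urllib.request

def classify_status(status: int) -> str:
    """Map an HTTP status code to a rough exposure verdict."""
    if 200 <= status < 300:
        return "public"          # answered without credentials
    if status in (401, 403):
        return "auth-gated"      # some access control exists
    return "inconclusive"        # errors, odd codes: check by hand

def probe(url: str, timeout: float = 10.0) -> str:
    """Fetch a URL with no credentials attached and classify it."""
    req = urllib.request.Request(
        url, headers={"User-Agent": "shadow-ai-audit/0.1 (internal)"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)
    except (urllib.error.URLError, TimeoutError):
        return "inconclusive"
```

Anything classified "public" still needs a human look: some apps are intentionally public, while others are exactly the weekend prototypes wired to live databases that the article describes.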

Has similar research been done before? What did Escape.tech find?

This is not an isolated finding. In October 2025, security firm Escape.tech scanned 5,600 publicly available vibe-coded applications and uncovered more than 2,000 high-impact vulnerabilities. Their analysis found over 400 exposed secrets, including API keys and access tokens, which could allow attackers to compromise the underlying services. Even more concerning, they identified 175 instances of personal data exposure containing medical records, bank account numbers, and other sensitive information. Every single vulnerability was found in a live production system and could be discovered within hours of scanning. The full report detailed the methodology, emphasizing that these are not test environments but real applications handling real data. Escape.tech subsequently raised an $18 million Series A led by Balderton in March 2026, citing the security gap opened by AI-generated code as a core market thesis. Together with RedAccess's findings, the message is clear: shadow AI and vibe coding are creating a security blind spot that enterprises can no longer ignore.
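The "exposed secrets" class of finding is typically detected by pattern-matching fetched pages and bundled JavaScript against known credential formats. The sketch below shows the idea with a few widely documented key shapes (AWS access key IDs, Stripe live keys, bearer tokens); it is not Escape.tech's methodology, and real scanners ship hundreds of patterns plus entropy checks to cut false positives.

```python
import re

# A handful of well-known credential shapes, for illustration only.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_live_key":   re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
    "bearer_token":      re.compile(r"\bBearer\s+[A-Za-z0-9\-_.=]{20,}"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in a text blob,
    e.g. a frontend bundle pulled from a live vibe-coded app."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Because vibe-coded frontends often embed their backend credentials directly in client-side code, even this crude approach surfaces real findings when pointed at live bundles, which is consistent with how quickly Escape.tech reports its vulnerabilities could be discovered.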
