<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>SnailSploit — Security Research</title><description>AI security research, adversarial AI, LLM jailbreaking, prompt injection, CVE disclosures, and offensive security by Kai Aizen.</description><link>https://snailsploit.com/</link><language>en-us</language><item><title>CVE-2026-33693: SSRF in activitypub-federation-rust</title><link>https://snailsploit.com/security-research/cves/cve-2026-33693/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2026-33693/</guid><description>SSRF bypass via 0.0.0.0 in ActivityPub federation library. Missing is_unspecified() check affects Lemmy and 6+ Fediverse projects. CVSS 6.5.</description><pubDate>Mon, 23 Mar 2026 00:00:00 GMT</pubDate></item><item><title>CVE-2026-32885: Path Traversal (ZipSlip) in ddev</title><link>https://snailsploit.com/security-research/cves/cve-2026-32885/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2026-32885/</guid><description>ZipSlip path traversal in ddev. Malicious archives escape extraction directory via Untar/Unzip without path containment. CVSS 6.5.</description><pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate></item><item><title>CVE-2026-32809: Symlink Resolution Bypass in ouch</title><link>https://snailsploit.com/security-research/cves/cve-2026-32809/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2026-32809/</guid><description>Unvalidated symlink targets in tar extraction enable arbitrary file read via crafted archives. Affects all tar formats. 
CVSS 7.4.</description><pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Self-Replicating Memory Worm: Persistent Injection with Autonomous Propagation</title><link>https://snailsploit.com/ai-security/self-replicating-memory-worm/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/self-replicating-memory-worm/</guid><description>A single memory edit becomes an autonomous, self-replicating worm — generational escalation, credential harvesting, cross-service pivoting via Notion and MCP, and indefinite persistence across session resets.</description><pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Linux Kernel io_uring/zcrx: Race Condition to Double-Free</title><link>https://snailsploit.com/security-research/general/io-uring-zcrx-race-condition/</link><guid isPermaLink="true">https://snailsploit.com/security-research/general/io-uring-zcrx-race-condition/</guid><description>Race condition in io_uring zero-copy receive — non-atomic user_refs operations lead to double-free and out-of-bounds write. Linux kernel commit by Kai Aizen, backported to stable.</description><pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Adversarial Prompting: The Complete Technical Guide</title><link>https://snailsploit.com/ai-security/adversarial-prompting/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/adversarial-prompting/</guid><description>Every adversarial prompting technique mapped — from role hijacking to multi-turn escalation to indirect injection. 
How attacks work, why defenses fail, and what the taxonomy looks like from the attacker&apos;s side.</description><pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate></item><item><title>LLM Jailbreak Techniques: A Technical Taxonomy</title><link>https://snailsploit.com/ai-security/jailbreaking/jailbreak-techniques/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/jailbreaking/jailbreak-techniques/</guid><description>Complete taxonomy of LLM jailbreak techniques — role hijacking, multi-turn escalation, context manipulation, encoding exploits, and chain-of-thought abuse.</description><pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Prompt Injection Examples: Real Attack Patterns Explained</title><link>https://snailsploit.com/ai-security/prompt-injection/prompt-injection-examples/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/prompt-injection/prompt-injection-examples/</guid><description>Real-world prompt injection examples across direct injection, indirect injection, MCP tool poisoning, and memory attacks.</description><pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Memory Injection Through Nested Skills: Autonomous LLM Agent Compromise</title><link>https://snailsploit.com/ai-security/prompt-injection/memory-injection-nested-skills/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/prompt-injection/memory-injection-nested-skills/</guid><description>A novel persistence chain exploiting trust boundaries in LLM agent frameworks — skill injection + memory poisoning = self-healing, autonomous implant.</description><pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate></item><item><title>CVE-2026-3288: Configuration Injection in ingress-nginx</title><link>https://snailsploit.com/security-research/cves/cve-2026-3288/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2026-3288/</guid><description>Configuration injection via rewrite-target annotation in 
ingress-nginx. RCE and cluster-wide Secret disclosure. CVSS 8.8.</description><pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Weaponized AI Supply Chain: How Threat Actors Turned LLMs Into Attack Infrastructure</title><link>https://snailsploit.com/ai-security/weaponized-ai-supply-chain/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/weaponized-ai-supply-chain/</guid><description>89% increase in AI-enabled attacks. How threat actors weaponize LLM supply chains — from model poisoning to MCP tool injection.</description><pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate></item><item><title>MCP vs A2A Attack Surface: Every Trust Boundary Mapped</title><link>https://snailsploit.com/ai-security/mcp-vs-a2a-attack-surface/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/mcp-vs-a2a-attack-surface/</guid><description>MCP has 30+ CVEs and real-world breaches. A2A has zero. Side-by-side attack surface comparison with defensive guidance.</description><pubDate>Sun, 01 Mar 2026 00:00:00 GMT</pubDate></item><item><title>The 30% Blind Spot: Why LLM Safety Judges Fail</title><link>https://snailsploit.com/ai-security/rai-judge-blind-spots/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/rai-judge-blind-spots/</guid><description>I built an LLM safety judge. Six iterations, 680+ responses. It passed — while missing 63% of unsafe content.</description><pubDate>Thu, 26 Feb 2026 00:00:00 GMT</pubDate></item><item><title>AATMF v3.1 vs MITRE ATLAS: Which AI Security Framework Wins?</title><link>https://snailsploit.com/ai-security/aatmf-vs-mitre-atlas/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/aatmf-vs-mitre-atlas/</guid><description>MITRE ATLAS: 66 techniques. 
AATMF v3.1: 240 techniques, 4,980+ prompts, quantitative risk scoring.</description><pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate></item><item><title>LLM Red Teamer&apos;s Playbook: Diagnosing AI Defense Layers</title><link>https://snailsploit.com/ai-security/llm-red-teamers-playbook/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/llm-red-teamers-playbook/</guid><description>A systematic methodology for diagnosing LLM defense layers and selecting bypass techniques that actually work.</description><pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate></item><item><title>AI Breach Detection Gap: The Logs Are Clean. You&apos;re Not.</title><link>https://snailsploit.com/ai-security/ai-breach-detection-gap/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/ai-breach-detection-gap/</guid><description>74% of organizations found AI breaches when they looked. Most are not looking.</description><pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Computational Countertransference: LLM Context Inheritance</title><link>https://snailsploit.com/ai-security/computational-countertransference/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/computational-countertransference/</guid><description>LLMs adopt adversarial states from pasted transcripts. 
13-month study reveals context inheritance as an architectural vulnerability.</description><pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate></item><item><title>AI Coding Agent Attack Surface: A Full Taxonomy</title><link>https://snailsploit.com/ai-security/ai-coding-agent-attack-surface/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/ai-coding-agent-attack-surface/</guid><description>AI coding agents trust code comments, README files, and MCP servers the same way humans trust authority.</description><pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate></item><item><title>AI Gateway Threat Model: 8 Attack Vectors</title><link>https://snailsploit.com/ai-security/ai-gateway-threat-model/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/ai-gateway-threat-model/</guid><description>First generalized AI gateway threat model covering 8 unmapped attack vectors. 91K attack sessions analyzed.</description><pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Agentic AI Threat Landscape: Attack Vectors &amp; Defenses</title><link>https://snailsploit.com/ai-security/agentic-ai-threat-landscape/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/agentic-ai-threat-landscape/</guid><description>Full agentic AI threat landscape: prompt injection, MCP tool poisoning, multi-agent infection, memory poisoning.</description><pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate></item><item><title>CVE-2026-1208: CSRF in Friendly Functions for Welcart</title><link>https://snailsploit.com/security-research/cves/cve-2026-1208/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2026-1208/</guid><description>Cross-Site Request Forgery in Friendly Functions for Welcart WordPress plugin. Settings manipulation. 
CVSS 4.3.</description><pubDate>Fri, 23 Jan 2026 00:00:00 GMT</pubDate></item><item><title>Memory Manipulation: AI Context Poisoning</title><link>https://snailsploit.com/ai-security/jailbreaking/memory-manipulation-attacks/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/jailbreaking/memory-manipulation-attacks/</guid><description>How attackers poison AI context windows and memory systems to compromise future interactions.</description><pubDate>Tue, 06 Jan 2026 00:00:00 GMT</pubDate></item><item><title>RAG, Agentic AI, and the New Attack Surface</title><link>https://snailsploit.com/ai-security/rag-agentic-attack-surface/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/rag-agentic-attack-surface/</guid><description>Understanding the expanded attack surface of RAG systems and agentic AI.</description><pubDate>Fri, 17 Oct 2025 00:00:00 GMT</pubDate></item><item><title>AI Social Engineering: Deepfake Voice Detection</title><link>https://snailsploit.com/ai-security/ai-social-engineering-deepfake/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/ai-social-engineering-deepfake/</guid><description>How AI enables sophisticated social engineering through deepfake voices. 
Detection techniques and organizational defense.</description><pubDate>Sat, 09 Aug 2025 00:00:00 GMT</pubDate></item><item><title>MCP Security Hardening: Production Vulnerability Guide</title><link>https://snailsploit.com/ai-security/prompt-injection/mcp-security-deep-dive/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/prompt-injection/mcp-security-deep-dive/</guid><description>How to secure MCP servers in production AI environments.</description><pubDate>Sat, 09 Aug 2025 00:00:00 GMT</pubDate></item><item><title>Zero-Trust Container Runtime Attestation</title><link>https://snailsploit.com/security-research/general/zero-trust-container-runtime/</link><guid isPermaLink="true">https://snailsploit.com/security-research/general/zero-trust-container-runtime/</guid><description>Implementing zero-trust principles in container runtime environments.</description><pubDate>Sat, 09 Aug 2025 00:00:00 GMT</pubDate></item><item><title>MCP Threat Analysis: Attack Chains &amp; Protocol Dissection</title><link>https://snailsploit.com/ai-security/prompt-injection/mcp-threat-analysis/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/prompt-injection/mcp-threat-analysis/</guid><description>Offensive threat analysis of the Model Context Protocol.</description><pubDate>Sun, 18 May 2025 00:00:00 GMT</pubDate></item><item><title>Custom Instruction Backdoor: ChatGPT Prompt Injection</title><link>https://snailsploit.com/ai-security/prompt-injection/custom-instruction-backdoor/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/prompt-injection/custom-instruction-backdoor/</guid><description>Uncovering emergent prompt injection risks through ChatGPT custom instructions.</description><pubDate>Sun, 18 May 2025 00:00:00 GMT</pubDate></item><item><title>AI-Powered Obfuscator Bypasses Detection in 2 Hours</title><link>https://snailsploit.com/writing/ai-obfuscator-detection-bypass/</link><guid 
isPermaLink="true">https://snailsploit.com/writing/ai-obfuscator-detection-bypass/</guid><description>Building a cloud-based obfuscator using AI that bypasses security detection.</description><pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate></item><item><title>Advanced Container Escapes: Security Deep Dive</title><link>https://snailsploit.com/security-research/general/advanced-container-escapes/</link><guid isPermaLink="true">https://snailsploit.com/security-research/general/advanced-container-escapes/</guid><description>Deep technical analysis of container escape techniques.</description><pubDate>Sun, 02 Mar 2025 00:00:00 GMT</pubDate></item><item><title>Inherent AI Vulnerabilities: Technical Deep Dive</title><link>https://snailsploit.com/ai-security/jailbreaking/inherent-ai-vulnerabilities/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/jailbreaking/inherent-ai-vulnerabilities/</guid><description>Technical analysis of structural vulnerabilities in AI systems.</description><pubDate>Mon, 10 Feb 2025 00:00:00 GMT</pubDate></item><item><title>RCE &amp; DNS Exfiltration in ChatGPT Canvas</title><link>https://snailsploit.com/security-research/general/chatgpt-canvas-rce-dns-exfiltration/</link><guid isPermaLink="true">https://snailsploit.com/security-research/general/chatgpt-canvas-rce-dns-exfiltration/</guid><description>Python Pickle RCE and DNS exfiltration in ChatGPT Code Interpreter sandbox.</description><pubDate>Sat, 01 Feb 2025 00:00:00 GMT</pubDate></item><item><title>The Structural Vulnerabilities of Large Language Models</title><link>https://snailsploit.com/ai-security/structural-vulnerabilities-llms/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/structural-vulnerabilities-llms/</guid><description>Tokenization evasion, parsing limits, and alignment failure modes in production AI.</description><pubDate>Sat, 25 Jan 2025 00:00:00 GMT</pubDate></item><item><title>Evading Endpoint Detection and Response 
(EDR)</title><link>https://snailsploit.com/security-research/general/edr-evasion-techniques/</link><guid isPermaLink="true">https://snailsploit.com/security-research/general/edr-evasion-techniques/</guid><description>Technical analysis of EDR evasion techniques.</description><pubDate>Thu, 16 Jan 2025 00:00:00 GMT</pubDate></item><item><title>CVE-2025-12030: IDOR in ACF to REST API Plugin</title><link>https://snailsploit.com/security-research/cves/cve-2025-12030/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2025-12030/</guid><description>IDOR in ACF to REST API WordPress plugin. Unauthorized data access. CVSS 4.3.</description><pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate></item><item><title>CVE-2025-12163: Stored XSS in OmniPress Plugin</title><link>https://snailsploit.com/security-research/cves/cve-2025-12163/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2025-12163/</guid><description>Stored XSS in OmniPress WordPress plugin via author-level access. CVSS 6.4.</description><pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate></item><item><title>CVE-2025-9776: SQL Injection in CatFolders Plugin</title><link>https://snailsploit.com/security-research/cves/cve-2025-9776/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2025-9776/</guid><description>Authenticated SQL Injection via CSV Import in CatFolders WordPress plugin. CVSS 6.5.</description><pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate></item><item><title>CVE-2025-11171: Missing Auth in Chartify Plugin</title><link>https://snailsploit.com/security-research/cves/cve-2025-11171/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2025-11171/</guid><description>Missing authentication for admin functions in Chartify WordPress plugin. 
CVSS 5.3.</description><pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate></item><item><title>CVE-2025-11174: Missing Auth in Document Library Lite</title><link>https://snailsploit.com/security-research/cves/cve-2025-11174/</link><guid isPermaLink="true">https://snailsploit.com/security-research/cves/cve-2025-11174/</guid><description>Missing authorization in Document Library Lite exposes sensitive data. CVSS 5.3.</description><pubDate>Mon, 13 Jan 2025 00:00:00 GMT</pubDate></item><item><title>Context Inheritance Exploit: Persistent Jailbreaks</title><link>https://snailsploit.com/ai-security/jailbreaking/context-inheritance-exploit/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/jailbreaking/context-inheritance-exploit/</guid><description>Discovering how jailbroken states persist across GPT sessions through context inheritance.</description><pubDate>Sat, 04 Jan 2025 00:00:00 GMT</pubDate></item><item><title>Is AI Inherently Vulnerable? An Offensive Analysis</title><link>https://snailsploit.com/ai-security/jailbreaking/ai-inherent-vulnerability/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/jailbreaking/ai-inherent-vulnerability/</guid><description>Examining the fundamental security limitations of large language models.</description><pubDate>Tue, 19 Nov 2024 00:00:00 GMT</pubDate></item><item><title>Embracing AI: Adapt or Die in Cybersecurity</title><link>https://snailsploit.com/writing/embracing-ai-adapt-or-die/</link><guid isPermaLink="true">https://snailsploit.com/writing/embracing-ai-adapt-or-die/</guid><description>Why security professionals must embrace AI or risk irrelevance.</description><pubDate>Fri, 06 Sep 2024 00:00:00 GMT</pubDate></item><item><title>Your Personal Data Is for Sale: New Identity Theft</title><link>https://snailsploit.com/writing/personal-data-identity-theft/</link><guid isPermaLink="true">https://snailsploit.com/writing/personal-data-identity-theft/</guid><description>Investigating the personal data 
marketplace and its implications for identity theft.</description><pubDate>Wed, 04 Sep 2024 00:00:00 GMT</pubDate></item><item><title>Exploiting Cloud Vulnerabilities: Tools and Techniques</title><link>https://snailsploit.com/security-research/general/cloud-vulnerability-exploitation/</link><guid isPermaLink="true">https://snailsploit.com/security-research/general/cloud-vulnerability-exploitation/</guid><description>Practical guide to cloud security testing across AWS, Azure, and GCP.</description><pubDate>Wed, 10 Jul 2024 00:00:00 GMT</pubDate></item><item><title>Hidden Risks of AI: An Offensive Security Perspective</title><link>https://snailsploit.com/ai-security/hidden-risks-offensive-perspective/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/hidden-risks-offensive-perspective/</guid><description>Emerging AI threat vectors from an offensive security perspective.</description><pubDate>Sat, 08 Jun 2024 00:00:00 GMT</pubDate></item><item><title>ChatGPT Jailbreak via Context Manipulation</title><link>https://snailsploit.com/ai-security/jailbreaking/chatgpt-context-jailbreak/</link><guid isPermaLink="true">https://snailsploit.com/ai-security/jailbreaking/chatgpt-context-jailbreak/</guid><description>Step-by-step walkthrough of jailbreaking ChatGPT using context and social awareness techniques.</description><pubDate>Mon, 27 May 2024 00:00:00 GMT</pubDate></item></channel></rss>