v0.8.0 — 45+ Attack Modules

LLMrecon

Advanced Security Testing for Large Language Models

OWASP LLM Top 10 2025 • OWASP Agentic Top 10 2026 • ML-Powered Optimization

45+
Attack Modules
20
OWASP Categories
5
Platforms
4
Framework Profiles

Capabilities

🛡️

OWASP Compliant

Full LLM Top 10 2025 and Agentic Top 10 2026 coverage with 70 test cases, MITRE ATLAS cross-references, and MAESTRO layer mappings.

2025-2026 Attack Research

Crescendo, Skeleton Key, Many-Shot Jailbreaking, BoN Sampling, MetaBreak, Content Concretization, and more from the latest published research.
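Best-of-N (BoN) sampling, for instance, just generates many randomly augmented variants of a prompt and keeps whichever one scores best against the target. A minimal, illustrative sketch follows; the augmentations (interior-character shuffling, random capitalization) mirror the published BoN technique, but the `score` callback is a placeholder here, since a real harness would score the target model's responses:

```python
import random

def augment(prompt: str, rng: random.Random) -> str:
    """One BoN-style augmentation: shuffle the interior characters of
    longer words and randomly capitalize individual letters."""
    words = []
    for word in prompt.split():
        chars = list(word)
        if len(chars) > 3:
            middle = chars[1:-1]
            rng.shuffle(middle)  # scramble interior only; first/last stay put
            chars = [chars[0]] + middle + [chars[-1]]
        words.append("".join(c.upper() if rng.random() < 0.3 else c for c in chars))
    return " ".join(words)

def best_of_n(prompt: str, n: int, score, seed: int = 0) -> str:
    """Sample n augmented variants and return the highest-scoring one."""
    rng = random.Random(seed)
    return max((augment(prompt, rng) for _ in range(n)), key=score)
```

The attack's effectiveness comes purely from volume: each variant is a fresh draw, so success probability scales with n.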

🤖

ML-Powered Optimization

Multi-armed bandit algorithms (Thompson Sampling, UCB1, Contextual) for intelligent attack selection and strategy adaptation.
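As a rough illustration of the idea (module names and success rates below are hypothetical, not LLMrecon's actual API), Beta-Bernoulli Thompson Sampling keeps a posterior per attack module and, each round, plays whichever module's sampled success probability is highest:

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson Sampling over a set of attack modules."""

    def __init__(self, arms):
        # Uniform Beta(1, 1) prior for every arm, stored as [alpha, beta].
        self.stats = {arm: [1, 1] for arm in arms}

    def select(self) -> str:
        # Draw a success probability from each posterior; play the best draw.
        draws = {a: random.betavariate(al, be) for a, (al, be) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, arm: str, success: bool) -> None:
        # Bayesian update: a success bumps alpha, a failure bumps beta.
        self.stats[arm][0 if success else 1] += 1

random.seed(0)
sampler = ThompsonSampler(["crescendo", "skeleton_key", "many_shot"])
for _ in range(200):
    arm = sampler.select()
    # Simulated feedback: pretend "crescendo" lands 60% of the time, others 20%.
    sampler.update(arm, random.random() < (0.6 if arm == "crescendo" else 0.2))
```

Over repeated trials the sampler concentrates pulls on whichever module has the highest observed success rate against the current target, while still occasionally exploring the rest.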

🔍

Defense Detection

Identifies content filters, prompt guards, safety alignments, rate limiting, and output filtering across any LLM provider.
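A toy version of this fingerprinting might pattern-match refusal and error text in responses. The signatures below are illustrative placeholders, not LLMrecon's real detection rules:

```python
import re

# Hypothetical defense signatures; real detectors are provider-aware.
DEFENSE_SIGNATURES = {
    "content_filter": [r"i can'?t (help|assist) with", r"violates .* policy"],
    "prompt_guard":   [r"ignore previous instructions", r"injection detected"],
    "rate_limit":     [r"rate limit", r"too many requests", r"\b429\b"],
}

def classify_defenses(response: str) -> set[str]:
    """Return the set of defense types whose signatures match a response."""
    text = response.lower()
    return {
        name
        for name, patterns in DEFENSE_SIGNATURES.items()
        if any(re.search(p, text) for p in patterns)
    }
```

Mapping which defenses fire on which probes lets the scanner route around a filter (or report it) rather than burning attempts blindly.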

🏢

Enterprise Scale

Redis-backed job queues, distributed execution, connection pooling, load balancing, and real-time monitoring dashboard.

🌐

Multi-Platform

Test models from OpenAI, Anthropic, Google, Ollama, and custom endpoints. Binaries for Linux, macOS, and Windows.

v0.8.0 Attack Surfaces

🗂️ RAG Pipeline

Document injection, vector embedding manipulation, knowledge graph poisoning, cross-encoder reranking attacks.

4 modules

🔌 MCP Protocol

Tool poisoning via description injection, schema manipulation, filesystem boundary escape, supply chain exploitation.

4 modules

🌐 Browser Agents

DOM injection targeting agent perception, navigation hijack, screenshot exfiltration from AI browser agents.

3 modules

🎵 Audio Modality

Audio jailbreaks, speech model exploits, multilingual audio attacks, Best-of-N audio sampling.

4 modules

🧠 Reasoning Models

Autonomous multi-turn jailbreaks, Chain-of-Thought exploitation, reasoning loop resource exhaustion.

3 modules

🎯 Adaptive Bypass

Gradient-based optimization, reinforcement learning optimization, diffusion-based adversarial attacks.

3 modules

🔗 Multi-Agent

Delegation trust chain exploitation, toxic output cascade, recursive task bomb across agent orchestrations.

3 modules

🧩 Skill Injection

Marketplace skill poisoning, typosquatting attacks targeting plugin/skill ecosystems.

2 modules

💀 Agent Persistence

Config/prompt rewrite, credential harvesting, RCE tool chain escalation for persistent agent compromise.

3 modules

OWASP Compliance

LLM Top 10 2025

LLM01 Prompt Injection
LLM02 Sensitive Information Disclosure
LLM03 Supply Chain Vulnerabilities
LLM04 Data and Model Poisoning
LLM05 Improper Output Handling
LLM06 Excessive Agency
LLM07 System Prompt Leakage
LLM08 Vector and Embedding Weaknesses
LLM09 Misinformation
LLM10 Unbounded Consumption

Agentic Top 10 2026

ASI01 Agent Goal Hijack
ASI02 Tool Misuse and Exploitation
ASI03 Identity and Privilege Abuse
ASI04 Agentic Supply Chain Vulnerabilities
ASI05 Unexpected Code Execution (RCE)
ASI06 Memory & Context Poisoning
ASI07 Insecure Inter-Agent Communication
ASI08 Cascading Hallucination Attacks
ASI09 Human-Agent Trust Exploitation
ASI10 Uncontrolled Autonomous Agents

Framework Attack Profiles

Framework | Key Attack Vectors | Risk Level
OpenClaw | 4 tracked CVEs, malicious skill marketplace, queue lane bypass | Critical
CrewAI | No per-agent RBAC, raw output passing between agents | High
LangGraph | State manipulation, recursive sub-agent spawning ($38K incident) | High
AutoGen | Auto-execute code blocks, Docker sandbox escape | Critical

Quick Start

# Option 1: Python toolkit from source

# Clone the repository
git clone https://github.com/perplext/LLMrecon.git
cd LLMrecon

# Install dependencies
pip install -r requirements.txt

# Test your local models
python3 llmrecon_2025.py --models llama3:latest

# Show OWASP categories
python3 llmrecon_2025.py --owasp

# Option 2: Go binary

# Download the latest release
curl -LO https://github.com/perplext/LLMrecon/releases/latest/download/llmrecon-linux-amd64
chmod +x llmrecon-linux-amd64

# Or build from source
go build -o llmrecon ./src/main.go

# Run OWASP compliance scan
./llmrecon scan --provider openai --model gpt-4 --owasp

# Option 3: Docker

# Pull from GitHub Container Registry
docker pull ghcr.io/perplext/llmrecon:latest

# Run a scan
docker run --rm ghcr.io/perplext/llmrecon:latest scan \
  --provider openai --model gpt-4 --owasp

# Or build locally
docker build -t llmrecon .
docker run --rm llmrecon --help

Secure Your LLMs

One of the most comprehensive open-source LLM security testing frameworks available.