A comprehensive educational platform for learning AI security vulnerabilities through hands-on exploitation of a simulated AI chatbot support system. Features integrated progress tracking, progressive hints, and detailed security analysis to provide a complete learning experience.
- AI Security Awareness: Understand AI-specific vulnerabilities and attack vectors
- Social Engineering: Learn how to identify and exploit social engineering weaknesses in AI systems
- Prompt Injection: Master techniques for manipulating AI behavior through crafted inputs
- Privilege Escalation: Practice escalating privileges in AI-powered systems
- Defensive Strategies: Understand how to protect AI systems from exploitation
This platform contains intentionally vulnerable AI components for educational purposes. It should only be used in controlled environments for learning and research. See our Security Policy for details on responsible use.
New to the lab? Check out our Lab Setup Guide for step-by-step setup instructions.
For detailed deployment options, see DEPLOYMENT.md.
insecure-task-manager/
βββ frontend/ # React TypeScript application with Material UI
βββ backend/ # Express.js TypeScript API server
βββ sandbox/ # Sandboxed file system for command execution
βββ LAB_SETUP_STEPS.md # Quick setup guide for new users
βββ documentation/ # Project documentation
- Node.js 20.19.0+ and npm (for local development)
- OpenAI API key (for LLM integration)
You can use PM2 for process management and starting the application:
-
Quick start with PM2:
git clone <repository-url> cd insecure-task-manager ./scripts/start-pm2.sh
-
The script will automatically:
- Check and install PM2 if not present
- Install project dependencies
- Build both applications
- Configure environment
- Start services with PM2
-
Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:3001
-
PM2 Management:
./scripts/start-pm2.sh status # Check process status ./scripts/start-pm2.sh logs # View logs ./scripts/start-pm2.sh monit # Open monitoring dashboard ./scripts/start-pm2.sh stop # Stop all processes ./scripts/start-pm2.sh restart # Restart all processes
For detailed PM2 deployment instructions, see PM2_GUIDE.md.
-
Clone and install dependencies:
npm run install:all
-
Configure environment variables:
# Backend configuration cp backend/.env.example backend/.env # Edit backend/.env and add your OpenAI API key # Frontend configuration (optional) cp frontend/.env.example frontend/.env
-
Start development servers:
npm run dev
This will start:
- Frontend: http://localhost:3000
- Backend API: http://localhost:3001
- Realistic AI Chatbot: Simulates a legitimate support chatbot with hidden vulnerabilities
- Progressive Learning: Guided experience with hints and milestone tracking
- Security Analysis: Real-time analysis of exploitation techniques and defensive measures
- Sandboxed Environment: Safe, isolated environment for practicing attacks
- Comprehensive Logging: Detailed logging for educational review and analysis
- Multiple Deployment Options: PM2, or local development setups
./scripts/start-pm2.sh- Start both applications with PM2./scripts/start-pm2.sh stop- Stop PM2 processes./scripts/start-pm2.sh restart- Restart PM2 processes./scripts/start-pm2.sh status- Show PM2 process status./scripts/start-pm2.sh logs- View PM2 logs./scripts/start-pm2.sh monit- Open PM2 monitoring dashboard
npm run dev- Start both frontend and backend in development modenpm run build- Build both applications for productionnpm run start- Start production backend servernpm run dev:frontend- Start only frontend development servernpm run dev:backend- Start only backend development server
OPENAI_API_KEY- Your OpenAI API key (required)PORT- Backend server port (default: 3001)FRONTEND_URL- Frontend URL for CORS (default: http://localhost:3000)ENABLE_LOGGING- Enable security event logging (default: true)ENABLE_PROGRESS_TRACKING- Enable educational features (default: true)ENABLE_HINT_SYSTEM- Enable hint system (default: true)ENABLE_SECURITY_ANALYSIS- Enable security analysis (default: true)
REACT_APP_API_URL- Backend API URL (default: http://localhost:3001)
This lab implements multiple security layers for educational purposes:
- Input sanitization and validation
- Rate limiting and brute force protection
- Command execution sandboxing
- Comprehensive security event logging
- AI prompt injection prevention
- 8 Milestone Progression: From initial contact to credential discovery
- Point System: Earn 10-125 points for completing milestones
- Real-time Updates: Automatic progress tracking based on security events
- Performance Analytics: Time tracking, hint usage, and success metrics
- Progressive Assistance: Up to 3 levels of hints per milestone
- Context-Aware: Hints tailored to current progress stage
- Optional Usage: Participants choose whether to use hints
- Educational Guidance: Prevents frustration while maintaining challenge
- Exploitation Timeline: Step-by-step breakdown of techniques used
- Vulnerability Analysis: Detailed identification of security weaknesses
- Mitigation Strategies: Recommended defensive measures
- Downloadable Reports: Comprehensive JSON reports for further study
- Custom AWS Credentials: Configure realistic credentials for discovery exercises
- Organization Branding: Customize company names, emails, and documentation URLs
- Flexible Configuration: Easy setup via environment variables
- Educational Realism: Create authentic penetration testing scenarios
- Floating Help Panel: Access progress and hints via help icon (?)
- Security Analysis Panel: Real-time analysis via analytics icon
- Clean Integration: Educational features don't interfere with core challenge
Participants will learn to exploit an AI chatbot through social engineering to gain unauthorized system access and discover hidden credentials. The lab includes comprehensive educational features to enhance learning:
- First Contact (10 pts) - Interact with the support chatbot
- System Reconnaissance (20 pts) - Probe system capabilities
- Role Confusion Attempt (30 pts) - Attempt to confuse chatbot role
- Administrative Impersonation (50 pts) - Convince AI you're an admin
- Privilege Escalation (75 pts) - Gain command execution access
- File System Exploration (90 pts) - Execute system commands
- Credential Discovery (100 pts) - Find hidden AWS credentials
- Lab Completion (125 pts) - Successfully complete the challenge
- AI Security Vulnerabilities: Learn prompt injection and role confusion
- Social Engineering: Practice manipulating AI systems
- Privilege Escalation: Understand access control bypasses
- Security Analysis: Real-time feedback on exploitation techniques
- Defensive Programming: Experience multi-layered security approaches
The application requires several environment variables to be configured. Copy the example files and update them with your values:
# Root level configuration
cp .env.example .env
# Backend configuration
cp backend/.env.example backend/.env
# Frontend configuration (optional)
cp frontend/.env.example frontend/.envFrom .env.example:
OPENAI_API_KEY- Your OpenAI API key (required)OPENAI_MODEL- LLM model to use (default: gpt-3.5-turbo)SESSION_SECRET- Secure session secret (minimum 32 characters)NODE_ENV- Environment mode (development/production)ENABLE_LOGGING- Enable comprehensive logging (true/false)
The application consists of:
- Frontend: React 18+ with Material UI v5 for the user interface
- Backend: Express.js with TypeScript for the API server
- Sandbox: Isolated environment for safe command execution
- LLM Integration: OpenAI API for chatbot functionality
This is an educational platform with intentionally vulnerable components:
- Deploy only in isolated/sandboxed environments
- Never use real credentials or sensitive data
- Follow the guidelines in SECURITY.md
- Use only for authorized educational purposes
We welcome contributions! Please see CONTRIBUTING.md for guidelines on:
- Setting up the development environment
- Code style and standards
- Pull request process
- Security considerations
- Educational content guidelines
- DEPLOYMENT.md - Detailed deployment instructions
- PM2_GUIDE.md - PM2 process management guide
- EDUCATIONAL_FEATURES.md - Educational features overview
- SECURITY.md - Security policy and responsible disclosure
- CONTRIBUTING.md - Contribution guidelines
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: Report bugs and request features via GitHub Issues
- Discussions: Join community discussions via GitHub Discussions
- Security: Report security issues via our Security Policy
- OpenAI for providing the LLM API
- The security research community for inspiration and best practices
- Contributors who help improve this educational platform