An intelligent threat modeling application that uses Large Language Models (LLMs) to automatically generate security threats and their mitigation proposals for Threat Dragon models.
- AI-Powered Threat Generation: Uses state-of-the-art LLMs to analyze system components and generate comprehensive security threats
- Threat Framework Support: Supports the STRIDE threat modeling framework; the code can be adjusted for other frameworks as well
- Multi-LLM Support: Tested with OpenAI, Anthropic, Google, Novita, and xAI. Because the code uses the LiteLLM library, it should work with other providers as well.
- Threat Dragon Integration: Works seamlessly with Threat Dragon JSON models
- Smart Filtering: Automatically skips out-of-scope components
- Data Validation: Built-in Pydantic validation for threat data integrity (see the sketch after this list)
- Response Validation: Comprehensive validation of AI responses against original models
- Validation Logging: Timestamped validation logs with detailed coverage reports
- Visual Indicators: Automatically adds visual cues (red strokes) to components with threats
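To illustrate the Pydantic validation feature, the threat data models could look roughly like the sketch below. The class and field names here are assumptions for illustration only; the actual definitions live in `src/models.py`.

```python
# Illustrative sketch only -- the real models are defined in src/models.py and may differ.
from typing import List
from pydantic import BaseModel, Field

class AIThreat(BaseModel):
    """Hypothetical shape of one generated threat."""
    title: str
    type: str = Field(description="STRIDE category, e.g. 'Spoofing'")
    severity: str
    description: str
    mitigation: str

class AIThreatsForElement(BaseModel):
    """Hypothetical grouping of generated threats for one diagram element."""
    element_id: str
    threats: List[AIThreat]
```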
- Python 3.8+
- API key for your chosen LLM provider
- Clone the repository

  ```bash
  git clone <repository-url>
  cd td-ai-modeler
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Configure environment

  ```bash
  cp env.example .env
  ```

  Edit `.env` with your configuration:

  ```bash
  # Choose your LLM provider (uncomment one)
  LLM_MODEL_NAME=openai/gpt-5
  OPENAI_API_KEY=your_openai_api_key_here

  # Input files
  INPUT_THREAT_SCHEMA_JSON=owasp.threat-dragon.schema.V2.json
  INPUT_THREAT_MODEL_JSON=your-model.json
  ```
- Prepare input files
  - The Threat Dragon schema file is already in `./input/`
  - Place your threat model JSON file in `./input/`
- Run the application

  ```bash
  python src/main.py
  ```

- Check results
  - The updated model with AI-generated threats will be in `./output/`
  - Timestamped validation logs will be generated in `./output/logs/`
| Provider | Model | API Key Variable | Recommended Configuration |
|---|---|---|---|
| Anthropic | `anthropic/claude-sonnet-4-5-20250929` | `ANTHROPIC_API_KEY` | `# litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`# response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| Anthropic | `anthropic/claude-opus-4-1-20250805` | `ANTHROPIC_API_KEY` | `# litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`# response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| Novita | `novita/deepseek/deepseek-r1` | `NOVITA_API_KEY` | `# litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`# response_format = AIThreatsResponseList`<br>`max_tokens=16000` |
| Novita | `novita/qwen/qwen3-coder-480b-a35b-instruct` | `NOVITA_API_KEY` | `# litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`# response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| Novita | `novita/deepseek/deepseek-v3.1-terminus` | `NOVITA_API_KEY` | `# litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`# response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| Local Ollama | `ollama/gemma3:27b` | None | `# litellm.enable_json_schema_validation = True`<br>`# temperature = 0.1`<br>`response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| xAI | `xai/grok-4-fast-reasoning-latest` | `XAI_API_KEY` | `litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| xAI | `xai/grok-4-latest` | `XAI_API_KEY` | `litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| OpenAI | `openai/gpt-5` | `OPENAI_API_KEY` | `litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| OpenAI | `openai/gpt-5-mini` | `OPENAI_API_KEY` | `litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
| Google | `gemini/gemini-2.5-pro` | `GOOGLE_API_KEY` | `# litellm.enable_json_schema_validation = True`<br>`temperature = 0.1`<br>`# response_format = AIThreatsResponseList`<br>`max_tokens=24000` |
The recommended configuration settings in the table above include several key parameters that can be adjusted in `src/ai_client.py`:

- `litellm.enable_json_schema_validation`: Enables structured JSON schema validation for supported models. When prefixed with `#`, this parameter should be commented out (disabled) because the model does not support JSON schema validation.
- `temperature`: Controls the randomness and creativity of AI responses (0.0 = deterministic, 1.0 = very random). Lower values (0.1) provide more consistent, focused responses ideal for threat modeling.
- `response_format`: Forces the AI to return structured JSON using Pydantic models. When prefixed with `#`, this parameter should be commented out because the model does not support structured output.
- `max_tokens`: Maximum number of tokens the AI can generate in a single response. Higher values allow for more comprehensive threat descriptions but may increase processing time and costs.

Important: Parameters prefixed with `#` in the table should be commented out in your configuration, while parameters without `#` should be uncommented (active).
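As a rough sketch (not the exact code in `src/ai_client.py`), the recommended settings map onto a LiteLLM call like this; the variable names and prompt handling here are assumptions:

```python
# Sketch of how the table's recommended settings map onto a LiteLLM call.
# Variable names and file handling are illustrative, not copied from src/ai_client.py.
import os
import litellm
from models import AIThreatsResponseList  # Pydantic response model (see src/models.py)

# Enable only for providers that support JSON schema validation (e.g. OpenAI, xAI);
# keep it commented out for providers marked with '#' in the table above.
litellm.enable_json_schema_validation = True

with open("prompt.txt", encoding="utf-8") as f:
    prompt_text = f.read()

response = litellm.completion(
    model=os.environ["LLM_MODEL_NAME"],        # e.g. "openai/gpt-5"
    messages=[{"role": "user", "content": prompt_text}],
    temperature=0.1,                           # low temperature for consistent, focused output
    max_tokens=24000,                          # adjust per the provider table
    response_format=AIThreatsResponseList,     # comment out for models without structured output
)
print(response.choices[0].message.content)
```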
| Variable | Description | Example |
|---|---|---|
| `LLM_MODEL_NAME` | LLM model identifier | `openai/gpt-5` |
| `INPUT_THREAT_SCHEMA_JSON` | Threat Dragon schema filename | `owasp.threat-dragon.schema.V2.json` |
| `INPUT_THREAT_MODEL_JSON` | Input threat model filename | `my-model.json` |
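For reference, a minimal way to read these variables and resolve the input paths might look like the snippet below; it assumes `python-dotenv` is available, which may differ from how the tool actually loads its configuration.

```python
# Minimal sketch of reading the variables above; assumes python-dotenv is installed.
import os
from dotenv import load_dotenv

load_dotenv()  # loads .env from the current working directory

model_name = os.environ["LLM_MODEL_NAME"]                            # e.g. "openai/gpt-5"
schema_path = os.path.join("input", os.environ["INPUT_THREAT_SCHEMA_JSON"])
model_path = os.path.join("input", os.environ["INPUT_THREAT_MODEL_JSON"])
```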
The tool supports several advanced configuration options that can be modified in `src/ai_client.py`:

- `max_tokens`: Maximum tokens in the response (default: 24000)
- `timeout`: Request timeout in seconds (default: 14400 = 4 hours)
- `litellm.enable_json_schema_validation`: Enable structured JSON validation for supported models (OpenAI, xAI)
- `response_format`: Force a structured JSON response format using Pydantic models
- `api_base`: Override the default API endpoint for custom deployments or local models
  - Example: `api_base="https://your-custom-endpoint.com"` for custom deployments
- `litellm.drop_params`: Remove unsupported parameters (default: True)
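A hedged example of how these options could be combined for a local or custom endpoint; the surrounding code in `src/ai_client.py` may structure this differently, and the values are illustrative except where they match the defaults above.

```python
# Illustrative use of the advanced options for a local Ollama or custom deployment.
import litellm

litellm.drop_params = True  # drop parameters a given provider does not support

response = litellm.completion(
    model="ollama/gemma3:27b",
    messages=[{"role": "user", "content": "List STRIDE threats for a public REST API."}],
    max_tokens=24000,
    timeout=14400,                       # 4 hours; generous for large models on local hardware
    api_base="http://localhost:11434",   # override the default endpoint for local models
)
```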
```
td-ai-modeler/
├── src/
│   ├── main.py              # Main application entry point
│   ├── ai_client.py         # LLM integration and threat generation
│   ├── utils.py             # File operations and model updates
│   ├── models.py            # Pydantic data models
│   └── validator.py         # AI response validation
├── input/                   # Input files directory
│   ├── owasp.threat-dragon.schema.V2.json
│   └── your-model.json
├── output/                  # Generated output directory
│   └── logs/                # Validation logs
├── prompt.txt               # AI threat modeling prompt template
├── env.example              # Environment configuration template
├── requirements.txt         # Python dependencies
└── README.md                # This file
```
- Input Processing: Loads Threat Dragon schema and model files
- AI Threat Generation: Uses LLM to analyze components and generate threats
- Data Validation: Ensures all generated threats have required fields
- Response Validation: Validates AI response completeness and accuracy
- Model Update: Updates the threat model while preserving original formatting
- Visual Updates: Adds red stroke indicators to components with threats (see the sketch after this list)
- Validation Logging: Generates detailed validation reports with timestamps
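As an illustration of the visual-update step, marking a diagram cell that received threats might look like the function below. The attribute paths follow the Threat Dragon v2 JSON layout as understood here, so treat them as assumptions and check them against your own model file; the real logic lives in `src/utils.py`.

```python
# Hypothetical sketch of the visual-update step: flag a Threat Dragon diagram cell
# that received threats with a red stroke. Attribute paths are assumptions based on
# the Threat Dragon v2 JSON format.
def mark_cell_with_threats(cell: dict) -> None:
    threats = cell.get("data", {}).get("threats")
    if threats:
        attrs = cell.setdefault("attrs", {})
        attrs.setdefault("body", {})["stroke"] = "red"   # shapes use attrs.body.stroke
        cell["data"]["hasOpenThreats"] = True
```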
The tool includes comprehensive validation to ensure AI responses are complete and accurate:
- INFO: Elements in scope but missing threats (informational)
- WARNINGS: Out-of-scope elements or quality issues (non-blocking)
- ERRORS: Completely different IDs with no model overlap (blocking)
- Coverage Validation: Ensures all in-scope elements (outOfScope=false) have threats
- ID Validation: Verifies all response IDs correspond to valid model elements
- Quality Validation: Checks that threats include proper mitigation strategies (empty mitigations generate warnings)
- Data Integrity: Validates threat structure and required fields
- Console Summary: Real-time validation results with coverage statistics
- Detailed Logs: Timestamped logs in the `./output/logs/` directory
- Error Reporting: Specific details about missing elements and invalid IDs
- Coverage Metrics: Percentage of in-scope elements with generated threats
- Trust boundary boxes and curves are excluded from validation
- Missing elements are informational, not errors
- Invalid IDs (out of scope) are warnings, not errors
- Only completely different IDs are validation errors
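The rules above amount to a simple set-based classification. A hedged sketch follows; the function and variable names are assumptions, and the actual implementation lives in `src/validator.py`.

```python
# Hedged sketch of the severity rules above; names are assumptions, see src/validator.py
# for the real implementation.
from typing import Dict, List, Set

def classify_findings(model_ids: Set[str], in_scope_ids: Set[str],
                      response_ids: Set[str]) -> Dict[str, List[str]]:
    findings: Dict[str, List[str]] = {"info": [], "warning": [], "error": []}
    for element_id in sorted(in_scope_ids - response_ids):
        findings["info"].append(f"in-scope element {element_id} has no threats")
    for element_id in sorted(response_ids & (model_ids - in_scope_ids)):
        findings["warning"].append(f"out-of-scope element {element_id} received threats")
    for element_id in sorted(response_ids - model_ids):
        findings["error"].append(f"ID {element_id} does not exist in the model")
    return findings
```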
Validation runs automatically during threat generation and creates detailed logs in the ./output/logs/ directory.
- Invalid JSON: The tool automatically attempts to extract JSON from malformed responses (see the sketch after this list)
- Timeout Issues: Increase the `timeout` value in `ai_client.py` for large models
- Token Limits: Adjust `max_tokens` based on model capabilities
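One common way to recover JSON from a noisy response looks like the sketch below; this illustrates the general technique, not the tool's exact recovery code in `src/ai_client.py`.

```python
# Illustration of the "extract JSON from a noisy response" technique.
import json
import re

def extract_json(raw: str):
    """Return the first parseable JSON object or array found in an LLM response."""
    # Drop markdown code-fence lines the model may have wrapped around the payload.
    cleaned = "\n".join(
        line for line in raw.splitlines() if not line.strip().startswith("```")
    )
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        pass
    # Fall back to the outermost {...} or [...] block embedded in surrounding text.
    match = re.search(r"(\{.*\}|\[.*\])", cleaned, flags=re.DOTALL)
    if match:
        return json.loads(match.group(1))
    raise ValueError("No JSON payload found in the response")
```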
- Missing Elements: Normal for complex models - elements may be out of scope
- Empty Mitigations: Check AI response quality or adjust prompt template
- Out-of-Scope Elements: Elements that are out of scope but had threats generated for them
- Invalid IDs: Verify model structure and element IDs
- API Key Errors: Ensure correct environment variables are set
- Model Not Found: Verify model name format matches provider requirements
- Connection Issues: Check the `api_base` URL for custom endpoints
- Use `max_tokens=32000` for models with higher token limits
- Consider using faster models for initial threat generation
- Ensure sufficient hardware (GPU, CPU, RAM)
- Monitor system resources during generation
```bash
# Install development dependencies
pip install -r requirements.txt

# Run the application
python src/main.py
```

- `main.py`: Orchestrates the entire threat modeling process
- `ai_client.py`: Handles LLM communication and threat generation
- `utils.py`: File operations and model manipulation utilities
- `models.py`: Pydantic models for threat data validation
- `validator.py`: Comprehensive validation of AI responses
Edit `prompt.txt` to customize threat generation behavior:
- Add specific threat frameworks
- Modify threat categories
- Adjust output format requirements
- Add the provider configuration to `env.example`
- Update the provider table in the README
- Test with a sample threat model
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
- OWASP Threat Dragon for the excellent threat modeling framework
- LiteLLM for seamless multi-LLM support
- Pydantic for robust data validation
For more information about cybersecurity and AI projects, visit my blog at https://infosecotb.com.
Built for security professionals and threat modeling practitioners
