
Conversation

@jgadsden jgadsden commented Dec 13, 2025

Summary:
This implements the proposal of threats for a given threat model using generative AI, for the desktop version

This is a long-term initiative led by @InfosecOTB, arising from the discussion: AI-Powered Threat Modelling Extension

Description for the changelog:
AI tools extension for desktop

Declaration:

  • appropriate unit tests have been created / modified
  • functional tests created / modified for changes in functionality
  • any use of AI has been declared in this pull request

Other info:
This is a draft pull request to allow comments and further refinement
It will close: #1371

@jgadsden jgadsden added enhancement New feature or request discussion labels Dec 13, 2025
@jgadsden jgadsden mentioned this pull request Dec 13, 2025
@lreading

Late to the party as per usual - how can I help with this one? Do you want me to test on different OSes?


InfosecOTB commented Dec 13, 2025

Hello @lreading

Many thanks for helping with this project! You are certainly not late; there is still plenty of work to do.

  1. The code hasn’t been tested on macOS - if you are able, could you check whether it is working correctly?
  2. Although I’ve tested the code on Ubuntu 24.04, I encountered an issue with starting Electron which I had to fix by disabling sandboxing restrictions (a similar issue is reported here: Electron app fails to launch on Ubuntu 24.04 and derivatives due to new sandbox restrictions foundryvtt/foundryvtt#11632). I’m not sure whether this is a problem caused by the AI Tools code or a more general Threat Dragon issue, so it would be great to check this as well.
  3. The code was created and tested on Windows 11, but it still requires review, I believe.

I will add more information on how the code works to the Pull Request description soon, which I hope will help with understanding it. Any comments, especially critical ones, would be highly appreciated.

Once again, thank you for participating.

@InfosecOTB InfosecOTB force-pushed the ai-extension-final branch 2 times, most recently from 6306374 to aaf5551 on December 14, 2025 19:16

InfosecOTB commented Dec 14, 2025

How this works - summary:

  • User configures AI settings (model, temperature, response-format toggle, optional API base, log level) in AI Settings.
  • Non-secret settings are saved to ai-settings.json under the app userData folder; API key is excluded.
  • API key is stored in the OS credential store via keytar; it’s loaded by Electron main and passed to the AI Settings window (to display) and to Python (for generation).
  • On “Generate Threats & Mitigations”, Electron spawns the bundled venv Python and runs ai-tools/src/main.py.
  • Electron writes three lines to Python stdin: API key, threat model JSON, schema JSON.
  • Python loads settings from --settings-json, builds the prompt from prompt.txt, calls the LLM via LiteLLM, parses the JSON into threats keyed by element id.
  • Python injects threats into cell.data.threats, adds a red-stroke indicator, and (only if present) updates hasOpenThreats; then validates and computes stats + cost.
  • Python writes updated model + metadata to stdout using <<JSON_>> / <<METADATA_>> markers (logs to stderr).
  • Electron parses stdout, sends the updated model to the renderer (store update), and opens the results window with metadata/stats.
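The stdin/stdout framing described above can be sketched like this (the `<<JSON_>>` / `<<METADATA_>>` marker strings are quoted verbatim from this summary; the function names and exact line framing are illustrative assumptions, not the actual implementation):

```python
import json

# Marker strings as given in the PR summary (assumed to be the literal prefixes)
JSON_MARKER = "<<JSON_>>"
METADATA_MARKER = "<<METADATA_>>"


def read_inputs(stream):
    """Python side: read the three lines Electron writes to stdin."""
    api_key = stream.readline().rstrip("\n")   # line 1: API key
    model = json.loads(stream.readline())      # line 2: threat model JSON
    schema = json.loads(stream.readline())     # line 3: schema JSON
    return api_key, model, schema


def emit_results(model, metadata):
    """Python side: frame updated model + metadata for stdout (logs go to stderr)."""
    return (f"{JSON_MARKER}{json.dumps(model)}\n"
            f"{METADATA_MARKER}{json.dumps(metadata)}\n")


def parse_results(stdout_text):
    """Electron side (sketched here in Python): recover both payloads by prefix."""
    model = metadata = None
    for line in stdout_text.splitlines():
        if line.startswith(JSON_MARKER):
            model = json.loads(line[len(JSON_MARKER):])
        elif line.startswith(METADATA_MARKER):
            metadata = json.loads(line[len(METADATA_MARKER):])
    return model, metadata
```

Prefix markers keep the machine-readable payloads separable from anything else the child process might print, which is why logging is routed to stderr.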
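The injection step (threats keyed by element id, red-stroke indicator, conditional `hasOpenThreats` update) can be sketched as below. Note the field layout (`detail.diagrams[].cells[]`, `attrs.body.stroke`) is an assumption about the model schema for illustration, not taken from this PR:

```python
def inject_threats(model, threats_by_id):
    """Attach generated threats to matching diagram cells and flag them."""
    for diagram in model.get("detail", {}).get("diagrams", []):
        for cell in diagram.get("cells", []):
            new_threats = threats_by_id.get(cell.get("id"))
            if not new_threats:
                continue
            data = cell.setdefault("data", {})
            data.setdefault("threats", []).extend(new_threats)
            # red-stroke indicator that the element now has threats
            # ("attrs.body.stroke" is a hypothetical location for the style)
            cell.setdefault("attrs", {}).setdefault("body", {})["stroke"] = "red"
            # per the summary, only update the flag if the key already exists
            if "hasOpenThreats" in data:
                data["hasOpenThreats"] = True
    return model
```

Updating `hasOpenThreats` only when the key is already present avoids introducing fields that the renderer does not expect on that cell type.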

This solution is based on the code described in detail in AI-Powered Threat Modeling with OWASP Threat Dragon – Part 2: Generating Threats with Artificial Intelligence.

An AI coding assistant (Cursor IDE) was used during development. The output was reviewed and corrected as needed.

