diff --git a/.actrc b/.actrc new file mode 100644 index 00000000000..e42d9a4a034 --- /dev/null +++ b/.actrc @@ -0,0 +1,5 @@ + # Nektos act runs tests as root. Without this environment variable + # being set, CAPE exits at line 10 of web/web/settings.py, + # and no tests are run. + + --env CAPE_AS_ROOT=1 diff --git a/.github/issue_template.md b/.github/ISSUE_TEMPLATE/bug_report.md similarity index 63% rename from .github/issue_template.md rename to .github/ISSUE_TEMPLATE/bug_report.md index 257757caad5..b62cec5cde2 100644 --- a/.github/issue_template.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -1,15 +1,28 @@ -## This is opensource and you getting __free__ support so be friendly! -* Free support from doomedraven ended, no whiskey no support. For something he updated the documentation :) +--- +name: Having problem/bug/issue +about: Create a report to help us improve +title: '' +labels: '' +assignees: '' + +--- + +## About accounts on [capesandbox.com](https://capesandbox.com/) +* Issues are not the way to ask for account activation. Ping capesandbox on [Twitter](https://twitter.com/capesandbox) with your username instead. + +## This is open source and you are getting __free__ support so be friendly! # Prerequisites Please answer the following questions for yourself before submitting an issue. - [ ] I am running the latest version +- [ ] I have read the README! - [ ] I checked the documentation and found no answer - [ ] I checked to make sure that this issue has not already been filed - [ ] I'm reporting the issue to the correct repository (for multi-repository projects) -- [ ] I'm have read all configs with all optional parts +- [ ] I have read and checked all configs (with all optional parts) +- [ ] I asked [deepwiki](https://deepwiki.com/kevoreilly/CAPEv2) about my issue and found no solution # Expected Behavior @@ -34,7 +47,7 @@ Please provide detailed steps for reproducing the issue. ## Context -Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions. +Please provide any relevant information about your setup. This is important in case the issue is not reproducible except under certain conditions: operating system version, bitness, installed software versions, test sample details/hash/binary (if applicable).
| Question | Answer |------------------|-------------------- diff --git a/.github/actions/python-setup/action.yml b/.github/actions/python-setup/action.yml new file mode 100644 index 00000000000..8c950261bb6 --- /dev/null +++ b/.github/actions/python-setup/action.yml @@ -0,0 +1,32 @@ +name: 'Python setup steps that can be reused' +description: 'Install dependencies, poetry, requirements' +inputs: + python-version: + required: true + description: The python version + +runs: + using: "composite" + steps: + - name: Install dependencies + if: ${{ runner.os == 'Linux' }} + shell: bash + run: | + sudo apt update && sudo apt-get install -y --no-install-recommends libxml2-dev libxslt-dev python3-dev libgeoip-dev ssdeep libfuzzy-dev innoextract unrar upx + + - name: Install poetry + shell: bash + run: PIP_BREAK_SYSTEM_PACKAGES=1 pip install poetry poetry-plugin-export + #- name: Python Poetry Action + # uses: abatilo/actions-poetry@v3.0.1 + + - name: Set up Python ${{ inputs.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ inputs.python-version }} + cache: 'poetry' + + - name: Install requirements + shell: bash + run: | + PIP_BREAK_SYSTEM_PACKAGES=1 poetry install --no-interaction diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 00000000000..5bc323d83fe --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,58 @@ +# Copilot Instructions for CAPEv2 + +## General Architecture +- CAPEv2 is an automated malware analysis platform, based on Cuckoo Sandbox, with extensions for dynamic, static, and network analysis. +- The backend is mainly Python, using SQLAlchemy for the database and Django/DRF for the web API. +- Main components include: + - `lib/cuckoo/core/database.py`: database logic and ORM. + - `web/apiv2/views.py`: REST API endpoints (Django REST Framework). + - `lib/cuckoo/common/`: shared utilities, configuration, helpers. + - `storage/`: analysis results and temporary files. +- Typical flow: sample upload → DB registration → VM assignment → analysis → result storage → API query. + +## Conventions and Patterns +- Heavy use of SQLAlchemy 2.0 ORM, with explicit sessions and nested transactions (`begin_nested`). +- Database models (Sample, Task, Machine, etc.) are always managed via `Database` object methods. +- API endpoints always return a dict with `error`, `data`, and, if applicable, `error_value` keys. +- Validation and request argument parsing is centralized in helpers (`parse_request_arguments`, etc.). +- Integrity errors (e.g., duplicates) are handled with `try/except IntegrityError` and recovery of the existing object. +- Tags are managed as comma-separated strings and normalized before associating to models. +- Code avoids mutable global variables; configuration is accessed via `Config` objects. + +## Developer Workflows +- No Makefile or standard build scripts; dependency management is usually via `poetry` or `pip`. +- For testing, use virtual environments and run scripts manually. +- Typical backend startup is via Django (`manage.py runserver`), and analysis workers are launched separately. +- Database changes require manual migrations (see Alembic comments in `database.py`). + +## Integrations and Dependencies +- Optional integration with MongoDB and Elasticsearch, controlled by configuration (`reporting.conf`). +- The system can use different compression tools (zlib, 7zip) depending on config. +- Sample analysis may invoke external utilities (e.g., Sflock, PE parsers). 
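+- Config access (a minimal illustrative sketch, not code from this repo: the `mongodb.enabled` attribute path is an assumption based on the conventions above, so verify it against `conf/reporting.conf` and `lib/cuckoo/common/config.py`):
+  ```python
+  from lib.cuckoo.common.config import Config
+
+  # Optional integrations are switched on/off in their config section rather
+  # than via mutable globals; this sketch only reads the flag and reports it.
+  repconf = Config("reporting")
+  mongodb_on = getattr(repconf, "mongodb", None) and repconf.mongodb.enabled
+  print("MongoDB reporting enabled:", bool(mongodb_on))
+  ```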
+ +## Key Pattern Examples +- IntegrityError handling example: + ```python + try: + with self.session.begin_nested(): + self.session.add(sample) + except IntegrityError: + sample = self.session.scalar(select(Sample).where(Sample.md5 == file_md5)) + ``` +- API response example: + ```python + return Response({"error": False, "data": result}) + ``` +- Tag assignment example: + ```python + tags = ",".join(set(_tags)) + ``` + +## Key Files +- `lib/cuckoo/core/database.py`: database logic, sample/task registration, machine management. +- `web/apiv2/views.py`: REST endpoints, validation, high-level business logic. +- `lib/cuckoo/common/`: utilities, helpers, configuration. + +--- + +If you introduce new endpoints, helpers, or models, follow the validation, error handling, and standard response patterns. See the files above for implementation examples. diff --git a/.github/workflows/antitemplaters.yml_disabled b/.github/workflows/antitemplaters.yml_disabled new file mode 100644 index 00000000000..867e879c381 --- /dev/null +++ b/.github/workflows/antitemplaters.yml_disabled @@ -0,0 +1,17 @@ +on: + issues: + types: [opened, edited] + +jobs: + auto_close_issues: + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v3 + + - name: Automatically close issues that don't follow the issue template + uses: lucasbento/auto-close-issues@v1.0.2 + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + issue-close-message: "@${issue.user.login}: hello! :wave:\n\nThis issue is being automatically closed because it does not follow the issue template.\n\n This is an open source project!\n\t So please appreciate the time we sacrifice from other things we could enjoy, instead of asking the same questions over and over." # optional property + closed-issues-label: "🙁 Not following issue template" diff --git a/.github/workflows/auto_answer.yml b/.github/workflows/auto_answer.yml new file mode 100644 index 00000000000..650358392be --- /dev/null +++ b/.github/workflows/auto_answer.yml @@ -0,0 +1,36 @@ +name: Auto Answer Bot (using uv run) + +on: + issues: + types: [opened] + +jobs: + answer: + runs-on: ubuntu-latest + steps: + - name: Checkout repository code + uses: actions/checkout@v4 + + - name: Set up Python with caching + uses: actions/setup-python@v5 + with: + python-version: '3.10' + cache: 'pip' + + - name: Install uv + uses: astral-sh/setup-uv@v6 + with: + enable-cache: true + + - name: Run the answer bot with uv run + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }} + ISSUE_NUMBER: ${{ github.event.issue.number }} + REPO_NAME: ${{ github.repository }} + run: | + cd KnowledgeBaseBot && \ + uv run \ + --with-requirements ../requirements.txt \ + --with-requirements requirements.txt \ + python auto_answer_bot.py diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml deleted file mode 100644 index 00dfb319f83..00000000000 --- a/.github/workflows/codeql-analysis.yml +++ /dev/null @@ -1,71 +0,0 @@ -# For most projects, this workflow file will not need changing; you simply need -# to commit it to your repository. -# -# You may wish to alter this file to override the set of languages analyzed, -# or to provide custom queries or build logic.
-name: "CodeQL" - -on: - push: - branches: [master] - pull_request: - # The branches below must be a subset of the branches above - branches: [master] - schedule: - - cron: '0 14 * * 2' - -jobs: - analyze: - name: Analyze - runs-on: ubuntu-latest - - strategy: - fail-fast: false - matrix: - # Override automatic language detection by changing the below list - # Supported options are ['csharp', 'cpp', 'go', 'java', 'javascript', 'python'] - language: ['python'] - # Learn more... - # https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#overriding-automatic-language-detection - - steps: - - name: Checkout repository - uses: actions/checkout@v2 - with: - # We must fetch at least the immediate parents so that if this is - # a pull request then we can checkout the head. - fetch-depth: 2 - - # If this run was triggered by a pull request event, then checkout - # the head of the pull request instead of the merge commit. - - run: git checkout HEAD - if: ${{ github.event_name == 'pull_request' }} - - # Initializes the CodeQL tools for scanning. - - name: Initialize CodeQL - uses: github/codeql-action/init@v1 - with: - languages: ${{ matrix.language }} - # If you wish to specify custom queries, you can do so here or in a config file. - # By default, queries listed here will override any specified in a config file. - # Prefix the list here with "+" to use these queries and those in the config file. - # queries: ./path/to/local/query, your-org/your-repo/queries@main - - # Autobuild attempts to build any compiled languages (C/C++, C#, or Java). - # If this step fails, then you should remove it and run the build manually (see below) - - name: Autobuild - uses: github/codeql-action/autobuild@v1 - - # ℹ️ Command-line programs to run using the OS shell. - # 📚 https://git.io/JvXDl - - # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines - # and modify them (or add more) to build your code if your project - # uses a compiled language - - #- run: | - # make bootstrap - # make release - - - name: Perform CodeQL Analysis - uses: github/codeql-action/analyze@v1 diff --git a/.github/workflows/export-requirements.yml b/.github/workflows/export-requirements.yml new file mode 100644 index 00000000000..c54114b5747 --- /dev/null +++ b/.github/workflows/export-requirements.yml @@ -0,0 +1,46 @@ +name: Update requirements.txt file + +on: + push: + branches: [ master, staging ] + paths: + - "pyproject.toml" + - "poetry.lock" + +jobs: + update: + if: ${{ !github.event.act }} # skip during local actions testing + runs-on: ubuntu-latest + timeout-minutes: 5 + strategy: + matrix: + python-version: ["3.10"] + + steps: + - name: Check out repository code + uses: actions/checkout@v4 + + - name: Install poetry + run: pip install poetry poetry-plugin-export --user + + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + check-latest: true + python-version: ${{ matrix.python-version }} + cache: 'poetry' + + - name: Export requirements.txt + run: poetry export --format requirements.txt --output requirements.txt + + - name: Commit changes if any + # Skip this step if being run by nektos/act + if: ${{ !env.ACT }} + run: | + git config user.name "GitHub Actions" + git config user.email "action@github.com" + if output=$(git status --porcelain) && [ ! 
-z "$output" ]; then + git pull -f + git commit -m "ci: Update requirements.txt" -a + git push + fi diff --git a/.github/workflows/pip-audit.yml b/.github/workflows/pip-audit.yml new file mode 100644 index 00000000000..b47e7aaae6d --- /dev/null +++ b/.github/workflows/pip-audit.yml @@ -0,0 +1,21 @@ +name: PIP audit + +on: + schedule: + - cron: '0 8 * * 1' + +jobs: + test: + runs-on: ubuntu-latest + timeout-minutes: 20 + strategy: + matrix: + python-version: ["3.10"] + + steps: + - name: Check out repository code + uses: actions/checkout@v4 + + - uses: pypa/gh-action-pip-audit@v1.0.8 + with: + inputs: requirements.txt diff --git a/.github/workflows/python-package-windows.yml b/.github/workflows/python-package-windows.yml new file mode 100644 index 00000000000..5b8daa86df2 --- /dev/null +++ b/.github/workflows/python-package-windows.yml @@ -0,0 +1,43 @@ +name: Python tests on windows + +env: + COLUMNS: 120 + +on: + push: + branches: [ master, staging ] + pull_request: + branches: [ master, staging ] + +jobs: + test: + runs-on: windows-latest + timeout-minutes: 20 + strategy: + matrix: + python-version: ["3.10"] + + steps: + - name: Check out repository code + uses: actions/checkout@v4 + + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + # Use x86 python because of https://github.com/kevoreilly/CAPEv2/issues/168 + with: + python-version: ${{ matrix.python-version }} + cache: 'pip' + architecture: 'x86' + + - name: Install dependencies + run: pip install --upgrade pytest requests + + - name: Run analyzer unit tests + run: | + cd analyzer/windows + pytest -v . + + - name: Run agent unit tests + run: | + cd agent + pytest -v . diff --git a/.github/workflows/python-package.yml b/.github/workflows/python-package.yml index bc175b34890..1d7cfd310fc 100644 --- a/.github/workflows/python-package.yml +++ b/.github/workflows/python-package.yml @@ -1,35 +1,83 @@ -# This workflow will install Python dependencies, run tests and lint with a variety of Python versions -# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions - name: Python package +env: + COLUMNS: 120 + on: push: - branches: [ master ] + branches: [ master, staging ] pull_request: - branches: [ master ] + branches: [ master, staging ] jobs: - build: + test: + runs-on: ubuntu-22.04 # ubuntu-latest + timeout-minutes: 20 + strategy: + matrix: + python-version: ["3.10"] + steps: + - name: Check out repository code + uses: actions/checkout@v4 + with: + submodules: recursive + + - name: Checkout test files repo + uses: actions/checkout@v4 + with: + repository: CAPESandbox/CAPE-TestFiles + path: tests/data/ + + - uses: ./.github/actions/python-setup/ + with: + python-version: ${{ matrix.python-version }} - runs-on: ubuntu-20.04 + - name: Setup 7zz binary + run: | + mkdir -p data/ + wget -q https://github.com/CAPESandbox/community/raw/master/data/7zz -O data/7zz + chmod +x data/7zz + + - name: Install pyattck + run: | + poetry run pip install git+https://github.com/CAPESandbox/pyattck maco + + - name: Run Ruff + run: poetry run ruff check . --output-format=github . 
+ + - name: Run unit tests + run: poetry run python -m pytest --import-mode=append + + # see the mypy configuration in pyproject.toml + - name: Run mypy + run: poetry run mypy + + format: + runs-on: ubuntu-latest + timeout-minutes: 20 strategy: matrix: - python-version: [3.8] + python-version: ["3.10"] + if: ${{ github.ref == 'refs/heads/master' }} steps: - - uses: actions/checkout@v2 - - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v2 - with: - python-version: ${{ matrix.python-version }} - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install flake8 - - name: Lint with flake8 - run: | - # stop the build if there are Python syntax errors or undefined names - flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics - # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide - flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics + - name: Check out repository code + uses: actions/checkout@v4 + + - name: Set up python + uses: ./.github/actions/python-setup + with: + python-version: ${{ matrix.python-version }} + + - name: Commit changes if any + # Skip this step if being run by nektos/act + if: ${{ !env.ACT }} + run: | + git config user.name "GitHub Actions" + git config user.email "action@github.com" + if output=$(git status --porcelain) && [ ! -z "$output" ]; then + git pull + git add . + git commit -m "style: Automatic code formatting" -a + git push + fi diff --git a/.github/workflows/todo.yml_disabled b/.github/workflows/todo.yml_disabled new file mode 100644 index 00000000000..d5da8d8f1cc --- /dev/null +++ b/.github/workflows/todo.yml_disabled @@ -0,0 +1,14 @@ +name: "ToDo to issue" + +on: + push: + branches: [ master ] + +jobs: + build: + runs-on: "ubuntu-latest" + steps: + - uses: "actions/checkout@master" + - name: "TODO to Issue" + uses: "alstr/todo-to-issue-action@v4.6.8" + id: "todo" diff --git a/.github/workflows/yara-audit.yml b/.github/workflows/yara-audit.yml new file mode 100644 index 00000000000..033a4e0edc1 --- /dev/null +++ b/.github/workflows/yara-audit.yml @@ -0,0 +1,36 @@ +name: YARA tests + +on: + schedule: + - cron: '0 8 * * 1' + +jobs: + test: + runs-on: ubuntu-latest + timeout-minutes: 20 + strategy: + matrix: + python-version: ["3.10"] + + steps: + - name: Check out repository code + uses: actions/checkout@v4 + + - name: Checkout test files repo + uses: actions/checkout@v4 + with: + repository: CAPESandbox/CAPE-TestFiles + path: tests/data/ + + - uses: ./.github/actions/python-setup/ + with: + python-version: ${{ matrix.python-version }} + + - name: Install dependencies + run: | + sudo bash ./installer/cape2.sh yara + cd $GITHUB_WORKSPACE + bash -c "poetry run ./extra/yara_installer.sh" + + - name: Run unit tests + run: poetry run pytest tests/test_yara.py -s --import-mode=append diff --git a/.gitignore b/.gitignore index c1c65554dbc..6c7756c82d9 100644 --- a/.gitignore +++ b/.gitignore @@ -1,9 +1,26 @@ - +.idea *.DS_Store *.log -*.db +*.db *.sqlite +*.pyc +*.yarc __pycache__/ -env/ +.cache/ +.local/ .env/ -*.pyc +.vscode +env/ +tests/test_objects/ +log/ +storage/ +conf/*.conf + +web/web/secret_key.py +tests/test_bson.bson.compressed +*~ + +installer/cape-config.sh +installer/kvm-config.sh + +docs/book/src/_build \ No newline at end of file diff --git a/.gitmodules b/.gitmodules new file mode 100644 index 00000000000..fe7c79f691d --- /dev/null +++ b/.gitmodules @@ -0,0 +1,3 @@ +[submodule "tests/data"] + path = tests/data + url = 
https://github.com/CAPESandbox/CAPE-TestFiles.git diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 00000000000..8cc8a6fb07d --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,34 @@ +repos: + - repo: https://github.com/pre-commit/pre-commit-hooks + rev: v4.3.0 + hooks: + - id: check-json + exclude: tests/zip_compound/files/misconfiguration.json + - id: check-yaml + - id: end-of-file-fixer + - id: trailing-whitespace + - id: fix-byte-order-marker + - id: mixed-line-ending + - id: debug-statements + + # - repo: https://github.com/csachs/pyproject-flake8 + # rev: v0.0.1a4 + # hooks: + # - id: pyproject-flake8 + + - repo: https://github.com/astral-sh/ruff-pre-commit + rev: v0.3.0 + hooks: + - id: ruff + args: [ --fix ] + + - repo: https://github.com/psf/black + rev: 22.3.0 + hooks: + - id: black + + - repo: https://github.com/pycqa/isort + rev: 5.12.0 + hooks: + - id: isort + name: isort (python) diff --git a/.readthedocs.yaml b/.readthedocs.yaml new file mode 100644 index 00000000000..52d6b21e9b6 --- /dev/null +++ b/.readthedocs.yaml @@ -0,0 +1,28 @@ +# Read the Docs configuration file for Sphinx projects +# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details + +# Required +version: 2 + +# Set the OS, Python version and other tools you might need +build: + os: ubuntu-22.04 + tools: + python: "3.12" + +# Build documentation in the "docs/" directory with Sphinx +sphinx: + configuration: docs/book/src/conf.py + # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs + # builder: "dirhtml" + # Fail on all warnings to avoid broken references + # fail_on_warning: true + +# Optionally build your docs in additional formats such as PDF and ePub +# formats: +# - pdf +# - epub + +python: + install: + - requirements: docs/requirements.txt diff --git a/.yara-ci.yml b/.yara-ci.yml index ea65c036334..0f45c6ce69e 100644 --- a/.yara-ci.yml +++ b/.yara-ci.yml @@ -1,8 +1,13 @@ files: accept: - "data/yara/**.yar" - - "analyzer/windows/data/yara/*.yar" + - "analyzer/windows/data/yara/**.yar" false_positives: ignore: - - rule: "CobaltStrikeBeacon" + - rule: "CobaltStrikeBeacon" + - rule: "Emotet" + - rule: "NSIS" + - rule: "UPX" + - rule: "Syscall" + - rule: "FormhookB" diff --git a/CITATION.cff b/CITATION.cff new file mode 100644 index 00000000000..0ff467afb39 --- /dev/null +++ b/CITATION.cff @@ -0,0 +1,21 @@ +cff-version: 1.2.0 +title: "CAPE: Malware Configuration And Payload Extraction" +message: "If you use this software, please cite it as below." +type: software +authors: + - given-names: Kevin + family-names: O'Reilly + - given-names: Andriy + family-names: Brukhovetskyy +url: "https://github.com/kevoreilly/CAPEv2" +version: 2 +abstract: > + CAPEv2: Malware Configuration And Payload Extraction is a + malware sandbox. +keywords: + - malware + - sandbox + - cape + - capev2 + - analysis +license: GPL-3.0 diff --git a/KnowledgeBaseBot/all_texts.json b/KnowledgeBaseBot/all_texts.json new file mode 100644 index 00000000000..fcabc301726 --- /dev/null +++ b/KnowledgeBaseBot/all_texts.json @@ -0,0 +1,3014 @@ +[ + "CAPE Sandbox Book\n\nCAPE Sandbox is an Open Source software for automating analysis of suspicious files. 
To do so it makes use of custom components that monitor the behavior of the malicious processes while running in an isolated environment.\n\nThis guide will explain how to set up CAPE, use it and customize it.\n\nHaving troubles?\n\nIf you're having troubles you might want to check out the FAQ as it may already have the answers to your questions.\n\nfaq/index\n\nOtherwise you can ask the developers and/or other CAPE users, see Join the discussion .\n\nContents\n\nintroduction/index installation/index usage/index customization/index integrations/index development/index finalremarks/index", + "Customization\n\nThis chapter explains how to customize CAPE. CAPE is written in a modular architecture built to be as customizable as it can, to fit the needs of all users.\n\nauxiliary machinery packages processing signatures reporting", + "Reporting Modules\n\nAfter the raw analysis results have been processed and abstracted by the processing modules and the global container is generated (ref. processing), it is passed over by CAPE to all the reporting modules available, which will make use of it and will make it accessible and consumable in different formats.\n\nGetting Started\n\nAll reporting modules must be placed inside the directory modules/reporting/.\n\nEvery module must also have a dedicated section in the file conf/reporting.conf: for example, if you create a module module/reporting/foobar.py you will have to append the following section to conf/reporting.conf:\n\n[foobar]\nenabled = on\n\nEvery additional option you add to your section will be available to your reporting module in the self.options dictionary.\n\nFollowing is an example of a working JSON reporting module:\n\nimport os\nimport json\nimport codecs\n\nfrom lib.cuckoo.common.abstracts import Report\nfrom lib.cuckoo.common.exceptions import CuckooReportError", + "class JsonDump(Report):\n \"\"\"Saves analysis results in JSON format.\"\"\"\n\n def run(self, results):\n \"\"\"Writes report.\n @param results: Cuckoo results dict.\n @raise CuckooReportError: if fails to write report.\n \"\"\"\n try:\n report = codecs.open(os.path.join(self.reports_path, \"report.json\"), \"w\", \"utf-8\")\n json.dump(results, report, sort_keys=False, indent=4)\n report.close()\n except (UnicodeError, TypeError, IOError) as e:\n raise CuckooReportError(\"Failed to generate JSON report: %s\" % e)\n\nThis code is very simple, it just receives the global container produced by the processing modules, converts it into JSON, and writes it to a file.\n\nThere are a few requirements for writing a valid reporting module:\n\nDeclare your class inheriting from Report.\n\nHave a run() function performing the main operations.\n\nTry to catch most exceptions and raise CuckooReportError to notify the issue.", + "Try to catch most exceptions and raise CuckooReportError to notify the issue.\n\nAll reporting modules have access to some attributes:\n\nself.analysis_path: path to the folder containing the raw analysis results (e.g. storage/analyses/1/)\n\nself.reports_path: path to the folder where the reports should be written (e.g. storage/analyses/1/reports/)\n\nself.conf_path: path to the analysis.conf file of the current analysis (e.g. storage/analyses/1/analysis.conf)\n\nself.options: a dictionary containing all the options specified in the report's configuration section in conf/reporting.conf.", + "Auxiliary Modules\n\nAuxiliary modules define some procedures that need to be executed in parallel to every single analysis process. 
All auxiliary modules should be placed under the modules/auxiliary/ directory.\n\nThe skeleton of a module would look something like this:\n\nfrom lib.cuckoo.common.abstracts import Auxiliary\n\nclass MyAuxiliary(Auxiliary):\n\n def start(self):\n # Do something.\n\n def stop(self):\n # Stop the execution.\n\nThe function start() will be executed before starting the analysis machine and effectively executing the submitted malicious file, while the stop() function will be launched at the very end of the analysis process, before launching the processing and reporting procedures.\n\nFor example, an auxiliary module provided by default in CAPE is called sniffer.py and takes care of executing tcpdump in order to dump the generated network traffic.\n\nAuxiliary Module Configuration", + "Auxiliary Module Configuration\n\nAuxiliary modules can be \"configured\" before being started. This allows data to be added at runtime, whilst also allowing for the configuration to be stored separately from the CAPE python code.\n\nPrivate Auxiliary Module Configuration\n\nPrivate auxiliary module configuration is stored outside the auxiliary class, in a module under the same name as the auxiliary module. This is useful when managing configuration of auxiliary modules separately if desired, for privacy reasons or otherwise.\n\nHere is a configuration module example that installs some software prior to the auxiliary module starting:\n\n# data/auxiliary/example.py\nimport subprocess\nimport logging\nfrom pathlib import Path\n\nlog = logging.getLogger(__name__)\nBIN_PATH = Path.cwd() / \"bin\"", + "log = logging.getLogger(__name__)\nBIN_PATH = Path.cwd() / \"bin\"\n\n\ndef configure(aux_instance):\n # here \"example\" refers to modules.auxiliary.example.Example\n if not aux_instance.enabled:\n return\n msi = aux_instance.options.get(\"example_msi\")\n if not msi:\n return\n msi_path = BIN_PATH / msi\n if not msi_path.exists():\n log.warning(\"missing MSI %s\", msi_path)\n return\n cmd = [\"msiexec\", \"/i\", msi_path, \"/quiet\"]\n try:\n log.info(\"Executing msi package...\")\n subprocess.check_output(cmd)\n log.info(\"Installation succesful\")\n except subprocess.CalledProcessError as exc:\n log.error(\"Installation failed: %s\", exc)\n return", + "Analysis Packages\n\nAs explained in ../usage/packages, analysis packages are structured Python classes that describe how CAPE's analyzer component should conduct the analysis procedure for a given file inside the guest environment.\n\nAs you already know, you can create your packages and add them along with the default ones. Designing new packages is very easy and requires just a minimal understanding of programming and the Python language.\n\nGetting started\n\nAs an example we'll take a look at the default package for analyzing generic Windows executables (located at analyzer/windows/packages/exe.py):\n\nfrom lib.common.abstracts import Package\n\nclass Exe(Package):\n \"\"\"EXE analysis package.\"\"\"\n\n def start(self, path):\n args = self.options.get(\"arguments\")\n return self.execute(path, args)\n\nIt seems easy, thanks to all methods inherited by the Package object. 
Let's have a look at some of the main methods an analysis package inherits from the Package object:", + "from lib.api.process import Process\nfrom lib.common.exceptions import CuckooPackageError\n\nclass Package:\n def start(self):\n raise NotImplementedError\n\n def check(self):\n return True\n\n def execute(self, path, args):\n dll = self.options.get(\"dll\")\n free = self.options.get(\"free\")\n suspended = True\n if free:\n suspended = False\n\n p = Process()\n if not p.execute(path=path, args=args, suspended=suspended):\n raise CuckooPackageError(\"Unable to execute the initial process, \"\n \"analysis aborted.\")\n\n if not free and suspended:\n p.inject(dll)\n p.resume()\n p.close()\n return p.pid\n\n def finish(self):\n if self.options.get(\"procmemdump\"):\n for pid in self.pids:\n p = Process(pid=pid)\n p.dump_memory()\n return True\n\nstart()", + "start()\n\nIn this function you have to place all the initialization operations you want to run. This may include running the malware process, launching additional applications, taking memory snapshots, and more.\n\ncheck()\n\nThis function is executed by CAPE every second while the malware is running. You can use this function to perform any kind of recurrent operation.\n\nFor example, if in your analysis you are looking for just one specific indicator to be created (e.g. a file) you could place your condition in this function and if it returns False, the analysis will terminate straight away.\n\nThink of it as \"should the analysis continue or not?\".\n\nFor example:\n\ndef check(self):\n if os.path.exists(\"C:\\\\config.bin\"):\n return False\n else:\n return True\n\nThis check() function will cause CAPE to immediately terminate the analysis whenever C:\\config.bin is created.\n\nexecute()\n\nWraps the malware execution and deals with DLL injection.\n\nfinish()", + "execute()\n\nWraps the malware execution and deals with DLL injection.\n\nfinish()\n\nThis function is simply called by CAPE before terminating the analysis and powering off the machine. By default, this function contains an optional feature to dump the process memory of all the monitored processes.\n\nOptions\n\nEvery package has automatically access to a dictionary containing all user-specified options (see ../usage/submit).\n\nSuch options are made available in the attribute self.options. For example, let's assume that the user specified the following string at submission:\n\nfoo=1,bar=2\n\nThe analysis package selected will have access to these values:\n\nfrom lib.common.abstracts import Package\n\nclass Example(Package):\n\n def start(self, path):\n foo = self.options[\"foo\"]\n bar = self.options[\"bar\"]\n\n def check():\n return True\n\n def finish():\n return True\n\nThese options can be used for anything you might need to configure inside your package.", + "These options can be used for anything you might need to configure inside your package.\n\nPackage Configuration\n\nAnalysis packages can be \"configured\" before being started. Package configuration comes in two forms:\n\nPublic configuration\n\nPrivate configuration\n\nPublic configuration is stored within the analysis package class itself. Private configuration is stored externally, as data added to CAPE at runtime, separate from the CAPE Python code.\n\nPublic Package Configuration\n\nPublic package configuration is stored directly in the analysis package itself. 
This form of configuration is useful when configuring the host execution environment before the analysis is started.\n\nFor example, here is a alternative PDF package with lowered security settings:\n\nfrom lib.common.abstracts import Package\nfrom lib.common.exceptions import CuckooPackageError\nfrom lib.common.registry import *\n\n\nclass PDFLS(Package):\n \"\"\"PDF analysis package, with lowered security settings.\"\"\"", + "class PDFLS(Package):\n \"\"\"PDF analysis package, with lowered security settings.\"\"\"\n\n PATHS = [\n (\"ProgramFiles\", \"Adobe\", \"Acrobat DC\", \"Acrobat\", \"Acrobat.exe\"),\n ]\n\n def __init__(self, options=None, config=None):\n \"\"\"@param options: options dict.\"\"\"\n if options is None:\n options = {}\n self.config = config\n self.options = options\n self.options[\"pdf\"] = \"1\"\n\n def configure(self, target):\n rootkey, subkey = \"HKEY_CURRENT_USER\", r\"SOFTWARE\\Adobe\\Adobe Acrobat\\DC\"\n set_regkey(rootkey, fr\"{subkey}\\Privileged\", \"bProtectedMmode\", REG_DWORD, 0)\n set_regkey(rootkey, fr\"{subkey}\\JSPrefs\", \"bEnableJS\", REG_DWORD, 1)\n set_regkey(rootkey, fr\"{subkey}\\JSPrefs\", \"bEnableGlobalSecurity\", REG_DWORD, 0)\n\n def start(self, path):\n reader = self.get_path_glob(\"Acrobat.exe\")\n return self.execute(reader, f'\"{path}\"', path)\n\nPrivate Package Configuration", + "Private Package Configuration\n\nPrivate package configuration is stored outside the analysis package class, in a module under the same name as the analysis package. This is useful when managing configuration of package capabilities separately is desired, for privacy reasons or otherwise.\n\nFor example, here is a private package configuration for exe analysis that disables ASLR for the target being analyzed:\n\n# data/packages/exe.py\nimport lief\n\ndef configure(package, target):\n # here \"package\" refers to modules.packages.exe.Exe\n if package.options.get(\"disable-aslr\"):\n pe_binary = lief.parse(target)\n old_flags = pe_binary.optional.header.dll_characteristics\n # unset DYNAMIC_BASE\n new_flags = (old_flags & ~lief.PE.OptionalHeader.DLL_CHARACTERISTICS.DYNAMIC_BASE)\n pe_binary.optional_header.dll_characteristics = new_flags\n pe_binary.write(target)\n\nProcess API", + "Process API\n\nThe Process class provides access to different process-related features and functions. You can import it into your analysis packages with:\n\nfrom lib.api.process import Process\n\nYou then initialize an instance with:\n\np = Process()\n\nIn case you want to open an existing process instead of creating a new one, you can specify multiple arguments:\n\npid: PID of the process you want to operate on.\n\nh_process: handle of a process you want to operate on.\n\nthread_id: thread ID of a process you want to operate on.\n\nh_thread: handle of the thread of a process you want to operate on.\n\nThis class implements several methods that you can use in your scripts.\n\nMethods\n\nProcess.open()\n\nOpens an handle to a running process. Returns True or False in case of success or failure of the operation.\n\nExample Usage:\n\np = Process(pid=1234)\np.open()\nhandle = p.h_process\n\nProcess.exit_code()", + "Example Usage:\n\np = Process(pid=1234)\np.open()\nhandle = p.h_process\n\nProcess.exit_code()\n\nReturns the exit code of the opened process. 
If it wasn't already done before, exit_code() will perform a call to open() to acquire a handle to the process.\n\nExample Usage:\n\np = Process(pid=1234)\ncode = p.exit_code()\n\nProcess.is_alive()\n\nCalls exit_code() and verifies whether the returned code is STILL_ACTIVE, meaning that the given process is still running. Returns True or False.\n\nExample Usage:\n\np = Process(pid=1234)\nif p.is_alive():\n print(\"Still running!\")\n\nProcess.get_parent_pid()\n\nReturns the PID of the parent process of the opened process. If it wasn't already done before, get_parent_pid() will perform a call to open() to acquire a handle to the process.\n\nExample Usage:\n\np = Process(pid=1234)\nppid = p.get_parent_pid()\n\nProcess.execute(path [, args=None[, suspended=False]])\n\nExecutes the file at the specified path. Returns True or False in case of success or failure of the operation.", + "Example Usage:\n\np = Process()\np.execute(path=\"C:\\\\WINDOWS\\\\system32\\\\calc.exe\", args=\"Something\", suspended=True)\n\nProcess.resume()\n\nResumes the opened process from a suspended state. Returns True or False in case of success or failure of the operation.\n\nExample Usage:\n\np = Process()\np.execute(path=\"C:\\\\WINDOWS\\\\system32\\\\calc.exe\", args=\"Something\", suspended=True)\np.resume()\n\nProcess.terminate()\n\nTerminates the opened process. Returns True or False in case of success or failure of the operation.\n\nExample Usage:\n\np = Process(pid=1234)\nif p.terminate():\n print(\"Process terminated!\")\nelse:\n print(\"Could not terminate the process!\")\n\nProcess.inject([dll[, apc=False]])\n\nInjects a DLL (by default \"dll/capemon.dll\") into the opened process. Returns True or False in case of success or failure of the operation.\n\nExample Usage:\n\np = Process()\np.execute(path=\"C:\\\\WINDOWS\\\\system32\\\\calc.exe\", args=\"Something\", suspended=True)\np.inject()\np.resume()\n\nProcess.dump_memory()", + "Process.dump_memory()\n\nTakes a snapshot of the given process' memory space. Returns True or False in case of success or failure of the operation.\n\nExample Usage:\n\np = Process(pid=1234)\np.dump_memory()", + "Machinery Modules\n\nMachinery modules define how CAPE should interact with your virtualization software (or potentially even with physical disk imaging solutions). Since we decided to not enforce any particular vendor, from release 0.4 you can use your preferred solution and, in case it's not supported by default, write a custom Python module that defines how to make CAPE use it.\n\nEvery machinery module should be located inside modules/machinery/.\n\nA basic machinery module would look like this:\n\nfrom lib.cuckoo.common.abstracts import Machinery\nfrom lib.cuckoo.common.exceptions import CuckooMachineError\n\nclass MyMachinery(Machinery):\n def start(self, label):\n try:\n revert(label)\n start(label)\n except SomethingBadHappens as e:\n raise CuckooMachineError(\"OPS!\")\n\n def stop(self, label):\n try:\n stop(label)\n except SomethingBadHappens as e:\n raise CuckooMachineError(\"OPS!\")", + "The only requirements for Cuckoo are that:\n\nThe class inherits from Machinery.\n\nYou implement start() and stop() functions.\n\nYou raise CuckooMachineError when something fails.\n\nAs you understand, the machinery module is a core part of a CAPE setup, therefore make sure to spend enough time debugging your code and make it solid and resistant to any unexpected error.\n\nConfiguration\n\nEvery machinery module should come with a dedicated configuration file located in conf/<machinery module name>.conf.
For example, for modules/machinery/kvm.py we have a conf/kvm.conf.\n\nThe configuration file should follow the default structure:\n\n[kvm]\n# Specify a comma-separated list of available machines to be used. For each\n# specified ID you have to define a dedicated section containing the details\n# on the respective machine. (E.g. cape1,cape2,cape3)\nmachines = cape1\n\n[cape1]\n# Specify the label name of the current machine as specified in your\n# libvirt configuration.\nlabel = cape1", + "# Specify the operating system platform used by current machine\n# [windows/darwin/linux].\nplatform = windows\n\n# Specify the IP address of the current machine. Make sure that the IP address\n# is valid and that the host machine is able to reach it. If not, the analysis\n# will fail.\nip = 192.168.122.105\n\nThe main section is called [] with a machines field containing a comma-separated list of machines IDs.\n\nFor each machine, you should specify a label, a platform, and its ip.\n\nThese fields are required by CAPE to use the already embedded initialize() function that generates the list of available machines.\n\nIf you plan to change the configuration structure you should override the initialize() function (inside your module, no need to modify CAPE's core code). You can find its original code in the Machinery abstract inside lib/cuckoo/common/abstracts.py.\n\nLibVirt", + "LibVirt\n\nStarting with Cuckoo 0.5 developing new machinery modules based on LibVirt is easy. Inside lib/cuckoo/common/abstracts.py you can find LibVirtMachinery that already provides all the functionality for a LibVirt module. Just inherit this base class and specify your connection string, as in the example below:\n\nfrom lib.cuckoo.common.abstracts import LibVirtMachinery\n\nclass MyMachinery(LibVirtMachinery):\n # Set connection string.\n dsn = \"my:///connection\"\n\nThis works for all the virtualization technologies supported by LibVirt. Just remember to check if your LibVirt package (if you are using one, for example from your Linux distribution) is compiled with the support for the technology you need.\n\nYou can check it with the following command:\n\n$ virsh -V\nVirsh command line tool of libvirt 0.9.13\nSee web site at http://libvirt.org/", + "$ virsh -V\nVirsh command line tool of libvirt 0.9.13\nSee web site at http://libvirt.org/\n\nCompiled with support for:\n Hypervisors: QEmu/KVM LXC UML Xen OpenVZ VMWare Test\n Networking: Remote Daemon Network Bridging Interface Nwfilter VirtualPort\n Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM\n Miscellaneous: Nodedev AppArmor Secrets Debug Readline Modular\n\nIf you don't find your virtualization technology in the list of Hypervisors, you will need to recompile LibVirt with the specific support for the missing one.", + "Processing Modules\n\nCAPE's processing modules are Python scripts that let you define custom ways to analyze the raw results generated by the sandbox and append some information to a global container that will be later used by the signatures and the reporting modules.\n\nYou can create as many modules as you want, as long as they follow a predefined structure that we will present in this chapter.\n\nGlobal Container\n\nAfter an analysis is completed, CAPE will invoke all the processing modules available in the modules/processing/ directory. 
Any additional module you decide to create must be placed inside that directory.\n\nEvery module should also have a dedicated section in the file conf/processing.conf: for example, if you create a module module/processing/foobar.py you will have to append the following section to conf/processing.conf:\n\n[foobar]\nenabled = on", + "[foobar]\nenabled = on\n\nEvery module will then be initialized and executed and the data returned will be appended in a data structure that we'll call global container.\n\nThis container is simply just a big Python dictionary that includes the abstracted results produced by all the modules classified by their identification key.\n\nCAPE already provides a default set of modules that will generate a standard global container. It's important for the existing reporting modules (HTML report etc.) that these default modules are not modified, otherwise, the resulting global container structure would change and the reporting modules wouldn't be able to recognize it and extract the information used to build the final reports.\n\nGetting started\n\nTo make them available to CAPE, all processing modules must be placed inside the folder at modules/processing/.\n\nA basic processing module could look like this:\n\nfrom lib.cuckoo.common.abstracts import Processing\n\nclass MyModule(Processing):", + "from lib.cuckoo.common.abstracts import Processing\n\nclass MyModule(Processing):\n\n def run(self):\n self.key = \"key\"\n data = do_something()\n return data\n\nYou can also specify an order value, which allows you to run the available processing modules in an ordered sequence. By default, all modules are set with an order value of 1 and are executed in alphabetical order.\n\nIf you want to change this value your module would look like this:\n\nfrom lib.cuckoo.common.abstracts import Processing\n\nclass MyModule(Processing):\n order = 2\n\n def run(self):\n self.key = \"key\"\n data = do_something()\n return data\n\nYou can also manually disable a processing module by setting the enabled attribute to False:\n\nfrom lib.cuckoo.common.abstracts import Processing\n\nclass MyModule(Processing):\n enabled = False\n\n def run(self):\n self.key = \"key\"\n data = do_something()\n return data", + "def run(self):\n self.key = \"key\"\n data = do_something()\n return data\n\nThe processing modules are provided with some attributes that can be used to access the raw results for the given analysis:\n\nself.analysis_path: path to the folder containing the results (e.g. storage/analysis/1)\n\nself.log_path: path to the analysis.log file.\n\nself.conf_path: path to the analysis.conf file.\n\nself.file_path: path to the analyzed file.\n\nself.dropped_path: path to the folder containing the dropped files.\n\nself.logs_path: path to the folder containing the raw behavioral logs.\n\nself.shots_path: path to the folder containing the screenshots.\n\nself.pcap_path: path to the network pcap dump.\n\nself.memory_path: path to the full memory dump, if created.\n\nself.pmemory_path: path to the process memory dumps, if created.\n\nWith these attributes, you should be able to easily access all the raw results stored by CAPE and perform your analytic operations on them.", + "As a last note, a good practice is to use the CuckooProcessingError exception whenever the module encounters an issue you want to report to CAPE. 
This can be done by importing the class like this:\n\nfrom lib.cuckoo.common.exceptions import CuckooProcessingError\nfrom lib.cuckoo.common.abstracts import Processing\n\nclass MyModule(Processing):\n\n def run(self):\n self.key = \"key\"\n\n try:\n data = do_something()\n except SomethingFailed:\n raise CuckooProcessingError(\"Failed\")\n\n return data", + "Signatures\n\nBy taking advantage of CAPE's customizability, you can write signatures which will then by run against analysis results. These signatures can be used to identify a predefined pattern that represents a malicious behavior or an indicator that you're interested in.\n\nThese signatures are very useful to give context to the analyses. They simplify the interpretation of the results and assist with automatically identifying malware samples of interest.\n\nYou can find signatures created by the CAPE administrators and other CAPE users on the Community repository.\n\nGetting Started\n\nCreating a signature is a very simple process but requires a decent understanding of Python programming.\n\nFirst things first, all signatures must be located inside the modules/signatures/ directory.\n\nThe following is a basic example signature:\n\nfrom lib.cuckoo.common.abstracts import Signature", + "The following is a basic example signature:\n\nfrom lib.cuckoo.common.abstracts import Signature\n\nclass CreatesExe(Signature):\n name = \"creates_exe\"\n description = \"Creates a Windows executable on the filesystem\"\n severity = 2\n categories = [\"generic\"]\n authors = [\"CAPE Developers\"]\n minimum = \"0.5\"\n\n def run(self):\n return self.check_file(pattern=\".*\\\\.exe$\",\n regex=True)\n\nAs you can see the structure of the signature is really simple and consistent with the other CAPE modules. Note that on line 12 a helper function is used. These helper functions assist with signature-writing and we highly recommend becoming familiar with what helper functions are available to you (found in the [Signature class](https://github.com/kevoreilly/CAPEv2/blob/master/lib/cuckoo/common/abstracts.py)) before you start writing signatures. Some documentation for Helpers can be found below.", + "In the example above, the helper function is used to walk through all of the accessed files in the summary and check if there are any files ending with \"*.exe*\". If there is at least one, then the helper function will return True; otherwise it will return False. 
When a signature returns True, that means that the signature matched.\n\nIf the signature matches, a new entry in the \"signatures\" section will be added to the global container self.results as follows:\n\n\"signatures\": [\n {\n \"severity\": 2,\n \"description\": \"Creates a Windows executable on the filesystem\",\n \"alert\": false,\n \"references\": [],\n \"data\": [\n {\n \"file_name\": \"C:\\\\d.exe\"\n }\n ],\n \"name\": \"creates_exe\"\n }\n]\n\nWe could rewrite the exact same signature by accessing the global container directly, rather than through the helper function `check_file`:\n\nfrom lib.cuckoo.common.abstracts import Signature", + "from lib.cuckoo.common.abstracts import Signature\n\nclass CreatesExe(Signature):\n name = \"creates_exe\"\n description = \"Creates a Windows executable on the filesystem\"\n severity = 2\n categories = [\"generic\"]\n authors = [\"Cuckoo Developers\"]\n minimum = \"0.5\"\n\n def run(self):\n for file_path in self.results[\"behavior\"][\"summary\"][\"files\"]:\n if file_path.endswith(\".exe\"):\n return True\n\n return False\n\nIf you access the global container directly, you must know its structure, which can be observed in the JSON report of your analyses.\n\nCreating your new signature\n\nTo help you better understand the process of creating a signature, we are going to create a very simple one together and walk through the steps and the available options. For this purpose, we're going to create a signature that checks whether the malware analyzed opens a mutex named \"i_am_a_malware\".", + "The first thing to do is to import the dependencies, create a skeleton, and define some initial attributes. These are the attributes that you can currently set:\n\nname: an identifier for the signature.\n\ndescription: a brief description of what the signature represents.\n\nseverity: a number identifying the severity of the events matched (generally between 1 and 3).\n\nconfidence: a number between 1 and 100 that represents how confident the signature writer is that this signature will not be raised as a false positive.\n\nweight: a number used for calculating the malscore of a submission. This attribute acts as a multiplier of the product of severity and confidence.\n\ncategories: a list of categories that describe the type of event being matched (for example \"*banker*\", \"*injection*\" or \"*anti-vm*\"). 
For a list of all categories, see Categories.\n\nfamilies: a list of malware family names, in case the signature specifically matches a known one.", + "families: a list of malware family names, in case the signature specifically matches a known one.\n\nauthors: a list of people who authored the signature.\n\nreferences: a list of references (URLs) to give context to the signature.\n\nenabled: if set to False the signature will be skipped.\n\nalert: if set to True can be used to specify that the signature should be reported (perhaps by a dedicated reporting module).\n\nminimum: the minimum required version of CAPE to successfully run this signature.\n\nmaximum: the maximum required version of CAPE to successfully run this signature.\n\nttps: a list of MITRE ATT&CK IDs applicable to this signature.\n\nmbcs: a list of MITRE Malware Behavior Catalog IDs applicable to this signature.\n\nIn our example, we will create the following skeleton:\n\nfrom lib.cuckoo.common.abstracts import Signature", + "from lib.cuckoo.common.abstracts import Signature\n\nclass BadBadMalware(Signature): # We initialize the class by inheriting Signature.\n name = \"badbadmalware\" # We define the name of the signature\n description = \"Creates a mutex known to be associated with Win32.BadBadMalware\" # We provide a description\n severity = 3 # We set the severity to maximum\n categories = [\"trojan\"] # We add a category\n families = [\"badbadmalware\"] # We add the name of our fictional malware family\n authors = [\"Me\"] # We specify the author\n minimum = \"0.5\" # We specify that in order to run the signature, the user will need at least CAPE 0.5\n\ndef run(self):\n return\n\nThis is a perfectly valid signature. It doesn't do anything yet, so now we need to define the conditions for the signature to be matched.\n\nSince we want to match a particular mutex name, we use the helper function `check_mutex`:\n\nfrom lib.cuckoo.common.abstracts import Signature", + "from lib.cuckoo.common.abstracts import Signature\n\nclass BadBadMalware(Signature):\n name = \"badbadmalware\"\n description = \"Creates a mutex known to be associated with Win32.BadBadMalware\"\n severity = 3\n categories = [\"trojan\"]\n families = [\"badbadmalware\"]\n authors = [\"Me\"]\n minimum = \"0.5\"\n\ndef run(self):\n return self.check_mutex(\"i_am_a_malware\")\n\nIt's as simple as that! Now our signature will return True if the analyzed malware was observed opening the specified mutex.\n\nIf you want to be more explicit and directly access the global container, you could translate the previous signature in the following way:\n\nfrom lib.cuckoo.common.abstracts import Signature\n\nclass BadBadMalware(Signature):\n name = \"badbadmalware\"\n description = \"Creates a mutex known to be associated with Win32.BadBadMalware\"\n severity = 3\n categories = [\"trojan\"]\n families = [\"badbadmalware\"]\n authors = [\"Me\"]\n minimum = \"0.5\"", + "def run(self):\n for mutex in self.results[\"behavior\"][\"summary\"][\"mutexes\"]:\n if mutex == \"i_am_a_malware\":\n return True\n\n return False\n\nEvented Signatures\n\nSince version 1.0, CAPE provides a way to write more high-performance signatures. In the past, every signature was required to loop through the whole collection of API calls collected during the analysis. This was necessarily causing some performance issues when such a collection would be large.\n\nCAPE now supports both the old model as well as what we call \"evented signatures\". 
The main difference is that with this new format, all the signatures will be executed in parallel and a callback function called on_call() will be invoked for each signature within one single loop through the collection of API calls.\n\nAn example signature using this technique is the following:\n\nfrom lib.cuckoo.common.abstracts import Signature", + "from lib.cuckoo.common.abstracts import Signature\n\nclass SystemMetrics(Signature):\n name = \"generic_metrics\"\n description = \"Uses GetSystemMetrics\"\n severity = 2\n categories = [\"generic\"]\n authors = [\"CAPE Developers\"]\n minimum = \"1.0\"\n\n # Evented signatures need to implement the \"on_call\" method\n evented = True\n\n # Evented signatures can specify filters that reduce the amount of\n # API calls that are streamed in. One can filter Process name, API\n # name/identifier and category. These should be sets for faster lookup.\n filter_processnames = set()\n filter_apinames = set([\"GetSystemMetrics\"])\n filter_categories = set()\n\n # This is a signature template. It should be used as a skeleton for\n # creating custom signatures, therefore is disabled by default.\n # The on_call function is used in \"evented\" signatures.\n # These use a more efficient way of processing logged API calls.\n enabled = False", + "def stop(self):\n # In the stop method one can implement any cleanup code and\n # decide one last time if this signature matches or not.\n # Return True in case it matches.\n return False\n\n # This method will be called for every logged API call by the loop\n # in the RunSignatures plugin. The return value determines the \"state\"\n # of this signature. True means the signature matched and False means\n # it can't match anymore. Both of which stop streaming in API calls.\n # Returning None keeps the signature active and will continue.\n def on_call(self, call, process):\n # This check would in reality not be needed as we already make use\n # of filter_apinames above.\n if call[\"api\"] == \"GetSystemMetrics\":\n # Signature matched, return True.\n return True\n\n # continue\n return None", + "# continue\n return None\n\nThe inline comments are already self-explanatory. You can find many more examples of both evented and traditional signatures in our community repository.\n\nMatches\n\nStarting from version 1.2, signatures can log exactly what triggered the signature. This allows users to better understand why this signature is present in the log, and to be able to better focus malware analysis.\n\nTwo helpers have been included to specify matching data.\n\nSignature.add_match(process, type, match)\n\nAdds a match to the signature. Can be called several times for the same signature.\n\nExample Usage, with a single element:\n\nself.add_match(None, \"url\", \"http://malicious_url_detected.com\")\n\nExample Usage, with a more complex signature, needing several API calls to be triggered:\n\nself.signs = []\nself.signs.append(first_api_call)\nself.signs.append(second_api_call)\nself.add_match(process, 'api', self.signs)\n\nSignature.has_matches()", + "Signature.has_matches()\n\nChecks whether the current signature has any matching data registered. Returns True in case it does, otherwise returns False.\n\nThis can be used to easily add several matches for the same signature. If you want to do so, make sure that all the api calls are scanned by making sure that on_call never returns True. 
This can be used to easily add several matches for the same signature. If you want to do so, make sure that all the API calls are scanned, by making sure that on_call never returns True. Then, use on_complete with has_matches so that the signature is triggered if any match was previously added.

Example Usage, from the network_tor signature:

def on_call(self, call, process):
    if self.check_argument_call(call,
                                pattern="Tor Win32 Service",
                                api="CreateServiceA",
                                category="services"):
        self.add_match(process, "api", call)

def on_complete(self):
    return self.has_matches()

Helpers

As anticipated, from version 0.5 the Signature base class also provides some helper methods that simplify the creation of signatures and avoid the need to access the global container directly (at least most of the time).

Following is a list of available methods.

Signature.check_file(pattern[, regex=False])

Checks whether the malware opened or created a file matching the specified pattern. Returns True in case it did, otherwise returns False.

Example Usage:

self.check_file(pattern=".*\.exe$", regex=True)

Signature.check_key(pattern[, regex=False])

Checks whether the malware opened or created a registry key matching the specified pattern. Returns True in case it did, otherwise returns False.

Example Usage:

self.check_key(pattern=".*CurrentVersion\\Run$", regex=True)

Signature.check_mutex(pattern[, regex=False])

Checks whether the malware opened or created a mutex matching the specified pattern. Returns True in case it did, otherwise returns False.

Example Usage:

self.check_mutex("mutex_name")

Signature.check_api(pattern[, process=None[, regex=False]])

Checks whether a Windows API function was invoked. Returns True in case it was, otherwise returns False.

Example Usage:

self.check_api(pattern="URLDownloadToFileW", process="AcroRd32.exe")

Signature.check_argument(pattern[, name=None[, api=None[, category=None[, process=None[, regex=False]]]]])

Checks whether the malware invoked a function with a specific argument value. Returns True in case it did, otherwise returns False.

Example Usage:

self.check_argument(pattern=".*CAPE.*", category="filesystem", regex=True)

Signature.check_ip(pattern[, regex=False])

Checks whether the malware contacted the specified IP address. Returns True in case it did, otherwise returns False.

Example Usage:

self.check_ip("123.123.123.123")

Signature.check_domain(pattern[, regex=False])

Checks whether the malware contacted the specified domain. Returns True in case it did, otherwise returns False.

Example Usage:

self.check_domain(pattern=".*capesandbox.com$", regex=True)

Signature.check_url(pattern[, regex=False])

Checks whether the malware performed an HTTP request to the specified URL. Returns True in case it did, otherwise returns False.

Example Usage:

self.check_url(pattern="^.+\/load\.php\?file=[0-9a-zA-Z]+$", regex=True)
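These helpers can of course be combined in a single run() method. A minimal sketch, assuming a fictional sample that both creates a mutex and contacts a known domain (all names and patterns below are illustrative):

from lib.cuckoo.common.abstracts import Signature

class BadBadMalwareNetwork(Signature):
    name = "badbadmalware_network"
    description = "BadBadMalware mutex combined with contact to its known domain"
    severity = 3
    categories = ["trojan"]
    authors = ["Me"]
    minimum = "0.5"

    def run(self):
        # Both conditions must hold for the signature to match.
        if self.check_mutex("i_am_a_malware") and self.check_domain(pattern=r".*badbadmalware\.example$", regex=True):
            return True
        return False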
Categories

You can put signatures into categories to facilitate grouping or sorting. You can create your own category if you wish, but it is easier for other users if you associate a signature with a category that already exists. Here is a list of all available categories:

`account`: Adds or manipulates an administrative user account.
`anti-analysis`: Constructed to conceal or obfuscate itself to prevent analysis.
`anti-av`: Attempts to conceal itself from detection by antivirus.
`anti-debug`: Attempts to detect if it is being debugged.
`anti-emulation`: Detects the presence of an emulator.
`anti-sandbox`: Attempts to detect if it is in a sandbox.
`anti-vm`: Attempts to detect if it is being run in a virtualized environment.
`antivirus`: Antivirus hit. File is infected.
`banker`: Designed to gain access to confidential information stored or processed through online banking.
`bootkit`: Manipulates machine configurations that would affect the boot of the machine.
`bot`: Appears to be a bot or exhibits bot-like behaviour.
`browser`: Manipulates browser settings in a suspicious way.
`bypass`: Attempts to bypass operating system security controls (firewall, AMSI, AppLocker, UAC, etc.).
`c2`: Communicates with a server controlled by a malicious actor.
`clickfraud`: Manipulates browser settings to allow for insecure clicking.
`command`: A suspicious command was observed.
`credential_access`: Uses techniques to access credentials.
`credential_dumping`: Uses techniques to dump credentials.
`cryptomining`: Facilitates mining of cryptocurrency.
`discovery`: Uses techniques to discover information about the system, the user, or the environment.
`dns`: Uses suspicious DNS queries.
`dotnet`: .NET code is used in a suspicious manner.
`downloader`: Trojan that downloads and installs files.
`dropper`: Trojan that drops additional malware on an affected system.
`encryption`: Encryption algorithms are used for obfuscating data.
`evasion`: Techniques are used to avoid detection.
`execution`: Uses techniques to execute harmful code or create executables that could run harmful code.
`exploit`: Exploits a known software vulnerability or security flaw.
`exploit_kit`: Programs designed to crack or break computer and network security measures.
`generic`: Basic operating system objects are used in suspicious ways.
`infostealer`: Collects and disseminates information such as login details, usernames, passwords, etc.
`injection`: Input is not properly validated and gets processed by an interpreter as part of a command or query.
`keylogger`: Monitoring software detected.
`lateral`: Techniques used to move through the environment and maintain access.
`loader`: Downloads and executes additional payloads on compromised machines.
`locker`: Prevents access to system data and files.
`macro`: A set of commands that automates software to perform a certain action, found in Office macros.
`malware`: The file uses techniques associated with malicious software.
`martians`: A command shell or script process was created by an unexpected parent process.
`masquerading`: The name or location of an object is manipulated to evade defenses and observation.
`network`: Suspicious network traffic was observed.
`office`: Makes API calls not consistent with expected/standard behaviour.
`packer`: Compresses, encrypts, and/or modifies a malicious file's format.
`persistence`: Technique used to maintain presence in the system(s) across interruptions that could cut off access.
`phishing`: Techniques were observed that attempted to obtain information from the user.
`ransomware`: Designed to block access to a system until a sum of money is paid.
`rat`: Designed to provide the capability of covert surveillance and/or unauthorized access to a target.
`rootkit`: Designed to provide continued privileged access to a system while actively hiding its presence.
`static`: A suspicious characteristic was discovered during static analysis.
`stealth`: Leverages/modifies internal processes and settings to conceal itself.
`trojan`: Presents itself as legitimate in an attempt to infiltrate a system.
`virus`: Malicious software program.

Troubleshooting

No signatures

Whenever you submit a sample for analysis, you should be able to inspect the identified signatures once it finishes. If you see the "No signatures" message, you might need to download or update them. Example from the web interface:

image

If no signatures are shown for a given report, use the utils/community.py tool to download them:

$ sudo -u cape poetry run python3 utils/community.py -waf

If the execution of the script does not end successfully, make sure you solve the underlying problem. For example, the following error:

Installing REPORTING
File "/opt/CAPEv2/modules/reporting/__init__.py" installed
File "/opt/CAPEv2/modules/reporting/elasticsearchdb.py" installed
Traceback (most recent call last):
  File "/opt/CAPEv2/utils/community.py", line 257, in <module>
    main()
  File "/opt/CAPEv2/utils/community.py", line 252, in main
    install(enabled, args.force, args.rewrite, args.file, args.token)
  File "/opt/CAPEv2/utils/community.py", line 180, in install
    open(filepath, "wb").write(t.extractfile(member).read())
PermissionError: [Errno 13] Permission denied: '/opt/CAPEv2/modules/reporting/elasticsearchdb.py'

happened because elasticsearchdb.py did not belong to cape:cape but to root:root.
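In a case like this, ownership can be restored with something along these lines (the path is taken from the error above; run it with sufficient privileges):

$ sudo chown cape:cape /opt/CAPEv2/modules/reporting/elasticsearchdb.py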
After chowning it to cape:cape, the script finished successfully. You should now see something similar to this in the report page:

image

Errors/warnings in the logs

If you ever face errors or warnings in the logs related to the signatures module (like "Signature spawns_dev_util crashing after update"), chances are high that you need to update the signatures you are working with. To do so, just run the community utility like so:

$ sudo -u cape poetry run python3 community.py -waf -cr

Development

This chapter explains how to write CAPE's code and how to contribute.

development_notes code_style current_module_improvement

Development examples

Curtain

from modules.processing.curtain import deobfuscate
blob = """here"""
print(deobfuscate(blob))

Suricata name detection

import os
import sys

CUCKOO_ROOT = os.path.join(os.path.abspath(os.path.dirname(__file__)), "..")
sys.path.append(CUCKOO_ROOT)

from lib.cuckoo.common.suricata_detection import get_suricata_family

# Signature example: "ET MALWARE Sharik/Smoke CnC Beacon 11"
signature_string = "ET MALWARE Sharik/Smoke CnC Beacon 11"
print(get_suricata_family(signature_string))

Development Notes

Git branches

CAPE Sandbox source code is available in our official git repository.

Up until version 1.0, we used to coordinate all ongoing development in a dedicated "development" branch, and pull requests were exclusively merged into that branch. Since version 1.1 we moved development to the traditional "master" branch and we make use of GitHub's tags and release system to reference development milestones in time.

Release Versioning

CAPE releases are named using three numbers separated by dots, such as 1.2.3, where the first number is the release, the second number is the major version, and the third number is the bugfix version. The testing stage from git ends with "-beta" and the development stage with "-dev".

Warning

If you are using a "beta" or "dev" stage, please consider that it's not meant to be an official release, therefore we don't guarantee its functioning and we don't generally provide support. If you think you encountered a bug there, make sure that the nature of the problem is not related to a misconfiguration on your side, and collect all the details to be reported to our developers. Make sure to specify which exact version you are using, if possible with your current git commit id.

Ticketing system

To submit bug reports or feature requests, please use GitHub's Issue tracking system.

Contribute

To submit your patch, just create a Pull Request from your GitHub fork. If you don't know how to create a Pull Request, take a look at GitHub help.

Coding Style

To contribute code to the project, you must diligently follow the style rules described in this chapter. Having clean and structured code is very important for our development lifecycle, and non-compliant code will most likely be rejected.

Essentially, CAPE's code style is based on PEP 8 -- Style Guide for Python Code and PEP 257 -- Docstring Conventions.

Formatting

Copyright header

All source code files must start with the following copyright header:

# Copyright (C) 2010-2015 X.
# This file is part of CAPE Sandbox - https://capesandbox.com
# See the file 'docs/LICENSE' for copying permission.

Indentation

The code must use 4-space indentation. Since Python enforces indentation, make sure to configure your editor properly or your code might malfunction.

Maximum Line Length

Limit all lines to a maximum of 132 characters.

Blank Lines

Separate the class definition and the top-level function with one blank line.
Method definitions inside a class are separated by a single blank line:

class MyClass:
    """Doing something."""

    def __init__(self):
        """Initialize"""
        pass

    def do_it(self, what):
        """Do it.
        @param what: do what.
        """
        pass

Use blank lines in functions, sparingly, to isolate logical sections. Import blocks are separated by a single blank line, and import blocks are separated from classes by one blank line.

Imports

Imports must be on separate lines. If you're importing multiple objects from a package, use a single line:

from lib import a, b, c

NOT:

from lib import a
from lib import b
from lib import c

Always specify explicitly the objects to import:

from lib import a, b, c

NOT:

from lib import *

Strings

Strings must be delimited by double quotes (").

Printing and Logging

We discourage the use of print(): if you need to log an event, please use Python's logging, which is already initialized by CAPE.

In your module add:

import logging
log = logging.getLogger(__name__)

And use the log handle; for more details refer to the Python documentation.

In case you need to print a string to standard output, use the print() function:

print("foo")

NOT the statement:

print "foo"

Checking for keys in data structures

When checking for a key in a data structure, use the "in" clause, for example:

if "bar" in foo:
    do_something(foo["bar"])

Exceptions

Custom exceptions must be defined in the lib/cuckoo/common/exceptions.py file, or in the local module if the exception should not be global.

The following is the current CAPE exceptions chain:

.-- CuckooCriticalError
|   |-- CuckooStartupError
|   |-- CuckooDatabaseError
|   |-- CuckooMachineError
|   `-- CuckooDependencyError
|-- CuckooOperationalError
|   |-- CuckooAnalysisError
|   |-- CuckooProcessingError
|   `-- CuckooReportError
`-- CuckooGuestError

Beware that the use of CuckooCriticalError and its child exceptions will cause CAPE to terminate.

Naming

Custom exception names must start with "Cuckoo" and end with "Error" if they represent an unexpected malfunction.

Exception handling

When catching an exception and accessing its handle, use "as e":

try:
    foo()
except Exception as e:
    bar()

NOT:

try:
    foo()
except Exception, something:
    bar()

It's good practice to use "e" instead of "e.message".

Documentation

All code must be documented in docstring format, see PEP 257 -- Docstring Conventions. Additional comments may be added in logical blocks to make the code easier to understand.

Automated testing

We believe in automated testing to provide high-quality code and avoid dumb bugs. When possible, all code must be committed with proper unit tests. Particular attention must be paid when fixing bugs: it's good practice to write unit tests that reproduce the bug. All unit tests and fixtures are placed in the tests folder in the CAPE root. We adopted pytest as the unit testing framework.
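As an illustration only, a test following these conventions could look like the toy sketch below (the file name, helper and assertion are hypothetical; pytest discovers any test_*.py module placed under tests/):

# tests/test_example.py -- hypothetical file used only to illustrate the layout
def split_tags(tags_string):
    """Toy helper defined inline for this illustration."""
    return [t.strip() for t in tags_string.split(",") if t.strip()]

def test_split_tags_strips_whitespace():
    assert split_tags(" x64 , x86 ") == ["x64", "x86"]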
Github actions

Automated tests run as github actions; see the .github directory.

You may wish to run github actions locally. A tool that may help is Nektos act. One of the installation options for act is as a plugin for the github CLI, and the actions are then triggered by gh act.

As input for act it's often helpful to create a simulated github event and save it as an input file.

Example:

{
    "act": true,
    "repository": {
        "default_branch": "master"
    }
}

So, to run the actions that are normally triggered by a push event:

gh act -s GITHUB_TOKEN="$(gh auth token)" --eventpath /tmp/github-event.json

and to run the actions that are scheduled:

gh act schedule -s GITHUB_TOKEN="$(gh auth token)" --eventpath /tmp/github-event.json

We created a file .actrc containing --env CAPE_AS_ROOT=1 because act runs the tests as root, and otherwise the tests would exit saying you cannot run CAPE as root.

Poetry and pre-commit hooks

After cloning the git repository, the first commands that you should run are:

poetry install
poetry run pre-commit install

This will install the pre-commit hooks, ensuring that all files conform to black and isort.
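If you want to check the whole tree at once rather than only the files touched by a commit, pre-commit can also be invoked manually, for example:

poetry run pre-commit run --all-files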
FAQ

Frequently Asked Questions:

analyze_urls esxi_reqs troubles_upgrade troubles_problem

General Questions

Can I analyze URLs with CAPE?

Yes, you can. However, modern browsers have a lot of problems.

What do I need to use CAPE with VMware ESXi?

To run with VMware vSphere Hypervisor (or ESXi), CAPE leverages libvirt. Libvirt currently uses the VMware API to take control of virtual machines, but these APIs are only available in the licensed version. In the free VMware vSphere edition these APIs are read-only, so you are unable to use CAPE with it. For the minimum license needed, please have a look at the VMware website.

Troubleshooting

After an upgrade CAPE stops working

You probably upgraded it in the wrong way. It's not good practice to simply overwrite the files, due to CAPE's complexity and quick evolution.

Please follow the upgrade steps described in ../installation/upgrade.

CAPE stumbles and produces some error I don't understand

CAPE is a young and still evolving project, and it's possible that you encounter some problems while running it, but before you rush into sending emails to everyone, make sure you read what follows.

CAPE is not meant to be a point-and-click tool: it's designed to be a highly customizable and configurable solution for somewhat experienced users and malware analysts.

It requires you to have a decent understanding of your operating systems, Python, and the concepts behind virtualization and sandboxing. We try to make it as easy to use as possible, but you have to keep in mind that it's not a technology meant to be accessible to just anyone.

That being said, if a problem occurs you have to make sure that you did everything you could before asking for time and effort from our developers and users. We just can't help everyone; we have limited time and it has to be dedicated to the development and fixing of actual bugs.

We have extensive documentation, read it carefully. You can't just skip parts of it.

We have a mailing list archive, search through it for previous threads where your same problem could have already been addressed and solved.

We have a Community platform for asking questions, use it.

We have a lot of users producing content on the Internet, Google it.

Spend some of your own time trying to fix the issues before asking for ours, you might even get to learn and understand CAPE better.

Long story short: use the existing resources, put some effort into it and don't abuse people.

If you still can't figure out your problem, you can ask for help on our online communities (see ../finalremarks/index). Make sure when you ask for help to:

Use a clear and explicit title for your emails: "I have a problem", "Help me" or "CAPE error" are NOT good titles.

Explain in detail what you're experiencing. Try to reproduce your issue several times and write down all the steps required to achieve that.

Use no-paste services and link your logs, configuration files and details on your setup.

If possible, provide a copy of the analysis that generated the problem.

Check and restore current snapshot with KVM

If something goes wrong with a virtual machine, it's best practice to check the current snapshot status. You can do that with the following:

$ virsh snapshot-current "<Name of VM>"

If you got a long XML as output, your current snapshot is configured and you can skip the rest of this chapter; if instead you got an error like the following, your current snapshot is broken:

$ virsh snapshot-current "<Name of VM>"
error: domain '<Name of VM>' has no current snapshot

To fix this and create a current snapshot, first list all of the machine's snapshots:

$ virsh snapshot-list "<Name of VM>"
 Name                 Creation Time             State
 ------------------------------------------------------------
 1339506531           2012-06-12 15:08:51 +0200 running

Choose one snapshot name and set it as current:

$ virsh snapshot-current "<Name of VM>" --snapshotname 1339506531
Snapshot 1339506531 set as current

Now the virtual machine state is fixed.

Check and restore current snapshot with VirtualBox

If something goes wrong with a virtual machine, it's best practice to check the virtual machine status and the current snapshot. First of all, check the virtual machine status with the following:

$ VBoxManage showvminfo "<Name of VM>" | grep State
State:           powered off (since 2012-06-27T22:03:57.000000000)

If the state is "powered off" you can go ahead with the next check; if the state is "aborted" or something else, you have to restore it to "powered off" first:

$ VBoxManage controlvm "<Name of VM>" poweroff

With the following, check the current snapshot state:

$ VBoxManage snapshot "<Name of VM>" list --details
   Name: s1 (UUID: 90828a77-72f4-4a5e-b9d3-bb1fdd4cef5f)
      Name: s2 (UUID: 97838e37-9ca4-4194-a041-5e9a40d6c205) *

If you have a snapshot marked with a star "*", your snapshot is ready; in any case you should restore the current snapshot:

$ VBoxManage snapshot "<Name of VM>" restorecurrent

Unable to bind result server error

At CAPE startup, if you get an error message like this one:

2014-01-07 18:42:12,686 [root] CRITICAL: CuckooCriticalError: Unable to bind result server on 192.168.56.1:2042: [Errno 99] Cannot assign requested address

It means that CAPE is unable to start the result server on the IP address written in cuckoo.conf (or in the machinery conf if you are using the resultserver_ip option inside). This usually happens when you start CAPE without bringing up the virtual interface associated with the result server IP address. You can bring it up manually; how to do that varies from one virtualization software to another, but if you don't know how, a good trick is to manually start and stop an analysis virtual machine, as this will bring the virtual networking up.
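For illustration only, and depending on your setup (the interface and network names below are examples, not fixed values), bringing the interface up manually can look like this:

# VirtualBox host-only interface
$ VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1

# KVM/libvirt virtual network
$ virsh net-start default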
Introduction

This is an introductory chapter to CAPE Sandbox. It explains some basic malware analysis concepts, what CAPE is, and how it can fit into malware analysis.

sandboxing what license

Sandboxing

As defined by Wikipedia, "in computer security, a sandbox is a security mechanism for separating running programs. It is often used to execute untested code, or untrusted programs from unverified third-parties, suppliers, untrusted users and untrusted websites."

This concept applies to malware analysis sandboxing too: our goal is to run an unknown and untrusted application or file inside an isolated environment and get information about what it does.

Malware sandboxing is a practical application of the dynamic analysis approach: instead of statically analyzing the binary file, it gets executed and monitored in real time.

This approach obviously has pros and cons, but it's a valuable technique to obtain additional details on the malware, such as its network behavior. Therefore it's good practice to perform both static and dynamic analysis while inspecting a malware sample, to gain a deeper understanding of it.

Simple as it is, CAPE is a tool that allows you to perform sandboxed malware analysis.

Using a Sandbox

Before starting to install, configure and use CAPE, you should take some time to think about what you want to achieve with it and how.

Some questions you should ask yourself:

What kind of files do I want to analyze?

What volume of analyses do I want to be able to handle?

Which platform do I want to use to run my analysis?

What kind of information do I want about the file?

The creation of the isolated environment (the virtual machine) is probably the most critical and important part of a sandbox deployment: it should be done carefully and with proper planning.

Before getting your hands on the virtualization product of your choice, you should already have a design plan that defines:

Which operating system, language, and patching level to use.

Which software to install and which versions (particularly important when analyzing exploits).

Consider that automated malware analysis is not deterministic and its success might depend on countless factors: you are trying to make malware run in a virtualized system as it would on a native one, which can be tricky to achieve and may not always succeed. Your goal should be both to create a system able to handle all the requirements you need and to make it as realistic as possible.

For example, you could consider leaving some intentional traces of normal usage, such as browsing history, cookies, documents, images, etc. If a malware sample is designed to operate on, manipulate or steal such files, you'll be able to notice it.

Virtualized operating systems usually carry a lot of traces with them, which makes them very easily detectable. Even if you shouldn't overestimate this problem, you might want to take care of it and try to hide as many virtualization traces as possible.
There is a lot of literature on the Internet regarding virtualization detection techniques and countermeasures.

Once you have finished designing and preparing the prototype of the system you want, you can proceed with creating and deploying it. You will always be able to change things or slightly fix them later, but remember that good planning at the beginning always means fewer troubles in the long run.

License

The Cuckoo/CAPE Sandbox license is shipped with Cuckoo/CAPE and contained in the "LICENSE" file under the base project directory.

Disclaimer

Cuckoo/CAPE is distributed as it is, in the hope that it will be useful, but without any warranty nor the implied merchantability or fitness for a particular purpose.

Whatever you do with this tool is uniquely your own responsibility.

Cuckoo Foundation

The Cuckoo Foundation is a non-profit organization incorporated as a Stichting in the Netherlands. It is mainly dedicated to supporting the development and growth of Cuckoo Sandbox, an open source malware analysis system, and the surrounding projects and initiatives.

The Foundation operates to secure financial and infrastructure support for our software projects and coordinates the development and contributions from the community.

What is CAPE?

CAPE is an open-source malware sandbox.

A sandbox is used to execute malicious files in an isolated environment whilst instrumenting their dynamic behaviour and collecting forensic artefacts.

CAPE was derived from Cuckoo v1, which features the following core capabilities on the Windows platform:

Behavioral instrumentation based on API hooking

Capture of files created, modified and deleted during execution

Network traffic capture in PCAP format

Malware classification based on behavioral and network signatures

Screenshots of the desktop taken during the execution of the malware

Full memory dumps of the target system

CAPE complements Cuckoo's traditional sandbox output with several key additions:

Automated dynamic malware unpacking

Malware classification based on YARA signatures of unpacked payloads

Static & dynamic malware configuration extraction

Interactive desktop

Some History

Cuckoo Sandbox started as a Google Summer of Code project in 2010 within The Honeynet Project. It was originally designed and developed by Claudio Guarnieri, and the first beta release was published in 2011. In January 2014, Cuckoo v1.0 was released.

2015 was a pivotal year, with a significant fork in Cuckoo's history. Development of the original monitor and API hooking method was halted in the main Cuckoo project. It was replaced by an alternative monitor using a restructuredText-based signature format compiled via a Linux toolchain, created by Jurriaan Bremer.

Around the same time, a fork called cuckoo-modified (https://github.com/spender-sandbox/cuckoo-modified) was created by Brad 'Spender' Spengler, continuing development of the original monitor with significant improvements including 64-bit support and, importantly, introducing Microsoft's Visual Studio compiler.

During that same year, development of a dynamic command-line configuration and payload extraction tool called CAPE was begun at Context Information Security by Kevin O'Reilly.
The name was coined as an acronym of 'Config And Payload Extraction', and the original research focused on using API hooks provided by Microsoft's Detours library to capture unpacked malware payloads and configuration. However, it became apparent that API hooks alone provide insufficient power and precision to allow for unpacking of payloads or configs from arbitrary malware.

For this reason, research began into a novel debugger concept to allow malware to be precisely controlled and instrumented whilst avoiding the use of Microsoft debugging interfaces, in order to be as stealthy as possible. This debugger was integrated into the proof-of-concept Detours-based command-line tool, combining with API hooks and resulting in very powerful capabilities.

When initial work showed that it would be possible to replace Microsoft Detours with cuckoo-modified's API hooking engine, the idea for CAPE Sandbox was born. With the addition of the debugger, automated unpacking, YARA-based classification and integrated config extraction, CAPE Sandbox was publicly released for the first time in September 2016 at 44con (https://github.com/ctxis/CAPE).

In the summer of 2018 the project was fortunate to see the beginning of huge contributions from Andriy 'doomedraven' Brukhovetskyy, a long-time Cuckoo contributor. In 2019 he began the mammoth task of porting CAPE to Python 3, and in October of that year CAPEv2 was released (https://github.com/kevoreilly/CAPEv2).

CAPE has been continuously developed and improved to keep pace with advancements in both malware and operating system capabilities. In 2021, the ability to program CAPE's debugger during detonation via dynamic YARA scans was added, allowing dynamic bypasses to be created for anti-sandbox techniques. Windows 10 became the default operating system, and other significant additions include the interactive desktop, AMSI (Anti-Malware Scan Interface) payload capture, 'syscall hooking' based on Microsoft Nirvana, and debugger-based direct/indirect syscall countermeasures.

Use Cases

CAPE is designed to be used both as a standalone application and integrated into larger frameworks, thanks to its extremely modular design.

It can be used to analyze:

Generic Windows executables

DLL files

PDF documents

Microsoft Office documents

URLs and HTML files

PHP scripts

CPL files

Visual Basic (VB) scripts

ZIP files

Java JAR

Python files

Almost anything else

Thanks to its modularity and powerful scripting capabilities, there's no limit to what you can achieve with CAPE!

For more information on customizing CAPE, see the ../customization/index chapter.

Architecture

CAPE Sandbox consists of central management software which handles sample execution and analysis.

Each analysis is launched in a fresh and isolated virtual machine.
CAPE's infrastructure is composed of a Host machine (the management software) and a number of Guest machines (virtual machines for analysis).

The Host runs the core component of the sandbox that manages the whole analysis process, while the Guests are the isolated environments where the malware samples get safely executed and analyzed.

The following picture explains CAPE's main architecture:

image

The recommended setup is GNU/Linux (Ubuntu LTS preferably) as the Host and Windows 10 21H2 as a Guest.

Obtaining CAPE

CAPE can be downloaded from the official git repository, where the stable and packaged releases are distributed, or it can be cloned from our official git repository.

Warning

It is very likely that the documentation is not up-to-date, but to compensate we try to keep a changelog.

Usage

This chapter explains how to use CAPE.

start internals submit web api dist cluster_administration packages results clean rooter utilities performance monitor interactive_desktop patterns_replacement

Analysis Results

Once an analysis is completed, several files are stored in a dedicated directory. All the analyses are stored under the directory storage/analyses/ inside a subdirectory named after the incremental numerical ID that represents the analysis task in the database.

Following is an example of an analysis directory structure:

.
|-- analysis.conf
|-- analysis.log
|-- binary
|-- dump.pcap
|-- memory.dmp
|-- files
|   |-- 1234567890
|   `-- dropped.exe
|-- logs
|   |-- 1232.raw
|   |-- 1540.raw
|   `-- 1118.raw
|-- reports
|   |-- report.html
|   |-- report.json
|   |-- report.maec-4.0.1.xml
|   `-- report.metadata.xml
`-- shots
    |-- 0001.jpg
    |-- 0002.jpg
    |-- 0003.jpg
    `-- 0004.jpg

analysis.conf

This is a configuration file automatically generated by CAPE to give its analyzer some details about the current analysis. It's generally of no interest to the end user, as it's used internally by the sandbox.

analysis.log

This is a log file generated by the analyzer that contains a trace of the analysis execution inside the guest environment.
It will report the creation of processes and files, and any errors that occurred during the execution.

dump.pcap

This is the network dump generated by tcpdump or any other corresponding network sniffer.

memory.dmp

In case you enabled it, this file contains the full memory dump of the analysis machine.

files/

This directory contains all the files the malware operated on and that CAPE was able to dump.

logs/

This directory contains all the raw logs generated by CAPE's process monitoring.

reports/

This directory contains all the reports generated by CAPE, as explained in the ../installation/host/configuration chapter.

shots/

This directory contains all the screenshots of the guest's desktop taken during the malware execution.

Distributed CAPE

This works under the main server's web interface, so everything is transparent for the end user, even if tasks were analyzed on another server (or servers).

Deploy each server as a normal server, and later just register it as a worker on the master server where dist.py is running.

Dependencies

The distributed script uses a few Python libraries which can be installed through the following command (on Debian/Ubuntu):

$ poetry run pip install flask flask-restful flask-sqlalchemy requests

Starting the Distributed REST API

The Distributed REST API requires a few command line options in order to run:

$ cd /opt/CAPEv2/web && poetry run python manage.py runserver_plus 0.0.0.0:8000 --traceback --keep-meta-shutdown

RESTful resources

Following are all the RESTful resources. Also, make sure to check out the quick-usage section, which documents the most commonly used commands.

Resource              Description
GET /node             Get a list of all enabled CAPE nodes.
POST /node            Register a new CAPE node.
GET /node/<name>      Get basic information about a node.
PUT /node/<name>      Update basic information of a node.
DELETE /node/<name>   Disable (not completely remove!) a node.

GET /node

Returns all enabled nodes. For each node, its associated name, API url, and machines are returned:

$ curl http://localhost:9003/node
{
    "nodes": {
        "localhost": {
            "machines": [
                {
                    "name": "cuckoo1",
                    "platform": "windows",
                    "tags": [
                        ""
                    ]
                }
            ],
            "name": "localhost",
            "url": "http://0:8000/apiv2/"
        }
    }
}

POST /node

Register a new CAPE node by providing the name and the URL.
Optionally provide the apikey if auth is enabled. You might need to enable list_exitnodes and machinelist in custom/conf/api.conf if your node API is using htaccess authentication:

$ curl http://localhost:9003/node -F name=master -F url=http://localhost:8000/apiv2/ -F apikey=apikey -F enabled=1
{
    "machines": [
        {
            "name": "cape1",
            "platform": "windows",
            "tags": []
        }
    ],
    "name": "localhost"
}

GET /node/<name>

Get basic information about a particular CAPE node:

$ curl http://localhost:9003/node/localhost
{
    "name": "localhost",
    "url": "http://localhost:8000/apiv2/"
}

PUT /node/<name>

Update basic information of a CAPE node:

$ curl -XPUT http://localhost:9003/node/localhost -F name=newhost \
    -F url=http://1.2.3.4:8000/apiv2/
null

Additional Arguments:

* enabled
    False=0 or True=1 to activate or deactivate a worker node
* exitnodes
    exitnodes=1 - update the exit nodes list, shown on the main web UI
* apikey
    apikey for authorization

DELETE /node/<name>

Disable a CAPE node, so it no longer processes any new tasks, while keeping its history in the Distributed database:

$ curl -XDELETE http://localhost:9003/node/localhost
null

Quick usage

For practical usage, the following few commands will be the most interesting.

Register a CAPE node - a CAPE REST API running on the same machine in this case (the master server must be named "master"; the names of the other nodes don't matter):

$ curl http://localhost:9003/node -F name=master -F url=http://localhost:8000/apiv2/

Disable a CAPE node:

$ curl -XDELETE http://localhost:9003/node/<name>

or:

$ curl -XPUT http://localhost:9003/node/localhost -F enable=0
null

or:

$ ./dist.py --node NAME --disable

The report of a task should be requested through the master node's integrated /api/.

Proposed setup

The following description depicts a Distributed CAPE setup with two CAPE machines, a master and a worker. In this setup the first machine, the master, also hosts the Distributed CAPE REST API.

Configuration settings

Our setup will require a couple of updates to the configuration files.

Note about VM tags in the hypervisor conf (e.g. kvm.conf); see the sketch after this list. If you have x64 and x86 VMs:

* x64 VMs should have both the x64 and x86 tags; otherwise only the x64 tag.
* x86 VMs should have only the x86 tag.
* You can use any other tags as well; these two are just required for this to work properly.
* This will probably be improved with a better solution in the future.
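As a sketch only, assuming two machines named win10_x64 and win7_x86 (names, IPs and other values are illustrative, and additional required options are omitted), the relevant tags lines in custom/conf/kvm.conf would look like:

[win10_x64]
label = win10_x64
platform = windows
ip = 192.168.122.101
tags = x64,x86

[win7_x86]
label = win7_x86
platform = windows
ip = 192.168.122.102
tags = x86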
custom/conf/cuckoo.conf

Optional: update tmppath to something that holds enough storage to store a few hundred binaries. On some servers or setups /tmp may have a limited amount of space and thus wouldn't suffice.

Update connection to use something that is not SQLite3, preferably PostgreSQL. SQLite3 doesn't handle multi-threaded applications well and will give errors at random if used; it doesn't support database schema upgrades either.

custom/conf/processing.conf

You may want to disable some processing modules, such as virustotal.

custom/conf/reporting.conf

Depending on which report(s) are required for integration with your system, it might make sense to only generate the report(s) that you're going to use, and disable the other ones.

custom/conf/distributed.conf

Check also the "[distributed]" section, where you can set the database, the path for samples, and a few more values. Do not use SQLite3! Use a PostgreSQL database for performance and thread safety.

Update db to use something that is not SQLite3, preferably PostgreSQL, for the same reasons as above.

Register CAPE nodes

As outlined in quick-usage, the CAPE nodes have to be registered with the Distributed CAPE script.

Without htaccess:

$ curl http://localhost:9003/node -F name=master -F url=http://localhost:8000/apiv2/

With htaccess:

$ curl http://localhost:9003/node -F name=worker -F url=http://1.2.3.4:8000/apiv2/ \
    -F username=user -F password=password

Having registered the CAPE nodes, all that's left to do now is to submit tasks and fetch reports once finished. Documentation on these commands can be found in the quick-usage section.

VM Maintenance

Occasionally you might want to perform maintenance on VMs without shutting down your whole node. To do this, you need to remove the VM from being used by CAPE, preferably without having to restart the ./cuckoo.py daemon.

First, get a list of available VMs that are running on the worker:

$ poetry run python dist.py --node NAME

Secondly, you can remove VMs from being used by CAPE with:

$ poetry run python dist.py --node NAME --delete-vm VM_NAME

When you are done editing your VMs, you need to add them back to be used by CAPE. The easiest way to do that is to disable the node, so no more tasks get submitted to it:

$ poetry run python dist.py --node NAME --disable

Wait for all running VMs to finish their tasks, and then restart the worker's ./cuckoo.py; this will re-insert the previously deleted VMs into the database from custom/conf/virtualbox.conf.

Update the VM list on the master:

$ poetry run python dist.py --node NAME

And enable the worker again:

$ poetry run python dist.py --node NAME --enable

Good practice for production

The number of retrieval threads can be configured in reporting.conf.

Installation of "uwsgi":

# nginx is optional
# apt install uwsgi uwsgi-plugin-python3 nginx

It's better if you run "web" and "dist.py" as uwsgi applications. To run your API with the config, just execute:

# Web UI is started by systemd as cape-web.service
$ uwsgi --ini /opt/CAPEv2/uwsgi/capedist.ini

To add your application to auto start after boot, copy your config file to:

cp /opt/CAPEv2/uwsgi/capedist.ini /etc/uwsgi/apps-available/cape_dist.ini
ln -s /etc/uwsgi/apps-available/cape_dist.ini /etc/uwsgi/apps-enabled

service uwsgi restart

Optimizations:

If you have many workers, it is recommended to:
    UWSGI:
        set processes so it can handle the number of requests: dist + dist2 + 10
    DB:
        set the max connection number so it can handle the number of requests: dist + dist2 + 10

Distributed Mongo setup

Set one mongo instance as master and point the rest to it; in this example cuckoo_dist.fe is our master server.
Depending on your hardware, you may prepend the following command before mongod:

$ numactl --interleave=all

These commands should be executed only on the master:

# create a config server instance with the "cuckoo_config" replica set
# preferably run a few config servers on different shards
/usr/bin/mongod --configsvr --replSet cuckoo_config --bind_ip_all

# initialize the "cuckoo_config" replica set
mongosh --port 27019

Execute in the mongo console:

    rs.initiate({
        _id: "cuckoo_config",
        configsvr: true,
        members: [
            { _id: 0, host: "192.168.1.13:27019" },
        ]
    })

This should be started on all nodes, including the master:

# start shard server
/usr/bin/mongod --shardsvr --bind_ip 0.0.0.0 --port 27017 --replSet rs0

Add clients; execute on the master mongo server:

# start a mongodb router instance that connects to the config server
mongos --configdb cuckoo_config/192.168.1.13:27019 --port 27020 --bind_ip_all

# in another terminal
mongosh
rs.initiate( {
    _id : "rs0",
    members: [
        { _id: 0, host: "192.168.1.x:27017" },
        { _id: 1, host: "192.168.1.x:27017" },
        { _id: 2, host: "192.168.1.x:27017" },
    ]
})

# Check which node is primary and change the priority if it is incorrect
# https://docs.mongodb.com/manual/tutorial/force-member-to-be-primary/
cfg = rs.conf()
cfg.members[0].priority = 0.5
cfg.members[1].priority = 0.5
cfg.members[2].priority = 1
rs.reconfig(cfg, {"force": true})

# Add arbiter only
rs.addArb("192.168.1.51:27017")

# Add replica set member, secondary
rs.add({"host": "192.168.1.50:27017", "priority": 0.5})

# add shards
mongosh --port 27020

Execute in the mongo console:

    sh.addShard( "rs0/192.168.1.13:27017")
    sh.addShard( "rs0/192.168.1.44:27017")
    sh.addShard( "rs0/192.168.1.55:27017")
    sh.addShard( "rs0/192.168.1.62:27017")

Where 192.168.1.(2,3,4,5) are our CAPE workers:

mongo
use cuckoo
# 5 days, last number is days
db.analysis.insert({"name":"tutorials point"})
db.calls.insert({"name":"tutorials point"})
db.analysis.createIndex ( {"_id": "hashed" })
db.calls.createIndex ( {"_id": "hashed"})

db.analysis.createIndex ( {"createdAt": 1 }, {expireAfterSeconds: 60*60*24*5} )
db.calls.createIndex ( {"createdAt": 1}, {expireAfterSeconds: 60*60*24*5} )

mongosh --port 27020
sh.enableSharding("cuckoo")
sh.shardCollection("cuckoo.analysis", { "_id": "hashed" })
sh.shardCollection("cuckoo.calls", { "_id": "hashed" })

To see stats on the master (using mongos):

mongosh --host 127.0.0.1 --port 27020
sh.status()

Modify CAPE's reporting.conf [mongodb] section so that all servers point to the mongos instance: host = 127.0.0.1, port = 27020.

To remove a shard node, first list all shards:

db.adminCommand( { listShards: 1 } )

Then:

use admin
db.runCommand( { removeShard: "SHARD_NAME_HERE" } )

If you need extra help, see any of these files on your system:

$ /etc/uwsgi/apps-available/README
$ /etc/uwsgi/apps-enabled/README
$ /usr/share/doc/uwsgi/README.Debian.gz
$ /etc/default/uwsgi

Administration and some useful commands (see https://docs.mongodb.com/manual/reference/command/nav-sharding/):

$ mongosh --host 127.0.0.1 --port 27020
$ use admin
$ db.adminCommand( { listShards: 1 } )

$ mongosh --host 127.0.0.1 --port 27019
$ db.adminCommand( { movePrimary: "cuckoo", to: "shard0000" } )
$ db.adminCommand( { removeShard : "shard0002" } )

$ # required after movePrimary
$ db.adminCommand("flushRouterConfig")
$ mongosh --port 27020 --eval 'db.adminCommand("flushRouterConfig")' admin

$ use cuckoo
$ db.analysis.find({"shard" : "shard0002"},{"shard":1,"jumbo":1}).pretty()
$ db.calls.getShardDistribution()

To migrate data, ensure:

$ sh.setBalancerState(true)

User authentication and roles:

# To create ADMIN
use admin
db.createUser(
    {
        user: "ADMIN_USERNAME",
        pwd: passwordPrompt(),  // or cleartext password
        roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
    }
)

# To create a user with read/write on a specific database
use cuckoo
db.createUser(
    {
        user: "WORKER_USERNAME",
        pwd: passwordPrompt(),  // or cleartext password
        roles: [ { role: "readWrite", db: "cuckoo" }, { role : "dbAdmin", db : "cuckoo" } ]
    }
)

# To enable auth in /etc/mongod.conf, add the following lines:
security:
    authorization: enabled

NFS data fetching:

A nice comparison between NFS, SSHFS and SMB:
https://blog.ja-ke.tech/2019/08/27/nas-performance-sshfs-nfs-smb.html

To configure NFS on the main server (which NFS calls the client), install the new service for the fstab utils; run as root:

ln -s /opt/CAPEv2/systemd/cape-fstab.service /lib/systemd/system/cape-fstab.service
systemctl daemon-reload
systemctl enable cape-fstab.service
systemctl start cape-fstab.service

The steps covering folder creation and the fstab entry were automated on 30.01.2023. See utils/fstab.py.

CAPE worker(s) (which NFS calls the servers):

Install NFS:
    * sudo apt install nfs-kernel-server
    * systemctl enable nfs-kernel-server

Run `id cape` to get the uid and gid to place inside of the exports file. Example:
    * uid=997(cape) gid=1005(cape) groups=1005(cape),1002(libvirt),1003(kvm),1004(pcap)

Add an entry to /etc/exports:
    /opt/CAPEv2 <client_ip>(rw,no_subtree_check,all_squash,anonuid=<uid>,anongid=<gid>)
Example:
    /opt/CAPEv2 192.168.1.1(rw,no_subtree_check,all_squash,anonuid=997,anongid=1005)

Run the command on the worker:
    exportfs -rav

Online:

Interactive session

Installation

Warning

Cluster mode is not supported.

To install dependencies, please run:

$ sudo ./installer/cape2.sh guacamole

New services added:

$ systemctl status guacd.service
$ systemctl status guac-web.service

Web server configuration

Enable and configure guacamole in conf/web.conf, then restart cape-web.service and guacd.service:

$ systemctl restart cape-web guacd.service

Then configure NGINX. See best_practices_for_production for details.

Virtual machine configuration

At the moment we support only KVM and we don't have plans to support any other hypervisor.

To enable support for remote sessions you need to add a VNC display to your VM, otherwise it won't work.

Having troubles?

To test whether your guacamole setup is working correctly, you can use this code.

Warning

If you have the VM opened in virt-manager, you won't be able to access it via the browser.
Close the virt-manager VM view and refresh the tab in your browser.

from uuid import uuid3, NAMESPACE_DNS
from base64 import urlsafe_b64encode as ub64enc

sid = uuid3(NAMESPACE_DNS, "0000").hex[:16]
ip = ""  # Example: 192.168.2.2
vm_name = ""  # Example: win10
sd = ub64enc(f"{sid}|{vm_name}|{ip}".encode("utf8")).decode("utf8")
print(sd)

# Open in your browser https://<server>/guac/0000/<sd>

Start your VM and, once it finishes booting, open that URL in the browser to ensure that the remote session is working just fine.

If that doesn't work, check the logs:

$ systemctl status guacd or journalctl -u guacd
$ cat /opt/CAPEv2/web/guac-server.log

Known problems and solution steps:

Ensure that CAPE loads on port 80 (later you can enable TLS/SSL). Sometimes the config should live in conf.d/default.conf instead of sites-enabled/cape.conf.

Once you have verified that it works with http, move to https.

You can try a websocket test client.

Try another browser.

Utilities

CAPE comes with a set of pre-built utilities to automate several common tasks. You can find them under the "utils" folder. There are more utilities than are documented here.

Cleanup utility

Use CAPE-clean instead, which also takes care of cleaning sample and task information from MySQL and PostgreSQL databases. This utility will also delete all data from the configured MongoDB or ElasticSearch databases.

Submission Utility

Submits samples for analysis. This tool is already described in submit.

Web Utility

CAPE's web interface. This tool is already described in submit.

Processing Utility

Runs the results processing engine and optionally the reporting engine (run all reports) on an already available analysis folder, so the analysis does not have to be re-run if you just want to re-generate its reports. This is used mainly when debugging and developing CAPE. For example, if you want to run the report engine again for analysis number 1:

$ ./utils/process.py -r 1

If you want to re-generate the reports:

$ ./utils/process.py --report 1

Following are the usage options:

$ ./utils/process.py -h

usage: process.py [-h] [-c] [-d] [-r] [-s] [-p PARALLEL] [-fp] [-mc MAXTASKSPERCHILD] [-md] [-pt PROCESSING_TIMEOUT] id

positional arguments:
  id                    ID of the analysis to process (auto for continuous processing of unprocessed tasks).

optional arguments:
  -h, --help            show this help message and exit
  -c, --caperesubmit    Allow CAPE resubmit processing.
  -d, --debug           Display debug messages
  -r, --report          Re-generate report
  -s, --signatures      Re-execute signatures on the report
  -p PARALLEL, --parallel PARALLEL
                        Number of parallel threads to use (auto mode only).
  -fp, --failed-processing
                        reprocess failed processing
  -mc MAXTASKSPERCHILD, --maxtasksperchild MAXTASKSPERCHILD
                        Max children tasks per worker
  -md, --memory-debugging
                        Enable logging garbage collection related info
  -pt PROCESSING_TIMEOUT, --processing-timeout PROCESSING_TIMEOUT
                        Max amount of time spent in processing before we fail a task

As best practice, we suggest adopting the following configuration if you are running CAPE with many virtual machines: run a standalone process.py in auto mode (you choose the number of parallel threads). This can increase the performance of your system because reporting is no longer handled by the main CAPE process.
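For example (the thread count here is purely illustrative and should match your hardware):

$ ./utils/process.py auto -p 4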
Community Download Utility

This utility downloads signatures from the CAPE Community Repository and installs specific additional modules in your local setup. Following are the usage options:

$ ./utils/community.py -h

usage: community.py [-h] [-a] [-s] [-p] [-m] [-r] [-f] [-w] [-b BRANCH]

optional arguments:
  -h, --help            show this help message and exit
  -a, --all             Download everything
  -s, --signatures      Download Cuckoo signatures
  -p, --processing      Download processing modules
  -m, --machinemanagers
                        Download machine managers
  -r, --reporting       Download reporting modules
  -f, --force           Install files without confirmation
  -w, --rewrite         Rewrite existing files
  -b BRANCH, --branch BRANCH
                        Specify a different branch

Example: install all available signatures:

$ ./utils/community.py --signatures --force

Database migration utility

This utility is developed to migrate your data between CAPE releases. It's built on top of the Alembic framework and it should provide data migration for both the SQL database and the Mongo database. This tool is already described in ../installation/upgrade.

Stats utility

This is a really simple utility which prints some statistics about processed samples:

$ ./utils/stats.py

1 samples in db
1 tasks in db
pending 0 tasks
running 0 tasks
completed 0 tasks
recovered 0 tasks
reported 1 tasks
failed_analysis 0 tasks
failed_processing 0 tasks
roughly 32 tasks an hour
roughly 778 tasks a day

Machine utility

The machine.py utility is designed to help you automate the configuration of virtual machines in CAPE. It takes a list of machine details as arguments and writes them into the configuration file of the machinery module enabled in cuckoo.conf. Following are the available options:

$ ./utils/machine.py -h
usage: machine.py [-h] [--debug] [--add] [--ip IP] [--platform PLATFORM]
                  [--tags TAGS] [--interface INTERFACE] [--snapshot SNAPSHOT]
                  [--resultserver RESULTSERVER]
                  vmname

positional arguments:
  vmname                Name of the Virtual Machine.

optional arguments:
  -h, --help            show this help message and exit
  --debug               Debug log in case of errors.
  --add                 Add a Virtual Machine.
  --ip IP               Static IP Address.
  --platform PLATFORM   Guest Operating System.
  --tags TAGS           Tags for this Virtual Machine.
  --interface INTERFACE
                        Sniffer interface for this machine.
  --snapshot SNAPSHOT   Specific Virtual Machine Snapshot to use.
  --resultserver RESULTSERVER
                        IP:Port of the Result Server.

Web interface

CAPE provides a full-fledged web interface in the form of a Django application. This interface allows you to submit files, browse through the reports, and search across all the analysis results.

cape2.sh adds a systemd daemon called cape-web.service which listens on all interfaces:

$ /lib/systemd/system/cape-web.service

To modify that, edit the file and change 0.0.0.0 to your IP. You need to restart the daemon to reload it after the change:

$ systemctl daemon-reload

If you get migration-related WARNINGS when launching the cape-web service, you should execute:

$ poetry run python3 manage.py migrate

Note

In order to improve performance, it is recommended to move from SQLite to PostgreSQL.

Configuration

The web interface pulls data from a Mongo database or ElasticSearch, so having either the MongoDB or ElasticSearchDB reporting modules enabled in reporting.conf is mandatory for this interface. If that's not the case, the application won't start and it will raise an exception. Also, currently, Django only supports having one of the database modules enabled at a time.

Enable web interface auth

To enable web authentication you need to edit conf/web.conf -> web_auth -> enabled = yes. After that, you need to create your Django admin user by running the following command from the web folder:

$ poetry run python manage.py createsuperuser

For more security tips see the "Exposed to internet" section.

Enable/Disable REST API Endpoints

By default, multiple REST API endpoints are disabled. To enable them, head to the API configuration file. For example, to enable the machines/list endpoint, you must find the [machinelist] header in the configuration file just mentioned and set the enabled field to yes.
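A sketch of what that looks like, using the api.conf path referenced earlier (other options in the section are omitted):

[machinelist]
enabled = yes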
If that's not the case, the application won't start and it will raise an exception. Also, currently, Django only supports having one of the database modules enabled at a time.\n\nEnable web interface auth\n\nTo enable web authentication you need to edit conf/web.conf -> web_auth -> enabled = yes, after that you need to create your Django admin user by running the following command from the web folder:\n\n$ poetry run python manage.py createsuperuser\n\nFor more security tips, see the Exposed to internet section.\n\nEnable/Disable REST API Endpoints\n\nBy default, there are multiple REST API endpoints that are disabled. To enable them, head to the API configuration file", + "For example, to enable the machines/list endpoint, you must find the [machinelist] header in the configuration file just mentioned and set the enabled field to yes.\n\nRestart the CAPE web service for the changes to take effect:\n\n$ systemctl restart cape-web\n\nUsage\n\nTo start the web interface, you can simply run the following command from the web/ directory:\n\n$ poetry run python3 manage.py runserver_plus --traceback --keep-meta-shutdown\n\nIf you want to configure the web interface to listen on any IP on a specified port (by default the web interface is deployed at localhost:8000), you can start it with the following command (replace PORT with the desired port number):\n\n$ poetry run python3 manage.py runserver_plus 0.0.0.0:8000 --traceback --keep-meta-shutdown", + "$ poetry run python3 manage.py runserver_plus 0.0.0.0:8000 --traceback --keep-meta-shutdown\n\nYou can serve CAPE's web interface using the WSGI interface with common web servers: Apache, Nginx, Unicorn, and so on. Devs are using Nginx + Uwsgi. Please refer both to the documentation of the web server of your choice as well as the Django documentation.\n\nSubscription\n\nSubscription allows you to control which users can do what.\n\nRight now we support:\n\nRequest - Limits per second/minute/hour using django-ratelimit extensions\n\nReports - Allow or block downloading reports for specific users. Check conf/web.conf to enable this feature.\n\nTo extend control over what users can do, check Django migrations a primer.\n\nIn a few words, you need to add new fields to models.py and run poetry run python3 manage.py makemigrations\n\nExposed to internet", + "Exposed to internet\n\nTo get rid of many bots/scrapers, we suggest deploying the Nginx Ultimate bad bot blocker project; follow its README for installation steps.\n\nEnable web auth with captcha in conf/web.conf properly to avoid any brute force.\n\nEnable ReCaptcha. You will need to set the RECAPTCHA_PUBLIC_KEY and RECAPTCHA_PRIVATE_KEY keys in web/web/local_settings.py\n\nYou might need to \"Verify\" your admin account and set it as a \"Staff user\" in the Django admin panel, and add your domain to Sites in the Django admin too\n\nBest practices for production\n\nGunicorn + NGINX is the recommended way of serving the CAPE web UI.\n\nGunicorn\n\nFirst, configure the cape-web service to use Gunicorn.\n\nModify /lib/systemd/system/cape-web.service so the ExecStart setting is set to:\n\nExecStart=/usr/bin/python3 -m poetry run gunicorn web.wsgi -w 4 -t 200 --capture-output --enable-stdio-inheritance\n\nRun\n\nsudo systemctl daemon-reload\nsudo service cape-web restart\n\nNGINX", + "Run\n\nsudo systemctl daemon-reload\nsudo service cape-web restart\n\nNGINX\n\nNext, install NGINX and configure it to be a reverse proxy to Gunicorn.\n\nsudo apt install nginx\n\nCreate a configuration file at /etc/nginx/conf.d/cape. 
You might need to add include /etc/nginx/conf.d/*.conf; to http section inside of /etc/nginx/nginx.conf.\n\nReplace www.capesandbox.com with your actual hostname.\n\nserver {\n listen 80;\n server_name www.capesandbox.com;\n client_max_body_size 101M;\n proxy_connect_timeout 75;\n proxy_send_timeout 200;\n proxy_read_timeout 200;\n\n\n location ^~ /.well-known/acme-challenge/ {\n default_type \"text/plain\";\n root /var/www/html;\n break;\n }\n\n location = /.well-known/acme-challenge/ {\n return 404;\n }", + "location = /.well-known/acme-challenge/ {\n return 404;\n }\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Remote-User $remote_user;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }\n\n location /static/ {\n alias /opt/CAPEv2/web/static/;\n }\n\n location /static/admin/ {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Remote-User $remote_user;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }", + "location /guac {\n proxy_pass http://127.0.0.1:8008;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_buffering off;\n proxy_http_version 1.1;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $http_connection;\n }\n\n location /recordings/playback/recfile {\n alias /opt/CAPEv2/storage/guacrecordings/;\n autoindex off;\n }\n}\n\nNow enable the nginx configuration by executing the following:\n\nrm -f /etc/nginx/conf.d/default\nln -s /etc/nginx/conf.d/cape /etc/nginx/conf.d/default\n\nIf you want to block users from changing their own email addresses, add the following location directive inside of the server directive:\n\nlocation /accounts/email/ {\n return 403;\n}", + "location /accounts/email/ {\n return 403;\n}\n\nIf you want to block users from changing their own passwords, add the following location directive inside of the server directive:\n\nlocation /accounts/email/ {\n return 403;\n}\n\nThe recording files written by guacd are only readable by the cape user and other members of the cape group, so in order for NGINX to read and serve the recordings the www-data user must be added to the cape group.\n\nsudo usermod www-data -G cape\n\nThen restart NGINX\n\nsudo service nginx restart\n\nWarning\n\nThe CAPE Guacamole Django web application is currently separate from the main CAPE Django web application, and does not support any authentication. 
Anyone who can connect to the web server can access Guacamole consoles and recordings, if they know the CAPE analysis ID and Guacamole session GUID.\n\nNGINX can be configured to require HTTP basic authentication for all CAPE web applications, as an alternative to the Django authentication system.", + "Install the apache2-utils package, which contains the htpasswd utility.\n\nsudo apt install apache2-utils\n\nUse the htpasswd utility to create a new password file and add a first user, such as cape.\n\nsudo htpasswd -c /opt/CAPEv2/web/.htpasswd cape\n\nUse the same command without the -c option to add another user to an existing password file.\n\nSet the proper file permissions.\n\nsudo chown root:www-data /opt/CAPEv2/web/.htpasswd\nsudo chmod u=rw,g=r,o= /opt/CAPEv2/web/.htpasswd\n\nAdd the following lines to the NGINX configuration, just below the client_max_body_size line.\n\nauth_basic \"Authentication required\";\nauth_basic_user_file /opt/CAPEv2/web/.htpasswd;\n\nThen restart NGINX\n\nsudo service nginx restart\n\nLet's Encrypt certificates", + "Then restart NGINX\n\nsudo service nginx restart\n\nLet's Encrypt certificates\n\nIf you would like to install a free Let's Encrypt certificate on your NGINX server, follow these steps, replacing capesandbox.com with your actual hostname. Use cape2.sh to install dependencies, but also ensure that the instructions are up to date with https://certbot.eff.org/\n\nInstall certbot.\n\nsudo cape2.sh letsencrypt\n\nRequest the certificate\n\nsudo certbot certonly --webroot -w /var/www/html -d www.capesandbox.com -d capesandbox.com\n\nInstall the certificate. When prompted, select the \"Attempt to reinstall this existing certificate\" option.\n\nsudo certbot --nginx -d www.capesandbox.com -d capesandbox.com\n\nSome extra security TIP(s)\n\nModSecurity tutorial - rejects requests\n\nFail2ban tutorial - ban hosts\n\nFail2ban + CloudFlare - how to ban on CloudFlare aka CDN firewall level\n\nExample of cloudflare action ban:", + "Example of cloudflare action ban:\n\n# Author: Mike Andreasen from https://guides.wp-bullet.com\n# Adapted Source: https://github.com/fail2ban/fail2ban/blob/master/config/action.d/cloudflare.conf\n# Referenced from: https://www.normyee.net/blog/2012/02/02/adding-cloudflare-support-to-fail2ban by NORM YEE\n#\n# To get your Cloudflare API key: https://www.cloudflare.com/my-account, you should use the GLOBAL KEY!\n\n[Definition]\n\n# Option: actionstart\n# Notes.: command executed once at the start of Fail2Ban.\n# Values: CMD\n#\nactionstart =\n\n# Option: actionstop\n# Notes.: command executed once at the end of Fail2Ban\n# Values: CMD\n#\nactionstop =\n\n# Option: actioncheck\n# Notes.: command executed once before each actionban command\n# Values: CMD\n#\nactioncheck =", + "# Option: actionban\n# Notes.: command executed when banning an IP. 
Take care that the\n# command is executed with Fail2Ban user rights.\n# Tags: IP address\n# number of failures\n# unix timestamp of the ban time\n# Values: CMD\n#", + "actionunban = curl -s -X DELETE \"https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules/$( \\\n curl -s -X GET \"https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules?mode=block&configuration_target=ip&configuration_value=&page=1&per_page=1&match=all\" \\\n -H \"X-Auth-Email: \" \\\n -H \"X-Auth-Key: \" \\\n -H \"Content-Type: application/json\" | awk -F\"[,:}]\" '{for(i=1;i<=NF;i++){if($i~/'id'\\042/){print $(i+1)}}}' | tr -d '\"' | head -n 1)\" \\\n -H \"X-Auth-Email: \" \\\n -H \"X-Auth-Key: \" \\\n -H \"Content-Type: application/json\"\n\n[Init]\n\n# Option: cfuser\n# Notes.: Replaces in actionban and actionunban with cfuser value below\n# Values: Your CloudFlare user account\n\ncfuser = put-your-cloudflare-email-here", + "cfuser = put-your-cloudflare-email-here\n\n# Option: cftoken\n# Notes.: Replaces in actionban and actionunban with cftoken value below\n# Values: Your CloudFlare API key\ncftoken = put-your-API-key-here\n\nExample of fail2ban rule to ban by path:\n\n# This will ban any host that trying to access /api/ for 3 times in 1 minute\n# Goes to /etc/fail2ban/filters.d/nginx-cape-api.conf\n[Definition]\nfailregex = ^ -.*\"(GET|POST|HEAD) /api/.*HTTP.*\"\nignoreregex =\n\n# goes to /etc/fail2ban/jail.local\n[cape-api]\nenabled = true\nport = http,https\nfilter = nginx-cape-api\nlogpath = /var/log/nginx/access.log\nmaxretry = 3\nfindtime = 60\nbantime = -1\n# Remove cloudflare line if you don't use it\naction = iptables-multiport\n cloudflare", + "# This will ban any host that trying to brute force login or unauthorized requests for 5 times in 1 minute\n# Goes to /etc/fail2ban/filters.d/filter.d/nginx-cape-login.conf\n[Definition]\nfailregex = ^ -.*\"(GET|POST|HEAD) /accounts/login/\\?next=.*HTTP.*\"\nignoreregex =\n\n# goes to /etc/fail2ban/jail.local\n[cape-login]\nenabled = true\nport = http,https\nfilter = nginx-cape-login\nlogpath = /var/log/nginx/access.log\nmaxretry = 5\nfindtime = 60\nbantime = -1\n# Remove cloudflare line if you don't use it\naction = iptables-multiport\n cloudflare\n\nTo check banned hosts:\n\n$ sudo fail2ban-client status cape-api\n\nTroubleshooting\n\nLogin error: no such column: users_userprofile.reports\n\nimage\n\nThis error usually appears after updating CAPEv2 and one or more changes have been made to the database schema. To solve it, you must use the web/manage utility like so:\n\n$ sudo -u cape poetry run python3 manage.py migrate\n\nThe output should be similar to:", + "$ sudo -u cape poetry run python3 manage.py migrate\n\nThe output should be similar to:\n\n$ sudo -u cape poetry run python3 manage.py migrate\nCAPE parser: No module named Nighthawk - No module named 'Crypto'\nMissed dependency flare-floss: poetry run pip install -U flare-floss\nOperations to perform:\n Apply all migrations: account, admin, auth, authtoken, contenttypes, openid, sessions, sites, socialaccount, users\nRunning migrations:\n Applying users.0002_reports... OK\n\nAfter the OK, the web service should be back to normal (no need to restart cape-web.service).\n\nNo such table: auth_user\n\nWhen executing:\n\n$ poetry run python manage.py createsuperuser\n\nan error like django.db.utils.OperationalError: no such table: auth_user may be raised. 
In order to solve it just execute the web/manage.py utility with the migrate option:\n\n$ sudo -u cape poetry run python3 web/manage.py migrate\n\nSlow web/API searches when using MongoDB as backend", + "Slow web/API searches when using MongoDB as backend\n\nCheck server lack of resources as memory ram, cpu or even slow hard drive.\n\nPossible issue is the lack of proper indexes.\n\nList your MongoDB indexes:\n\ndb.analysis.getIndexes()\n\nTest your query with explaination. Replace with your search patterns:\n\ndb.analysis.find({\"target.file.name\": \"\"}).explain(\"executionStats\")\n\nPay attention to stage value:\n\nexecutionStages: {\n stage: 'COLLSCAN', # <--- Full collection scan instead of index usage\n\nIf you expect it to search in index, expected output should be like this:\n\nexecutionStages: {\n stage: 'FETCH',\n \n inputStage: {\n stage: 'IXSCAN', # <--- Index usage\n\nHow to delete index\n\ndb.collection.dropIndexes(\"\")", + "CAPE Rooter\n\nThe CAPE Rooter is a new concept, providing root access for various commands to CAPE (which itself generally speaking runs as non-root). This command is currently only available for Ubuntu and Debian-like systems.\n\nIn particular, the rooter helps CAPE out with running network-related commands to provide per-analysis routing options. For more information on that, please refer to the routing document. CAPE and the rooter communicate through a UNIX socket for which the rooter makes sure that CAPE can reach it.\n\nIts usage is as follows:\n\n$ python3 rooter.py -h\nusage: rooter.py [-h] [-g GROUP] [--systemctl SYSTEMCTL] [--iptables IPTABLES] [--iptables-save IPTABLES_SAVE] [--iptables-restore IPTABLES_RESTORE] [--ip IP] [-v] [socket]\n\npositional arguments:\nsocket Unix socket path", + "positional arguments:\nsocket Unix socket path\n\noptional arguments:\n-h, --help show this help message and exit\n-g GROUP, --group GROUP\n Unix socket group\n--systemctl SYSTEMCTL\n Systemctl wrapper script for invoking OpenVPN\n--iptables IPTABLES Path to iptables\n--iptables-save IPTABLES_SAVE\n Path to iptables-save\n--iptables-restore IPTABLES_RESTORE\n Path to iptables-restore\n--ip IP Path to ip\n-v, --verbose Enable verbose logging\n\nWhen executing the rooter utility, it will default to the cuckoo group.\n\nimage\n\nYou must specify the user of the UNIX socket. As recommended in the installation, it should be the cape user. You can do so by executing the following command:\n\n$ sudo python3 utils/rooter.py -g cape\n\nHowever, if you're running CAPE under a user other than cape, you will have to specify this to the rooter as follows:", + "$ sudo python3 utils/rooter.py -g \n\nThe other options are fairly straightforward - you can specify the paths to specific Linux commands. By default, one shouldn't have to do this though, as the rooter takes the default paths for the various utilities as per a default setup.\n\nVirtualenv\n\nSince the rooter must be run as root user, there are some slight complications when using a virtualenv to run CAPE. More specifically, when running sudo python3 utils/rooter.py, the $VIRTUAL_ENV environment variable will not be passed along, due to which Python will not be executed from the same virtualenv as it would have been normally.\n\nTo resolve this one simply has to execute the cape binary from the virtualenv session directly. 
E.g., if your virtualenv is located at ~/venv, then running the rooter command could be done as follows:\n\n$ sudo ~/venv/bin/cape rooter\n\nCAPE Rooter Usage", + "$ sudo ~/venv/bin/cape rooter\n\nCAPE Rooter Usage\n\nUsing the CAPE Rooter is pretty easy. If you know how to start it, you're good to go. Even though CAPE talks with the CAPE Rooter for each analysis with a routing option other than routing_none, the CAPE Rooter does not keep any state or attach to any CAPE instance in particular.\n\nTherefore, once the CAPE Rooter has been started you may leave it be - the CAPE Rooter will take care of itself from that point onwards, no matter how often you restart your CAPE instance.", + "CAPE advanced administration\n\nWIP YET!\n\nEverything is easy when you have one server. But when you have many servers or even a cluster, some parts become more complicated. And when you run your private fork with custom parts of CAPE, that is where the challenge starts. For that reason I wrote admin/admin.py. With this utility script you can handle a lot of the different situations that @doomedraven faced with his CAPE clusters. Just to mention some:\n\nServers in different networks that require different SSH pivoting.\n\nDeploy 1 or N modified files (to be pushed to the repo), or files merged by another person that you need to deploy after git pull.\n\nCompare the upstream repo to your private fork or to the list of files on your servers. This helps spot badly deployed files, where the sha256 doesn't match.\n\nExecute commands on all servers.\n\nPull files.\n\nSee -h for the rest of the options\n\nDependencies", + "Execute commands on all servers.\n\nPull files.\n\nSee -h for the rest of the options\n\nDependencies\n\nYou need to add your ssh key to .ssh/authorized_keys. I personally suggest adding it under the root user.\n\nSSH Pivoting explained\n\nSSH pivoting is when you access one server using another as a proxy. If you need a deeper explanation of this, Google it! admin.py supports two types of pivoting, simple and more complex. You need to configure admin/admin_conf.py\n\nComparing files\n\nThe idea of this is to spot files that don't match and fix them. Right now only deletion works, but in the future it will support deploying mismatched files.", + "In case you use your own fork of CAPE, it is good to compare from time to time to check that you didn't miss any update and that all files are properly updated. Some of us will have made custom mods to some files, for example file_extra_info.py. You can exclude them in the config under EXCLUDE_FILENAMES. Another known situation is that most advanced users will have their own YARA rules, config extractors, etc. For that, my personal suggestion is to use a prefix of your choice so that you can filter them out in the config with EXCLUDE_PREFIX. To generate the repository listings, run:\n\npoetry run python admin/admin.py -gfl --filename \n\npoetry run python admin/admin.py -gfl --filename upstream\n\nThe rest of the possibilities", + "Analysis Packages\n\nThe analysis packages are a core component of CAPE Sandbox. They consist of structured Python classes that, when executed in the guest machines, describe how CAPE's analyzer component should conduct the analysis.\n\nCAPE provides some default analysis packages that you can use, but you can create your own or modify the existing ones. 
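For orientation, a custom package is just a small Python class dropped into the packages folder. The following is a minimal sketch modeled on the bundled exe package; the Package base class, the options dictionary and the execute() helper are assumed here from lib.common.abstracts as used by the shipped packages, so check an existing package such as exe.py for the exact helper signatures before relying on this:

from lib.common.abstracts import Package


class MySample(Package):
    # Minimal example package: run the submitted file directly.

    def start(self, path):
        # "arguments" comes from the submission options (key1=value1,key2=value2)
        args = self.options.get("arguments")
        # Hand the sample to the analyzer; this mirrors what the bundled exe package does.
        return self.execute(path, args, path)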
You can find them at analyzer/windows/modules/packages/.\n\nAs described in ../usage/submit, you can specify some options to the analysis packages in the form of key1=value1,key2=value2. The existing analysis packages already include some default options that can be enabled.\n\nThe following is a list of the existing packages in alphabetical order:\n\naccess: used to run and analyze Microsoft Office Access files via msaccess.exe.\n\napplet: used to run and analyze Java applets via firefox.exe or iexplore.exe.\n\narchive: used to run and analyze archives such as ISO, VHD and anything else that 7-Zip can extract via 7z.exe.", + "Explanation how it works can be found in this Technical Session for CyberShock 2022, presented by CCCS.\n\nNB: Passing file= as a task option will ensure that the entire archive is passed to the victim VM and extracted there, prior to executing files of interest within in the extracted folder.\n\nchm: used to run and analyze Microsoft Compiled HTML Help files via hh.exe.\n\nchrome: used to open the given URL via chrome.exe.\n\nchromium: used to open the given URL via the Chromium version of chrome.exe.\n\ncpl: used to run and analyze Control Panel Applets via control.exe.\n\ndll: used to run and analyze Dynamically Linked Libraries via rundll32.exe.\n\ndoc_antivm: used to run and analyze Microsoft Word documents via winword.exe or wordview.exe.\n\nNB: Multiple applications are executed prior to the sample's execution, to prevent certain anti-vm techniques.\n\ndoc: used to run and analyze Microsoft Word documents via winword.exe.", + "doc: used to run and analyze Microsoft Word documents via winword.exe.\n\ndoc2016: used to run and analyze Microsoft Word documents via Microsoft Office 2016's winword.exe.\n\nedge: used to open the given URL via msedge.exe.\n\neml: used to run and analyze Electronic Mail files via outlook.exe.\n\nexe: default analysis package used to run and analyze generic Windows executables.\n\nfirefox: used to open the given URL via firefox.exe.\n\ngeneric: used to run and analyze generic samples via cmd.exe.\n\nhta: used to run and analyze HTML Applications via mshta.exe.\n\nhwp: used to run and analyze Hangul Word Processor files via hwp.exe or hword.exe.\n\nichitaro: used to run and analyze Ichitaro Word Processor files via taroview.exe.\n\nie: used to open the given URL via iexplore.exe.\n\ninp: used to run and analyze Inpage Word Processor files via inpage.exe.\n\njar: used to run and analyze Java JAR containers via java.exe.\n\njs_antivm: used to run and analyze JavaScript and JScript Encoded files via wscript.exe.", + "js_antivm: used to run and analyze JavaScript and JScript Encoded files via wscript.exe.\n\nNB: This package opens 20 Calculator windows prior to execution, to prevent certain anti-vm techniques.\n\njs: used to run and analyze JavaScript and JScript Encoded files via wscript.exe.\n\nNB: This package opens 20 Calculator windows prior to .jse execution, to prevent certain anti-vm techniques.\n\nlnk: used to run and analyze Windows Shortcuts via cmd.exe.\n\nmht: used to run and analyze MIME HTML files via iexplore.exe.\n\nmsbuild: used to run and analyze Microsoft Build Engine files via msbuild.exe.\n\nmsg: used to run and analyze Outlook Message Item files via outlook.exe.\n\nmsi: used to run and analyze Windows Installer Package files via msiexec.exe.\n\nnsis: used to run and analyze Nullsoft Scriptable Install System files via cmd.exe.\n\nollydbg: used to run and analyze generic samples via ollydbg.exe.\n\nNB: The ollydbg.exe 
application must be in the analyzer's bin directory.", + "NB: The ollydbg.exe application must be in the analyzer's bin directory.\n\none: used to run and analyze Microsoft OneNote documents via onenote.exe.\n\npdf: used to run and analyze PDF documents via acrord32.exe.\n\nppt: used to run and analyze Microsoft PowerPoint documents via powerpnt.exe.\n\nppt2016: used to run and analyze Microsoft PowerPoint documents via Microsoft Office 2016's powerpnt.exe.\n\nps1_x64: used to run and analyze PowerShell scripts via powershell.exe in SysNative.\n\nNB: This package uses the powershell.exe in SysNative.\n\nps1: used to run and analyze PowerShell scripts via powershell.exe in System32.\n\nNB: This package uses the powershell.exe in System32.\n\npub: used to run and analyze Microsoft Publisher documents via mspub.exe.\n\npub2016: used to run and analyze Microsoft Publisher documents via Microsoft Office 2016's mspub.exe.\n\npython: used to run and analyze Python scripts via py.exe or python.exe.", + "python: used to run and analyze Python scripts via py.exe or python.exe.\n\nrar: extracts WinRAR Compressed Archive files via the rarfile Python package, and runs an executable file (if it exists), with cmd.exe.\n\nNB: The rarfile Python package must be installed on the guest.\n\nreg: used to run and analyze Registry files via reg.exe.\n\nregsvr: used to run and analyze Dynamically Linked Libraries via regsvr32.exe.\n\nsct: used to run and analyze Windows Scriptlet files via regsvr32.exe.\n\nservice_dll: used to run and analyze Service Dynamically Linked Libraries via sc.exe.\n\nservice: used to run and analyze Services via sc.exe.\n\nshellcode_x64: used to run and analyze Shellcode via the 64-bit CAPE loader.\n\nshellcode: used to run and analyze Shellcode via the 32-bit CAPE loader, with unpacking!\n\nswf: used to run and analyze Shockwave Flash via flashplayer.exe.\n\nNB: You need to have flashplayer.exe in the analyzer's bin folder.", + "NB: You need to have flashplayer.exe in the analyzer's bin folder.\n\nvbejse: used to run and analyze VBScript Encoded and JScript Encoded files via wscript.exe.\n\nvbs: used to run and analyze VBScript and VBScript Encoded files via wscript.exe.\n\nwsf: used to run and analyze Windows Script Files via wscript.exe.\n\nxls: used to run and analyze Microsoft Excel documents via excel.exe.\n\nxls2016: used to run and analyze Microsoft Excel documents via Microsoft Office 2016's excel.exe.\n\nxslt: used to run and analyze eXtensible Stylesheet Language Transformation Files via wmic.exe.\n\nxps: used to run and analyze XML Paper Specification Files via xpsrchvw.exe.\n\nzip_compound: used to run and analyze Zip archives with more specific settings.\n\nNB: Either file option must be set, or a __configuration.json file must be present in the zip file. Sample json file:", + "{\n \"path_to_extract\": {\n \"a.exe\": \"%USERPROFILE%\\\\Desktop\\\\a\\\\b\\\\c\",\n \"folder_b\": \"%appdata%\"\n },\n \"target_file\":\"a.exe\"\n}\n\nzip: extract Zip archives via the zipfile Python package, and runs an executable file (if it exists), with cmd.exe.\n\nYou can find more details on how to start creating analysis packages in the ../customization/packages customization chapter.\n\nAs you already know, you can select which analysis package to use by specifying its name at submission time (see submit) as follows:\n\n$ ./utils/submit.py --package /path/to/malware\n\nIf no package is specified, CAPE will try to detect the file type and select the correct analysis package accordingly. 
If the file type is not supported by default, the analysis will be aborted. Therefore we encourage to specify the package name whenever possible.\n\nFor example, to launch a malware sample and specify some options you can do:", + "For example, to launch a malware sample and specify some options you can do:\n\n$ ./utils/submit.py --package dll --options function=FunctionName,loader=explorer.exe /path/to/malware.dll", + "Pattern replacement\n\nReplace/discard any host/network pattern.\n\nCleaning Operation system patterns.\n\nPut your patterns inside of data/safelist/replacepatterns.py\n\nCleaning Network patterns as IP(s)/Domains.", + "Submit an Analysis\n\nsubmitpy\n\napipy\n\ndistpy\n\nwebpy\n\npython\n\nSubmission Utility\n\nThe easiest way to submit an analysis is to use the provided submit.py command-line utility. It currently has the following options available:\n\nusage: submit.py [-h] [--remote REMOTE] [--url] [--package PACKAGE]\n [--custom CUSTOM] [--timeout TIMEOUT] [--options OPTIONS]\n [--priority PRIORITY] [--machine MACHINE]\n [--platform PLATFORM] [--memory] [--enforce-timeout]\n [--clock CLOCK] [--tags TAGS] [--max MAX] [--pattern PATTERN]\n [--shuffle] [--unique] [--quiet]\n target\n\npositional arguments:\n target URL, path to the file or folder to analyze", + "optional arguments:\n -h, --help show this help message and exit\n --remote REMOTE Specify IP:port to a CAPE API server to submit\n remotely\n --url Specify whether the target is an URL\n --package PACKAGE Specify an analysis package\n --custom CUSTOM Specify any custom value\n --timeout TIMEOUT Specify an analysis timeout\n --options OPTIONS Specify options for the analysis package (e.g.\n \"name=value,name2=value2\")\n --priority PRIORITY Specify a priority for the analysis represented by an\n integer\n --machine MACHINE Specify the identifier of a machine you want to use\n --platform PLATFORM Specify the operating system platform you want to use\n (windows/darwin/linux)\n --memory Enable to take a memory dump of the analysis machine\n --enforce-timeout Enable to force the analysis to run for the full\n timeout period", + "timeout period\n --clock CLOCK Set virtual machine clock\n --tags TAGS Specify tags identifier of a machine you want to use\n --max MAX Maximum samples to add in a row\n --pattern PATTERN Pattern of files to submit\n --shuffle Shuffle samples before submitting them\n --unique Only submit new samples, ignore duplicates\n --quiet Only print text on failure", + "If you specify a directory as the target path, all of the files contained within that directory will be submitted for analysis.\n\nThe concept of analysis packages will be dealt with later in this documentation (at packages). The following are some examples of how to use the submit.py tool:\n\nWarning\n\nRemember to use the cape user. 
The following commands are executed as cape.\n\nExample: Submit a local binary:\n\n$ poetry run python utils/submit.py /path/to/binary\n\nExample: Submit an URL:\n\n$ poetry run python utils/submit.py --url http://www.example.com\n\nExample: Submit a local binary and specify a higher priority:\n\n$ poetry run python utils/submit.py --priority 5 /path/to/binary\n\nExample: Submit a local binary and specify a custom analysis timeout of 60 seconds:\n\n$ poetry run python utils/submit.py --timeout 60 /path/to/binary\n\nExample: Submit a local binary and specify a custom analysis package:\n\n$ poetry run python utils/submit.py --package /path/to/binary", + "$ poetry run python utils/submit.py --package /path/to/binary\n\nExample: Submit a local binary and specify a custom analysis package and some options (in this case a command line argument for the malware):\n\n$ poetry run python utils/submit.py --package exe --options arguments=--dosomething /path/to/binary.exe\n\nExample: Submit a local binary to be run on the virtual machine cape1:\n\n$ poetry run python utils/submit.py --machine cape1 /path/to/binary\n\nExample: Submit a local binary to be run on a Windows machine:\n\n$ poetry run python utils/submit.py --platform windows /path/to/binary\n\nExample: Submit a local binary and take a full memory dump of the analysis machine once the analysis is complete:\n\n$ poetry run python utils/submit.py --memory /path/to/binary\n\nExample: Submit a local binary and force the analysis to be executed for the full timeout (disregarding the internal mechanism that CAPE uses to decide when to terminate the analysis):", + "$ poetry run python utils/submit.py --enforce-timeout /path/to/binary\n\nExample: Submit a local binary and set the virtual machine clock. The format is %m-%d-%Y %H:%M:%S. If not specified, the current time is used. For example, if we want to run a sample on January 24th, 2001, at 14:41:20:\n\n$ poetry run python utils/submit.py --clock \"01-24-2001 14:41:20\" /path/to/binary\n\nExample: Submit a sample for Volatility analysis (to reduce side effects of the CAPE hooking, switch it off with options free=True):\n\n$ poetry run python utils/submit.py --memory --options free=True /path/to/binary\n\n--options Options Available\n\nfilename: Rename the sample file\n\nname: This will force family extractor to run, Ex: name=trickbot\n\nexecutiondir: Sets directory to launch the file from. Need not be the same as the directory of sample file. Defaults to %TEMP% if both executiondir and curdir are not specified. Only supports full paths\n\nfree: Run without monitoring (disables many capabilities) Ex: free=1", + "free: Run without monitoring (disables many capabilities) Ex: free=1\n\nforce-sleepskip: Override default sleep skipping behavior: 0 disables all sleep skipping, 1 skips all sleeps.\n\nfull-logs: By default, logs prior to network activity for URL analyses and prior to access of the file in question for non-executable formats are suppressed. Set to 1 to disable log suppression.\n\nforce-flush: For performance reasons, logs are buffered before being sent back to the result server. We make every attempt to flush the buffer at critical points including when exceptions occur, but in some rare termination scenarios, logs may be lost. 
Set to 1 to force flushing of the log buffers after any non-duplicate API is called, set to 2 to force flushing of every log.\n\nno-stealth: Set to 1 to disable anti-anti-VM/sandbox code enabled by default.\n\nbuffer-max: When set to an integer of your choice, changes the maximum number of bytes that can be logged for most API buffers.", + "large-buffer-max: Some hooked APIs permit larger buffers to be logged. To change the limit for this, set this to an integer of your choice.\n\nnorefer: Disables use of a fake referrer when performing URL analyses\n\nfile: When using the zip or rar package, set the name of the file to execute\n\npassword: When using the zip or rar package, set the password to use for extraction. Also used when analyzing password-protected Office documents.\n\nfunction: When using the dll package, set the name of the exported function to execute\n\ndllloader: When using the dll package, set the name of the process loading the DLL (defaults to rundll32.exe).\n\narguments: When using the dll, exe, or python packages, set the arguments to be passed to the executable or exported function.\n\nappdata: When using the exe package, set to 1 to run the executable out of the Application Data path instead of the Temp directory.", + "startbrowser: Setting this option to 1 will launch a browser 30 seconds into the analysis (useful for some banking trojans).\n\nbrowserdelay: Sets the number of seconds to wait before starting the browser with the startbrowser option. Defaults to 30 seconds.\n\nurl: When used with the startbrowser option, this will determine the URL the started browser will access.\n\ndebug: Set to 1 to enable reporting of critical exceptions occurring during analysis, set to 2 to enable reporting of all exceptions.\n\ndisable_hook_content: Set to 1 to remove functionality of all hooks except those critical for monitoring other processes. Set to 2 to apply to all hooks.\n\nhook-type: Valid for 32-bit analyses only. Specifies the hook type to use: direct, indirect, or safe. Safe attempts a Detours-style hook.\n\nserial: Spoof the serial of the system volume as the provided hex value\n\nsingle-process: When set to 1 this will limit behavior monitoring to the initial process only.", + "single-process: When set to 1 this will limit behavior monitoring to the initial process only.\n\nexclude-apis: Exclude the colon-separated list of APIs from being hooked\n\nexclude-dlls: Exclude the colon-separated list of DLLs from being hooked\n\ndropped-limit: Override the default dropped file limit of 100 files\n\ncompression: When set to 1 this will enable CAPE's extraction of compressed payloads\n\nextraction: When set to 1 this will enable CAPE's extraction of payloads from within each process\n\ninjection: When set to 1 this will enable CAPE's capture of injected payloads between processes\n\ncombo: This combines compression, injection and extraction with process dumps\n\ndump-on-api: Dump the calling module when a function from the colon-separated list of APIs is used\n\nbp0: Sets breakpoint 0 (processor/hardware) to a VA or RVA value (or module::export). 
Applies also to bp1-bp3.\n\nfile-offsets: Breakpoints in bp0-bp3 will be interpreted as PE file offsets rather than RVAs", + "file-offsets: Breakpoints in bp0-bp3 will be interpreted as PE file offsets rather than RVAs\n\nbreak-on-return: Sets breakpoints on the return address(es) from a colon-separated list of APIs\n\nbase-on-api: Sets the base address to which breakpoints will be applied (and sets breakpoints)\n\ndepth: Sets the depth an instruction trace will step into (defaults to 0, requires Trace package)\n\ncount: Sets the number of instructions in a trace (defaults to 128, requires Trace package)\n\nreferrer: Specify the referrer to be used for URL tasks, overriding the default Google referrer\n\nloop_detection: Set this option to 1 to enable loop detection (compress call logs - behavior analysis)\n\nstatic: Check if config can be extracted statically, if not, send to vm\n\nDl&Exec add headers: Example: dnl_user_agent: \"CAPE Sandbox\", dnl_referrer: google\n\nservicedesc - for service package: Service description\n\narguments - for service package: Service arguments", + "arguments - for service package: Service arguments\n\nstore_memdump: Will force STORE memdump, only when submitting to analyzer node directly, as distributed cluster can modify this\n\npre_script_args: Command line arguments for pre_script. Example: pre_script_args=file1 file2 file3\n\npre_script_timeout: pre_script_timeout will default to 60 seconds. Script will stop after timeout Example: pre_script_timeout=30\n\nduring_script_args: Command line arguments for during_script. Example: during_script_args=file1 file2 file3\n\npwsh: - for ps1 package: prefer PowerShell Core, if available in the vm\n\ncheck_shellcode: - Setting check_shellcode=0 will disable checking for shellcode during package identification and extracting from archive\n\nunhook-apis: - capability to dynamically unhook previously hooked functions (unhook-apis option takes colon-separated list e.g. unhook-apis=NtSetInformationThread:NtDelayExecution)", + "ttd: - ttd=1. TTD integration (Microsoft Time Travel Debugging). Place TTD binaries in analyzer/windows/bin (with wow64 subdirectory for 32-bit). .trc files output to TTD directory in results folder for manual retrieval\n\nWeb Interface\n\nDetailed usage of the web interface is described in web.\n\nAPI\n\nDetailed usage of the REST API interface is described in api.\n\nDistributed CAPE\n\nDetailed usage of the Distributed CAPE API interface is described in dist.\n\nPython Functions\n\nTo keep track of submissions, samples, and overall execution, CAPE uses a popular Python ORM called SQLAlchemy that allows you to make the sandbox use PostgreSQL, SQLite, MySQL, and several other SQL database systems.\n\nCAPE is designed to be easily integrated into larger solutions and to be fully automated. To automate analysis submission we suggest using the REST API interface described in api, but in case you want to write a Python submission script, you can also use the add_path() and add_url() functions.", + "add_path(file_path[, timeout=0[, package=None[, options=None[, priority=1[, custom=None[, machine=None[, platform=None[, memory=False[, enforce_timeout=False], clock=None[]]]]]]]]])\n\nAdd a local file to the list of pending analysis tasks. 
Returns the ID of the newly generated task.\n\nExample usage:\n\n>>> from lib.cuckoo.core.database import Database\n>>> db = Database()\n>>> db.add_path(\"/tmp/malware.exe\")\n1\n>>>\n\nadd_url(url[, timeout=0[, package=None[, options=None[, priority=1[, custom=None[, machine=None[, platform=None[, memory=False[, enforce_timeout=False], clock=None[]]]]]]]]])\n\nAdd a URL to the list of pending analysis tasks. Returns the ID of the newly generated task.\n\nExample Usage:\n\n>>> from lib.cuckoo.core.database import Database\n>>> db = Database()\n>>> db.add_url(\"http://www.cuckoosandbox.org\")\n2\n>>>\n\nTroubleshooting\n\nsubmit.py\n\nIf you try to submit an analysis using submit.py and your output looks like:", + "submit.py\n\nIf you try to submit an analysis using submit.py and your output looks like:\n\n$ sudo -u cape poetry run python submit.py /path/to/binary/test.exe\nError: adding task to database\n\nIt could be due to errors while trying to communicate with the PostgreSQL instance. PostgreSQL is installed and configured by default when executing cape2.sh. Make sure your PostgreSQL instance is active and running. To check it, execute the following command:\n\n$ sudo systemctl status postgresql\n\nIf the status is other than Active (it can be in exited status, as long as it is Active), there is something that needs to be fixed.\n\nThe logs for PostgreSQL can be found under /var/log/postgresql/*.log.\n\nIf everything is working regarding PostgreSQL, make sure the cape user is able to access (both read and write) the directories involved in the analysis. For example, cape must be able to read and write in /tmp.\n\nAnalysis results\n\nCheck analysis_results.", + "CAPE's debugger\n\nCAPE's debugger is one of the most powerful features of the sandbox: a programmable debugger configured at submission by either Yara signature or submission options, allowing breakpoints to be set dynamically. This allows instruction traces of malware execution to be captured, as well as configuring actions to perform such as control flow manipulation for anti-sandbox bypasses, or dumping decrypted config regions or unpacked payloads.\n\nWhat makes CAPE's debugger unique among Windows debuggers is the fact that it has been built with minimal (almost zero) use of Windows debugging interfaces specifically for the purpose of malware analysis. Its goal is to make maximal use of the processor's debugging hardware but to avoid Windows interfaces which are typically targeted by anti-debug techniques.\n\nThe debugger is not interactive; its actions are pre-determined upon submission and the results can be found in the debugger log, which is presented in a dedicated tab in the UI.", + "The following is a quick guide on getting started with the debugger.\n\nBreakpoints: bp0, bp1, bp2, bp3\n\nThe most important feature of the debugger is the ability to set and catch hardware breakpoints using the debug registers of the CPU. There are four breakpoint slots in the Intel CPU to make use of, however it's worth noting that there is no help from the hardware for implementing a debugger feature like stepping over calls, so to achieve this one of the four breakpoints is needed. There are instructions (such as syscalls) which cannot be stepped into and so must be stepped over. So to allow this, as well as stepping over calls via the 'depth' option, at least one breakpoint must be kept free. 
For more background information on the hardware used here see: https://en.wikipedia.org/wiki/X86_debug_register.", + "Breakpoints are set using the options bp0, bp1, bp2 and bp3, supplying an RVA value. For example bp0=0x1234. The image base for the RVAs can be set dynamically in a number of ways, please see the remainder of the documentation.\n\nIn order to break on entry point, the option can be to set to 'ep': bp0=ep.This will instruct the debugger to break on the entry point of the main executable of each process and begin tracing. (In the case of a DLL, this breakpoint will also be set on the entry point of the DLL).\n\nDepth\n\nIn single-step mode, the behaviour of a trace can be characterised in terms of whether it steps into a call, or over it. From this comes the concept of depth; the debugger will trace at the same depth in a trace by stepping-over calls to deeper functions. Thus if we set a depth of zero (which is also the default) the behaviour will be to step over all the subsequent calls (at least until a ret is encountered):\n\ndepth=0", + "depth=0\n\nIf we set a depth of, say, three, then the debugger will step into calls into further levels of depth three times:\n\ndepth=3\n\nCount\n\nAnother important characteristic of a trace is its length or count of instructions. This is set with the count option, for example:\n\ncount=10000\n\nThe count may also be specified as hexadecimal:\n\ncount=0xff00\n\nThe default count is 0x4000.\n\nBreak-on-return\n\nSometimes it might be more convenient or quicker to take advantage of the fact that a certain API call is made from an interesting code region, with its return or 'caller' address to the region in question accompanying the API output in the behavior log. We can tell the debugger to use that return address as a breakpoint with the break-on-return option, for example:\n\nbreak-on-return=RtlDecompressBuffer\n\nBase-on-api", + "break-on-return=RtlDecompressBuffer\n\nBase-on-api\n\nInstead of breaking directly on the return address of an API, we may just wish to base our breakpoints on the same base address as a particular API. For this we use the base-on-api option, for example: * base-on-api=NtSetInformationThread\n\nThis option requires that the breakpoint RVA value be specified by one of the breakpoint options (bp, br).\n\nBase-on-alloc", + "Base-on-alloc\n\nAn obvious restriction using this method is that the API call from which the image base is determined must be made before the code we wish to put a breakpoint on is executed. For this reason, there exists an alternative option, base-on-alloc, which will attempt to set the breakpoint RVA relative to every newly executable region (whether through allocation or protection). The advantage of this method is that the breakpoint will always be set before the code can execute, but the downside is that breakpoints may repeatedly be set needlessly with allocations that are not of interest. This is simply set by the option: * base-on-alloc=1\n\nActions", + "Actions\n\nOften we might wish to perform an action when a breakpoint is hit. These actions can be defined by the actions: action0, action1, action2, and action3, each corresponding to a respective breakpoint. The action is specified by a simple string (not case sensitive). The list of actions is constantly growing, so if the need arises for further actions, they can be simply added.\n\nThe list of actions and their implementation can be found in Trace.c of Capemon(CAPE's monitor), specifically in the ActionDispatcher. 
It would be really easy to add additionnal actions and there is a lot of other gadgets which could be added there depending on the needs of the debugger's user.\n\nType\n\nAlthough the debugger defaults to execution breakpoints, it is also possible to set data breakpoints either for read-only, or both read & write. This is specified with the options: type0, type1, type2, and type3 for the corresponding breakpoint. The type option uses the following values:\n\nr - read only", + "r - read only\n\nw - write and read\n\nx - execution\n\nbr0, br1, br2, br3\n\nSometimes it may be convenient to set a breakpoint on the return address of a function, for example when it might be easier to write a YARA signature to detect a function but when you wish to break after it has been executed. For this, the br options exist, where br0 will set a breakpoint on the return address of the function at the supplied address. The format for the address is the same as the one for breakpoints mentionned above. Since the return address (for the breakpoint) is fetched from the top of the stack, the addresses supplied must either be the very first instruction of the function or certainly must come before any instruction that modifies the stack pointer such as push or pop.\n\nFake-rdtsc\n\nThis advanced feature is there for interacting with the TSC register. To learn more on it and what it's used for see: https://en.wikipedia.org/wiki/Time_Stamp_Counter.", + "To 'emulate' (skip and fake) the rdtsc instruction, the option fake-rdtsc=1 may be set. This will only have an affect on rdtsc instructions that are traced over by the debugger. If the debugger is not tracing at the time the CPU executes the instruction, it cannot of course fake the return value.\n\nThe effect of this setting is to allow the first traced rdtsc instruction to execute normally, but thereafter to fake the return value with the original return value plus whatever value is specified in the option. For example:\n\n'rdtsc=0x1000'\n\nThis will result in each subsequent rdtsc instruction after the first being faked with a value that has incremented by 0x1000.\n\nPractical examples\n\nFor more and the most up-to-date versions of examples please see https://github.com/kevoreilly/CAPEv2/tree/master/analyzer/windows/data/yara", + "rule Guloader\n{\n meta:\n author = \"kevoreilly\"\n description = \"Guloader bypass\"\n cape_options = \"bp0=$trap0,bp0=$trap1+4,action0=skip,bp1=$trap2+11,bp1=$trap3+19,action1=skip,bp2=$antihook,action2=goto:ntdll::NtAllocateVirtualMemory,count=0,\"\n strings:\n $trap0 = {0F 85 [2] FF FF 81 BD ?? 
00 00 00 [2] 00 00 0F 8F [2] FF FF 39 D2 83 FF 00}\n $trap1 = {49 83 F9 00 75 [1-20] 83 FF 00 [2-6] 81 FF}\n $trap2 = {39 CB 59 01 D7 49 85 C8 83 F9 00 75 B3}\n $trap3 = {61 0F AE E8 0F 31 0F AE E8 C1 E2 20 09 C2 29 F2 83 FA 00 7E CE C3}\n $antihook = {FF 34 08 [0-48] 8F 04 0B [0-80] 83 C1 04 83 F9 18 75 [0-128] FF E3}\n condition:\n 2 of them\n}", + "rule GuloaderB\n{\n meta:\n author = \"kevoreilly\"\n description = \"Guloader bypass 2021 Edition\"\n cape_options = \"bp0=$trap0+12,action0=ret,bp1=$trap1,action1=ret2,bp2=$antihook,action2=goto:ntdll::NtAllocateVirtualMemory,count=0,\"\n strings:\n $trap0 = {81 C6 00 10 00 00 81 FE 00 F0 FF 7F 0F 84 [2] 00 00}\n $trap1 = {31 FF [0-24] (B9|C7 85 F8 00 00 00) 60 5F A9 00}\n $antihook = {FF 34 08 [0-48] 8F 04 0B [0-80] 83 C1 04 83 F9 18 75 [0-128] FF E3}\n condition:\n 2 of them\n}\n\nrule Pafish\n{\n meta:\n author = \"kevoreilly\"\n description = \"Pafish bypass\"\n cape_options = \"bp0=$rdtsc_vmexit-2,action0=SetZeroFlag,count=1\"\n strings:\n $rdtsc_vmexit = {8B 45 E8 80 F4 00 89 C3 8B 45 EC 80 F4 00 89 C6 89 F0 09 D8 85 C0 75 07}\n condition:\n uint16(0) == 0x5A4D and $rdtsc_vmexit\n}", + "rule Ursnif3\n{\n meta:\n author = \"kevoreilly\"\n description = \"Ursnif Config Extraction\"\n cape_options = \"br0=$crypto32-73,instr0=cmp,dumpsize=eax,action0=dumpebx,dumptype0=0x24,count=1\"\n strings:\n $golden_ratio = {8B 70 EC 33 70 F8 33 70 08 33 30 83 C0 04 33 F1 81 F6 B9 79 37 9E C1 C6 0B 89 70 08 41 81 F9 84 00 00 00}\n $crypto32_1 = {8B C3 83 EB 01 85 C0 75 0D 0F B6 16 83 C6 01 89 74 24 14 8D 58 07 8B C2 C1 E8 07 83 E0 01 03 D2 85 C0 0F 84 AB 01 00 00 8B C3 83 EB 01 85 C0 89 5C 24 20 75 13 0F B6 16 83 C6 01 BB 07 00 00 00}\n $crypto32_2 = {8B 45 EC 0F B6 38 FF 45 EC 33 C9 41 8B C7 23 C1 40 40 D1 EF 75 1B 89 4D 08 EB 45}\n condition:\n ($golden_ratio) and any of ($crypto32*)\n}", + "As shown in the example above, the debugger options are passed in the cape_options section of yar files in the analyzer of CAPE but could be passed to the submission itself like other parameters. It is important to note that even through it appear that br0 and br1 would have multiple values in the Guloader rule above, it is not the case and it's not possible to assign multiples values to them. This is because the yara is designed with an assumption in mind: the patterns $trap0 and $trap1 should never appear concurrently in the same sample. This particular sig is designed to deal with two variants of the same malware where bp0 and bp1 will only ever be set to either one of those values.\n\nImporting instruction traces into disassembler\n\nIt is possible to import CAPE's debugger output into a dissassembler. 
One example procedure is as follow:\n\nHighlight CFG in disassembler:", + "Highlight CFG in disassembler:\n\n1 Install lighthouse plugin from\n pip3 install git+https://github.com/kevoreilly/lighthouse\n2 Load payload into IDA\n3 Check image base matches that from debugger log (if not rebase)\n4 Go to File -> Load File -> Code coverage file and load debugger logfile (ignore any warnings - any address outside image base causes these)\n\nimage", + "Clean all Tasks and Samples\n\nTo clean your setup, run -h to see available options:\n\n$ poetry run python utils/cleaners.py -h\n\nTo sum up, this command does the following:\n\nDelete analysis results.\n\nDelete submitted binaries.\n\nDelete all associated information of the tasks and samples in the configured database.\n\nDelete all data in the configured MongoDB (if configured and enabled in reporting.conf).\n\nDelete all data in ElasticSearch (if configured and enabled in reporting.conf).\n\nWarning\n\nIf you use this command you will delete permanently all data stored by CAPE in all storages: file system, SQL database, and MongoDB/ElasticSearch database. Use it only if you are sure you would clean up all the data.\n\nAfter executing the poetry run python cleaners.py --clean utility, you must restart CAPE service as it destroys the database.:\n\n$ sudo systemctl restart cape\n\nAfter any other option, you don't need to restart the service.", + "Starting CAPE\n\nMake sure to run it inside CAPE's root directory:\n\n$ cd /opt/CAPEv2\n\nTo start CAPE, use the command:\n\n$ python3 cuckoo.py\n\nYou will get an output similar to this:\n\nCuckoo Sandbox 2.1-CAPE\nwww.cuckoosandbox.org\nCopyright (c) 2010-2015\n\nCAPE: Config and Payload Extraction\ngithub.com/kevoreilly/CAPEv2\n\n2020-07-06 10:24:38,490 [lib.cuckoo.core.scheduler] INFO: Using \"kvm\" machine manager with max_analysis_count=0, max_machines_count=10, and max_vmstartup_count=10\n2020-07-06 10:24:38,552 [lib.cuckoo.core.scheduler] INFO: Loaded 100 machine/s\n2020-07-06 10:24:38,571 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks.\n\nNow CAPE is ready to run and it's waiting for submissions.\n\ncuckoo.py accepts some command line options as shown by the help:\n\nusage: cuckoo.py [-h] [-q] [-d] [-v] [-a] [-t] [-m MAX_ANALYSIS_COUNT]", + "usage: cuckoo.py [-h] [-q] [-d] [-v] [-a] [-t] [-m MAX_ANALYSIS_COUNT]\n\noptional arguments:\n-h, --help show this help message and exit\n-q, --quiet Display only error messages\n-d, --debug Display debug messages\n-v, --version show program's version number and exit\n-a, --artwork Show artwork\n-t, --test Test startup\n-m MAX_ANALYSIS_COUNT, --max-analysis-count MAX_ANALYSIS_COUNT\n Maximum number of analyses\n\nMost importantly --debug and --quiet respectively increase and decrease the logging verbosity.\n\nPoetry users\n\nIf you used poetry to install dependencies, you should launch cape with the following command:\n\n$ sudo -u cape poetry run python3 cuckoo.py\n\nIf you get any dependency-related error, make sure you execute the extra/libvirt_installer.sh script:\n\n$ sudo -u cape poetry run extra/libvirt_installer.sh\n\nTroubleshooting\n\nPermissionError: [Errno 13] Permission denied: '/opt/CAPEv2/log/cuckoo.log'", + "Troubleshooting\n\nPermissionError: [Errno 13] Permission denied: '/opt/CAPEv2/log/cuckoo.log'\n\nYou are not executing the CAPE (cuckoo.py) file with the appropriate user.\n\nRemember that the user meant to execute CAPE is the cape user. 
In fact, after installing CAPE with cape2.sh, the directory should look similar to the following structure:\n\nimage\n\nIn order to execute CAPE as the cape user you can either launch a shell or execute the following command (notice the command is using Poetry):\n\n$ sudo -u cape poetry run python3 cuckoo.py\n\nCuckooCriticalError: Cannot bind ResultServer on port 2042 because it was in use, bailing\n\nCAPE is already running in the background as cape.service\n\nIf you want to see the logs in realtime printed to stdout, stop the service by running the following command:\n\n$ sudo systemctl stop cape.service\n\nand run cuckoo.py again\n\nCuckooCriticalError: Unable to bind Result server on [Errno 99]\n\nCheck the cuckoo.conf configuration file again.", + "Check the cuckoo.conf configuration file again.\n\nYou will have to provide the host IP for the [resultserver], not the guest IP.\n\nStarting processing data generated by virtual machine\n\nSee -h for all latest options, for better customization:\n\nusage: process.py [-h] [-c] [-d] [-r] [-s] [-p PARALLEL] [-fp] [-mc MAXTASKSPERCHILD] [-md] [-pt PROCESSING_TIMEOUT] id\n\npositional arguments:\nid ID of the analysis to process (auto for continuous processing of unprocessed tasks).", + "optional arguments:\n-h, --help show this help message and exit\n-c, --caperesubmit Allow CAPE resubmit processing.\n-d, --debug Display debug messages\n-r, --report Re-generate report\n-s, --signatures Re-execute signatures on the report\n-p PARALLEL, --parallel PARALLEL\n Number of parallel threads to use (auto mode only).\n-fp, --failed-processing\n reprocess failed processing\n-mc MAXTASKSPERCHILD, --maxtasksperchild MAXTASKSPERCHILD\n Max children tasks per worker\n-md, --memory-debugging\n Enable logging garbage collection related info\n-pt PROCESSING_TIMEOUT, --processing-timeout PROCESSING_TIMEOUT\n Max amount of time spent in processing before we fail a task\n\nCommand example:\n\n$ python3 utils/process.py -p7 auto", + "REST API\n\nTo see the current hosted REST API documentation head to /apiv2/. 
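For example, a minimal authenticated file submission from Python could look like the sketch below (the /apiv2/tasks/create/file/ endpoint path, the token header and the local host are assumptions based on a default setup; confirm the exact routes on your own /apiv2/ page):

import requests

CAPE = "http://127.0.0.1:8000/apiv2/tasks/create/file/"  # assumed local instance
HEADERS = {"Authorization": "Token <your-token>"}        # token generated as shown below

with open("/path/to/sample.exe", "rb") as sample:
    # Upload the sample as multipart form data; extra task options can be passed as form fields.
    response = requests.post(CAPE, headers=HEADERS, files={"file": ("sample.exe", sample)})

print(response.json())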
You will find all endpoints and details on how to do requests.\n\n`API example`: https://capesandbox.com/apiv2/\n\nTo enable the REST API, we use `django-rest-framework`:\n\n$ poetry run pip install djangorestframework\n\nTo generate a user authorization token:\n\n# Ensure you are in CAPE's web directory\ncd /opt/CAPEv2/web\n\n# To create super user aka admin\nsudo -u cape poetry run python3 manage.py createsuperuser\n\n# To create normal user, use web interface /admin/ (in case if you not changed path)\n\n# By hand, only required if auth enabled and user MUST exist\nsudo -u cape poetry run python3 manage.py drf_create_token ", + "# Auto generation local or any public instance\ncurl -d \"username=&password=\" http://127.0.0.1:8000/apiv2/api-token-auth/\ncurl -d \"username=&password=\" https://capesandbox.com/apiv2/api-token-auth/\ncurl -d \"username=&password=\" http(s):///apiv2/api-token-auth/\n\n# Usage\nimport requests\n\nurl = 'http://127.0.0.1:8000/apiv2/'\nheaders = {'Authorization': 'Token '}\nr = requests.get(url, headers=headers)\n\nCAPE throttling, aka requests per minute/hour/day.\n\nRequires token authentication enabled in api.conf\n\nDefault 5/m\n\nYou can change the default throttle limits in api.conf\n\nTo change the user limit go to django admin /admin/ if you didn't change the path, and set the limit per user in the user profile at the bottom.\n\nWarning\n\nAll documentation below this warning is deprecated.\n\napi.py DEPRECATED\n\nResources", + "Warning\n\nAll documentation below this warning is deprecated.\n\napi.py DEPRECATED\n\nResources\n\nFollowing is a list of currently available resources and a brief description of each one. For details click on the resource name.", + "Resource Description POST tasks_create_file Adds a file to the list of pending tasks to be processed and\nanalyzed. POST tasks_create_url Adds an URL to the list of pending tasks to be processed and\nanalyzed. GET tasks_list Returns the list of tasks stored in the internal Cuckoo database.\nYou can optionally specify a limit of entries to return. GET tasks_view Returns the details on the task assigned to the specified ID. GET tasks_delete Removes the given task from the database and deletes the\nresults. GET tasks_report Returns the report generated out of the analysis of the task\nassociated with the specified ID. You can optionally specify which\nreport format to return, if none is specified the JSON report will be\nreturned. GET tasks_shots Retrieves one or all screenshots associated with a given analysis\ntask ID. GET files_view Search the analyzed binaries by MD5 hash, SHA256 hash or internal ID", + "task ID. GET files_view Search the analyzed binaries by MD5 hash, SHA256 hash or internal ID\n(referenced by the tasks details). GET files_get Returns the content of the binary with the specified SHA256\nhash. GET pcap_get Returns the content of the PCAP associated with the given task. GET machines_list Returns the list of analysis machines available to Cuckoo. GET machines_view Returns details on the analysis machine associated with the\nspecified name. GET cuckoo_status Returns the basic cuckoo status, including version and tasks\noverview", + "/tasks/create/file\n\nPOST /tasks/create/file\n\nAdds a file to the list of pending tasks. Returns the ID of the newly created task.\n\nExample request .. code-block:: bash\n\ncurl -F file=@/path/to/file http://localhost:8090/tasks/create/file\n\nExample request using Python .. 
code-block:: python\n\nimport requests\nimport json\n\nREST_URL = \"http://localhost:8090/tasks/create/file\"\nSAMPLE_FILE = \"/path/to/malware.exe\"\n\nwith open(SAMPLE_FILE, \"rb\") as sample:\n    multipart_file = {\"file\": (\"temp_file_name\", sample)}\n    request = requests.post(REST_URL, files=multipart_file)\n\n# Add your code to error checking for request.status_code.\n\njson_decoder = json.JSONDecoder()\ntask_id = json_decoder.decode(request.text)[\"task_id\"]\n\n# Add your code for error checking if task_id is None.\n\nExample response:\n\n{\n \"task_id\" : 1\n}\n\n/tasks/create/url\n\nPOST /tasks/create/url\n\nAdds a URL to the list of pending tasks. Returns the ID of the newly created task.\n\nExample request:\n\ncurl -F url=\"http://www.malicious.site\" http://localhost:8090/tasks/create/url\n\nExample request using Python .. code-block:: python\n\nimport requests\nimport json", + "Example request using Python .. code-block:: python\n\nimport requests\nimport json\n\nREST_URL = \"http://localhost:8090/tasks/create/url\"\nSAMPLE_URL = \"http://example.org/malwr.exe\"\n\nmultipart_url = {\"url\": (\"\", SAMPLE_URL)}\nrequest = requests.post(REST_URL, files=multipart_url)\n\n# Add your code to error checking for request.status_code.\n\njson_decoder = json.JSONDecoder()\ntask_id = json_decoder.decode(request.text)[\"task_id\"]\n\n# Add your code to error checking if task_id is None.\n\nExample response:\n\n{\n \"task_id\" : 1\n}\n\n/tasks/list\n\nGET /tasks/list/ (int: limit) / (int: offset)\n\nReturns list of tasks.\n\nExample request:\n\ncurl http://localhost:8090/tasks/list\n\nExample response:", + "{\n \"tasks\": [\n {\n \"category\": \"url\",\n \"machine\": null,\n \"errors\": [],\n \"target\": \"http://www.malicious.site\",\n \"package\": null,\n \"sample_id\": null,\n \"guest\": {},\n \"custom\": null,\n \"priority\": 1,\n \"platform\": null,\n \"options\": null,\n \"status\": \"pending\",\n \"enforce_timeout\": false,\n \"timeout\": 0,\n \"memory\": false,\n \"tags\": [],\n \"id\": 1,\n \"added_on\": \"2012-12-19 14:18:25\",\n \"completed_on\": null\n },\n {\n \"category\": \"file\",\n \"machine\": null,\n \"errors\": [],\n \"target\": \"/tmp/malware.exe\",\n \"package\": null,\n \"sample_id\": 1,\n \"guest\": {},\n \"custom\": null,\n \"priority\": 1,\n \"platform\": null,\n \"options\": null,\n \"status\": \"pending\",", + "\"platform\": null,\n \"options\": null,\n \"status\": \"pending\",\n \"enforce_timeout\": false,\n \"timeout\": 0,\n \"memory\": false,\n \"tags\": [\n \"32bit\",\n \"acrobat_6\",\n ],\n \"id\": 2,\n \"added_on\": \"2012-12-19 14:18:25\",\n \"completed_on\": null\n }\n ]\n}", + "/tasks/view\n\nGET /tasks/view/ (int: id)\n\nReturns details on the task associated with the specified ID.\n\nExample request:\n\ncurl http://localhost:8090/tasks/view/1\n\nExample response:\n\n{\n \"task\": {\n \"category\": \"url\",\n \"machine\": null,\n \"errors\": [],\n \"target\": \"http://www.malicious.site\",\n \"package\": null,\n \"sample_id\": null,\n \"guest\": {},\n \"custom\": null,\n \"priority\": 1,\n \"platform\": null,\n \"options\": null,\n \"status\": \"pending\",\n \"enforce_timeout\": false,\n \"timeout\": 0,\n \"memory\": false,\n \"tags\": [\n \"32bit\",\n \"acrobat_6\",\n ],\n \"id\": 1,\n \"added_on\": \"2012-12-19 14:18:25\",\n \"completed_on\": null\n }\n}\n\n/tasks/delete\n\nGET /tasks/delete/ (int: id)\n\nRemoves the given task from the database and deletes the results.\n\nExample request:\n\ncurl http://localhost:8090/tasks/delete/1\n\n/tasks/report", + "Example request:\n\ncurl http://localhost:8090/tasks/delete/1\n\n/tasks/report\n\nGET /tasks/report/ (int: id) / (str: format)\n\nReturns the report associated with 
the specified task ID.\n\nExample request:\n\ncurl http://localhost:8090/tasks/report/1\n\n/tasks/screenshots\n\nGET /tasks/screenshots/ (int: id) / (str: number)\n\nReturns one or all screenshots associated with the specified task ID.\n\nExample request:\n\nwget http://localhost:8090/tasks/screenshots/1\n\n/files/view\n\nGET /files/view/md5/ (str: md5)\n\nGET /files/view/sha256/ (str: sha256)\n\nGET /files/view/id/ (int: id)\n\nReturns details on the file matching either the specified MD5 hash, SHA256 hash or ID.\n\nExample request:\n\ncurl http://localhost:8090/files/view/id/1\n\nExample response:", + "Example request:\n\ncurl http://localhost:8090/files/view/id/1\n\nExample response:\n\n{\n \"sample\": {\n \"sha1\": \"da39a3ee5e6b4b0d3255bfef95601890afd80709\",\n \"file_type\": \"empty\",\n \"file_size\": 0,\n \"crc32\": \"00000000\",\n \"ssdeep\": \"3::\",\n \"sha256\": \"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\",\n \"sha512\": \"cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e\",\n \"id\": 1,\n \"md5\": \"d41d8cd98f00b204e9800998ecf8427e\"\n }\n}\n\n/files/get\n\nGET /files/get/ (str: sha256)\n\nReturns the binary content of the file matching the specified SHA256 hash.\n\nExample request:\n\ncurl http://localhost:8090/files/get/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 > sample.exe\n\n/pcap/get\n\nGET /pcap/get/ (int: task)\n\nReturns the content of the PCAP associated with the given task.\n\nExample request:", + "Returns the content of the PCAP associated with the given task.\n\nExample request:\n\ncurl http://localhost:8090/pcap/get/1 > dump.pcap\n\n/machines/list\n\nGET /machines/list\n\nReturns a list with details on the analysis machines available to Cuckoo.\n\nExample request:\n\ncurl http://localhost:8090/machines/list\n\nExample response:\n\n{\n \"machines\": [\n {\n \"status\": null,\n \"locked\": false,\n \"name\": \"cuckoo1\",\n \"resultserver_ip\": \"192.168.56.1\",\n \"ip\": \"192.168.56.101\",\n \"tags\": [\n \"32bit\",\n \"acrobat_6\",\n ],\n \"label\": \"cuckoo1\",\n \"locked_changed_on\": null,\n \"platform\": \"windows\",\n \"snapshot\": null,\n \"interface\": null,\n \"status_changed_on\": null,\n \"id\": 1,\n \"resultserver_port\": \"2042\"\n }\n ]\n}\n\n/machines/view\n\nGET /machines/view/ (str: name)", + "/machines/view\n\nGET /machines/view/ (str: name)\n\nReturns details on the analysis machine associated with the given name.\n\nExample request:\n\ncurl http://localhost:8090/machines/view/cuckoo1\n\nExample response:\n\n{\n \"machine\": {\n \"status\": null,\n \"locked\": false,\n \"name\": \"cuckoo1\",\n \"resultserver_ip\": \"192.168.56.1\",\n \"ip\": \"192.168.56.101\",\n \"tags\": [\n \"32bit\",\n \"acrobat_6\",\n ],\n \"label\": \"cuckoo1\",\n \"locked_changed_on\": null,\n \"platform\": \"windows\",\n \"snapshot\": null,\n \"interface\": null,\n \"status_changed_on\": null,\n \"id\": 1,\n \"resultserver_port\": \"2042\"\n }\n}\n\n/cuckoo/status\n\nGET /cuckoo/status/\n\nReturns status of the cuckoo server.\n\nExample request:\n\ncurl http://localhost:8090/cuckoo/status\n\nExample response:", + "Example request:\n\ncurl http://localhost:8090/cuckoo/status\n\nExample response:\n\n{\n \"tasks\": {\n \"reported\": 165,\n \"running\": 2,\n \"total\": 167,\n \"completed\": 0,\n \"pending\": 0\n },\n \"version\": \"1.0\",\n \"protocol_version\": 1,\n \"hostname\": \"Patient0\",\n \"machines\": {\n \"available\": 4,\n \"total\": 5\n }\n 
\"tools\":[\"vanilla\"]\n}", + "Performance\n\nThere are several ways to tune the CAPE performance\n\nProcessing\n\n\"Processing\" consists of three steps after the malware is executed in a VM. Those are\n\nprocessing of raw data\n\nsignature matching\n\nreporting\n\nProcessing can take up to 30 minutes if the original raw log is large. This is caused by many API calls in that log. Several steps will iterate through that API list which causes a slowdown. There are several ways to mitigate the impact:\n\nEvented signatures\n\nEvented signatures have a common loop through the API calls. Use them wherever possible and either switch the old-style signatures with their api-call loop or convert them to event based signatures\n\nReporting\n\nReports that contain the API log will also iterate through the list. De-activate reports you do not need. For automated environments switching off the html report will be a good choice.\n\nRam-boost", + "Ram-boost\n\nRam boost can be switched on in the configuration (in conf/processing.conf -> ram_boost in [behavior]). This will keep the whole API list in Ram. Do that only if you have plenty of Ram (>20 GB for 8 VMs).", + "CAPE internals\n\nCAPE base core components\n\ncuckoo.py or cape.service - Is in charge of schedule tasks, set proper routing, run them inside of the VM, etc\n\nutils/process.py or cape-processor.service - Is in charge of process the data generated inside of the VM.\n\nutils/rooter.py or cape-rooter.service - Is set proper iptables to route traffic from VM over exit node. As internet, proxy, vpn, etc.\n\nweb/manage.py or cape-web.service - Is web interface. It allows you to see reports if MongoDB or ElasticSearch is enabled, otherwise it only useful for restapi.\n\nCAPE advanced core components\n\nutils/dist.py or cape-dist.service - Allows you to have CAPE cluster with many different workers\n\nutils/fstab.py or cape-fstab.service - Utility for distributed CAPE with NFS mode. It automatically adds entries to /etc/fstab and mounts it. Useful for cloud setups as Google Cloud Platform (GCP) for auto scaling.\n\nHow CAPE processing works?", + "How CAPE processing works?\n\nAll data processing is divided into stages where lib/cuckoo/core/plugins.py does the magic.\n\nCheck out lib/cuckoo/common/abstracts.py -> class for all auxiliary functions that can help you make your code cleaner.\n\nCheck custom/conf/.conf for all features/modules that you can enable/disable.", + "Installation\n\nThis chapter explains how to install CAPE.\n\nNote\n\nThis documentation refers to Host as the underlying operating system on which you are running CAPE (generally being a GNU/Linux distribution) and to Guest as the Windows virtual machine used to run the isolated analysis.\n\nhost/index guest/index guest_physical/index upgrade", + "Upgrade from a previous release\n\nCAPE Sandbox grows fast. In every release, features are added, fixed and/or removed. There are two ways to upgrade your CAPE: start from scratch or migrate your \"old\" setup. The suggested way to upgrade CAPE is to start from a fresh setup because it's easier and faster than migrating your old setup.\n\nUpgrade starting from scratch\n\nTo start from scratch you have to perform a fresh setup as described in index. 
The following steps are suggested:\n\nBack up your installation.\n\nRead the documentation shipped with the new release.\n\nMake sure to have installed all required dependencies, otherwise install them.\n\nDo a CAPE fresh installation of the Host components.\n\nReconfigure CAPE as explained in this book (copying old configuration files is not safe because options can change between releases).", + "If you are using an external database instead of the default or you are using the MongoDb reporting module is suggested to start all databases from scratch, due to possible schema changes between CAPE releases.\n\nTest it!\n\nIf something goes wrong you probably failed to do some steps during the fresh installation or reconfiguration. Check again the procedure explained in this book.\n\nIt's not recommended to rewrite an old CAPE installation with the latest release files, as it might raise some problems because:\n\nYou are overwriting Python source files (.py) but Python bytecode files (.pyc) are still in place.\n\nThere are configuration file changes across the two versions, check our CHANGELOG file for added or removed configuration options.\n\nThe part of CAPE which runs inside guests (agent.py) may change.\n\nIf you are using an external database like the reporting module for MongoDb a change in the data schema may corrupt your database.\n\nMigrate your CAPE", + "Migrate your CAPE\n\nThe following steps are suggested as a requirement to migrate your data:\n\nBack up your installation.\n\nRead the documentation shipped with the new release.\n\nMake sure to have installed all required dependencies, otherwise install them.\n\nDownload and extract the latest CAPE.\n\nReconfigure CAPE as explained in this book (copying old configuration files is not safe because options can change between releases), and update the agent in your virtual machines.\n\nCopy from your backup \"storage\" and \"db\" folders. (Reports and analyses already present in \"storage\" folder will keep the old format.)\n\nNow setup Alembic (the framework used for migrations) and dateutil with:\n\npoetry run pip install alembic\npoetry run pip install python-dateutil\n\nEnter the alembic migration directory in \"utils/db_migration\" with:\n\ncd utils/db_migration", + "Enter the alembic migration directory in \"utils/db_migration\" with:\n\ncd utils/db_migration\n\nBefore starting the migration script you must set your database connection in \"cuckoo.conf\" if you are using a custom one. Alembic migration script will use the database connection parameters configured in cuckoo.conf.\n\nAgain, please remember to backup before launching the migration tool! 
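A minimal backup sketch, assuming PostgreSQL as the SQL backend and the default /opt/CAPEv2 layout (the database name, user and destination paths below are placeholders for your own setup):\n\n$ pg_dump -U cape cape > /path/to/backup/cape_db.sql\n$ cp -r /opt/CAPEv2/storage /path/to/backup/storage\n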
A wrong configuration may corrupt your data, backup should save kittens!\n\nRun the database migrations with:\n\nalembic upgrade head\n\nPython library upgrades:\n\nPIP3:\n\n$ poetry run pip install -U \n\nTroubleshooting:\n\nWhen trying to update your local CAPE installation with poetry with either of the following commands:\n\n$ sudo -u cape poetry install\n$ sudo -u cape poetry update\n\nyou may encounter the following error:", + "$ sudo -u cape poetry install\n$ sudo -u cape poetry update\n\nyou may encounter the following error:\n\nCalledProcessError\n Command '['git', '--git-dir', '/tmp/pypoetry-git-web3.pyocemorcf/.git', '--work-tree', '/tmp/pypoetry-git-web3.pyocemorcf', 'checkout', 'master']' returned non-zero exit status 1.\n\nOr maybe when trying to update poetry itself with:\n\n$ sudo -u cape poetry self update\n\nyou may face the following error:\n\nRuntimeError\n Poetry was not installed with the recommended installer. Cannot update automatically.\n\nThat is because you probably installed poetry with pip.\n\nIn order to solve it you must first upgrade your local poetry installation with:\n\n$ sudo pip3 install poetry --upgrade\n\nand then run the update command again:\n\n$ sudo -u cape poetry update", + "Preparing the Host\n\nEven though it's reported to run on other operating systems too, CAPE is originally supposed to run on a GNU/Linux native system. For this documentation, we chose the latest Ubuntu LTS as the reference system for the commands examples.\n\ninstallation configuration routing cloud", + "Configuration\n\nCAPE relies on six main configuration files:\n\ncuckoo_conf: for configuring general behavior and analysis options.\n\nauxiliary_conf: for enabling and configuring auxiliary modules.\n\nmemory_conf: Volatility configuration.\n\nprocessing_conf: for enabling and configuring processing modules.\n\nreporting_conf: for enabling or disabling report formats.\n\nrouting_conf: for defining the routing of internet connection for the VMs.\n\nTo get CAPE working you have to edit auxiliary_conf, cuckoo_conf, and machinery_conf at least. We suggest you check all configs before starting, to be familiar with the possibilities that you have and what you want to be done.\n\nNote\n\nWe recommend to you: create a custom/conf/ directory and put files in there whose names are the same as those in the top-level conf/ directory. These files only need to include settings that will override the defaults. In that way you won't have problems with any upcoming changes to default configs.", + "To allow for further flexibility, you can also create a custom/conf/.conf.d/ (e.g. custom/conf/reporting.conf.d/) directory and place files in there. Any file in that directory whose name ends in .conf will be read (in lexicographic order). The last value read for a value will be the one that is used.\n\nWarning\n\nAny section inside the configs that is marked #community at the top refers to a plugin that was developed by our community, but that doesn't mean that we maintain it. Those plugins might be outdated or broken due to software/dependency changes. If you find anything like this broken, you are more than welcome to fix it and submit a pull request. The alternative is to switch off the offending plugin. 
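For instance, a broken community processing module can be switched off with a small override dropped into custom/conf/ (a sketch; the section name below is just a placeholder for the offending plugin):\n\n# custom/conf/processing.conf\n[some_community_module]\nenabled = no\n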
Opening an issue for any of these is pointless as we don't maintain them and cannot support them.\n\ncuckoo.conf\n\nThe first file to edit is conf/cuckoo.conf, it contains the generic configuration options that you might want to verify before launching CAPE.", + "The file is largely commented and self-explaining, but some of the options you might want to pay more attention to are:\n\nmachinery in [cuckoo]: this defines which Machinery module you want CAPE to use to interact with your analysis machines. The value must be the name of the module without extension.\n\nip and port in [resultserver]: defines the local IP address and port that CAPE is going to use to bind the result server to. Make sure this matches the network configuration of your analysis machines, or they won't be able to return the collected results.\n\nconnection in [database]: defines how to connect to the internal database. You can use any DBMS supported by SQLAlchemy using a valid Database Urls syntax.\n\nWarning", + "Warning\n\nCheck your interface for resultserver IP! Some virtualization software (for example Virtualbox) doesn't bring up the virtual networking interfaces until a virtual machine is started. CAPE needs to have the interface where you bind the resultserver up before the start, so please check your network setup. If you are not sure about how to get the interface up, a good trick is to manually start and stop an analysis virtual machine, this will bring virtual networking up. If you are using NAT/PAT in your network, you can set up the resultserver IP to 0.0.0.0 to listen on all interfaces, then use the specific options resultserver_ip and resultserver_port in .conf to specify the address and port as every machine sees them. Note that if you set resultserver IP to 0.0.0.0 in cuckoo.conf you have to set resultserver_ip for all your virtual machines.\n\nNote", + "Note\n\nDefault freespace value is 50GB It is worth mentioning that the default freespace value in cuckoo.conf is 50000 MB aka 50 GB.\n\nPlease check the latest version of cuckoo.conf here: cuckoo.conf.\n\nauxiliary.conf\n\nAuxiliary modules are scripts that run concurrently with malware analysis, this file defines their options. Please see the default version here: auxiliary.conf.\n\n.conf\n\nMachinery modules are scripts that define how Cuckoo should interact with your virtualization software of choice.\n\nEvery module should have a dedicated configuration file that defines the details of the available machines. For example, if you created a kvm.py machinery module, you should specify kvm in conf/cuckoo.conf and have a conf/kvm.conf file.\n\nCAPE provides some modules by default and for the sake of this guide, we'll assume you're going to use KVM. Please see the latest version here: kvm.conf.", + "If you are using KVM (kvm.conf), for each VM you want to use for analysis there must be a dedicated section. First you have to create and configure the VM (following the instructions in the dedicated chapter, see preparing_the_guest). The name of the section must be the same as the label of the VM as printed by $ virsh list --all. 
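For example, if virsh reports a machine labelled win10, the matching kvm.conf entries might look roughly like this (a sketch; adjust the IP, architecture and snapshot name to your own VM):\n\n[kvm]\nmachines = win10\n\n[win10]\nlabel = win10\nplatform = windows\nip = 192.168.122.101\narch = x64\nsnapshot = clean\n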
If no VMs are shown, you can execute the following command sequence: $ virsh, $ connect qemu:///system, $ list --all; or you can check this link to learn how to change the connection in Virtual Manager.\n\nYou can also find examples of other hypervisors like:\n\nVirtualBox: virtualbox.conf.\n\nVMWare: vmware.conf.\n\nThe comments for the options are self-explanatory.\n\nYou can use this same configuration structure for any other machinery module, although existing ones might have some variations or additional configuration options.\n\nmemory.conf", + "memory.conf\n\nThe Volatility tool offers a large set of plugins for memory dump analysis. Some of them are quite slow. In volatility.conf lets you enable or disable the plugins of your choice. To use Volatility you have to follow two steps:\n\nEnable it in processing.conf\n\nEnable memory_dump in cuckoo.conf\n\nIn the memory.conf's basic section you can configure the Volatility profile and the deletion of memory dumps after processing:\n\n# Basic settings\n[basic]\n# Profile to avoid wasting time identifying it\nguest_profile = WinXPSP2x86\n# Delete memory dump after volatility processing.\ndelete_memdump = no\n\nAfter that every plugin has an own section for configuration:\n\n# Scans for hidden/injected code and dlls\n# http://code.google.com/p/volatility/wiki/CommandReference#malfind\n[malfind]\nenabled = on\nfilter = on", + "# Lists hooked api in user mode and kernel space\n# Expect it to be very slow when enabled\n# http://code.google.com/p/volatility/wiki/CommandReference#apihooks\n[apihooks]\nenabled = off\nfilter = on\n\nThe filter configuration helps you to remove known clean data from the resulting report. It can be configured separately for every plugin.\n\nThe filter itself is configured in the [mask] section. You can enter a list of pids in pid_generic to filter out processes:\n\n# Masks. Data that should not be logged\n# Just get this information from your plain VM Snapshot (without running malware)\n# This will filter out unwanted information in the logs\n[mask]\n# pid_generic: a list of process ids that already existed on the machine before the malware was started.\npid_generic = 4, 680, 752, 776, 828, 840, 1000, 1052, 1168, 1364, 1428, 1476, 1808, 452, 580, 652, 248, 1992, 1696, 1260, 1656, 1156\n\nPlease see the latest version here: memory.conf.\n\nprocessing.conf", + "Please see the latest version here: memory.conf.\n\nprocessing.conf\n\nThis file allows you to enable, disable and configure all processing modules. These modules are located under modules/processing/ and define how to digest the raw data collected during the analysis.\n\nYou will find a section for each processing module here: processing.conf.\n\nYou might want to configure the VirusTotal key if you have an account of your own.\n\nreporting.conf\n\nThe conf/reporting.conf file contains information on the automated reports generation. Please see the latest version here: reporting.conf.\n\nBy setting these options to on or off you enable or disable the generation of such reports.\n\nrouting.conf\n\nThe conf/routing.conf file contains information about how the guest VM is connected (or not) to the Internet via the Host, or whether it is isolated. 
This file is used in conjunction with the rooter.py utility.\n\nPlease see the latest version of routing.conf here: routing.conf.", + "Please see the latest version of routing.conf here: routing.conf.\n\nYou can read more about the routing.conf file and its options in the routing chapter and more about the rooter.py utility in the rooter chapter.\n\nUsing environment variables in config files\n\nAny of the above config files may reference environment variables in their values by using %(ENV:VARIABLE_NAME)s. For example, instead of putting a VirusTotal Intelligence API key in auxiliary_conf, you could use the following:\n\n[virustotaldl]\nenabled = yes\ndlintelkey = %(ENV:DLINTELKEY)s\n\nassuming the DLINTELKEY environment variable contains the API key.", + "Deploying CAPE in the Cloud\n\nThe following documentation will detail how to install CAPE using cloud resources.\n\nAzure\n\nTo use Azure as a machinery for CAPE, significant work must be done to deploy and secure the resource groups, network architecture, credential management, etc required.\n\nResource groups\n\nThe description below details how to create the resource groups that are required for isolating resources that should be controlled by the Azure machinery, which is running on a virtual machine and will have raw malware on it.\n\nNetworking\n\nThe description below details how a REST client could send files to CAPE, which would then detonate the submitted files in an isolated network.\n\nA route table resource has to be created in RG2 and applied to direct all traffic from guests through the host (VNET2_RT1). Apply this route table to VNET2_SUB2, and create a new rule that directs all traffic (0.0.0.0/0) to a virtual appliance, aka the IP of VNET2_SUB2_NIC.", + "These are the main networking resources required to deploy CAPE in Azure.\n\nCredential Management\n\nIn the az.conf, there are several crucial details that we will need for accessing/manipulating Azure resources. These details are client_id, secret, and tenant. To get these details, perform the following:\n\nSee ../guest/saving for instructions on how to create a shared gallery image definition version, the equivalent of a snapshot for virtual machine scale sets.", + "Per-Analysis Network Routing\n\nWith the more advanced per-analysis routing, it is naturally also possible to have one default route - a setup that used to be popular before, when the more luxurious routing was not yet available.\n\nIn our examples, we'll be focusing on KVM as it is our default machinery choice.\n\nWarning\n\nIn case if you see proxy IP:PORT in networking for example as tor 9040 port. It happens due that you have installed docker on your host and it breaks some networking filters.\n\nTo fix proxy IP:PORT problem, you need to run following script. 
Save it to a file, give it execution permission with sudo chmod a+x iptables_fix.sh and run it with the proper arguments:\n\n#!/bin/bash\n# Fix when docker breaks your iptables\nif [ $# -eq 0 ] || [ $# -lt 2 ]; then\n echo \"$0 \"\n echo \" example: $0 192.168.1.0 virbr0 eno0\"\n exit 1\nfi", + "echo \"[+] Setting iptables\"\niptables -t nat -A POSTROUTING -o \"$2\" -j MASQUERADE\niptables -A FORWARD -i \"$2\" -o \"$2\" -m state --state RELATED,ESTABLISHED -j ACCEPT\niptables -A FORWARD -i \"$2\" -o \"$2\" -j ACCEPT\niptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT\niptables -I FORWARD -o \"$2\" -d \"$1\"/24 -j ACCEPT\niptables -t nat -A POSTROUTING -s \"$1\"/24 -j MASQUERADE\niptables -A FORWARD -o \"$2\" -m state --state RELATED,ESTABLISHED -j ACCEPT\niptables -A FORWARD -i \"$2\" -o \"$3\" -j ACCEPT\niptables -A FORWARD -i \"$2\" -o lo -j ACCEPT", + "echo \"[+] Setting network options\"\n# https://forums.fedoraforum.org/showthread.php?312824-Bridge-broken-after-docker-install&s=ffc1c60cccc19e46c01b9a8e0fcd0c35&p=1804899#post1804899\n{\n echo \"net.bridge.bridge-nf-call-ip6tables=0\";\n echo \"net.bridge.bridge-nf-call-iptables=0\";\n echo \"net.bridge.bridge-nf-call-arptables=0\";\n echo \"net.ipv4.conf.all.forwarding=1\";\n echo \"net.ipv4.ip_forward=1\";\n} >> /etc/sysctl.conf\nsysctl -p\necho \"iptables -A FORWARD -i $2 -o $2 -j ACCEPT\" >> /etc/network/if-pre-up.d/kvm_bridge_iptables\n\nvirsh nwfilter-list\n\nTo make it permanent you can use iptables-save.\n\nPer-Analysis Network Routing Options\n\nFollowing is the list of available routing options.", + "Per-Analysis Network Routing Options\n\nFollowing is the list of available routing options.\n\nrouting_none - No routing whatsoever, the only option that does not require the Rooter to be run (and therefore also the default routing option).\n\nrouting_drop - Completely drops all non-CAPE traffic, including traffic within the VMs' subnet.\n\nrouting_internet - Full internet access as provided by the given network interface.\n\nrouting_inetsim - Routes all traffic to an InetSim instance - which provides fake services - running on the host machine.\n\nrouting_tor - Routes all traffic through Tor.\n\nrouting_tun - Routes traffic through any \"tun\" interface.\n\nrouting_vpn - Routes all traffic through one of perhaps multiple pre-defined VPN endpoints.\n\nrouting_socks - Routes all traffic through one of perhaps multiple pre-defined SOCKS5 proxies.\n\nUsing Per-Analysis Network Routing", + "Using Per-Analysis Network Routing\n\nNow that you know the available network routing options, it is time to use them in practice. 
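In practice the route is selected per task at submission time; as a rough sketch, a REST API submission that asks for the dirty line could look like this (assuming token authentication is enabled, using the same route option shown later in the Tun Routing section; the token placeholder is yours to fill in):\n\ncurl -H 'Authorization: Token <your-token>' -F file=@/path/to/sample.exe -F route=internet http://127.0.0.1:8000/apiv2/tasks/create/file/\n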
Assuming CAPE has been configured properly taking advantage of its features is as simple as starting the CAPE Rooter and choosing a network routing option for your analysis.\n\nDocumentation on starting the Rooter may be found in the cape_rooter_usage document.\n\nBoth global routing and per-analysis routing require ip forwarding to be enabled:\n\n$ echo 1 | sudo tee -a /proc/sys/net/ipv4/ip_forward\n$ sudo sysctl -w net.ipv4.ip_forward=1\n\nWarning\n\nPlease be aware by default these changes do not persist and will be reset after a system restart.\n\nConfiguring netplan\n\nIn modern releases of Ubuntu, all network configuration is handled by netplan, including routing tables.\n\nIf you are using Ubuntu Server, disable cloud-init, which is used by default.", + "If you are using Ubuntu Server, disable cloud-init, which is used by default.\n\nDo this by writing a file at /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg, with the content network: {config: disabled}, then delete /etc/netplan/50-cloud-init.yaml.\n\nIf you are using a desktop version of Ubuntu instead, you will need to disable NetworkManager and enable networkd.\n\nsudo systemctl stop NetworkManager\nsudo systemctl disable NetworkManager\nsudo systemctl mask NetworkManager\n\nsudo systemctl unmask systemd-networkd\nsudo systemctl enable systemd-networkd\nsudo systemctl start systemd-networkd\n\nNext, create your own netplan configuration file manually at /etc/netplan/99-manual.yaml", + "Next, create your own netplan configuration file manually at /etc/netplan/99-manual.yaml\n\nThe example netplan configuration below has a 5G hotspot interface named enx00a0c6000000 for routing_internet (aka the dirty line) and a management interface named enp8s0 for hosting the CAPE web UI, SSH and other administrative services. In this configuration the dirty line is used as the default gateway for all internet traffic on the host. This helps prevent network leaks, firewall IDS/IPS issues, and keeps administrative traffic separate, where it could be placed in its own subnet for additional security.\n\nYou will need to replace the interface names and IP addresses to reflect your own system.\n\nEach interface configuration needs a routes section that describes the routes that can be accessed via that interface. In order for the configuration to work with CAPE's per-analysis routing, each routes section must have an arbitrary but unique table integer value.", + "network:\n version: 2\n renderer: networkd\n ethernets:\n lo:\n addresses: [ \"127.0.0.1/8\", \"::1/128\", \"7.7.7.7/32\" ]\n enx00a0c6000000:\n dhcp4: no\n addresses: [ \"192.168.1.2/24\" ]\n nameservers:\n addresses: [ \"192.168.1.1\" ]\n routes:\n - to: default\n via: 192.168.1.1\n - to: 192.168.1.0/24\n via: 192.168.1.1\n table: 101\n routing-policy:\n - from: 192.168.1.0/24\n table: 101\n enp8s0:\n dhcp4: no\n addresses: [ \"10.23.6.66/24\" ]\n routes:\n - to: 10.23.6.0/24\n via: 10.23.6.1\n table: 102\n routing-policy:\n - from: 10.23.6.0/24\n table: 102", + "Run sudo netplan apply to apply the new netplan configuration. You can verify the new routing rules and tables have been created with:\n\nip r. To show 'main' table.\n\nip r show table X. To show 'X' table, where X is either the number or the name you specified in the netplan file.\n\nip r show table all. 
To show all routing rules from all tables.\n\nNote\n\nThere are some considerations you should take into account when configuring netplan and the other components needed to provide the Hosts with an Internet connection:\n\nIP forwarding MUST be enabled.\n\nThe routing table NUMBER specified in the netplan config file should be the SAME as the one specified in /etc/iproute2/rt_tables.\n\nThe routing table NAME specified in /etc/iproute2/rt_tables (next to its number) should be the SAME as the one specified in routing.conf (with the rt_table field).\n\nProtecting host ports", + "Protecting host ports\n\nBy default, most Linux network services listen on all network interfaces/addresses, leaving the services running on the host machine exposed to potential attacks from the analysis VMs.\n\nTo mitigate this issue, use the ufw firewall included with Ubuntu. It will not break CAPE\u2019s per-analysis network routing.\n\nAllow access to administrative services using the interface that is being used for management of the sandbox. Network interface details can be found by using the ip addr command.\n\nIn this example the management interface name is enp8s0, with an IP address of 10.23.6.66. Replace these values with the proper values for your server.\n\n# HTTP\nsudo ufw allow in on enp8s0 to 10.23.6.66 port 80 proto tcp\n\n# HTTPS\nsudo ufw allow in on enp8s0 to 10.23.6.66 port 443 proto tcp\n\n# SSH\nsudo ufw allow in on enp8s0 to 10.23.6.66 port 22 proto tcp", + "# SSH\nsudo ufw allow in on enp8s0 to 10.23.6.66 port 22 proto tcp\n\n# SMB (smbd is enabled by default on desktop versions of Ubuntu)\nsudo ufw allow in on enp8s0 to 10.23.6.66 port 445 proto tcp\n\n# RDP (if xrdp is used on the server)\nsudo ufw allow in on enp8s0 to 10.23.6.66 port 3389 proto tcp\n\nAllow analysis VMs to access the CAPE result server, which uses TCP port 2042 by default.\n\nIn this example the host interface name is virbr1 with an IP address of 192.168.42.1. Replace these values with the proper values for your server.\n\nsudo ufw allow in on virbr1 to 192.168.42.1 port 2042 proto tcp\n\nEnable the firewall after all of the rules have been configured.\n\nsudo ufw enable\n\nNone Routing\n\nThe default routing mechanism in the sense that CAPE allows the analysis to route as defined by a third party. As in, it doesn't do anything. One may use the none routing when the VMs' network access is already handled entirely outside of CAPE.\n\nDrop Routing", + "Drop Routing\n\nThe drop routing option is somewhat like a default routing_none setup (as in, in a machine where no global iptables rules have been created providing full internet access to VMs or so), except that it is much more aggressive in actively locking down the internet access provided to the VM.\n\nWith drop routing the only traffic possible is internal CAPE traffic and hence any DNS requests or outgoing TCP/IP connections are blocked.\n\nInternet Routing\n\nBy using the internet routing one may provide full internet access to VMs through one of the connected network interfaces. We also refer to this option as the dirty line due to its nature of allowing all potentially malicious samples to connect to the internet through the same uplink.\n\nNote\n\nIt is required to register the dirty line network interface with iproute2 as described in the routing_netplan section.\n\nInetSim Routing", + "InetSim Routing\n\nFor those that have not heard of InetSim, it's a project that provides fake services for malware to talk to. 
To use InetSim routing one will have to set up InetSim on the host machine (or in a separate VM) and configure CAPE so that it knows where to find the InetSim server.\n\nThe configuration for InetSim is self-explanatory and can be found as part of the $CWD/conf/routing.conf configuration file:\n\n[inetsim]\nenabled = yes\nserver = 192.168.122.1\n\nTo quickly get started with InetSim it is possible to download the latest version of the REMnux distribution which features - among many other tools - the latest version of InetSim. Naturally, this VM will require a static IP address which should then be configured in the routing.conf configuration file.\n\nWe suggest running it on a virtual machine to avoid any possible leaks\n\nTor Routing\n\nNote", + "We suggest running it on a virtual machine to avoid any possible leaks\n\nTor Routing\n\nNote\n\nAlthough we highly discourage the use of Tor for malware analysis - the maintainers of Tor exit nodes already have a hard enough time keeping up their servers - it is a well-supported feature.\n\nFirst of all, Tor will have to be installed. Please find instructions on installing the latest stable version of Tor here.\n\nWe'll then have to modify the Tor configuration file (not talking about CAPE's configuration for Tor yet!) To do so, we will have to provide Tor with the listening address and port for TCP/IP connections and UDP requests. For a default KVM setup, where the host machine has IP address 192.168.122.1, the following lines will have to be configured in the /etc/tor/torrc file:\n\nTransPort 192.168.122.1:9040\nDNSPort 192.168.122.1:5353", + "TransPort 192.168.122.1:9040\nDNSPort 192.168.122.1:5353\n\nDon't forget to restart Tor (/etc/init.d/tor restart). That leaves us with the Tor configuration for Cuckoo, which may be found in the $CWD/conf/routing.conf file. The configuration is pretty self-explanatory so we'll leave filling it out as an exercise to the reader (in fact, toggling the enabled field goes a long way):\n\n[tor]\nenabled = yes\ndnsport = 5353\nproxyport = 9040\n\nNote that the port numbers in the /etc/tor/torrc and $CWD/conf/routing.conf files must match for the two to interact correctly.\n\nTun Routing\n\nThis allows you to route via any tun interface. You can pass the tun interface name on demand per analysis. The interface name can be tunX or tun_foo. This assumes you create the tunnel inferface outside of CAPE.\n\nThen you set the route=tun_foo on the /apiv2/tasks/create/file/ API call.\n\nVPN Routing", + "Then you set the route=tun_foo on the /apiv2/tasks/create/file/ API call.\n\nVPN Routing\n\nIt is possible to route analyses through multiple VPNs. By defining a couple of VPNs, perhaps ending up in different countries, it may be possible to see if potentially malicious samples behave differently depending on the country of origin of their IP address.\n\nThe configuration for a VPN is much like the configuration of a VM. For each VPN you will need one section in the $CWD/conf/routing.conf configuration file detailing the relevant information for the VPN. In the configuration, the VPN will also have to be registered in the list of available VPNs (the same as you'd do for registering more VMs).\n\nConfiguration for a single VPN looks roughly as follows:\n\n[vpn]\n# Are VPNs enabled?\nenabled = yes\n\n# Comma-separated list of the available VPNs.\nvpns = vpn0", + "[vpn]\n# Are VPNs enabled?\nenabled = yes\n\n# Comma-separated list of the available VPNs.\nvpns = vpn0\n\n[vpn0]\n# Name of this VPN. 
The name is represented by the filepath to the\n# configuration file, e.g., CAPE would represent /etc/openvpn/cuckoo.conf\n# Note that you can't assign the names \"none\" and \"internet\" as those would\n# conflict with the routing section in cuckoo.conf.\nname = vpn0\n\n# The description of this VPN which will be displayed in the web interface.\n# Can be used to for example describe the country where this VPN ends up.\ndescription = Spain, Europe\n\n# The tun device hardcoded for this VPN. Each VPN *must* be configured to use\n# a hardcoded/persistent tun device by explicitly adding the line \"dev tunX\"\n# to its configuration (e.g., /etc/openvpn/vpn1.conf) where X in tunX is a\n# unique number between 0 and your lucky number of choice.\ninterface = tun0", + "# Routing table name/id for this VPN. If table name is used it *must* be\n# added to /etc/iproute2/rt_tables as \" \" line (e.g., \"201 tun0\").\n# ID and name must be unique across the system (refer /etc/iproute2/rt_tables\n# for existing names and IDs).\nrt_table = tun0\n\nNote\n\nIt is required to register each VPN network interface with netplan as described in the routing_netplan section.\n\nQuick and dirty example of iproute2 configuration for VPN:\n\nExample:\n /etc/iproute2/rt_tables\n 5 host1\n 6 host2\n 7 host3\n\n conf/routing.conf\n [vpn5]\n name = X.ovpn\n description = X\n interface = tunX\n rt_table = host1\n\nBear in mind that you will need to adjust some values inside of VPN route script. Read it!\n\nHelper script vpt2cape.py, read code to understand it\n\nVPN persistence & auto-restart source:", + "Helper script vpt2cape.py, read code to understand it\n\nVPN persistence & auto-restart source:\n\n1. Run the command:\n # sudo nano /etc/default/openvpn`\n and uncomment, or remove, the \u201c#\u201d in front of AUTOSTART=\"all\"\n then press \u2018Ctrl X\u2019 to save the changes and exit the text editor.\n\n2. Move the .ovpn file with the desired server location to the \u2018/etc/openvpn\u2019 folder:\n # sudo cp /location/whereYouDownloadedConfigFilesTo/Germany.ovpn /etc/openvpn/\n\n3. In the \u2018/etc/openvpn\u2019 folder, create a text file called login.creds:\n # sudo nano /etc/openvpn/login.creds\n and enter your IVPN Account ID (starts with \u2018ivpn\u2019) on the first line and any non-blank text on the 2nd line, then press \u2018Ctrl X\u2019 to save the changes and exit the text editor.\n\n4. Change the permissions on the pass file to protect the credentials:\n # sudo chmod 400 /etc/openvpn/login.creds\n\n5. Rename the .ovpn file to \u2018client.conf\u2019:\n # sudo cp /etc/openvpn/Germany.ovpn /etc/openvpn/client.conf", + "6. Reload the daemons:\n# sudo systemctl daemon-reload\n\n7. Start the OpenVPN service:\n # sudo systemctl start openvpn\n\n8. Test if it is working by checking the external IP:\n # curl ifconfig.co\n\n9. If curl is not installed:\n # sudo apt install curl\n\nWireguard VPN\n\nSetup Wireguard\n\nOriginal blog post on how to setup WireGuard with CAPE\n\nInstall wireguard:\n\nsudo apt install wireguard\n\nDownload Wireguard configurations from your VPN provider and copy them into /etc/wireguard/wgX.conf. 
E.g.:\n\n/etc/wireguard/wg1.conf\n/etc/wireguard/wg2.conf\n/etc/wireguard/wg3.conf\n\nEach configuration is for a different exit destination.\n\nAn example config for wg1.conf:\n\n# VPN-exit-CC\n[Interface]\nPrivateKey = \nAddress = xxx.xxx.xxx.xxx/32\nTable = 420", + "# VPN-exit-CC\n[Interface]\nPrivateKey = \nAddress = xxx.xxx.xxx.xxx/32\nTable = 420\n\n# Following 2 lines added in attempt to allow local traffic\nPreUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o %i -j MASQUERADE\nPreDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o %i -j MASQUERADE\n\n[Peer]\nPublicKey = \nAllowedIPs = 0.0.0.0/0\nEndpoint = xxx.xxx.xxx.xxx:51820\n\nThe only changes I made to the original file from my VPN provider were adding Table = 420 and the PreUp and PreDown lines to configure iptables.\n\nThen start the VPN: wg-quick up wg1. If all goes well you can run wg and see that the tunnel is active. If you want to test that it\u2019s working, I suggest:\n\ncurl https://ifconfig.me/\ncurl --interface wg1 https://ifconfig.me/\n\nExample snippet from /opt/CAPEv2/conf/routing.conf configuration:", + "Example snippet from /opt/CAPEv2/conf/routing.conf configuration:\n\n[vpn0]\nname = vpn0\ndescription = vpn_CC_wg1\ninterface = wg1\nrt_table = wg1\n\nNote\n\nIt is required to register each VPN network interface with netplan as described in the routing_netplan section. Check the quick and dirty note in the original VPN section.\n\nSOCKS Routing\n\nYou can also use SOCKS proxy servers to route your traffic. To manage your SOCKS servers you can use the Socks5man software. You can build them yourself with your favorite software, buy them, etc. The configuration is pretty simple and looks like the VPN one, but you don't need to configure anything else.\n\nThis requires installing a dependency: poetry run pip install git+https://github.com/CAPESandbox/socks5man\n\nExample:\n\n[socks5]\n# By default we disable socks5 support as it requires running utils/rooter.py as\n# root next to cuckoo.py (which should run as regular user).\nenabled = no\n\n# Comma-separated list of the available proxies.\nproxies = socks_CC", + "# Comma-separated list of the available proxies.\nproxies = socks_CC\n\n[socks_CC]\nname = CC_socks\ndescription = CC_socks\nproxyport = 5000\ndnsport = 10000\n\nTroubleshooting\n\nConfiguring the Internet connection in the Hosts (VMs) can become a tedious task given the elements involved in the correct functioning. Here you can find several ways of debugging the connections from and to the Hosts besides cuckoo.py -d.\n\nManually testing Internet connection\n\nYou can manually test the Internet connection from inside the VMs without the need of performing any analysis. To do so, you have to use the router_manager.py utility. This utility allows you to enable or disable specific routes and debug them. It is a \"Standalone script to debug VM problems that allows to enable routing on VM\".\n\nFirst, stop the cape-rooter service with:\n\n$ sudo systemctl stop cape-rooter.service\n\nAssuming you already have a VM running, to test the internet connection using router_manager.py you have to execute the following commands:", + "$ sudo python3 router_manager.py -r internet -e --vm-name win1 --verbose\n$ sudo python3 router_manager.py -r internet -d --vm-name win1 --verbose\n\nThe -e flag is used to enable a route and -d is used to disable it. 
You can read more about all the options the utility has by running:\n\n$ sudo python3 router_manager.py -h\n\nNote\n\nThe --vm-name parameters expects any ID from the ones in .conf, not the label you named each VM with. To see the available options you can execute $ sudo python3 router_manager.py --show-vm-names.\n\nWhenever you use the router_manager.py utility to either enable or disable any given route, there are changes made to iptables are you should be able to see them take place.\n\nFor instance, this is how it looks BEFORE enabling any route:\n\n$ ip rule\n0: from all lookup local\n32766: from all lookup main\n32767: from all lookup default\n\nAnd this is how it looks AFTER executing the following commands:", + "And this is how it looks AFTER executing the following commands:\n\n$ sudo python3 router_manager.py -r internet -e --vm-name win1 --verbose\ninternet eno1 eno1 {'label': 'win10', 'platform': 'windows', 'ip': 'X.X.X.133', 'arch': 'x64'} None None\n$ sudo python3 router_manager.py -r internet -e --vm-name win2 --verbose\ninternet eno1 eno1 {'label': 'win10-clone', 'platform': 'windows', 'ip': 'X.X.X.134', 'arch': 'x64'} None None\n\n$ ip rule\n0: from all lookup local\n32764: from X.X.X.134 lookup eno1\n32765: from X.X.X.133 lookup eno1\n32766: from all lookup main\n32767: from all lookup default\n\nThen again, if everything is configured as expected, when executing the utility with the -d option the IP rules should disappear, reverting them to their original state.\n\nIf your routing configuration is correct, you should now be able to successfully ping 8.8.8.8. If you disable the route you shouldn't be able to ping anything on the Internet.\n\nNote", + "Note\n\nSometimes ip rules may remain undeleted for several reasons. You can manually delete them with $ sudo ip rule delete from $IP, where $IP is the IP the rule refers to.\n\nDebugging iptables rules\n\nEvery single time the rooter brings up or down any route (assuming it works as expected) or you do so by using the router_manager.py utility, your iptables set of rules is modified in one way or another.\n\nTo inspect the changes being made and verify them, you can use the watch utility preinstalled in the vast majority of *nix systems. For example, to view rules created by CAPE-rooter or the utility you can run the following command:\n\n$ sudo watch -n 1 iptables -L -n -v\n\nYou can also leverage watch to inspect the connections being made from the Guest to the Host or viceversa:\n\n$ sudo watch -n 1 'netstat -peanut | grep $IP'\n\nwhere $IP is the IP of your Guest.", + "Installing CAPE\n\nProceed with download and installation. Read ../../introduction/what to learn where you can obtain a copy of the sandbox.\n\nAutomated installation, read the full page before you start\n\nWe have automated all work for you but bear in mind that 3rd party dependencies change frequently and can break the installation, so please check the installation log and try to provide the fix / correct issue to the developers.\n\nWarning\n\nWe advise against modifying or updating any package installed by the script explained below. By using package managers like apt there are high chances your KVM/libvirt/CAPE installation will break and you will most likely end up riding the lanes of dependency hell.\n\nTo install KVM\n\nWhile you can install and use any hypervisor you like, we recommend using KVM. 
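Before going further, it is worth confirming that the CPU exposes hardware virtualization extensions (a quick check, independent of CAPE):\n\n$ egrep -c '(vmx|svm)' /proc/cpuinfo\n\nAny value greater than 0 means VT-x/AMD-V is available (it may still need to be enabled in the BIOS/UEFI).\n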
The script to install everything related to KVM (including KVM itself) can be found here: kvm-qemu.sh.\n\nNote", + "Note\n\nWe recommend using the script to install everything related with KVM-Qemu since the script performs a stealthier configuration and achieves better performance than the installation from APT.\n\nBEFORE executing the script, you should replace the occurrences withing the script itself with real hardware patterns. You can use acpidump in Linux and acpiextract in Windows to obtain such patterns, as stated in the script itself.\n\nWarning\n\nIf you are installing or using CAPE in a laboratory environment you can replace with any random 4 chars you like. However, if you are planning to use CAPE in real production environments and you want to hinder the sandbox/VM detection, you should use REAL hardware 4 chars. To find out which chars correspond to each piece of HW, you should use ACPIDUMP/ACPIEXTRACT and Google.\n\nIn order to install KVM itself, execute the following command:\n\n$ sudo chmod a+x kvm-qemu.sh\n$ sudo ./kvm-qemu.sh all | tee kvm-qemu.log", + "$ sudo chmod a+x kvm-qemu.sh\n$ sudo ./kvm-qemu.sh all | tee kvm-qemu.log\n\nreplacing with your actual username.\n\nRemember to reboot after the installation.\n\nIf you want to install Virtual Machine Manager (virt-manager), execute the following command:\n\n$ sudo ./kvm-qemu.sh virtmanager | tee kvm-qemu-virt-manager.log\n\nreplacing with your actual username.\n\nRemember to reboot after the installation.\n\nImportant\n\nIt is important to assert everything works as expected before moving forward. The vast majority of errors at this point can be solved by reinstalling the specific component with kvm-qemu.sh. For example, the error below was raised when trying to open virt-manager but libvirt installation was corrupted for some reason. Reinstalling libvirt with the script solved the issue.\n\nError\n\n.. image:: ../../_images/screenshots/libvirt_error_virtmanager.png\n\nTo install CAPE\n\nThe script to install CAPE can be found here: cape2.sh.\n\nNote", + "To install CAPE\n\nThe script to install CAPE can be found here: cape2.sh.\n\nNote\n\nCAPE is being maintained and updated in a rolling fashion. That is, there are no versions or releases. It is your responsibility to regularly pull the repo and stay up to date.\n\nPlease keep in mind that all our scripts use the -h flag to print the help and usage message. However, it is recommended to read the scripts themselves to understand what they do.\n\nPlease become familiar with available options using:\n\n$ sudo chmod a+x cape2.sh\n$ ./cape2.sh -h\n\nTo install CAPE with all the optimizations, use one of the following commands:\n\n$ sudo ./cape2.sh base cape | tee cape.log\n$ sudo ./cape2.sh all cape | tee cape.log\n\nRemember to reboot after the installation.\n\nThis should install all libraries and services for you, read the code if you need more details. 
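After the reboot, a quick way to confirm that the CAPE units were registered is to list them (a sketch; the service names are given right below):\n\n$ systemctl list-unit-files 'cape*'\n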
Specifically, the installed services are:\n\ncape.service\n\ncape-processor.service\n\ncape-web.service\n\ncape-rooter.service\n\nTo restart any service use:", + "cape-processor.service\n\ncape-web.service\n\ncape-rooter.service\n\nTo restart any service use:\n\n$ systemctl restart \n\nTo see service log use:\n\n$ journalctl -u \n\nTo install dependencies\n\nTo install dependencies with poetry, execute the following command (from the main working directory of CAPE, usually /opt/CAPEv2/):\n\n$ poetry install\n\nOnce the installation is done, you can confirm a virtual environment has been created with:\n\n$ poetry env list\n\nThe output should be similar to:\n\n$ poetry env list\ncapev2-t2x27zRb-py3.10 (Activated)\n\nFrom now on, you will have to execute CAPE within the virtual env of Poetry. To do so you need just poetry run . For example:\n\n$ sudo -u cape poetry run python3 cuckoo.py\n\nIf you need further assistance with Poetry, there are hundreds of cheat sheets on the Internet\n\nOptional dependencies\n\nsudo -u cape poetry run pip install -r extra/optional_dependencies.txt\n\nATTENTION! cape user", + "sudo -u cape poetry run pip install -r extra/optional_dependencies.txt\n\nATTENTION! cape user\n\nOnly the installation scripts and some utilities like rooter.py must be executed with sudo, the rest of configuration scripts and programs MUST be executed under the cape user, which is created in the system after executing cape2.sh.\n\nBy default, the cape user has no login. In order to substitute it and use the cmd on its behalf, you can execute the following command:\n\n$ sudo su - cape -c /bin/bash", + "Preparing the Guest (Physical Machine)\n\nAt this point, you should have configured the CAPE host component and you should have designed and defined the number and the names of the physical machines you are going to use for malware execution.\n\nYou don't need KVM or any other hypervisor to run physical machinery. You only need FOG.\n\nPlease see this writeup for more updated details 15.10.2020\n\nhttps://mariohenkel.medium.com/using-cape-sandbox-and-fog-to-analyze-malware-on-physical-machines-4dda328d4e2c\n\nNow it's time to create such machines and configure them properly.\n\ncreation requirements network ../guest/agent saving", + "Creation of the Physical Machine\n\nOnce you have properly installed <../host/installation> your imaging software, you can proceed with creating all the physical machines you need.\n\nUsing and configuring your imaging software is out of the scope of this guide, so please refer to the official documentation.\n\nNote\n\nYou can find some hints and considerations on how to design and create your virtualized environment in the ../../introduction/sandboxing chapter.\n\nNote\n\nFor analysis purposes, you are recommended to use Windows 10 21H2 with User Access Control disabled.\n\nWhen creating the physical machine, CAPE doesn't require any specific configuration. You can choose the options that best fit your needs.", + "Requirements\n\nTo make CAPE run properly in your physical Windows system, you will have to install some required software and libraries.\n\nInstall Python\n\nPython is a strict requirement for the CAPE guest component (analyzer) to run properly.\n\nYou can download the proper Windows installer from the official website. Also in this case Python > 3.6 is preferred.\n\nSome Python libraries are optional and provide some additional features to the CAPE guest component. 
They include:\n\nPython Image Library: it's used for taking screenshots of the Windows desktop during the analysis.\n\nThey are not strictly required by CAPE to work properly, but you are encouraged to install them if you want to have access to all available features. Make sure to download and install the proper packages according to your Python version.\n\nAdditional Software\n\nAt this point, you should have installed everything needed by CAPE to run properly.", + "At this point, you should have installed everything needed by CAPE to run properly.\n\nDepending on what kind of files you want to analyze and what kind of sandboxed Windows environment you want to run the malware samples in, you might want to install additional software such as browsers, PDF readers, office suites, etc. Remember to disable the \"auto update\" or \"check for updates\" feature of any additional software.\n\nThis is completely up to you and what your needs are. You can get some hints by reading the ../../introduction/sandboxing chapter.\n\nAdditional Host Requirements\n\nOn Debian/Ubuntu:\n\n$ sudo apt-get install samba-common-bin\n\nFor the physical machine manager to work, you must have a way for physical machines to be returned to a clean state. In development/testing Fog (http://www.fogproject.org/) was used as a platform to handle re-imaging the physical machines. However, any re-imaging platform can be used (Clonezilla, Deepfreeze, etc) to accomplish this.", + "Some extras by doomedraven: .. choco.bat: https://github.com/kevoreilly/CAPEv2/raw/master/installer/choco.bat .. disablewin7noise.bat: https://github.com/kevoreilly/CAPEv2/blob/master/installer/disable_win7noise.bat", + "Network Configuration\n\nNow it's time to set up the network for your physical machine.\n\nWindows Settings\n\nBefore configuring the underlying networking of the sandbox, you might want to tweak some settings inside Windows itself.\n\nOne of the most important things to do is disable Windows Firewall and the Automatic Updates. The reason behind this is that they can affect the behavior of the malware under normal circumstances and that they can pollute the network analysis performed by CAPE, by dropping connections or including irrelevant requests.\n\nYou can do so from Windows' Control Panel as shown in the picture:\n\nimage\n\nUsing a physical machine manager requires a few more configuration options than the virtual machine managers to run properly. In addition to the steps laid out in the regular Preparing the Guest section, some settings need to be changed for physical machines to work properly.\n\nEnable auto-login (Allows for the agent to start upon reboot)", + "Enable auto-login (Allows for the agent to start upon reboot)\n\nEnable Remote RPC (Allows for CAPE to reboot the sandbox using RPC)\n\nTurn off paging (Optional)\n\nDisable Screen Saver (Optional)\n\nIn Windows 7 the following commands can be entered into an Administrative command prompt to enable auto-logon and Remote RPC. 
:\n\nreg add \"hklm\\software\\Microsoft\\Windows NT\\CurrentVersion\\WinLogon\" /v DefaultUserName /d /t REG_SZ /f\nreg add \"hklm\\software\\Microsoft\\Windows NT\\CurrentVersion\\WinLogon\" /v DefaultPassword /d /t REG_SZ /f\nreg add \"hklm\\software\\Microsoft\\Windows NT\\CurrentVersion\\WinLogon\" /v AutoAdminLogon /d 1 /t REG_SZ /f\nreg add \"hklm\\system\\CurrentControlSet\\Control\\TerminalServer\" /v AllowRemoteRPC /d 0x01 /t REG_DWORD /f\nreg add \"HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System\" /v LocalAccountTokenFilterPolicy /d 0x01 /t REG_DWORD /f\n\nNetworking", + "Networking\n\nNow you need to decide how to make your physical machine able to access the Internet or your local network.\n\nTo make it work properly you'll have to configure your machine's network so that the Host and the Guest can communicate. Testing the network access by pinging a guest is a good practice, to make sure the virtual network was set up correctly. Use only static IP addresses for your guest, as today CAPE doesn't support DHCP, and using it will break your setup.\n\nThis stage is very much up to your requirements and the characteristics of your virtualization software.\n\nFor physical machines, make sure when setting the IP address of the guest to also set the Gateway and DNS server to be the IP address of the CAPE server on the physical network. For example, if your CAPE server has the IP address of 192.168.1.1, then you would set the Gateway and DNS server in Windows Settings to be 192.168.1.1 as well.\n\nimage", + "Saving the Guest\n\nNow you should be ready to save the physical machine to a clean state. For the physical machine manager to work, you must have a way for physical machines to be returned to a clean state.\n\nBefore doing this make sure you rebooted it softly and that it's currently running, with CAPE's agent running and with Windows fully booted.\n\nNow you can proceed with saving the machine. The way to do it depends on the imaging software you decided to use.\n\nIn development/testing Fog (http://www.fogproject.org/) was used as a platform to handle re-imaging the physical machines. However, any re-imaging platform can be used (Clonezilla, Deepfreeze, etc.) to accomplish this.\n\nIf you follow all the below steps properly, your virtual machine should be ready to be used by CAPE.\n\nFog\n\nAfter installing Fog, you will need to create an image and add an image and a host to the Fog server.", + "To add an image to the fog server, open the Image Management window (http:///fog/management/index.php?node=images) and click \"Create New Image.\" Provide the proper inputs for your OS configuration and click \"Add\"\n\nimage\n\nNext, you will need to add the host you plan to re-image to Fog. To add a host, open a web browser and navigate to the Host Management page of Fog (http:///fog/management/index.php?node=host). Click \"Create New Host.\" Provide the proper inputs for your host configuration. Be sure to select the image you created above from the \"Host Image\" option when finished click the \"Add\" button.\n\nimage", + "image\n\nAt this point, you should be ready to take an image from the guest machine. To take an image you will need to navigate to the Task Management page and list all hosts (http:///fog/management/index.php?node=tasks&sub=listhosts). From here you should be able to click the Upload icon (Green up arrow), which should instantly add a task to the queue to take an image. 
Now you should reboot your CAPE guest image and it should PXE boot into Fog and capture the base image from the CAPE guest.", + "After you have successfully taken an image of the guest machine, you can use that image as one to deploy to the CAPE physical sandbox as needed. It is recommended to use a scheduled task to accomplish this. In order to create a scheduled task to re-image sandboxes, navigate to the Host Management page on Fog (http:///fog/management/index.php?node=host&sub=list). Then click \"Download\" the machine you wish to schedule the re-image task for. From this menu, select \"Schedule Cron-style Deployment\" and put in the values you wish for the schedule to apply to (*/5 * * * *) in the case shown in the screenshot below, but you may need to tweak these times for your environment.\n\nimage", + "Installing the Agent\n\nThe CAPE agent is designed to be cross-platform, therefore you should be able to use it on Windows as well as on Linux and OS X. To make CAPE work properly, you'll have to install and start this agent on every guest.\n\nIn the agent/ directory you will find an agent.py file, just copy it to the Guest operating system (in whatever way you want, perhaps in a temporary shared folder, downloading it from a Host webserver, or mounting a CDROM containing the agent.py file) and run it. This will launch the HTTP server which will listen for connections.\n\nImportant\n\nIt is a MUST to launch agent.py/w with elevated privileges. One of the (arguably) easiest way of doing so is creating a Scheduled Task, as explained further below in this page.", + "On Windows, if you simply launch the script, a Python window will be spawned, with a title similar to C:\\Windows\\py.exe. If you want to hide this window you can rename the file from agent.py to agent.pyw which will prevent the window from spawning upon launching the script.\n\nWarning\n\nIt is encouraged to use the agent in its window-less version (.pyw extension) given that opening a cmd window will definitely interfere with human.py, causing several problems like blocking the agent.py. communication with the host or producing no behavioral analysis output, just to mention some.\n\nDon't forget to test the agent before saving the snapshot. You can do it both navigating to VM_IP:8000 with a browser from your Host or be executing: curl VM_IP:8000. You should see an output similar to the following:\n\nimage\n\nimage\n\nPrior To Windows 10", + "image\n\nimage\n\nPrior To Windows 10\n\nIf you want the script to be launched at Windows' boot, place the file in the admin startup folder. To access this folder, open the app launcher with Win+R and search for \"shell:common startup\" which will open the folder you want (usually C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\StartUp). Do not place the agent in the user startup folder (usually C:\\Users\\\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup) as it will launch the agent without admin privileges and therefore insufficient permissions resulting in the agent not being able to work as intended.\n\nWindows 10+\n\nNote\n\nUsing the scheduler as documented below is not strictly necessary. It is sufficient to take a snapshot with the agent running.\n\nTo start the script at boot, you will need to set the agent to be run as a scheduler task. 
Dropping it in C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\StartUp will result in it being ran with improper privilege.", + "Open Windows menu (Win key) and search for Task Scheduler.\n\nSelect Create Basic Task from the action list.\n\nimage\n\nGive the task a name (for example pizza.pyw, the name is irrelevant as long as you don't make any mention to CAPE or anything blatant for anti-VM detection algorithms) and click Next.\n\nSet the trigger as When I logon and click Next.\n\nIn the Action window, select Start a program and click Next.\n\nIn the Start a program window, select the path of the agent.py, and click Finish.\n\nAfter the task is created, click the Task Scheduler Library and find the one you just created. Right click on it and select Properties.\n\nimage\n\nIn the general tab tell it to Run with highest privileges.\n\nimage\n\nSelect OK.\n\nAfter that all is done, it will come up on the next restart/login.", + "Preparing the Guest\n\nAt this point, you should have configured the CAPE host component and you should have designed and defined the number and the names of the virtual machines you are going to use for malware execution.\n\nNow it's time to create such machines and configure them properly.\n\ncreation requirements agent additional_configuration network troubleshooting saving cloning linux", + "Additional Configuration\n\nIn this chapter we will enumerate several recommendations so as to make your Guest virtual machine as stealthy and operational as it gets. Additionally, we intend to address some of the most common problems that may arise.\n\nWindows Guest\n\nWindows Debloating\n\nThere exist some tools that automatically try to debloat your Windows instance. That is, uninstalling lots of pre-installed software and disabling intrusive features of Windows. The purpose of these tools is optimization, performance, security or all of these. In the context of CAPE, they're useful to reduce noise and the probability of malware not detonating. Examples of these tools are Debloat-Windows-10 or BlackBird. You can find a larger list here.\n\nNote\n\nIt is recommended to use any of these tools to disable as much noise as possible. Remember to create a snapshot before executing them.\n\nDisable Microsoft Store", + "Disable Microsoft Store\n\nSometimes the Microsoft Store opens up as soon as an analysis starts. In order to disable it, you can remove the environment variable %USERPROFILE%\\AppData\\Local\\Microsoft\\WindowsApps from the user PATH, as specified in this issue (#1237).\n\nReduce Overall Noise\n\nSometimes disabling all Windows services (like UAC, defender, update, aero, firewall, etc...) is necessary in order to make the analysis as fluent as possible. Make sure you check this script out and use it to get rid of all unnecessary noise. This is just an example. Your VM may require a different configuration in order to reduce or delete any Windows noise.\n\nWindows automatically enables the Virus Real-time Protection\n\nOne possible annoying behavior of Windows occurs when it automatically enables the real-time protection whenever an analysis is started therefore deleting the sample (if it identifies the sample as malware).", + "To definitely turn it off you can follow one or more options listed in this site.", + "Cloning the Virtual Machine\n\nIf you want to use more than one virtual machine based on a single \"golden image\", there's no need to repeat all the steps done so far: you can clone it. 
This way you'll have a copy of the original virtualized Windows with all requirements already installed.\n\nThere is a Python command-line utility available that can automate this process for you.\n\nThe new virtual machine will also contain all of the settings of the original one, which is not good. Now you need to proceed by repeating the steps explained in network, agent, and saving for this new machine.", + "One alternative to making the clones unique manually is to enable the disguise auxiliary module and the windows_static_route and windows_static_route_gateway options in conf/auxiliary.conf. This auxiliary option is aimed at dnsmasq users, who cannot set the default gateway there because of the isolated routing used in KVM. You can either run it once and take a snapshot to persist the modification, or run the auxiliary module at every analysis.", + "Installing the Linux guest\n\nLinux guests don't have official CAPE support! First, prepare the networking for your machinery platform on the host side.\n\nNext, get the list of virtual machines for which to configure the interface from conf/qemu.conf. For example, ubuntu_x32, ubuntu_x64, ubuntu_arm, ubuntu_mips, ubuntu_mipsel, et cetera. For each VM, preconfigure a network tap interface on the host, required to avoid having to start as root, e.g.:\n\n$ sudo ip tuntap add dev tap_ubuntu_x32 mode tap user cape\n$ sudo ip link set tap_ubuntu_x32 master br0\n$ sudo ip link set dev tap_ubuntu_x32 up\n$ sudo ip link set dev br0 up\n\n$ sudo ip tuntap add dev tap_ubuntu_x64 mode tap user cape\n$ sudo ip link set tap_ubuntu_x64 master br0\n$ sudo ip link set dev tap_ubuntu_x64 up\n$ sudo ip link set dev br0 up\n\nNote that if you run CAPE as a different user, replace ``cape`` in the commands above with your user. There is also a helper script for this in utils/linux_mktaps.sh\n\nPreparing x32/x64 Linux guests\n\nWarning", + "Preparing x32/x64 Linux guests\n\nWarning\n\nFor Linux guests on an Azure hypervisor, installing Python3 32-bit breaks the way that the Azure agent starts: https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/agent-linux#installation. So the use of the monitor is limited to what can be run with the 64-bit version of Python3. You will have to comment out the architecture check in the CAPE agent.py for the CAPE agent to start. To reiterate, this warning is only relevant if you are using an Azure hypervisor.\n\nx32 guests\n\nInstall support file dependencies:\n\n$ sudo apt update\n$ sudo apt install python3-pip systemtap-runtime\n$ sudo pip3 install pyinotify\n$ sudo pip3 install Pillow # optional\n$ sudo pip3 install pyscreenshot # optional\n$ sudo pip3 install pyautogui # optional\n\nx64 guests\n\nInstall support file dependencies (we need Python3 32-bit):", + "x64 guests\n\nInstall support file dependencies (we need Python3 32-bit):\n\n$ sudo dpkg --add-architecture i386\n$ sudo apt update\n$ sudo apt install python3:i386 -y\n$ sudo apt install python3-distutils -y\n$ sudo apt install systemtap-runtime -y\n$ curl -sSL https://bootstrap.pypa.io/get-pip.py -o get-pip.py\n$ sudo python3 get-pip.py\n$ sudo python3 -m pip install pyinotify\n$ sudo python3 -m pip install Pillow # optional\n$ sudo python3 -m pip install pyscreenshot # optional\n$ sudo python3 -m pip install pyautogui # optional\n\nEnsure the agent automatically starts. 
The easiest way is to add it to crontab:\n\n$ sudo crontab -e\n@reboot python3 /path/to/agent.py\n\nDisable the firewall inside of the VM, if it exists:\n\n$ sudo ufw disable\n\nDisable NTP inside of the VM:\n\n$ sudo timedatectl set-ntp off\n\nDisable auto-update for noise reduction:", + "$ sudo timedatectl set-ntp off\n\nDisable auto-update for noise reduction:\n\n$ sudo tee /etc/apt/apt.conf.d/20auto-upgrades << EOF\nAPT::Periodic::Update-Package-Lists \"0\";\nAPT::Periodic::Download-Upgradeable-Packages \"0\";\nAPT::Periodic::AutocleanInterval \"0\";\nAPT::Periodic::Unattended-Upgrade \"0\";\nEOF\n\n$ sudo systemctl stop snapd.service && sudo systemctl mask snapd.service\n\nIf needed, kill the unattended-upgrade process using htop or ps + kill.\n\nOptional - remove preinstalled software and configurations:\n\n$ sudo apt-get purge update-notifier update-manager update-manager-core ubuntu-release-upgrader-core -y\n$ sudo apt-get purge whoopsie ntpdate cups-daemon avahi-autoipd avahi-daemon avahi-utils -y\n$ sudo apt-get purge account-plugin-salut libnss-mdns telepathy-salut -y", + "It is recommended to configure the Linux guest with a static IP addresses. Make sure the machine entry in the configuration has the correct IP address and has the platform variable set to linux. Create a snapshot once the VM has been configured. It is now ready for analysis!\n\nCommunity Feature - Tracee ---\n\nFor more information about Tracee in CAPEv2 and how to install it, visit its integration page: :ref:`tracee`.\n\nTo use [Tracee eBPF event tracing](https://github.com/kevoreilly/CAPEv2/pull/2235) in Linux, you will have to install Docker and the Tracee container in the Ubuntu guest:\n\n`shell docker pull docker.io/aquasec/tracee:0.20.0 docker image tag aquasec/tracee:0.20.0 aquasec/tracee:latest`\n\nAfterwards, enable Tracee using the appropriate options in auxiliary.conf and processing.conf and install the [CAPEv2 Community Repo](https://github.com/CAPESandbox/community). Here is a guide: https://capev2.readthedocs.io/en/latest/usage/utilities.html#community-download-utility.", + "Tracee should be able to automatically highlight events such as fileless execution and syscall hooking.", + "Creation of the Virtual Machine\n\nOnce you have properly installed <../host/installation> your virtualization software, you can create the virtual machines that you need.\n\nThe usage and configuration of your virtualization software is out of scope for this guide, so please refer to the virtualization software's official documentation.\n\nNote\n\nYou can find some hints and considerations on how to design and create your virtualized environment in the ../../introduction/sandboxing chapter.\n\nNote\n\nFor analysis purposes, it is recommended to use Windows 10 21H2 with User Access Control disabled.\n\nNote\n\nKVM Users - Be sure to choose a hard drive image format that supports snapshots, such as QCOW2. See saving for more information.\n\nWhen creating the virtual machine, CAPE doesn't require any specific configuration. Choose the options that best fit your needs.", + "Requirements\n\nTo make CAPE run properly in your virtualized Windows system, you will have to install some required software and libraries.\n\nInstall Python\n\nPython is a strict requirement for the CAPE guest component (analyzer) to run properly.\n\nWarning\n\nPlease note that only 32-bit (x86) versions of Python3 are supported at this time for Windows, due to the way the analyzer interacts with low-level Windows libraries. 
Using a 64-bit version of Python will crash the analyzer in Windows. For other platforms the version of Python can be 64-bit (x64).\n\nYou can download the proper Windows / Linux installer from the official website. Python versions > 3.10 and < 3.13 are preferred.\n\nImportant\n\nWhen installing Python, it is recommended to select the Add Python to PATH option. And remove from that PATH %USERPROFILE%AppDataLocalMicrosoftWindowsApps\n\nimage", + "image\n\nWhen the installation is done, it is recommended to test whether Python is correctly set into your PATH environment variable. In order to do so, you can execute the following commands from a command prompt:\n\n> python --version\n\nYou should be prompted with Python's installed version. If not, make sure you add the binaries to your PATH. There are tutorials galore on the Internet.\n\nSome Python libraries are optional and provide some additional features to the CAPE guest component. They include:\n\nPython Image Library: used for taking screenshots of the Windows desktop during the analysis.\n\nThe recommended installation is the execution of the following commands:\n\n> python -m pip install --upgrade pip\n> python -m pip install Pillow\n\nThese Python libraries are not strictly required by CAPE, but you are encouraged to install them if you want to have access to all available features. Make sure to download and install the proper packages according to your Python version.", + "Additional Software\n\nAt this point, you should have installed everything needed by CAPE to run properly.\n\nDepending on what kind of files you want to analyze and what kind of sandbox environment you want to run the malware samples in, you may want to install additional software such as browsers, PDF readers, office suites, etc.\n\nNote\n\nRemember to disable the \"Auto Update\" or \"Check For Updates\" feature of any additional software that you install.\n\nFor Microsoft Office we recommend Office 2010 SP2. This is both for its susceptibility to exploits typically used in maldocs, and its proven compatibility with CAPE. The only recommended alternative is Office 2016 (32-bit).\n\nWe do not recommend any Office version more recent than 2016 due to lack of proven compatibility with both maldocs and CAPE.\n\nFor hints about what your needs may be, give the ../../introduction/sandboxing chapter a read.", + "Troubleshooting\n\nWhere to start diagnosing\n\nGiven the large number of technologies involved in CAPE installation, configuration and usage, chances are high one or more of them start failing, crashing or simply dying. When this happens, there are a few places you should always look first, since it could help you diagnosing the real problem and avoid wasting time looking at rabbit holes. These places are (no surprise here) the logs generated by each fundamental service involved in CAPE's execution. While all of them can be checked in their corresponding directories, journalctl can be leveraged to check all of them at once.\n\nNote\n\nPlease note the following errors are just random examples used to show how and where to start dealing with CAPE problems. 
You may never face these errors at all!", + "Regardless of the error you are facing there are two places where you should start looking: (1) CAPE's logs (naming inherited from cuckoo) and (2) virtqemu logs (assuming you have installed KVM/QEMU using the kvm_qemu.sh script .)\n\nFor example, lets consider the following situation:\n\nCAPE submission (either via web or ) apparently works, but the VM is never spawned (and analysis never launched).\n\nWithout prior indication, the first place to check are CAPE logs, located in /opt/CAPEv2/log/ (installation directory). The logs corresponding to the analyses are rotated daily, and the current log is cuckoo.log. Checking the contents of /opt/CAPEv2/log/cuckoo.log one could easily spot the culprit:\n\nError\n\nError example in file.\n\nimage", + "Error\n\nError example in file.\n\nimage\n\nFurthermore, the error states something about libvirt. This is a clear indication that the corresponding logs must also be inspected. In this case, the logs of libvirt are stored under the virtqemud service. Whenever something seems wrong regarding the virtual machines, this is the place to look after. It can be inspected with $ sudo journalctl -u virtqemud -r:\n\nError\n\nError examples by inspecting vertqemud logs with journalctl.\n\nimage\n\nimage\n\nAdditionally, you should always try to see if you are able to manually replicate the error in order to discard technologies and find out which one is failing. Considering the scenario above, when trying to manually spawn the virtual machines:\n\nError\n\nError example in Virtual Manager - KVM.\n\nimage\n\nNo Internet connection in the guest", + "Error\n\nError example in Virtual Manager - KVM.\n\nimage\n\nNo Internet connection in the guest\n\nThere are reasons galore why your guest VM has no Internet connection when an analysis is fired up. Before digging into this problem, please make sure you followed the steps at Network Configuration to set up both the virtual machine and its connections. Furthermore, you should read the routing chapter in order to know and understand the different routing modes as well as the rooter chapter to understand what the Rooter is.\n\nSome considerations:\n\ndirtyline should be the interface that provides your host internet connection like eno1, not a virtual interface like virbr1. This must be configured in the routing.conf configuration file.\n\nCheck agent.py is running with elevated privileges within the guest VM.\n\nMake sure you specify the correct STATIC IP in kvm.conf.\n\nMake sure you specified the correct interface in auxiliary.conf.", + "Make sure you specified the correct interface in auxiliary.conf.\n\nAlso, bear in mind there are already created (and some of them solved) issues about this particular problem. For example:\n\nhttps://github.com/kevoreilly/CAPEv2/issues/1234\n\nhttps://github.com/kevoreilly/CAPEv2/issues/1245\n\nhttps://github.com/kevoreilly/CAPEv2/issues/371\n\nhttps://github.com/kevoreilly/CAPEv2/issues/367\n\nhttps://github.com/kevoreilly/CAPEv2/issues/136\n\nPCAP Generation\n\nIf you are facing problems related to either tcpdump or the PCAP generation, take a look at this issue (#1234).\n\nNote\n\nMake sure the pcap group exists in your system and that the user you use to launch CAPE (presumably the cape user) belongs to it as well as the tcpdump binary.\n\nMake sure the correct path is specified in auxiliary.conf for tcpdump. 
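\n\nFor reference, the sniffer section of conf/auxiliary.conf usually looks roughly like the following (the tcpdump path and interface name here are only examples; use the values that match your own setup):\n\n[sniffer]\nenabled = yes\ntcpdump = /usr/bin/tcpdump\ninterface = virbr1\n\n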
Check the path of your local installation of tcpdump with:\n\n$ whereis tcpdump", + "$ whereis tcpdump\n\nCheck permissions of tcpdump binary. cape user must be able to run it. Also check whether you specified the correct interface in auxiliary.conf.\n\nIf you are still facing problems and the PCAP is not generating, verify the tcpdump binary belongs to the pcap group and it has the neede capabilities:\n\n$ sudo chgrp pcap /usr/bin/tcpdump\n$ sudo setcap cap_net_raw,cap_net_admin=eip /usr/bin/tcpdump\n\nOther issues about this problem:\n\nhttps://github.com/kevoreilly/CAPEv2/issues/1193", + "Network Configuration\n\nNow it's time to set up the network for your virtual machine.\n\nWindows Settings\n\nNote\n\nAs was discussed in the previous chapter , any additional configuration like disabling the Windows Firewall and the Automatic Updates should be done before configuring the network as stated below. Given that VMs may be left without internet connection, it is convenient to download and make changes before this happens. The reason for turning off updates and firewall is that these features can affect the behavior of the malware under normal circumstances and they can pollute the network analysis performed by CAPE, by dropping connections or including irrelevant requests.\n\nWindows 10\n\nTo do so in Windows 10, open Control Panel and search for Windows Defender Firewall. Disable it completely:\n\nimage\n\nimage", + "image\n\nimage\n\nThe next step is disabling automatic updates. To do so, open Control Panel and search for Administrative Tools. Open it, then open Services. Look for the Windows Update entry and double-click on it. Set Startup type to disabled and click stop.\n\nimage\n\nWindows XP\n\nYou can do so from Windows' Control Panel as shown in the picture:\n\nimage\n\nVirtual Networking\n\nNow you need to decide whether you want your virtual machine to be able to access the Internet or your local network.\n\nTo make the virtual machine's networking work properly you'll have to configure your machine's network so that the Host and the Guest can communicate.\n\nTesting the network access by pinging a guest from the host is good practice, to make sure that the virtual network was set up correctly.\n\nOnly use static IP addresses for your guests, since CAPE doesn't support DHCP (at least, as of this writing).\n\nWarning", + "Warning\n\nThe range 192.168.122.0/24 is the default range for KVM's first interface (usually virbr01) and it can be used as an ANTI VM check. If you want to read more about ANTI VM checks and how to set up your VM, check this KVM ANTIVM post.\n\nThe recommended setup is using a Host-Only networking layout with proper forwarding and filtering configuration done with iptables on the Host.\n\nWe have automated this for you with:\n\n$ utils/rooter.py\n\nYou can read more about rooter.py in its dedicated chapter: rooter.\n\nIn the chapter Setting a static IP you will find the instructions for configuring a Windows guest OS to use a static IP. In the chapter Creating an isolated network you will find instructions on how to create an isolated network (usually referred to as hostonly) network and use it in your virtual machine. You can find further instructions on creating VMs with Virtual Machine Manage in this post.\n\nCreating an isolated network", + "Creating an isolated network\n\nThe recommended setup is using an isolated network for your VM. 
In order to do so, you can follow the instructions below if you are using KVM and virt-manager (Virtual Machine Manager).\n\nFirst, in the Virtual Machine Manager GUI click on Edit -> Connection Details.\n\nimage\n\nIn the opened window click on the + sign at the bottom left corner. We are now defining the details of the new network. Give it a name (hostonly, for example) and make sure you select Isolated mode. Then, click on the IPv4 configuration drop-down menu and define the range of your network. In the image below only the third octet is changed.\n\nimage\n\nOnce the new isolated network is created, if you already created a VM, you can select it from Virtual Machine Manager by clicking Show virtual hardware details of that specific VM. Then click on the network adapter and choose the recently created network. Then click Apply.\n\nimage", + "image\n\nThe next thing is checking that the new interface was indeed created and the VM is actually using it. From your Host, execute the following command from a command prompt:\n\n> ip a\n\nimage\n\nThere should be an interface with the IP address you specified while creating it. In the image above the specific interface is virbr1.\n\nFrom the guest VM (Windows OS in this example) execute the following command from a command prompt:\n\n> ipconfig\n\nimage\n\nThe assigned IP should be in the range of the hostonly network.\n\nThe guest VM and host must have connectivity between them. In order to check it, you can use tools like ping or telnet.\n\nimage\n\nPlease bear in mind that this time the IP is assigned via DHCP, something CAPE does not support. Please set a static IP for your VM. The next chapter has instructions on that.\n\nSetting a static IP", + "Setting a static IP\n\nTo set up a static IP it is first recommended to inspect the assigned IP, which will be (ideally) in the range of your interface (presumably virbr0). To see your actual IP settings execute the following command from a command prompt:\n\n> ipconfig /all\n\nimage\n\nNote\n\nThe IP addresses and ranges used throughout this chapter are just examples. Please make sure you use your own working configurations and addresses.\n\nOpen Control Panel and search for Network. Find and open the Network and Sharing Center. Click Change adapter settings.\n\nimage\n\nNow open the Ethernet adapter and click Properties.\n\nimage\n\nThen click Internet Protocol Version 4 (TCP/IPv4) and Properties. Set the IP address, Subnet mask, Default gateway and DNS Server according to the results of the ipconfig command.\n\nimage\n\nNote\n\nYou can set as the static IP address the address previously given by DHCP, or any other address you like within the range of your interface.", + "Wait a few seconds and you should have Internet access (in case you are using NAT; bear in mind an isolated network will not provide an Internet connection).\n\nIt is important to check connectivity between the Host and the Guest, like in the previous chapter.\n\nThis stage is very much up to your requirements and the characteristics of your virtualization software.\n\nWarning\n\nVirtual networking errors! Virtual networking is a vital component for CAPE. You must be sure that connectivity works between the host and the guests. Most of the issues reported by users are related to an incorrect networking setup. 
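\n\nA quick way to validate the basics from the host is to ping the guest and query the agent port directly. A minimal sketch, assuming the guest was given the static address 192.168.100.101 (an example address) and that agent.py is already running inside it:\n\n$ ping -c 3 192.168.100.101\n$ curl http://192.168.100.101:8000/\n\n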
If you aren't sure about your networking, check your virtualization software documentation and test connectivity with ping and telnet.\n\nDisable Noisy Network Services\n\nWindows 7 introduced new network services that create a lot of noise and can hinder PCAP processing. Disable them by following the instructions below.\n\nTeredo\n\nOpen a command prompt as Administrator, and run:", + "Teredo\n\nOpen a command prompt as Administrator, and run:\n\n> netsh interface teredo set state disabled\n\nLink Local Multicast Name Resolution (LLMNR)\n\nOpen the Group Policy editor by typing gpedit.msc into the Start Menu search box, and press Enter. Then navigate to Computer Configuration> Administrative Templates> Network> DNS Client, and open Turn off Multicast Name Resolution.\n\nSet the policy to enabled.\n\ngpedit.msc missing\n\nWarning\n\nIf gpedit.msc is not present in your system (if you are using Windows 10 Home Edition, for example), you can enable it by executing the following commands from an Administrator command prompt:\n\n> FOR %F IN (\"%SystemRoot%\\servicing\\Packages\\Microsoft-Windows-GroupPolicy-ClientTools-Package~*.mum\") DO (DISM /Online /NoRestart /Add-Package:\"%F\")\n> FOR %F IN (\"%SystemRoot%\\servicing\\Packages\\Microsoft-Windows-GroupPolicy-ClientExtensions-Package~*.mum\") DO (DISM /Online /NoRestart /Add-Package:\"%F\")", + "If the commands were successful, you should now be able to execute Run (Win+R) -> gpedit.msc.\n\nNetwork Connectivity Status Indicator, Error Reporting, etc\n\nWindows has many diagnostic tools such as Network Connectivity Status Indicator and Error Reporting, that reach out to Microsoft servers over the Internet. Fortunately, these can all be disabled with one Group Policy change.\n\nOpen the Group Policy editor by typing gpedit.msc into the Start Menu search box, and press Enter. Then navigate to Computer Configuration> Administrative Templates> System> Internet Communication Management, and open Restrict Internet Communication.\n\nSet the policy to enabled.", + "Saving the Virtual Machine\n\nNow you should be ready to save the virtual machine to a snapshot state.\n\nBefore doing this, make sure that you have rebooted the guest softly and that it's currently running, with CAPE's agent running and with Windows fully booted.\n\nNow you can proceed with saving the machine, which depends on the virtualization software that you decided to use.\n\nThe virtualization software-specific instructions found below can assist with getting the virtual machine ready to be used by CAPE.\n\nKVM\n\nHere are some helpful links for creating a virtual machine with virt-manager:\n\nCreate a virtual machine with virt-manager aka GUI client\n\nAdvanced KVM preparation for malware analysis", + "Advanced KVM preparation for malware analysis\n\nIf you have decided to adopt KVM, you must use a disk format for your virtual machines that supports snapshots. By default, libvirt tools create RAW virtual disks, and since we need snapshots you'll have to use either QCOW2 or LVM. For the scope of this guide, we adopt QCOW2, since it is easier to set up than LVM.\n\nThe easiest way to create such a virtual disk is by using the tools provided by the libvirt suite. You can either use virsh if you prefer command-line interfaces or virt-manager for a nice GUI. 
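\n\nFor example, a fresh QCOW2 disk can be created directly with qemu-img before attaching it to the VM (the path and size below are only examples):\n\n$ qemu-img create -f qcow2 /var/lib/libvirt/images/cape_win10.qcow2 100G\n\n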
You should be able to directly create the virtual disk in the QCOW2 format, but in case you have a RAW disk you can convert it like this:\n\n$ cd /your/disk/image/path\n$ qemu-img convert -O qcow2 your_disk.raw your_disk.qcow2\n\nNow edit your VM definition as follows:\n\n$ virsh edit \"<Name of VM>\"\n\nFind the disk section, which looks like this:", + "$ virsh edit \"<Name of VM>\"\n\nFind the disk section, which looks like this:\n\n<disk type='file' device='disk'>\n  <driver name='qemu' type='raw'/>\n  <source file='/your/disk/image/path/your_disk.raw'/>\n  <target dev='hda' bus='ide'/>\n  <address type='drive' controller='0' bus='0' target='0' unit='0'/>\n</disk>\n\nAnd change \"type\" to qcow2 and \"source file\" to your qcow2 disk image path, like this:\n\n<disk type='file' device='disk'>\n  <driver name='qemu' type='qcow2'/>\n  <source file='/your/disk/image/path/your_disk.qcow2'/>\n  <target dev='hda' bus='ide'/>\n  <address type='drive' controller='0' bus='0' target='0' unit='0'/>\n</disk>
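\n\nYou can confirm the edit was applied by dumping the definition again and checking the driver type (the VM name is a placeholder here):\n\n$ virsh dumpxml \"<Name of VM>\" | grep \"type='qcow2'\"\n\n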
\n\nKVM by default will pass through a feature flag, viewable in ECX as the 31st bit after executing the CPUID instruction with EAX set to 1. Some malware will use this unprivileged instruction to detect its execution in a VM. One way to avoid this is to modify your VM definition as follows: find the following line:\n\n<domain type='kvm'>\n\nChange it to:", + "\n\nChange it to:\n\n<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>\n\nThen within the domain element, add the following:\n\n<qemu:commandline>\n  <qemu:arg value='-cpu'/>\n  <qemu:arg value='host,-hypervisor'/>\n</qemu:commandline>\n\nInstead of using \"host\", you can also choose from multiple other CPU models from the list displayed with the qemu-system-i386 -cpu help command (SandyBridge, Haswell, etc).\n\nNow test your virtual machine. If everything works, prepare it for snapshotting while running CAPE's agent. This means the virtual machine needs to be running when you take the snapshot. You can take a snapshot with the following command via virsh:\n\n$ virsh snapshot-create \"<Name of VM>\"\n$ virsh snapshot-create-as --domain \"<Name of VM>\" --name \"<Name of snapshot>\"\n\nAfter snapshotting the guest, you can shut it down.\n\nWarning\n\nHaving multiple snapshots can cause errors such as:\n\nERROR: No snapshot found for virtual machine <Name of VM>", + "ERROR: No snapshot found for virtual machine <Name of VM>\n\nVM snapshots can be managed using the following commands.\n\n$ virsh snapshot-list \"<Name of VM>\"\n\n$ virsh snapshot-delete \"<Name of VM>\" \"<Name of snapshot>\"\n\nSnapshot with Virtual Manager (virt-manager)\n\nIf you are using virtual manager (virt-manager) to manage your VMs (as mentioned in the installation_kvm chapter), you can also use it to create the snapshots.\n\nWarning\n\nVirtual manager allows you to create either internal or external snapshots (which you can read more about here). The arguably easier mode of operation is internal snapshots, given that external ones use individual files that may mess up your whole libvirt - qemu - kvm installation in case of name/path modification or loss.\n\nWhen creating a new snapshot, in newer versions of KVM you can select whether you want an internal or an external one:\n\nimage\n\nWhen any given snapshot is external, its label will be suffixed with \"*(External)*\".\n\nimage\n\nVirtualBox", + "image\n\nVirtualBox\n\nIf you are going for VirtualBox you can take the snapshot from the graphical user interface or the command line:\n\n$ VBoxManage snapshot \"<Name of VM>\" take \"<Name of snapshot>\" --pause\n\nAfter the snapshot creation is completed, you can power off the machine and restore it:\n\n$ VBoxManage controlvm \"<Name of VM>\" poweroff\n$ VBoxManage snapshot \"<Name of VM>\" restorecurrent\n\nVMware Workstation\n\nIf you decided to adopt VMware Workstation, you can take the snapshot from the graphical user interface or the command line:\n\n$ vmrun snapshot \"/your/disk/image/path/vmware_image_name.vmx\" your_snapshot_name\n\nWhere your_snapshot_name is the name you choose for the snapshot. After that power off the machine from the GUI or the command line:\n\n$ vmrun stop \"/your/disk/image/path/vmware_image_name.vmx\" hard\n\nXenServer", + "$ vmrun stop \"/your/disk/image/path/vmware_image_name.vmx\" hard\n\nXenServer\n\nIf you decided to adopt XenServer, the XenServer machinery supports starting virtual machines from either disk or a memory snapshot. Creating and reverting memory snapshots require that the Xen guest tools be installed in the virtual machine. The recommended method of booting XenServer virtual machines is through memory snapshots because they can greatly reduce the boot time of virtual machines during analysis. 
If, however, the option of installing the guest tools is not available, the virtual machine can be configured to have its disks reset on boot. Resetting the disk ensures that malware samples cannot permanently modify the virtual machine.\n\nMemory Snapshots\n\nThe Xen guest tools can be installed from the XenCenter application that ships with XenServer. Once installed, restart the virtual machine and ensure that the CAPE agent is running.", + "Snapshots can be taken through the XenCenter application and the command line interface on the control domain (Dom0). When creating the snapshot from XenCenter, ensure that the \"Snapshot disk and memory\" is checked. Once created, right-click on the snapshot and note the snapshot UUID.\n\nTo snapshot from the command line interface, run the following command:\n\n$ xe vm-checkpoint vm=\"vm_uuid_or_name\" new-name-label=\"Snapshot Name/Description\"\n\nThe snapshot UUID is printed to the screen once the command completes.\n\nRegardless of how the snapshot was created, save the UUID in the virtual machine's configuration section. Once the snapshot has been created, you can shut down the virtual machine.\n\nBooting from Disk\n\nIf you can't install the Xen guest tools or if you don't need to use memory snapshots, you will need to ensure that the virtual machine's disks are reset on boot and that the CAPE agent is set to run at boot time.", + "Running the agent at boot time can be configured in Windows by adding a startup item for the agent.\n\nThe following commands must be run while the virtual machine is powered off.\n\nTo set the virtual machine's disks to reset on boot, you'll first need to list all the attached disks for the virtual machine. To list all attached disks, run the following command:\n\n$ xe vm-disk-list vm=\"vm_name_or_uuid\"\n\nIgnoring all CD-ROM and read-only disks, run the following command for each remaining disk to change its behavior to reset on boot:\n\n$ xe vdi-param-set uuid=\"vdi_uuid\" on-boot=reset\n\nAfter the disk is set to reset on boot, no permanent changes can be made to the virtual machine's disk. Modifications that occur while a virtual machine is running will not persist past shutdown.\n\nAzure\n\nOnce you have a virtual machine that is ready to be your golden image for a virtual machine scale set, take a snapshot of the virtual machine's disk.", + "Official documentation on how to do this: Create a snapshot of a virtual hard disk\n\nWe are now going to turn this snapshot into an \"image\", which is the terminology Azure uses as the base for all virtual machines in a scale set.\n\nThe creation of an image from a snapshot takes a while, so be patient.\n\nIn the az.conf file, you will need to specify the Compute Gallery Name as well as the Image Definition Name.", + "Integrations\n\nThis chapter explains how to integrate external/3rd party services to CAPE. CAPE is written in a modular architecture built to be as customizable as it can, to fit the needs of all users.\n\nbox-js curtain librenms suricata", + "Curtain\n\nDetailed writeup by Mandiant's powershell blog post\n\nConfiguration required in Virtual Machine. 
Example for Windows 7:\n\nWindows 7 SP1, .NET at least 4.5, PowerShell 5 preferably over v4\nKB3109118 - Script block logging back port update for WMF4\nx64 - https://cuckoo.sh/vmcloak/Windows6.1-KB3109118-v4-x64.msu\nx32 - https://cuckoo.sh/vmcloak/Windows6.1-KB3109118-v4-x86.msu\nKB2819745 - WMF 4 (Windows Management Framework version 4) update for Windows 7\n\nx64 - https://cuckoo.sh/vmcloak/Windows6.1-KB2819745-x64-MultiPkg.msu\nx32 - https://cuckoo.sh/vmcloak/Windows6.1-KB2819745-x86-MultiPkg.msu\nKB3191566 - https://www.microsoft.com/en-us/download/details.aspx?id=54616", + "You should create the following registry entries:\nreg add \"HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\PowerShell\\ModuleLogging\\ModuleNames\" /v * /t REG_SZ /d * /f /reg:64\nreg add \"HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\PowerShell\\ScriptBlockLogging\" /v EnableScriptBlockLogging /t REG_DWORD /d 00000001 /f /reg:64\nreg add \"HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\PowerShell\\Transcription\" /v EnableTranscripting /t REG_DWORD /d 00000001 /f /reg:64\nreg add \"HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\PowerShell\\Transcription\" /v OutputDirectory /t REG_SZ /d C:\\PSTranscipts /f /reg:64\nreg add \"HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\PowerShell\\Transcription\" /v EnableInvocationHeader /t REG_DWORD /d 00000001 /f /reg:64", + "Suricata\n\nSuricata can be used to grab binaries or the like off the wire and then feed them to CAPEv2 for detonation. This involves several parts.\n\nA box running Suricata listening on a network span.\n\nsuricata_extract_submit from CAPE::Utils for handling found binaries.\n\nA CAPEv2 box for detonation.\n\nmojo_cape_submit from CAPE::Utils for accepting submissions via suricata_extract_submit.\n\nSuricata requires rules that are capable of this and an output configured for file extraction.\n\nCAPE::Utils can be installed via the command cpanm CAPE::Utils and, on some Linux distros, the zlib headers, which on Debian are included in the package zlib1g-dev.\n\nOnce that is installed, a config file for suricata_extract_submit needs to be configured. The default location is /usr/local/etc/suricata_extract_submit.ini.", + "# the API key to use if needed\n#apikey=\n# URL to find mojo_cape_submit at\nurl=http://192.168.14.15:8080/\n# the group/client/whathaveya slug\nslug=foo\n# where Suricata has the file store at\nfilestore=/var/log/suricata/files\n# a file of IPs or subnets to ignore SRC or DEST IPs of\n#ignore=\n# a file of regexes to use for checking host names to ignore\n#ignoreHosts=\n# if it should use HTTPS_PROXY and HTTP_PROXY from ENV or not\nenv_proxy=0\n# stats file holding only the stats for the last run\nstats_file=/var/cache/suricata_extract_submit_stats.json\n# stats dir\nstats_dir=/var/cache/suricata_extract_submit_stats/\n\nAnd then a cron job set up akin to the one below to handle the submission.\n\n*/5 * * * * /usr/local/bin/suricata_extract_submit 2> /dev/null > /dev/null\n\nThe output is safe to dump to /dev/null as the script sends its data to syslog as suricata_extract_submit in the daemon log.\n\nYou can check if this has hung like below.", + "You can check if this has hung like below.\n\n/usr/local/libexec/nagios/check_file_age -i -f /var/run/suricata_extract_submit.pid\n\nAnd if monitoring via LibreNMS, the following line can be added to the SNMPD config to enable monitoring of it. 
There are then several rules available in the rules collection that can be used for alerting upon submission issues.\n\nextend suricata_extract /usr/local/bin/suricata_extract_submit_extend\n\nFor the submission side, CAPE::Utils just needs to be installed on the CAPEv2 system being used for detonation. The default configuration of CAPEv2 does not require /usr/local/etc/cape_utils.ini to be used, but it may be worthwhile reviewing the documentation. You will need to make sure the directories specified via the variables incoming and incoming_json exist and are writable/readable by CAPEv2.", + "And if using the supplied systemd service file, the following config file needs to be configured at /usr/local/etc/mojo_cape_submit.env. For more information on deploying Mojolicious based apps, the listen string, or for writing your own service file or something similar, check out the docs for Mojolicious Deployment.\n\nCAPE_USER=\"cape\"\nLISTEN_ON=\"http://192.168.14.15:8080\"\n\nSecurity: mojo_cape_submit defaults to IP-based auth, which can be controlled by the auth value in the config; the default value of subnet is 192.168.0.0/16,127.0.0.1/8,::1/128,172.16.0.0/12,10.0.0.0/8, which allows submission from anything on common private/local subnets.\n\nIf you are using LibreNMS, you can monitor it via mojo_cape_submit_extend by adding the following to your SNMPD config.\n\nextend mojo_cape_submit /usr/local/bin/mojo_cape_submit_extend", + "Box-js\n\nbox_installation\n\nbox_preparation\n\nbox_starting\n\nbox_restapi\n\nInstallation\n\nQuick and dirty notes on how to integrate box-js into CAPE:\n\n$ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -\n$ sudo apt install docker.io nodejs git\n$ sudo usermod -aG docker cape\n# newgrp docker\n$ docker run hello-world\n$ sudo npm install -g --save npm@latest core-util-is hapi rimraf express express-fileupload\n$ git clone https://github.com/kirk-sayre-work/box-js /opt/box-js\n$ cd /opt/box-js\n$ sudo npm audit fix --force\n\nPreparation\n\nWe will leave fixing and hardening of box-js to you; here are just a few examples:", + "Preparation\n\nWe will leave fixing and hardening of box-js to you; here are just a few examples:\n\nUSERNAME=\"CAPE\"\nIP=\"0.0.0.0\"\nsudo sed -i \"s|\\\\\\\\SYSOP1~1\\\\\\\\|\\\\\\\\$USERNAME\\\\\\\\|g\" emulator/WScriptShell.js\nsudo sed -i \"s|\\\\\\\\Sysop12\\\\\\\\|\\\\\\\\$USERNAME\\\\\\\\|g\" emulator/WScriptShell.js\nsudo sed -i \"s|windows-xp|windows 7|g\" emulator/WScriptShell.js # or 10 who knows\nsudo sed -i \"s|\\\\\\\\MyUsername\\\\\\\\|\\\\\\\\$USERNAME\\\\\\\\|g\" emulator/ShellApplication.js\nsudo sed -i \"s|USER-PC|$USERNAME-PC|g\" emulator/WMI.js\nsudo sed -i \"s|Sysop12|$USERNAME|g\" emulator/WMI.js\nsudo sed -i \"s|127.0.0.1|$IP|g\" integrations/api/api.js\n\nreplace emulator/processes.json with your own; you can use this to generate one:\n\n$ gwmi -Query \"SELECT * FROM Win32_Process\" > a.txt\n$ tools/makeProcList.js\n\ncreate a tar.gz with tar -czvf master.tar.gz box-js-master/:\n\n$ cd integrations/api/\n\nreplace Dockerfile with this content, required to run fixed/patched box-js inside of the Docker:", + "replace Dockerfile with this content, required to run fixed/patched box-js inside of the Docker:\n\nFROM node:10-alpine\n#ENV http_proxy http://PROXY_IP:PORT\n#ENV https_proxy http://PROXY_IP:PORT\nRUN apk update && apk upgrade\nRUN apk add --no-cache bash file gcc m4\nRUN apk add -U --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing aufs-util\n# Install the latest v1 of box-js\nCOPY master.tar.gz 
/samples/\nRUN npm install /samples/master.tar.gz --global --production\nRUN rm /samples/master.tar.gz\nWORKDIR /samples\nCMD box-js /samples --output-dir=/samples --loglevel=debug\n\nStarting box-js rest-api\n\nThe default port is 9000; you can change it inside of api.js\n\n$ node api.js\n\nBox-js rest-api endpoints\n\nSandbox configuration\n\nIn conf/processing.conf enable box-js and set the correct URL", + "Tracee eBPF for Linux\n\nCAPEv2 now has support for [Aqua Security Tracee](https://www.aquasec.com/products/tracee/), an eBPF-based threat detection engine with built-in signatures, for Linux dynamic analysis to complement the existing strace implementation.\n\nTo use it, you need to install the [CAPEv2 Community Repo](https://github.com/CAPESandbox/community). Here is a guide: https://capev2.readthedocs.io/en/latest/usage/utilities.html#community-download-utility.\n\nOnce you have installed the CAPEv2 Community Repo, you should have analyzer/linux/modules/auxiliary/tracee.py.\n\nTracee has functionality to:\n\ncapture artifacts such as loaded kernel modules, suspicious memory regions and eBPF programs in their run-time state, allowing their easy extraction even from packed and encrypted malware\n\noperate at the eBPF level to capture events\n\nThe information captured from Tracee will then be displayed in a results UI:", + "The information captured from Tracee will then be displayed in a results UI:\n\n![Screenshot of the Tracee Behaviour UI](https://github.com/user-attachments/assets/039ea42f-36bd-4530-b5d9-48face5f642b)\n\nConfiguring Tracee using Policies\n\nThe CAPEv2 Tracee module provides analyzer/linux/modules/auxiliary/tracee/policy.yml to Tracee. This policy.yml file defines how Tracee should behave and what events it should capture. You can modify it locally to fit your needs.\n\nDocumentation for the policy file: https://aquasecurity.github.io/tracee/v0.20/docs/policies/\n\nVerifying Functionality\n\nAfter performing the Tracee setup for Linux guests detailed in [Installing the Linux guest](https://capev2.readthedocs.io/en/latest/installation/guest/linux.html), you may want to verify the functionality of your installation and make sure everything is working well.", + "You can obtain a live malware sample for Linux to load into CAPEv2 from https://bazaar.abuse.ch/sample/bd0141e88a0d56b508bc52db4dab68a49b6027a486e4d9514ec0db006fe71eed/. Please be careful with this file as it's actual malware. We do not take responsibility for anything that goes wrong.\n\nOnce the task is finished processing, the \"Detailed Behaviour (Tracee)\" tab ought to be available.", + "LibreNMS\n\nLibreNMS is capable of monitoring stats for CAPEv2. This is handled by an SNMP extend.\n\nwget https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/cape -O /etc/snmp/cape\nchmod +x /etc/snmp/cape\napt-get install libfile-readbackwards-perl libjson-perl libconfig-tiny-perl libdbi-perl libfile-slurp-perl libstatistics-lite-perl libdbi-perl libdbd-pg-perl\n\nWith that all in place, you will then need to create a config file for it at /usr/local/etc/cape_extend.ini. Unless you are doing anything custom DB-wise, the settings below (but with the proper password) will work.\n\n# DBI connection DSN\ndsn=dbi:Pg:dbname=cape;host=127.0.0.1\n\n# DB user\nuser=cape\n\n# DB PW\npass=12345\n\nThis module will also send warnings, errors, and critical errors found in the logs to LibreNMS. To filter these, /usr/local/etc/cape_extend.ignores can be used. 
The format for that is as below.\n\nlevel regexp\n", + "level regexp\n\nThe ignore level will be lowercased. The separator between the level and the regexp pattern is /[\ \t]+/. So if you want to ignore the two warnings generated when VM traffic is dropped, you would use two lines such as those below.\n\nWARNING PCAP file does not exist at path\nWARNING Unable to Run Suricata: Pcap file\n\nOn the CAPEv2 side, you will need to make a few tweaks to reporting.conf. litereport will need to be enabled and keys_to_copy should include 'signatures' and 'detections'.\n\nFinally, you will need to enable the extend in your SNMPD config:\n\nextend cape /etc/snmp/extends/cape\n\nOnce snmpd is restarted and the device rediscovered via LibreNMS, you will then be able to see the CAPE stats it collects.\n\nFor more detailed monitoring, if using KVM, you will likely want to also consider using HV::Monitor, which allows detailed monitoring of various VM stats.", + "Final Remarks\n\nLinks\n\nAsking for help\n\nPlease read the following rules before posting:\n\nBefore posting, Google about your issue. DO NOT post questions that have already been answered over and over everywhere.\n\nPosting messages saying just something like \"Doesn't work, help me\" is completely useless. If something is not working report the error, paste the logs, the config file, the information on the virtual machine, the results of the troubleshooting, etc. Give context. We are not wizards and we don't have a crystal ball.\n\nUse a proper title. Stuff like \"Doesn't work\", \"Help me\", and \"Error\" are not proper titles.\n\nTry to use pastebin.com, pastie.org or similar services to paste logs and configs: it makes the message more readable.\n\nGithub issues. Please read the Markdown documentation before posting for tips on how to escape and post configs as code blocks.\n\nSupport Us", + "Support Us\n\nCAPE Sandbox is completely open source software, released freely to the public and developed mostly during free time by volunteers. If you enjoy it and want to see it kept developed and updated, please consider supporting us.\n\nWe are always looking for financial support, hardware support, and contributions of any sort. If you're interested in cooperating, feel free to contact us.\n\nPeople\n\nCAPE Sandbox is an open source project resulting from the efforts and contributions of a lot of people who enjoyed volunteering some of their time for the greater good :).\n\nActive Developers\n\nKevin O'Reilly - Lead Developer\nAndriy Brukhovetskyy - Lead Developer", + "Title: CAPE issues RAG chatbot\nBody: None\nComment: ## Summary of Changes\n\nHello @doomedraven, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!\n\nThis pull request introduces a sophisticated Retrieval-Augmented Generation (RAG) chatbot designed to automatically answer newly opened GitHub issues. 
By leveraging a knowledge base constructed from project documentation and historical issue data, the bot can semantically search for relevant information and use a large language model (Gemini) to formulate helpful, context-aware responses, significantly streamlining issue support and reducing response times.\n\n### Highlights\n\n* **Automated Issue Answering**: A new GitHub Actions workflow (`auto_answer.yml`) has been introduced to automatically trigger a Retrieval-Augmented Generation (RAG) chatbot whenever a new issue is opened in the repository.\n* **RAG Chatbot Implementation**: The `auto_answer_bot.py` script implements the core RAG logic, utilizing `SentenceTransformer` for creating embeddings, FAISS for efficient semantic search within the knowledge base, and Google's Gemini LLM for generating context-aware answers.\n* **Knowledge Base Management**: A dedicated `build_knowledge_base.py` script is included to construct and update the bot's knowledge base. This script processes project documentation (RST files) and existing GitHub issues, converting them into searchable embeddings and metadata.\n* **Comprehensive Documentation**: A detailed `readme.md` file provides step-by-step instructions for setting up the local knowledge base, deploying the bot to GitHub, and outlining the maintenance process to keep the bot's information current.\n\n\n\n
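To make the retrieval step described in the highlights concrete, here is a minimal, illustrative sketch of the embed → index → search flow. The model name, chunk texts, and variable names are assumptions chosen for illustration; this is not the actual contents of `auto_answer_bot.py` or `build_knowledge_base.py`.

```python
# Minimal RAG retrieval sketch: embed knowledge-base chunks, index them with
# FAISS, and fetch the chunks most similar to a newly opened issue.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical knowledge-base chunks (docs + past issues).
chunks = [
    "Tracee eBPF support complements strace for Linux dynamic analysis.",
    "LibreNMS can monitor CAPEv2 stats through an SNMP extend.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(chunks, normalize_embeddings=True)

# Inner-product index == cosine similarity on normalized vectors.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype=np.float32))

issue_text = "Tracee only logs health-check lines and returns no events"
query = model.encode([issue_text], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype=np.float32), 2)

# The retrieved chunks become the context that is sent to the LLM (Gemini)
# together with the issue text to generate an answer.
context = "\n\n".join(chunks[i] for i in ids[0])
print(context)
```

Cosine similarity over normalized embeddings is a common default for this kind of semantic search; the real bot may use a different embedding model, index type, or scoring scheme.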
\nIgnored Files\n\n* **Ignored by pattern:** `.github/workflows/**` (1)\n * .github/workflows/auto_answer.yml\n
\nUsing Gemini Code Assist\n
\n\nThe full guide for Gemini Code Assist can be found on our [documentation page](https://developers.google.com/gemini-code-assist/docs/review-github-code), here are some quick tips.\n\nInvoking Gemini\n\nYou can request assistance from Gemini at any point by creating a comment using either `/gemini ` or `@gemini-code-assist `. Below is a summary of the supported commands on the current page.\n\nFeature | Command | Description\n--- | --- | ---\nCode Review | `/gemini review` | Performs a code review for the current pull request in its current state.\nPull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state.\nComment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.\nHelp | `/gemini help` | Displays a list of available commands.\n\nCustomization\n\nTo customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found [here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github).\n\nLimitations & Feedback\n\nGemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up [here](https://google.qualtrics.com/jfe/form/SV_2cyuGuTWsEw84yG).\n\nYou can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the [Gemini Code Assist IDE Extension](https://cloud.google.com/products/gemini/code-assist).\n
\n\n\n\n\n[^1]: Review the [Privacy Notices](https://policies.google.com/privacy), [Generative AI Prohibited Use Policy](https://policies.google.com/terms/generative-ai/use-policy), [Terms of Service](https://policies.google.com/terms), and learn how to configure Gemini Code Assist in GitHub [here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github). Gemini can make mistakes, so double check it and [use code with caution](https://support.google.com/legal/answer/13505487).\n", + "Title: Failing unserviceable task\nBody: # Failing unserviceable task\n\n- [x] I am running the latest version \n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed \n- [x] I'm reporting the issue to the correct repository (for multi-repository projects) \n- [x] I have read and checked all configs (with all optional parts) \n\n## Expected Behavior\n\nOnce submitting a payload, capev2 should run the vm and start the analysis\n\n## Current Behavior\n\nAfter I submitted the malicious software, capev2 failed to operate kvm as expected\n\n# Failure Information (for bugs)\n\n1. I think this might be a permission issue, but I'm not sure if it's a bug. \n2. Whether the virtual machine is started or not, capev2 cannot turn the machine on or off\n3. I'm able to use virsh --connect qemu:///system list --all from the cape user, and run the vm.\n4. There may be a crucial mistake here `libvirt.libvirtError: operation failed: domain is not running`\n\n* After the installation of CapeV2 is completed, has Linux been restarted?\n\t* yes\n* Has CapeV2 been successfully run after installation\n\t* Yes, but it only ran successfully once. After Linux restarted, capev2 failed to run\n* The attempts I have made\n\t* Restart the cape service\n\t* Restart the libvirtd service\n\n## Steps to Reproduce\n\nPlease provide detailed steps for reproducing the issue.\n1.submit payload\n\n2. Takes a while\n3. 
Get failed_analysis\n\n## Configuration\n\nkvm.conf\n```\n[kvm]\nmachines = win10\ninterface = virbr1\ndsn = qemu:///system\n\n[cape1]\nlabel = cape1\nplatform = linux\nip = 192.168.66.1\narch = x64\n\n[win10]\nlabel = win10\nplatform = windows\nip = 192.168.66.166\nsnapshot = win10sandbox\narch = x64\n```\n\n## Failure Logs\n\n```\n2025-09-24 02:52:11,151 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[kvm] with max_machines_count=10\n2025-09-24 02:52:11,151 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\n2025-09-24 02:52:11,177 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\n2025-09-24 02:52:11,202 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\n2025-09-24 02:52:11,203 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\n2025-09-24 03:00:53,743 [lib.cuckoo.core.machinery_manager] INFO: Task #13: found useable machine win10 (arch=x64, platform=windows)\n2025-09-24 03:00:53,743 [lib.cuckoo.core.scheduler] INFO: Task #13: Processing task\n2025-09-24 03:00:53,772 [lib.cuckoo.core.analysis_manager] INFO: Task #13: File already exists at '/opt/CAPEv2/storage/binaries/be808fba3f74f9083abf04b2f2725cc46c79ba71368564a1338aaca9990f73fb'\n2025-09-24 03:00:53,773 [lib.cuckoo.core.analysis_manager] INFO: Task #13: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_3006c4px/mbr.exe'\n2025-09-24 03:01:08,033 [lib.cuckoo.core.analysis_manager] ERROR: Task #13: Unable to restore snapshot win10sandbox on virtual machine win10\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 434, in start\n self.vms[label].revertToSnapshot(snapshot, flags=0)\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/libvirt.py\", line 2456, in revertToSnapshot\n raise libvirtError('virDomainRevertToSnapshot() failed')\nlibvirt.libvirtError: operation failed: domain is not running\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in start_machine\n self.machinery.start(machine.label)\n File \"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 437, in start\n raise CuckooMachineError(msg) from e\nlib.cuckoo.common.exceptions.CuckooMachineError: Unable to restore snapshot win10sandbox on virtual machine win10\n2025-09-24 03:01:08,084 [lib.cuckoo.core.analysis_manager] ERROR: Task #13: failure in AnalysisManager.run\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 434, in start\n self.vms[label].revertToSnapshot(snapshot, flags=0)\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/libvirt.py\", line 2456, in revertToSnapshot\n raise libvirtError('virDomainRevertToSnapshot() failed')\nlibvirt.libvirtError: operation failed: domain is not running\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in 
start_machine\n self.machinery.start(machine.label)\n File \"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 437, in start\n raise CuckooMachineError(msg) from e\nlib.cuckoo.common.exceptions.CuckooMachineError: Unable to restore snapshot win10sandbox on virtual machine win10\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 511, in run\n self.launch_analysis()\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 472, in launch_analysis\n success = self.perform_analysis()\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 456, in perform_analysis\n with self.machine_running(), self.result_server(), self.network_routing(), self.run_auxiliary():\n File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n ^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 335, in machine_running\n raise CuckooDeadMachine(self.machine.name) from e\nlib.cuckoo.core.analysis_manager.CuckooDeadMachine: win10 is dead!\n2025-09-24 03:01:08,818 [lib.cuckoo.core.scheduler] INFO: Task #13: Failing unserviceable task\n```\nComment: * Local file permission check\n\n```\n# ll /var/lib/libvirt/images/\n\ntotal 29404504\ndrwxrwxrwx 2 root libvirt 4096 Sep 1 10:11 ./\ndrwxr-xr-x 13 root root 4096 Sep 1 06:24 ../\n-rwxrwxrwx 1 root libvirt 781058048 Sep 1 10:20 win10.1756721487*\n-rwxrwxrwx 1 root libvirt 4179465651 Sep 1 08:15 win10-mem.win10sandbox*\n-rwxrwxrwx 1 root libvirt 85912715264 Sep 1 07:27 win10.qcow2*\n-rwxrwxrwx 1 root libvirt 8594456576 Sep 24 03:46 win10.shutdown*\n-rwxrwxrwx 1 root libvirt 196704 Sep 1 10:11 zh-cn_windows_10_version_22h2_x64_dvd_139de365.1756721487*\n-rwxrwxrwx 1 root libvirt 6078826496 Sep 1 06:35 zh-cn_windows_10_version_22h2_x64_dvd_139de365.iso*\n\n# getent group libvirt\n\nlibvirt:x:1001:root,manner,cape\n```", + "Title: Tracee not working properly-Linux Guest\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nI am expecting that Tracee works properly and is able to analyze the malware file.\n\n# Current Behavior\n\nWhat is the current behavior?\nThe current behavior is that docker is able to start with tracee but eventually there is a repeat of:\n[modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\nNo meaningful signatures or information is gained and the submission is simply timed out. This is not the case when stracee is enabled by itself.\n\nWhat I have made sure:\n\nI've enabled tracee properly, made sure to install it correctly in the Linux VM, and properly configured it in the auxiliary.conf and processing.conf. 
Strace is also enabled. I'm not sure why I'm not getting back any signatures or meaningful data.\n\n## Steps to Reproduce\n\nPlease provide detailed steps for reproducing the issue.\n\n1. Submit a file intended for Linux guest\n2. Get a report back\n3. No data is gained and looks like no analysis of the malware\n\n\n## Failure Logs\n\nPlease include any relevant log snippets or files here:\n\n2025-09-22 14:10:45,005 [root] DEBUG: Starting analyzer from: /cr8mee3r\n2025-09-22 14:10:45,006 [root] DEBUG: Storing results at: /tmp/XFfABqotb\n2025-09-22 14:10:45,013 [root] DEBUG: Importing auxiliary module \"modules.auxiliary.filecollector\"...\n2025-09-22 14:10:45,096 [root] DEBUG: Importing auxiliary module \"modules.auxiliary.human\"...\n2025-09-22 14:10:45,249 [root] DEBUG: Importing auxiliary module \"modules.auxiliary.screenshots\"...\n2025-09-22 14:10:45,295 [lib.api.screenshot] DEBUG: Importing 'PIL.ImageChops'\n2025-09-22 14:10:45,297 [lib.api.screenshot] DEBUG: Importing 'PIL.ImageDraw'\n2025-09-22 14:10:45,297 [lib.api.screenshot] INFO: Please upgrade Pillow to >= 5.4.1 for best performance\n2025-09-22 14:10:45,349 [root] DEBUG: Importing auxiliary module \"modules.auxiliary.sysmon\"...\n2025-09-22 14:10:45,351 [root] DEBUG: Importing auxiliary module \"modules.auxiliary.tracee\"...\n2025-09-22 14:10:45,355 [modules.auxiliary.filecollector] INFO: FileCollector run started\n2025-09-22 14:10:45,364 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir snap\n2025-09-22 14:10:46,644 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir root\n2025-09-22 14:10:46,692 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir s1p3_whz\n2025-09-22 14:10:46,696 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir 6tsmu_9i\n2025-09-22 14:10:46,700 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir mnt\n2025-09-22 14:10:46,701 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir ahjr7_14\n2025-09-22 14:10:46,704 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir media\n2025-09-22 14:10:46,705 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir sbin.usr-is-merged\n2025-09-22 14:10:46,705 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir cr8mee3r\n2025-09-22 14:10:46,706 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir srv\n2025-09-22 14:10:46,706 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir .Library\n2025-09-22 14:10:46,707 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir boot\n2025-09-22 14:10:46,709 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir d9w5qvow\n2025-09-22 14:10:46,713 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir etc\n2025-09-22 14:10:46,803 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir kpdzasrr\n2025-09-22 14:10:46,807 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir home\n2025-09-22 14:10:46,886 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir lib.usr-is-merged\n2025-09-22 14:10:46,886 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir sbin\n2025-09-22 14:10:46,886 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir bin.usr-is-merged\n2025-09-22 14:10:46,887 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir 
cdrom\n2025-09-22 14:10:46,887 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir bin\n2025-09-22 14:10:46,887 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir lost+found\n2025-09-22 14:10:46,888 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir xcz3xob1\n2025-09-22 14:10:46,892 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir tkgyvp3u\n2025-09-22 14:10:46,897 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir opt\n2025-09-22 14:10:46,898 [modules.auxiliary.filecollector] INFO: FileCollector trying to watch dir tmp\n2025-09-22 14:10:46,900 [modules.auxiliary.filecollector] INFO: FileCollector setup complete\n2025-09-22 14:10:47,356 [root] DEBUG: Initialized auxiliary module \"FileCollector\"\n2025-09-22 14:10:47,358 [root] DEBUG: Trying to start auxiliary module \"FileCollector\"...\n2025-09-22 14:10:47,359 [root] DEBUG: Started auxiliary module \"FileCollector\"\n2025-09-22 14:10:47,362 [modules.auxiliary.human] DEBUG: Human init complete\n2025-09-22 14:10:47,364 [root] DEBUG: Initialized auxiliary module \"Human\"\n2025-09-22 14:10:47,366 [root] DEBUG: Trying to start auxiliary module \"Human\"...\n2025-09-22 14:10:47,366 [root] DEBUG: Started auxiliary module \"Human\"\n2025-09-22 14:10:47,367 [root] DEBUG: Initialized auxiliary module \"Screenshots\"\n2025-09-22 14:10:47,368 [root] DEBUG: Trying to start auxiliary module \"Screenshots\"...\n2025-09-22 14:10:47,371 [asyncio] DEBUG: Using selector: EpollSelector\n2025-09-22 14:10:47,374 [root] DEBUG: Started auxiliary module \"Screenshots\"\n2025-09-22 14:10:47,374 [root] DEBUG: Initialized auxiliary module \"Sysmon\"\n2025-09-22 14:10:47,376 [root] DEBUG: Trying to start auxiliary module \"Sysmon\"...\n2025-09-22 14:10:47,378 [root] DEBUG: Started auxiliary module \"Sysmon\"\n2025-09-22 14:10:47,380 [modules.auxiliary.tracee] INFO: docker start\n2025-09-22 14:10:47,390 [modules.auxiliary.tracee] INFO: True\n2025-09-22 14:10:47,392 [modules.auxiliary.tracee] INFO: Tracee\n2025-09-22 14:10:47,395 [root] DEBUG: Initialized auxiliary module \"Docker\"\n2025-09-22 14:10:47,398 [root] DEBUG: Trying to start auxiliary module \"Docker\"...\n2025-09-22 14:10:47,514 [modules.auxiliary.tracee] DEBUG: Starting docker container\n2025-09-22 14:10:47,515 [modules.auxiliary.tracee] DEBUG: sudo docker run --name tracee -d --pid=host --cgroupns=host --privileged -v /etc/os-release:/etc/os-release-host:ro -v /cr8mee3r/tracee-artifacts/:/tmp/tracee/out/host -v /var/run:/var/run:ro -v /cr8mee3r/modules/auxiliary/tracee:/policy aquasec/tracee:latest --output json --output option:parse-arguments,exec-env,exec-hash --policy /policy/policy.yml --cache cache-type=mem --cache mem-cache-size=1024 --capture bpf --capture module\n2025-09-22 14:10:48,152 [modules.auxiliary.tracee] DEBUG: Docker container started: 7ba9c5608d4a2868c0bc475b5f2ccca5cd25a335f44b68caa598f41f5bf9dd7f\n\n2025-09-22 14:10:58,220 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:10:58,274 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:11:08,312 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:11:08,366 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:11:18,387 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:11:18,449 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:11:28,478 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:11:28,524 [modules.auxiliary.tracee] 
INFO: \n2025-09-22 14:11:38,533 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:11:38,579 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:11:48,591 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:11:48,654 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:11:58,707 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:11:58,756 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:12:08,768 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:12:08,816 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:12:18,825 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:12:18,869 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:12:28,880 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:12:28,925 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:12:38,936 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:12:38,990 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:12:49,006 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:12:49,053 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:12:59,074 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:12:59,126 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:13:09,157 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:13:09,217 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:13:19,267 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:13:19,313 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:13:29,324 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:13:29,377 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:13:39,390 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:13:39,440 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:13:49,464 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:13:49,509 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:13:59,559 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:13:59,611 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:14:09,616 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:14:09,656 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:14:19,677 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:14:19,736 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:14:29,768 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:14:29,819 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:14:39,861 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:14:39,913 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:14:49,950 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:14:50,006 [modules.auxiliary.tracee] INFO: \n2025-09-22 14:15:00,011 [modules.auxiliary.tracee] INFO: tracee-health-check-1903ae\n\n2025-09-22 14:15:00,056 [modules.auxiliary.tracee] INFO: \n\n", + "Title: Use universal SID instead of locale-specific group names\nBody: Fixes agent failures on non-English Windows by using the SID \"S-1-5-32-545\" instead of hardcoded \"BUILTIN\\Users\" in icacls commands.\r\n\r\nhttps://learn.microsoft.com/en-us/windows/win32/secauthz/well-known-sids\nComment: ## Summary of Changes\n\nHello @AndreasHanikel, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. 
In the meantime, here's a summary to help you and other reviewers quickly get up to speed!\n\nThis pull request addresses a critical issue where agents running on non-English Windows operating systems would fail due to hardcoded, locale-specific group names in permission settings for temporary directories. By switching to a universal Security Identifier (SID), the change ensures that temporary directories are created with the correct permissions across all Windows locales, thereby improving the robustness and international compatibility of the agent.\n\n### Highlights\n\n* **Windows Permissions**: Replaced the locale-specific group name \"BUILTIN\\Users\" with the universal Well-Known SID \"S-1-5-32-545\" in `icacls` commands for temporary directories on Windows. This ensures consistent permission application regardless of the operating system's language.\n\n\n\n\n\n
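For readers unfamiliar with `icacls` trustee syntax, a minimal sketch of the idea behind this change is shown below. The directory path is hypothetical and this is not the literal agent diff; the leading `*` tells `icacls` that the trustee is given as a SID rather than a localized group name.

```python
# Illustrative sketch: grant the well-known Users group (SID S-1-5-32-545)
# full access to a temporary directory, independent of the Windows UI language.
import subprocess

temp_dir = r"C:\Users\Public\cape_tmp"  # hypothetical path

subprocess.run(
    # '*S-1-5-32-545' resolves to the built-in Users group on every locale,
    # whereas the literal name "BUILTIN\Users" may not resolve on
    # non-English systems.
    ["icacls", temp_dir, "/grant", "*S-1-5-32-545:(OI)(CI)F"],
    check=True,
)
```

Any other well-known SID from the Microsoft list linked above can be substituted in the same way.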
\nUsing Gemini Code Assist\n
\n\nThe full guide for Gemini Code Assist can be found on our [documentation page](https://developers.google.com/gemini-code-assist/docs/review-github-code), here are some quick tips.\n\nInvoking Gemini\n\nYou can request assistance from Gemini at any point by creating a comment using either `/gemini ` or `@gemini-code-assist `. Below is a summary of the supported commands on the current page.\n\nFeature | Command | Description\n--- | --- | ---\n\nCustomization\n\nTo customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found [here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github).\n\nLimitations & Feedback\n\nGemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up [here](https://google.qualtrics.com/jfe/form/SV_2cyuGuTWsEw84yG).\n\nYou can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the [Gemini Code Assist IDE Extension](https://cloud.google.com/products/gemini/code-assist).\n
\n\n\n\n\n[^1]: Review the [Privacy Notices](https://policies.google.com/privacy), [Generative AI Prohibited Use Policy](https://policies.google.com/terms/generative-ai/use-policy), [Terms of Service](https://policies.google.com/terms), and learn how to configure Gemini Code Assist in GitHub [here](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github). Gemini can make mistakes, so double check it and [use code with caution](https://support.google.com/legal/answer/13505487).\n\nComment: Danke!", + "Title: There was an error when deploying with Gunicorn\nBody: # Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [ \u2705] I am running the latest version\n- [ \u2705] I did read the README!\n- [ \u2705] I checked the documentation and found no answer\n- [ \u2705] I checked to make sure that this issue has not already been filed\n- [ ] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [ \u2705] I have read and checked all configs (with all optional parts)\n\n\n# Bugs\nI followed https://capev2.readthedocs.io/en/latest/usage/web.html#exposed-to-internet instructions to deploy cape-web, but found that there were errors during its operation.\n\n## Failure Logs\n```\nSep 20 14:52:41 capev2-sandbox python3[1694779]: ERROR:lib.cuckoo.common.demux:[Errno 2] No such file or directory: 'sflock/data/password.txt'\nSep 20 14:52:41 capev2-sandbox python3[1694779]: Traceback (most recent call last):\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/opt/CAPEv2/lib/cuckoo/common/demux.py\", line 208, in demux_sflock\nSep 20 14:52:41 capev2-sandbox python3[1694779]: unpacked = unpack(filename, password=password, check_shellcode=check_shellcode)\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/sflock/main.py\", line 72, in unpack\nSep 20 14:52:41 capev2-sandbox python3[1694779]: Unpacker.single(f, password, duplicates)\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/sflock/abstracts.py\", line 130, in single\nSep 20 14:52:41 capev2-sandbox python3[1694779]: return Unpacker(None).process([f], duplicates, password)\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/sflock/abstracts.py\", line 109, in process\nSep 20 14:52:41 capev2-sandbox python3[1694779]: f.children = plugin.unpack(password, duplicates)\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/sflock/unpack/zip.py\", line 87, in unpack\nSep 20 14:52:41 capev2-sandbox python3[1694779]: f = self.bruteforce(password, archive, entry)\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/sflock/abstracts.py\", line 157, in bruteforce\nSep 20 14:52:41 capev2-sandbox python3[1694779]: for password in iter_passwords():\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/sflock/config.py\", line 16, in iter_passwords\nSep 20 14:52:41 capev2-sandbox python3[1694779]: for line in passwd_file.read_text().splitlines():\nSep 20 14:52:41 capev2-sandbox python3[1694779]: 
File \"/usr/lib/python3.10/pathlib.py\", line 1134, in read_text\nSep 20 14:52:41 capev2-sandbox python3[1694779]: with self.open(mode='r', encoding=encoding, errors=errors) as f:\nSep 20 14:52:41 capev2-sandbox python3[1694779]: File \"/usr/lib/python3.10/pathlib.py\", line 1119, in open\nSep 20 14:52:41 capev2-sandbox python3[1694779]: return self._accessor.open(self, mode, buffering, encoding, errors,\nSep 20 14:52:41 capev2-sandbox python3[1694779]: FileNotFoundError: [Errno 2] No such file or directory: 'sflock/data/password.txt'\nSep 20 14:52:43 capev2-sandbox python3[1694780]: INFO:lib.cuckoo.core.database:Do sandbox packages need an update? Sflock identifies as: False - b'/tmp/cuckoo-tmp/upload_hpm4v2v2/VirusShare_02325.unknown'\n```\n\nComment: ```\nroot@capev2-sandbox:/opt/CAPEv2# git log | head -n1\ncommit 4612c4699b865d0f60d8379b7bed86e53ab3bec8\nroot@capev2-sandbox:/opt/CAPEv2# cat /etc/os-release\nPRETTY_NAME=\"Ubuntu 22.04.4 LTS\"\nNAME=\"Ubuntu\"\nVERSION_ID=\"22.04\"\nVERSION=\"22.04.4 LTS (Jammy Jellyfish)\"\nVERSION_CODENAME=jammy\nID=ubuntu\nID_LIKE=debian\nHOME_URL=\"https://www.ubuntu.com/\"\nSUPPORT_URL=\"https://help.ubuntu.com/\"\nBUG_REPORT_URL=\"https://bugs.launchpad.net/ubuntu/\"\nPRIVACY_POLICY_URL=\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\"\nUBUNTU_CODENAME=jammy\n```\nComment: By examining the source code of sflock, it was found that it read the file \"password.txt\" located in the \"/opt/CAPEv2/web/sflock/data/\" directory. However, this file is actually within the sflock package.", + "Title: PyMongo auto-reconnecting...127.0.0.1:27017\nBody: What is the best version of mongod?\n\n I have error PyMongo auto-reconnecting...127.0.0.1:27017: [Errno 111] Connection refused (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 30s, Topology Description: ]>. Waiting 0.5 seconds\n\nMy operating system version is:\nVERSION_ID=\"24.04\"\nVERSION=\"24.04.3 LTS (Noble Numbat)\"\n\n", + "Title: failing unserviceable task\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [yes ] I am running the latest version\n- [ yes] I did read the README!\n- [ yes] I checked the documentation and found no answer\n- [ yes] I checked to make sure that this issue has not already been filed\n- [ yes] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [ yes] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nPlease describe the behavior you are expecting. __If your samples(x64) stuck in pending ensure that you set tags=x64 in hypervisor conf for x64 vms__\n\nwhen i submit my exe file it should start the analysis \n\n# Current Behavior\n\nWhat is the current behavior?\n\nthe current behaviour is its failed instantly\nINFO: Task #xxxx: Failing unserviceable task\n# Failure Information (for bugs)\n\nPlease help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.\n\n## Steps to Reproduce\n\nPlease provide detailed steps for reproducing the issue.\n\n1. step 1\n2. step 2\n3. you get it...\n\n## Context\n\nPlease provide any relevant information about your setup. 
This is important in case the issue is not reproducible except for under certain conditions. Operating system version, bitness, installed software versions, test sample details/hash/binary (if applicable).\n\n| Question | Answer\n|------------------|--------------------\n| Git commit | Type `$ git log \\| head -n1` to find out\n| OS version | Ubuntu 24.04.3 \n## Failure Logs\n\nPlease include any relevant log snippets or files here.\n\ni can't find anything while watching log it just say \n\nINFO: Task #xxxx: Failing unserviceable task\n", + "Title: failing unserviceable task\nBody: facing same issue while i put a exe file on cape web ui , it directlly say \"failing unserviceable task \"\n\n> please get a quick look again over the docs to avoid future issues \n\n _Originally posted by @doomedraven in [#2662](https://github.com/kevoreilly/CAPEv2/issues/2662#issuecomment-3173806370)_", + "Title: Analysis failed - vm is dead!\nBody: Please answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nSuccessfully analyzing files.\n\n# Current Behavior\n\nIt cannot start the VM. Analysis fails.\n\n# Failure Information (for bugs)\n\nI am not sure it's a bug - I am probably doing something wrong. \n\n## Steps to Reproduce\n\nPlease provide detailed steps for reproducing the issue.\n\n1. Installed cape using the \"base\" version instead of all as \"all\" was getting stuck in the \"Redirecting output to w.get\" forever.\n2. Installed KVM.\n3. Changed the .conf files are required.\n4. I then cd to dir -> /opt/CAPEv2 and changed user to cape by running sudo -u cape -i\n5. Start cape as user cape using the command `poetry run python3 cuckoo.py`\n6. Submit an analysis on the web portal\n\nPlease note that I have done everything required in the guest machine too.\n\n## Context\n\nI installed cape and KVM by following the docs, configured the files, and it seems that cape cannot start the vm. I tried using a snapshot too. It starts analyzing but fails with the following log errors.\n\nIf it matters, `virsh list --all` does not return anything unless ran with sudo first. Only then it correctly shows my guest 'win10'.\n\nOne thing I noticed is that if I manually start the win10 guest by running \"sudo virsh start win10\" AFTER submitting the analysis, it may run the analysis successfully but apart from some signatures, it is not complete, e..g, there is no output in any of the analysis sections (behavior, network etc) in the UI despite the binary resolving DNS servers. Also, `analysis log` is empty.\n\n| Question | Answer\n|------------------|--------------------\n| OS version | Ubuntu 22.04.3, Windows 10 22H2,\n\n## Failure Logs\n\n**poetry python3 run cuckoo.py**\n\n```\ncape@TK-Ubuntu:/opt/CAPEv2$ poetry run python3 cuckoo.py\n\n .-----------------.\n | Cuckoo Sandbox? |\n | OH NOES! |\\ '-.__.-'\n '-----------------' \\ /oo |--.--,--,--.\n \\_.-'._i__i__i_.'\n \"\"\"\"\"\"\"\"\"\n\n Cuckoo Sandbox 2.4-CAPE\n www.cuckoosandbox.org\n Copyright (c) 2010-2015\n\n CAPE: Config and Payload Extraction\n github.com/kevoreilly/CAPEv2\n\npip3 install certvalidator asn1crypto mscerts\nOPTIONAL! 
Missed dependency: poetry run pip install -U git+https://github.com/DissectMalware/batch_deobfuscator\nOPTIONAL! Missed dependency: poetry run pip install -U git+https://github.com/CAPESandbox/httpreplay\n/usr/bin/tcpdump\n2025-09-13 15:22:32,220 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[kvm] with max_machines_count=10\n2025-09-13 15:22:32,220 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\n2025-09-13 15:22:32,242 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\n2025-09-13 15:22:32,259 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\n2025-09-13 15:22:32,267 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\n2025-09-13 15:23:25,475 [lib.cuckoo.core.machinery_manager] INFO: Task #11: found useable machine win10 (arch=x64, platform=windows)\n2025-09-13 15:23:25,475 [lib.cuckoo.core.scheduler] INFO: Task #11: Processing task\n2025-09-13 15:23:25,492 [lib.cuckoo.core.analysis_manager] INFO: Task #11: File already exists at '/opt/CAPEv2/storage/binaries/39ef2d3e9220c561a7b713c89345c7d4d0aaa023f739919cc66b8cb4d9646e02'\n2025-09-13 15:23:25,493 [lib.cuckoo.core.analysis_manager] INFO: Task #11: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_65_77nuh/test.exe'\n2025-09-13 15:28:30,612 [lib.cuckoo.core.analysis_manager] ERROR: Task #11: Timeout hit while for machine win10 to change status\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in start_machine\n self.machinery.start(machine.label)\n File \"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 461, in start\n self._wait_status(label, self.RUNNING)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 364, in _wait_status\n raise CuckooMachineError(f\"Timeout hit while for machine {label} to change status\")\nlib.cuckoo.common.exceptions.CuckooMachineError: Timeout hit while for machine win10 to change status\n2025-09-13 15:28:30,635 [lib.cuckoo.core.analysis_manager] ERROR: Task #11: failure in AnalysisManager.run\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in start_machine\n self.machinery.start(machine.label)\n File \"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 461, in start\n self._wait_status(label, self.RUNNING)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 364, in _wait_status\n raise CuckooMachineError(f\"Timeout hit while for machine {label} to change status\")\nlib.cuckoo.common.exceptions.CuckooMachineError: Timeout hit while for machine win10 to change status\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 511, in run\n self.launch_analysis()\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 472, in launch_analysis\n success = self.perform_analysis()\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 456, in 
perform_analysis\n with self.machine_running(), self.result_server(), self.network_routing(), self.run_auxiliary():\n File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n ^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 335, in machine_running\n raise CuckooDeadMachine(self.machine.name) from e\nlib.cuckoo.core.analysis_manager.CuckooDeadMachine: win10 is dead!\n2025-09-13 15:28:31,542 [lib.cuckoo.core.scheduler] INFO: Task #11: Failing unserviceable task\n```\n\n**sudo journalctl -u libvirtd**\n```\n\n-- Boot 81edac61e831438bb13c332dc2af71c8 --\nSep 13 10:23:02 TK-Ubuntu systemd[1]: Starting libvirtd.service - libvirt legacy monolithic daemon...\nSep 13 10:23:02 TK-Ubuntu systemd[1]: Started libvirtd.service - libvirt legacy monolithic daemon.\nSep 13 10:23:18 TK-Ubuntu libvirtd[12974]: libvirt version: 11.1.0\nSep 13 10:23:18 TK-Ubuntu libvirtd[12974]: hostname: TK-Ubuntu\nSep 13 10:23:18 TK-Ubuntu libvirtd[12974]: Domain id=1 name='win10' uuid=fbdc0f71-0e91-4deb-8bed-5c9cf7600773 is tainted: high-privileges\nSep 13 10:28:44 TK-Ubuntu libvirtd[12974]: Domain id=2 name='win10' uuid=fbdc0f71-0e91-4deb-8bed-5c9cf7600773 is tainted: high-privileges\nSep 13 10:52:17 TK-Ubuntu libvirtd[12974]: Domain id=3 name='win10' uuid=fbdc0f71-0e91-4deb-8bed-5c9cf7600773 is tainted: high-privileges\nSep 13 10:57:02 TK-Ubuntu libvirtd[12974]: Domain id=4 name='win10' uuid=fbdc0f71-0e91-4deb-8bed-5c9cf7600773 is tainted: high-privileges\nSep 13 11:01:58 TK-Ubuntu libvirtd[12974]: Domain id=5 name='win10' uuid=fbdc0f71-0e91-4deb-8bed-5c9cf7600773 is tainted: high-privileges\nSep 13 11:05:48 TK-Ubuntu libvirtd[12974]: End of file while reading data: Input/output error\nSep 13 11:40:44 TK-Ubuntu libvirtd[12974]: End of file while reading data: Input/output error\nSep 13 11:40:52 TK-Ubuntu systemd[1]: Stopping libvirtd.service - libvirt legacy monolithic daemon...\nSep 13 11:40:52 TK-Ubuntu systemd[1]: libvirtd.service: Deactivated successfully.\nSep 13 11:40:52 TK-Ubuntu systemd[1]: Stopped libvirtd.service - libvirt legacy monolithic daemon.\nSep 13 11:40:52 TK-Ubuntu systemd[1]: libvirtd.service: Consumed 8.267s CPU time.\n-- Boot 5d8f8e5fc33b4f9d899a7d1c2439eb38 --\nSep 13 16:32:05 TK-Ubuntu systemd[1]: Starting libvirtd.service - libvirt legacy monolithic daemon...\nSep 13 16:32:05 TK-Ubuntu systemd[1]: Started libvirtd.service - libvirt legacy monolithic daemon.\nSep 13 16:32:16 TK-Ubuntu libvirtd[10992]: libvirt version: 11.1.0\nSep 13 16:32:16 TK-Ubuntu libvirtd[10992]: hostname: TK-Ubuntu\nSep 13 16:32:16 TK-Ubuntu libvirtd[10992]: Domain id=1 name='win10' uuid=fbdc0f71-0e91-4deb-8bed-5c9cf7600773 is tainted: high-privileges\nSep 13 16:44:25 TK-Ubuntu systemd[1]: Stopping libvirtd.service - libvirt legacy monolithic daemon...\nSep 13 16:44:25 TK-Ubuntu systemd[1]: libvirtd.service: Deactivated successfully.\nSep 13 16:44:25 TK-Ubuntu systemd[1]: Stopped libvirtd.service - libvirt legacy monolithic daemon.\nSep 13 16:44:25 TK-Ubuntu systemd[1]: libvirtd.service: Consumed 2.237s CPU time.\nSep 13 16:44:25 TK-Ubuntu systemd[1]: Starting libvirtd.service - libvirt legacy monolithic daemon...\nSep 13 16:44:25 TK-Ubuntu systemd[1]: Started libvirtd.service - libvirt legacy monolithic daemon.\nSep 13 16:48:13 TK-Ubuntu libvirtd[12512]: libvirt version: 11.1.0\nSep 13 16:48:13 TK-Ubuntu libvirtd[12512]: hostname: TK-Ubuntu\nSep 13 16:48:13 TK-Ubuntu libvirtd[12512]: Domain id=1 name='win10' uuid=fbdc0f71-0e91-4deb-8bed-5c9cf7600773 is 
tainted: high-privileges\n```\nComment: Found the solution. It was a privileges issue, cape could not see the VM. I fixed it by running the below commands:\n\n\n```\n sudo usermod -a -G libvirt cape\n sudo usermod -a -G kvm cape\n```\n\nAlso, to make the cape user default to system-wide connection instead of user-session so it can see the VM:.\n\n```\nsudo -u cape mkdir -p /home/cape/.config/libvirt\nsudo -u cape bash -c 'echo \"uri_default = \\\"qemu:///system\\\"\" > /home/cape/.config/libvirt/libvirt.conf'\n```\n\n```\nsudo virsh dumpxml win10 > /tmp/win10.xml\nsudo virsh undefine win10 \nsudo virsh -c qemu:///system define /tmp/win10.xml\n```\n\nComment: Glad to hear it's sorted \ud83d\udc4d ", + "Title: Question: Best practices for integrating external analysis APIs and tools?\nBody: Hello CAPEv2 Team,\n\nFirst, thank you for creating such a powerful and comprehensive malware analysis tool. After a detailed setup process, we have successfully deployed a stable and high-fidelity instance of CAPEv2 that is now fully operational.\n\nOur goal is to **extend the overall analysis** of the pipeline by integrating external open-source tools via their APIs. We want to build upon CAPE's excellent foundation to extract more granular details from submitted binaries, which will help us create a richer dataset for our research.\n\nFor example, we are looking to integrate functionalities that can:\n* Generate **Function Call Graphs (FCGs)** to map the relationships between functions.\n* Generate **Control Flow Graphs (CFGs)** to visualize the binary's internal logic.\n* Extract detailed **debug information** and symbol tables.\n* Perform advanced **string analysis** to flag suspicious or interesting patterns.\n\nWe would be very grateful for your guidance on the best way to approach this. Our main questions are:\n\n1. What is the recommended architecture for integrating external API calls into the CAPE processing workflow?\n2. Does CAPEv2 have a built-in plugin system or defined hooks where custom modules like these can be added?\n3. Are there any examples in the documentation or community contributions that we could reference to understand how to add a new processing module that interacts with an external analysis tool?\n\nThank you for your time and for your incredible work on this project. We would appreciate any direction you can provide as we look to extend CAPE's capabilities.\n\nBest regards,\nAhmar Husain\nComment: Hi Ahmar,\n\nI like the idea of extending the analysis - I think the question of whether an open-source tool should be integrated by code or by api largely boils down to the size and complexity of the tool to be integrated.\n\nTo take your example of advanced string analysis, cape already integrates floss and in general we would seek to integrate any prospective string analysis tool in this way unless its size or complexity precludes it.\n\nCall and control graphs might well be an example of something that sits on the fence. I've already seen such graphs directly integrated into the web interface of a derivative project - this works well with a lightweight disassembly engine. But recent developments in headless IDA for example might be more suited to a separate service and api integration.\n\nI'm not completely sure I know what you mean by debug information and symbol tables. PE header information, imported and exported functions and the like are already displayed in the web ui. 
This can of course be expanded to include any other headers or tables which might be useful.\n\nSo in general, we are of course very keen on open-source integration, leaning more towards direct integration where possible. If you can suggest specific tools or functionality we can look at how best they might be integrated. This is the first question, only after which does it make sense to look at the method of integration.\nComment: Closing as abandoned", + "Title: analyses' folder like procdump, memory and others are empty\nBody: sorry, i wanna know why when i go to opt/CAPEv2/storage/analyses/latest/ some folders are empty example (procdump and memory)...", + "Title: Fix task added_on and clock fields\nBody: None", + "Title: Bump django from 5.1.9 to 5.1.12\nBody: Bumps [django](https://github.com/django/django) from 5.1.9 to 5.1.12.\n
Commits

- f71d9c3 [5.1.x] Bumped version for 5.1.12 release.
- 102965e [5.1.x] Fixed CVE-2025-57833 -- Protected FilteredRelation against SQL inject...
- 44cd014 [5.1.x] Added stub release notes and release date for 5.1.12 and 4.2.24.
- 0980178 [5.1.x] Fixed #36499 -- Adjusted utils_tests.test_html.TestUtilsHtml.test_str...
- 19e7b95 [5.1.x] Fixed test_utils.tests.HTMLEqualTests.test_parsing_errors following P...
- 9d9b3bc [5.1.x] Refs #36535 -- Doc'd that docutils < 0.22 is required.
- 37f6474 [5.1.x] Fixed GitHub Action that checks commit prefixes to fetch PR head corr...
- 3104593 [5.1.x] Added GitHub Action to enforce stable branch commit message prefix.
- 97c7537 [5.1.x] Added follow-up to CVE-2025-48432 to security archive.
- 353a6af [5.1.x] Post-release version bump.
- Additional commits viewable in compare view
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=django&package-manager=pip&previous-version=5.1.9&new-version=5.1.12)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
\nDependabot commands and options\n
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\nYou can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/kevoreilly/CAPEv2/network/alerts).\n\n
", + "Title: Prevent mouse emulation on Windows VMs from clicking on Console windows\nBody: This avoids situations where the console is clicked, causing select-mode to be enabled and the process to be suspended. This ultimately leads to the timeout being hit and a detonation with little useful telemetry.\nComment: Hi Josh, thanks for this, I'm testing this currently. Do you happen to have any good examples to test with?\nComment: > Hi Josh, thanks for this, I'm testing this currently. Do you happen to have any good examples to test with?\r\n\r\nI tested it with a full screen console app (with/without the fix applied) - I suspect that's probably the easiest way.", + "Title: Bump pypdf from 5.2.0 to 6.0.0\nBody: Bumps [pypdf](https://github.com/py-pdf/pypdf) from 5.2.0 to 6.0.0.\n
Release notes

Sourced from pypdf's releases: Version 6.0.0 (2025-08-11), Version 5.9.0 (2025-07-27), Version 5.8.0 (2025-07-13) ... (truncated)

Changelog

Sourced from pypdf's changelog.

Version 6.0.0, 2025-08-11

- Security (SEC): Limit decompressed size for FlateDecode filter (#3430)
- Deprecations (DEP): Drop Python 3.8 support (#3412)
- New Features (ENH): Move BlackIs1 functionality to tiff_header (#3421)
- Robustness (ROB): Skip Go-To actions without a destination (#3420)
- Developer Experience (DEV): Update code style related libraries (#3414); Update mypy to 1.17.0 (#3413); Stop testing on Python 3.8 and start testing on Python 3.14 (#3411)
- Maintenance (MAINT): Cleanup deprecations (#3424)

Version 5.9.0, 2025-07-27

- New Features (ENH): Automatically preserve links in added pages (#3298); Allow writing/updating all properties of an embedded file (#3374)
- Bug Fixes (BUG): Fix XMP handling dropping indirect references (#3392)
- Robustness (ROB): Deal with DecodeParms being empty list (#3388)
- Documentation (DOC): Document how to read and modify XMP metadata (#3383)

Version 5.8.0, 2025-07-13

- New Features (ENH): Implement flattening for writer (#3312)
- Bug Fixes (BUG): Unterminated object when using PdfWriter with incremental=True (#3345)
- Robustness (ROB)

... (truncated)

Commits
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pypdf&package-manager=pip&previous-version=5.2.0&new-version=6.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
\nDependabot commands and options\n
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\nYou can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/kevoreilly/CAPEv2/network/alerts).\n\n
", + "Title: gemini code base review\nBody: None", + "Title: bootstrap5 by gemini\nBody: written by gemini cli", + "Title: Add NightshadeC2, MonsterV2 Yara Rules\nBody: - Add NightshadeC2 Yara rule, see https://x.com/YungBinary/status/1963751038340534482\r\n- Update MonsterV2 yara rule to work for b869941a9c476585bbb8f48f7003d158c71e44038ceb2628cedb231493847775 and 666944b19c707afaa05453909d395f979a267b28ff43d90d143cd36f6b74b53e\r\n- Fixed symlink check in kvm-qemu.sh, \"-L\" argument should be used instead, fixes error:\r\n```\r\nln: failed to create symbolic link '/usr/bin/qemu-system-x86_64-spice': File exists\r\nln: failed to create symbolic link '/usr/bin/kvm-spice': File exists\r\nln: failed to create symbolic link '/usr/bin/kvm': File exists\r\n[-] Install failed\r\n```\r\n- Fixed missing qemu directory, create it first to fix error:\r\n```\r\n[-] Bios patching failed\r\n```\r\n\nComment: Would it be possible to have more than one pattern for the yara?\r\n\r\nI would say two or three patterns per yara is preferable as it's far less brittle and more future proof.\nComment: > Would it be possible to have more than one pattern for the yara?\r\n> \r\n> I would say two or three patterns per yara is preferable as it's far less brittle and more future proof.\r\n\r\nOkay got it, added more patterns for NightshadeC2 AKA CastleRAT\nComment: Are the changes to ``installer/kvm-qemu.sh`` intentional?\nComment: > Are the changes to `installer/kvm-qemu.sh` intentional?\r\n\r\nYes the script has bugs in it that these changes fix, see my PR notes:\r\n\r\n`Fixed symlink check in kvm-qemu.sh, \"-L\" argument should be used instead, fixes error:`\r\n\r\n`Fixed missing qemu directory, create it first to fix error:`", + "Title: Failed analysis for certain files\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nAll executable files are uploaded and detonated in guest vm.\n\n# Current Behavior\n\nThe error \"failed_analysis\" is reported in the web console for specific executables. Not every executable file fails and many are analyzed successfully.\n\n# Failure Information (for bugs)\n\nOnly certain executable files fail.\n\n## Steps to Reproduce\n\n1. 
Submit specific executable sample in Cape.\n\nHere is an Akira sample that fails: https://www.virustotal.com/gui/file/2647c28b0967b7923d7c857fa1bdc7687d8f816f9dc4906c6a6f66f687a6419a\n\n## Context\n\nKVM and CAPEv2 running on Ubuntu bare metal host.\n\nGuest is Windows 10 x64.\n\n## Failure Logs\n\nCaptured log trail for failure on sample upload:\n2025-09-03T19:16:10.540572+00:00 capehost python3[86302]: /opt/CAPEv2/lib/cuckoo/core/database.py:1304: SAWarning: Object of type not in session, add operation along 'Tag.tasks' won't proceed\n2025-09-03T19:16:10.540622+00:00 capehost python3[86302]: with self.session.begin_nested():\n2025-09-03T19:16:10.682297+00:00 capehost poetry[99192]: 2025-09-03 19:16:10,681 [lib.cuckoo.core.scheduler] INFO: Task #11: Failing unserviceable task\n\nComment: \"Image\"\nComment: do you have x64 tag/arch in machinery config?\nComment: yes, config shared below:\n\n[kvm]\nmachines = win10\ninterface = virbr1\ndsn = qemu:///system\n[win10]\nlabel = win10\nplatform = windows\nip = 192.168.100.13\narch = x86\nComment: That says x86 not x64! So no...\nComment: sorry! I meant no :)\n\nlooking at a similar previous issue (https://github.com/kevoreilly/CAPEv2/issues/2178), I noticed the version of sqlalchemy. The error looks similar but my issue is on upload and not all .exe fail to submit and process. Is this correct?\n\n name : sqlalchemy\n version : 2.0.41\n description : Database Abstraction Library\nComment: SQLAlchemy 2 is not the issue here; it's your config. You set x86 but are trying to run x64, which is why it can't find a suitable VM.\nComment: Change it as follows, since your VM is x64:\n```\narch = x64\ntags = x64,x86\n```\n\nThat's all; then restart as root: `systemctl restart cape`\nComment: Thank you again for your help!\n\nThis is fixed with the arch change.", "Title: Add AuraStealer yara\nBody: None", "Title: Analysis\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\n1. File should be sent and executed at target guest host.\n2. Process tree should be displayed in report.\n3. Behavioral analysis should be displayed in report.\n\nBased on some reading, I think this may be associated with the result server config, but it's not totally clear as I'm not seeing any errors.\n\nUsing kvm on an Ubuntu 24.04 host. Guest is Windows 10.\n\nUsing 0.0.0.0 as the resultserver_ip in both kvm.conf and cuckoo.conf\n\n# Current Behavior\n\n1. File does not appear to be detonated in guest host.\n2. Missing process tree from analysis report.\n3. Missing behavioral analysis in report. \n4. Only static analysis is provided for the file.\n5. Analysis logs are missing from storage \n\nScreenshots of the guest are taken, and if I manually interact while it is running, screenshots of that interaction are captured as well.\n\n## Steps to Reproduce\n\n1. 
Submit sample to CAPEv2 via web interface.\n\n## Failure Logs\n\n\u25cf cape.service - CAPE\n Loaded: loaded (/usr/lib/systemd/system/cape.service; enabled; preset: enabled)\n Active: active (running) since Mon 2025-09-01 05:11:32 UTC; 17min ago\n Docs: https://github.com/kevoreilly/CAPEv2\n Main PID: 57529 (python)\n Tasks: 69 (limit: 18042)\n Memory: 250.7M (peak: 251.1M)\n CPU: 3.167s\n CGroup: /system.slice/cape.service\n \u2514\u250057529 /home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/bin/python cuckoo.py\n\nSep 01 05:11:33 capehost poetry[57529]: www.cuckoosandbox.org\nSep 01 05:11:33 capehost poetry[57529]: Copyright (c) 2010-2015\nSep 01 05:11:33 capehost poetry[57529]: CAPE: Config and Payload Extraction\nSep 01 05:11:33 capehost poetry[57529]: github.com/kevoreilly/CAPEv2\nSep 01 05:11:33 capehost poetry[57609]: /usr/bin/tcpdump\nSep 01 05:11:33 capehost poetry[57529]: 2025-09-01 05:11:33,967 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[kvm] with>\nSep 01 05:11:33 capehost poetry[57529]: 2025-09-01 05:11:33,967 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_cou>\nSep 01 05:11:33 capehost poetry[57529]: 2025-09-01 05:11:33,977 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\nSep 01 05:11:33 capehost poetry[57529]: 2025-09-01 05:11:33,989 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedS>\nSep 01 05:11:33 capehost poetry[57529]: 2025-09-01 05:11:33,991 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\n\n[debug.txt](https://github.com/user-attachments/files/22071649/debug.txt)\nComment: That's wrong -> `Using 0.0.0.0 as the resultserver_ip in both kvm.conf and cuckoo.conf`. The server should have a specific IP, so you must put that x.x.100.1 as the result server in cuckoo.conf or in the settings of each VM; better in cuckoo.conf.\nComment: Quick response! Thank you for the support here.\n\nI tried this as well (replacing 0.0.0.0 with the interface IP address x.x100.1) and it was the same result. I commented out the resultserver lines in the kvm config.\n\nStill no luck...\nComment: Well then, for a start, I would say provide more details; the main description is very poor. The connection with the VM seems to work fine. What is in the analysis log? You can find it in storage/analysis/X/analysis.log\r\n\r\nAlso, did you restart the cape service after changing the result server? Otherwise it has no effect.\nComment: Yup restarted all cape services. Sorry about the description. I don't really understand where the bug or misconfiguration is. I have redeployed from scratch for the third time and the same issue persists.\n\nThat is part of the problem, there are no analysis logs in /opt/CAPEv2/storage/analyses/ for each of the analyzed samples (i.e. 
1, 2,3 etc.).\n\nHere is a corresponding error in the process.log file specific to the analysis logs missing:\n2025-09-01 07:55:45,911 [Task 11] [lib.cuckoo.common.integrations.virustotal] ERROR: VT: Request failed\n2025-09-01 07:55:46,456 [Task 11] [modules.processing.behavior] WARNING: Analysis results folder does not exist at path \"/opt/CAPEv2/storage/analyses/11/logs\"\n2025-09-01 07:55:46,457 [Task 11] [lib.cuckoo.core.plugins] INFO: Logs folder doesn't exist, maybe something with with analyzer folder, any change?\n2025-09-01 07:55:46,531 [Task 11] [dev_utils.mongodb] INFO: attempting to delete calls for 1 tasks\n2025-09-01 07:55:46,603 [root] INFO: Reports generation completed for Task #11\n\nNo identified errors in cuckoo.log file:\n2025-09-01 07:51:00,457 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[kvm] with max_machines_count=10\n2025-09-01 07:51:00,457 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\n2025-09-01 07:51:00,469 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\n2025-09-01 07:51:00,487 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\n2025-09-01 07:51:00,488 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\n2025-09-01 07:51:00,498 [lib.cuckoo.core.machinery_manager] INFO: Task #11: found useable machine win10 (arch=x86, platform=windows)\n2025-09-01 07:51:00,498 [lib.cuckoo.core.scheduler] INFO: Task #11: Processing task\n2025-09-01 07:51:00,622 [lib.cuckoo.core.analysis_manager] INFO: Task #11: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_8xa1_bm7/FileZilla_3.69.3_win.exe'\n2025-09-01 07:51:11,869 [lib.cuckoo.core.analysis_manager] INFO: Task #11: Enabled route 'none'.\n2025-09-01 07:51:11,869 [modules.auxiliary.Mitmdump] INFO: Mitmdump module loaded\n2025-09-01 07:51:11,869 [modules.auxiliary.PolarProxy] INFO: PolarProxy module loaded\n2025-09-01 07:51:11,869 [modules.auxiliary.QemuScreenshots] INFO: QEMU screenshots module loaded\n2025-09-01 07:51:12,273 [lib.cuckoo.core.guest] INFO: Task #11: Starting analysis on guest (id=win10, ip=192.168.100.13)\n2025-09-01 07:51:12,277 [lib.cuckoo.core.guest] INFO: Task #11: Guest is running CAPE Agent 0.20 (id=win10, ip=192.168.100.13)\n2025-09-01 07:51:12,998 [lib.cuckoo.core.guest] INFO: Task #11: Uploading script files to guest (id=win10, ip=192.168.100.13)\n2025-09-01 07:51:14,005 [lib.cuckoo.core.guest] INFO: Task #11: Started capturing screenshots for win10\n2025-09-01 07:55:34,113 [lib.cuckoo.core.guest] INFO: Task #11: End of analysis reached! (id=win10, ip=192.168.100.13)\n2025-09-01 07:55:43,796 [lib.cuckoo.core.analysis_manager] INFO: Task #11: Completed analysis successfully.\n2025-09-01 07:55:43,798 [lib.cuckoo.core.analysis_manager] INFO: Task #11: analysis procedure completed\n\n\nComment: the connection between server and VM works, but i have no idea what is wrong, i guess something inside of the VM as is which send logs. maybe is firewall as was one user case, idk\nComment: Which logs would you like me to send?\n\nCan you link me to the issue where it was the user's firewall? 
I have confirmed that all guest firewalls and Defender are totally disabled.\nComment: In that case the user had enabled a firewall on the server side; see this one: https://github.com/kevoreilly/CAPEv2/issues/2685\nComment: That was the issue!\n\nThis should be documented in the readthedocs documentation.\n\nUFW was activated and I needed to add a rule to allow in from the virtual interface.\nComment: Glad we got to a solution; I will add it to the docs. But did you enable it? By default it comes disabled, which is why it is not in the docs.\nComment: Yes, UFW was activated prior to deployment for environment purposes.\n\nThank you again for the help!\nComment: You're welcome.", "Title: Failed to take screenshot\nBody: \n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n# Expected Behavior\n\nScreenshots taken by the agent should appear in the host path `/opt/CAPEv2/storage/analyses//shots/` after analysis completes.\n\n# Current Behavior\n\nAgent-based screenshots do not appear on the host. The analysis completes successfully, other artifacts are collected, but the `shots` folder on the host remains empty.\n\n# Failure Information (for bugs)\n\nScreenshots are taken successfully inside the VM manually using Python + Pillow (`ImageGrab.grab().save(...)`) but are not transmitted to the host via CAPEv2.\n\n## Steps to Reproduce\n\n1. Configure CAPEv2 with Proxmox machinery.\n2. Enable agent-based screenshots: `screenshots_windows = yes` in `auxiliary.conf`.\n3. Run analysis on a Windows 11 VM with the CAPE agent installed and running.\n4. Let analysis complete.\n5. 
Check `/opt/CAPEv2/storage/analyses//shots/` \u2014 it is empty.\n\n\n\n### Configs used\n\n**cuckoo.conf**\n```ini\nmachinery = proxmox\n# Enable screenshots of analysis machines while running.\nmachinery_screenshots = on\n\n**api.conf**\n# Pull screenshots from a specific task.\n[taskscreenshot]\nenabled = yes\nauth_only = no\nrps = 1/s\n#rpm = 100/m\n\n\n**reporting.conf**\n[reporthtml]\n# Standalone report, not requires CAPE webgui\nenabled = no\n# Include screenshots in report\nscreenshots = no\napicalls = no\n\n\n[reporthtmlsummary]\n# much smaller, faster report generation, omits API logs and is non-interactive\nenabled = no\n# Include screenshots in report\nscreenshots = no\n\n**auxiliary.conf**\nscreenshots_windows = yes\nscreenshots_linux = yes\n\n[QemuScreenshots]\n# Enable or disable the use of QEMU as screenshot capture [yes/no].\n# screenshots_linux and screenshots_windows must be disabled\nenabled = no\n\n\n### Failure logs\n\n2025-08-31 10:19:34,812 [lib.cuckoo.core.plugins] DEBUG: Started auxiliary module: Sniffer\n2025-08-31 10:19:34,822 [lib.cuckoo.core.guest] INFO: Task #37: Starting analysis on guest (id=cuckoo1, ip=192.168.10.11)\n2025-08-31 12:25:32,883 [lib.cuckoo.core.analysis_manager] WARNING: Task #39: Failed to take screenshot of Win11-Sandbox:\n2025-08-31 12:25:33,891 [lib.cuckoo.core.analysis_manager] WARNING: Task #39: Failed to take screenshot of Win11-Sandbox:\n2025-08-31 12:25:34,897 [lib.cuckoo.core.analysis_manager] WARNING: Task #39: Failed to take screenshot of Win11-Sandbox:\n2025-08-31 12:25:35,905 [lib.cuckoo.core.analysis_manager] WARNING: Task #39: Failed to take screenshot of Win11-Sandbox:\n2025-08-31 12:25:36,913 [lib.cuckoo.core.analysis_manager] WARNING: Task #39: Failed to take screenshot of Win11-Sandbox:\n2025-08-31 12:25:37,919 [lib.cuckoo.core.analysis_manager] WARNING: Task #39: Failed to take screenshot of Win11-Sandbox:\n2025-08-31 12:25:38,926 [lib.cuckoo.core.analysis_manager] WARNING: Task #39: Failed to take screenshot of Win11-S...\nComment: hey, what is inside of the analysis.log? 
you can find it on webgui or in analysis folder, storage/analysis//analysis.log\nComment: \n**This is from one of the completed tasks with no screenshots**\n_/opt/CAPEv2/storage/analyses/24$ cat analysis.log_\n2025-08-24 15:56:33,749 [root] INFO: Date set to: 20250831T06:41:10, timeout set to: 300\n2025-08-31 06:41:10,008 [root] DEBUG: Starting analyzer from: C:\\qstr2giq\n2025-08-31 06:41:10,008 [root] DEBUG: Storing results at: C:\\iyeYqm\n2025-08-31 06:41:10,010 [root] DEBUG: Pipe server name: \\\\.\\PIPE\\NCFmMTalx\n2025-08-31 06:41:10,010 [root] DEBUG: Python path: C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32\n2025-08-31 06:41:10,010 [root] INFO: analysis running as an admin\n2025-08-31 06:41:10,012 [root] INFO: analysis package specified: \"edge\"\n2025-08-31 06:41:10,012 [root] DEBUG: importing analysis package module: \"modules.packages.edge\"...\n2025-08-31 06:41:10,022 [root] DEBUG: imported analysis package \"edge\"\n2025-08-31 06:41:10,022 [root] DEBUG: initializing analysis package \"edge\"...\n2025-08-31 06:41:10,022 [root] DEBUG: New location of moved file: googel.com\n2025-08-31 06:41:10,022 [root] INFO: Analyzer: Package modules.packages.edge does not specify a DLL option\n2025-08-31 06:41:10,022 [root] INFO: Analyzer: Package modules.packages.edge does not specify a DLL_64 option\n2025-08-31 06:41:10,022 [root] INFO: Analyzer: Package modules.packages.edge does not specify a loader option\n2025-08-31 06:41:10,022 [root] INFO: Analyzer: Package modules.packages.edge does not specify a loader_64 option\n2025-08-31 06:41:10,410 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.amsi\"\n2025-08-31 06:41:10,412 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.browser\"\n2025-08-31 06:41:10,414 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.browsermonitor\"\n2025-08-31 06:41:10,418 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.curtain\"\n2025-08-31 06:41:10,426 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.digisig\"\n2025-08-31 06:41:10,432 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.disguise\"\n2025-08-31 06:41:10,436 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.during_script\"\n2025-08-31 06:41:10,442 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.evtx\"\n2025-08-31 06:41:10,462 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.human\"\n2025-08-31 06:41:10,464 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.permissions\"\n2025-08-31 06:41:10,466 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.pre_script\"\n2025-08-31 06:41:10,468 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.procmon\"\n2025-08-31 06:41:10,472 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.recentfiles\"\n2025-08-31 06:41:10,476 [lib.api.screenshot] DEBUG: Importing 'PIL.ImageChops'\n2025-08-31 06:41:10,719 [lib.api.screenshot] DEBUG: Importing 'PIL.ImageGrab'\n2025-08-31 06:41:10,729 [lib.api.screenshot] DEBUG: Importing 'PIL.ImageDraw'\n2025-08-31 06:41:10,762 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.screenshots\"\n2025-08-31 06:41:10,764 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.sysmon\"\n2025-08-31 06:41:10,770 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.tlsdump\"\n2025-08-31 06:41:10,772 [root] DEBUG: Imported auxiliary module \"modules.auxiliary.usage\"\n2025-08-31 06:41:10,772 [root] DEBUG: Initialized auxiliary module \"Browser\"\n2025-08-31 06:41:10,774 [root] 
DEBUG: attempting to configure 'Browser' from data\n2025-08-31 06:41:10,774 [root] DEBUG: module Browser does not support data configuration, ignoring\n2025-08-31 06:41:10,774 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.browser\"...\n2025-08-31 06:41:10,776 [root] DEBUG: Started auxiliary module modules.auxiliary.browser\n2025-08-31 06:41:10,776 [root] DEBUG: Initialized auxiliary module \"Browsermonitor\"\n2025-08-31 06:41:10,776 [root] DEBUG: attempting to configure 'Browsermonitor' from data\n2025-08-31 06:41:10,776 [root] DEBUG: module Browsermonitor does not support data configuration, ignoring\n2025-08-31 06:41:10,776 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.browsermonitor\"...\n2025-08-31 06:41:10,776 [root] DEBUG: Started auxiliary module modules.auxiliary.browsermonitor\n2025-08-31 06:41:10,776 [root] DEBUG: Initialized auxiliary module \"Curtain\"\n2025-08-31 06:41:10,776 [root] DEBUG: attempting to configure 'Curtain' from data\n2025-08-31 06:41:10,778 [root] DEBUG: module Curtain does not support data configuration, ignoring\n2025-08-31 06:41:10,778 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.curtain\"...\n2025-08-31 06:41:10,778 [root] DEBUG: Started auxiliary module modules.auxiliary.curtain\n2025-08-31 06:41:10,778 [root] DEBUG: Initialized auxiliary module \"DigiSig\"\n2025-08-31 06:41:10,778 [root] DEBUG: attempting to configure 'DigiSig' from data\n2025-08-31 06:41:10,778 [root] DEBUG: module DigiSig does not support data configuration, ignoring\n2025-08-31 06:41:10,780 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.digisig\"...\n2025-08-31 06:41:10,780 [modules.auxiliary.digisig] DEBUG: Skipping authenticode validation, analysis is not a file\n2025-08-31 06:41:10,780 [root] DEBUG: Started auxiliary module modules.auxiliary.digisig\n2025-08-31 06:41:10,780 [root] DEBUG: Initialized auxiliary module \"Disguise\"\n2025-08-31 06:41:10,780 [root] DEBUG: attempting to configure 'Disguise' from data\n2025-08-31 06:41:10,780 [root] DEBUG: module Disguise does not support data configuration, ignoring\n2025-08-31 06:41:10,780 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.disguise\"...\n2025-08-31 06:41:10,786 [modules.auxiliary.disguise] INFO: Disguising GUID to 6a0a719b-2477-466f-a34f-a6f77313575d\n2025-08-31 06:41:10,786 [root] DEBUG: Started auxiliary module modules.auxiliary.disguise\n2025-08-31 06:41:10,786 [root] DEBUG: Initialized auxiliary module \"Evtx\"\n2025-08-31 06:41:10,786 [root] DEBUG: attempting to configure 'Evtx' from data\n2025-08-31 06:41:10,788 [root] DEBUG: module Evtx does not support data configuration, ignoring\n2025-08-31 06:41:10,788 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.evtx\"...\n2025-08-31 06:41:10,788 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Security State Change\" /success:enable /failure:enable\n2025-08-31 06:41:10,788 [root] DEBUG: Started auxiliary module modules.auxiliary.evtx\n2025-08-31 06:41:10,788 [root] DEBUG: Initialized auxiliary module \"Human\"\n2025-08-31 06:41:10,788 [root] DEBUG: attempting to configure 'Human' from data\n2025-08-31 06:41:10,788 [root] DEBUG: module Human does not support data configuration, ignoring\n2025-08-31 06:41:10,788 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.human\"...\n2025-08-31 06:41:10,792 [root] DEBUG: Started auxiliary module modules.auxiliary.human\n2025-08-31 06:41:10,792 [root] DEBUG: 
Initialized auxiliary module \"Permissions\"\n2025-08-31 06:41:10,792 [root] DEBUG: attempting to configure 'Permissions' from data\n2025-08-31 06:41:10,792 [root] DEBUG: module Permissions does not support data configuration, ignoring\n2025-08-31 06:41:10,792 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.permissions\"...\n2025-08-31 06:41:10,792 [root] DEBUG: Started auxiliary module modules.auxiliary.permissions\n2025-08-31 06:41:10,802 [root] DEBUG: Initialized auxiliary module \"Pre_script\"\n2025-08-31 06:41:10,802 [root] DEBUG: attempting to configure 'Pre_script' from data\n2025-08-31 06:41:10,802 [root] DEBUG: module Pre_script does not support data configuration, ignoring\n2025-08-31 06:41:10,804 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.pre_script\"...\n2025-08-31 06:41:10,804 [root] DEBUG: Started auxiliary module modules.auxiliary.pre_script\n2025-08-31 06:41:10,804 [root] DEBUG: Initialized auxiliary module \"Procmon\"\n2025-08-31 06:41:10,804 [root] DEBUG: attempting to configure 'Procmon' from data\n2025-08-31 06:41:10,804 [root] DEBUG: module Procmon does not support data configuration, ignoring\n2025-08-31 06:41:10,804 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.procmon\"...\n2025-08-31 06:41:10,806 [root] DEBUG: Started auxiliary module modules.auxiliary.procmon\n2025-08-31 06:41:10,808 [root] DEBUG: Initialized auxiliary module \"RecentFiles\"\n2025-08-31 06:41:10,808 [root] DEBUG: attempting to configure 'RecentFiles' from data\n2025-08-31 06:41:10,808 [root] DEBUG: module RecentFiles does not support data configuration, ignoring\n2025-08-31 06:41:10,808 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.recentfiles\"...\n2025-08-31 06:41:10,820 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\iLHlelGDGzB.ppt to disk.\n2025-08-31 06:41:10,927 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\rdDNWUsERFuDFF.doc to disk.\n2025-08-31 06:41:10,931 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\pqfIjvIxtzEpD.ppt to disk.\n2025-08-31 06:41:10,933 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\PEbfdOrvYLeqDiOsmRHB.pptx to disk.\n2025-08-31 06:41:10,941 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\UEgjIgGqNH.docx to disk.\n2025-08-31 06:41:10,945 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\eYejiZKBcmZJ.ppt to disk.\n2025-08-31 06:41:10,949 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\WzsXuOpiUP.docx to disk.\n2025-08-31 06:41:10,953 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\CpINuvQyTqQ.docm to disk.\n2025-08-31 06:41:10,956 [modules.auxiliary.recentfiles] DEBUG: Wrote 'recentfile' C:\\Users\\Alex Dan\\Documents\\lrUcDqVdwLiHX.ppt to disk.\n2025-08-31 06:41:10,956 [root] DEBUG: Started auxiliary module modules.auxiliary.recentfiles\n2025-08-31 06:41:10,958 [root] DEBUG: Initialized auxiliary module \"Screenshots\"\n2025-08-31 06:41:10,958 [root] DEBUG: attempting to configure 'Screenshots' from data\n2025-08-31 06:41:10,958 [root] DEBUG: module Screenshots does not support data configuration, ignoring\n2025-08-31 06:41:10,958 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.screenshots\"...\n2025-08-31 06:41:10,958 [root] DEBUG: 
Started auxiliary module modules.auxiliary.screenshots\n2025-08-31 06:41:10,958 [root] DEBUG: Initialized auxiliary module \"Sysmon\"\n2025-08-31 06:41:10,958 [root] DEBUG: attempting to configure 'Sysmon' from data\n2025-08-31 06:41:10,958 [root] DEBUG: module Sysmon does not support data configuration, ignoring\n2025-08-31 06:41:10,958 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.sysmon\"...\n2025-08-31 06:41:10,988 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Security System Extension\" /success:enable /failure:enable\n2025-08-31 06:41:11,200 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"System Integrity\" /success:enable /failure:enable\n2025-08-31 06:41:11,273 [root] WARNING: Cannot execute auxiliary module modules.auxiliary.sysmon: In order to use the Sysmon functionality, it is required to have the SMaster(64|32).exe file and sysmonconfig-export.xml file in the bin path. Note that the SMaster(64|32).exe files are just the standard Sysmon binaries renamed to avoid anti-analysis detection techniques.\n2025-08-31 06:41:11,273 [root] DEBUG: Initialized auxiliary module \"TLSDumpMasterSecrets\"\n2025-08-31 06:41:11,273 [root] DEBUG: attempting to configure 'TLSDumpMasterSecrets' from data\n2025-08-31 06:41:11,273 [root] DEBUG: module TLSDumpMasterSecrets does not support data configuration, ignoring\n2025-08-31 06:41:11,273 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.tlsdump\"...\n2025-08-31 06:41:11,277 [modules.auxiliary.tlsdump] INFO: lsass.exe found, pid 524\n2025-08-31 06:41:11,283 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\524.ini\n2025-08-31 06:41:11,325 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"IPsec Driver\" /success:disable /failure:disable\n2025-08-31 06:41:11,414 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Other System Events\" /success:disable /failure:enable\n2025-08-31 06:41:11,511 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Logon\" /success:enable /failure:enable\n2025-08-31 06:41:11,596 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Logoff\" /success:enable /failure:enable\n2025-08-31 06:41:11,766 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Account Lockout\" /success:enable /failure:enable\n2025-08-31 06:41:11,840 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"IPsec Main Mode\" /success:disable /failure:disable\n2025-08-31 06:41:12,089 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"IPsec Quick Mode\" /success:disable /failure:disable\n2025-08-31 06:41:12,160 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"IPsec Extended Mode\" /success:disable /failure:disable\n2025-08-31 06:41:12,239 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Other Logon/Logoff Events\" /success:enable /failure:enable\n2025-08-31 06:41:12,293 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:12,293 [lib.api.process] INFO: Option 'tlsdump' with value '1' sent to monitor\n2025-08-31 06:41:12,297 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader 
C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:12,320 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Network Policy Server\" /success:enable /failure:enable\n2025-08-31 06:41:12,386 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Special Logon\" /success:enable /failure:enable\n2025-08-31 06:41:12,451 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"File System\" /success:enable /failure:enable\n2025-08-31 06:41:12,481 [root] DEBUG: Loader: Injecting process 524 with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:12,483 [root] DEBUG: Loader: Copied config file C:\\qstr2giq\\dll\\524.ini to system path C:\\524.ini\n2025-08-31 06:41:12,485 [root] DEBUG: Loader: Failed to open process, PPLinject launch failed\n2025-08-31 06:41:12,485 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:12,491 [root] DEBUG: Started auxiliary module modules.auxiliary.tlsdump\n2025-08-31 06:41:12,491 [root] DEBUG: Initialized auxiliary module \"Usage\"\n2025-08-31 06:41:12,491 [root] DEBUG: attempting to configure 'Usage' from data\n2025-08-31 06:41:12,491 [root] DEBUG: module Usage does not support data configuration, ignoring\n2025-08-31 06:41:12,491 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.usage\"...\n2025-08-31 06:41:12,491 [root] DEBUG: Started auxiliary module modules.auxiliary.usage\n2025-08-31 06:41:12,495 [root] DEBUG: Initialized auxiliary module \"During_script\"\n2025-08-31 06:41:12,495 [root] DEBUG: attempting to configure 'During_script' from data\n2025-08-31 06:41:12,495 [root] DEBUG: module During_script does not support data configuration, ignoring\n2025-08-31 06:41:12,495 [root] DEBUG: Trying to start auxiliary module \"modules.auxiliary.during_script\"...\n2025-08-31 06:41:12,497 [root] DEBUG: Started auxiliary module modules.auxiliary.during_script\n2025-08-31 06:41:12,497 [root] INFO: Interactive mode enabled - injecting into explorer shell\n2025-08-31 06:41:12,499 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\4840.ini\n2025-08-31 06:41:12,499 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:12,501 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:12,527 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Registry\" /success:enable /failure:enable\n2025-08-31 06:41:12,552 [root] DEBUG: Loader: Injecting process 4840 with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:12,594 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Kernel Object\" /success:enable /failure:enable\n2025-08-31 06:41:12,658 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"SAM\" /success:disable /failure:disable\n2025-08-31 06:41:12,751 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Certification Services\" /success:enable /failure:enable\n2025-08-31 06:41:12,812 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Handle Manipulation\" /success:disable /failure:disable\n2025-08-31 06:41:12,868 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Application Generated\" /success:enable /failure:enable\n2025-08-31 06:41:12,923 [root] DEBUG: 4840: 
Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:12,923 [root] DEBUG: 4840: Disabling sleep skipping.\n2025-08-31 06:41:12,925 [root] DEBUG: 4840: Interactive desktop enabled.\n2025-08-31 06:41:12,925 [root] DEBUG: 4840: Dropped file limit defaulting to 100.\n2025-08-31 06:41:12,925 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"File Share\" /success:enable /failure:enable\n2025-08-31 06:41:12,925 [root] DEBUG: 4840: Interactive desktop - injecting Explorer Shell\n2025-08-31 06:41:12,937 [root] DEBUG: 4840: YaraInit: Compiled 42 rule files\n2025-08-31 06:41:12,939 [root] DEBUG: 4840: YaraInit: Compiled rules saved to file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:12,959 [root] DEBUG: 4840: YaraScan: Scanning 0x00007FF731FE0000, size 0x2a2000\n2025-08-31 06:41:12,977 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Filtering Platform Packet Drop\" /success:disable /failure:disable\n2025-08-31 06:41:12,984 [root] DEBUG: 4840: Monitor initialised: 64-bit capemon loaded in process 4840 at 0x00007FFB53590000, thread 2820, image base 0x00007FF731FE0000, stack from 0x00000000056F2000-0x0000000005700000\n2025-08-31 06:41:12,984 [root] DEBUG: 4840: Commandline: C:\\WINDOWS\\Explorer.EXE\n2025-08-31 06:41:12,986 [root] DEBUG: 4840: add_all_dlls_to_dll_ranges: skipping C:\\WINDOWS\\pyshellext.\n2025-08-31 06:41:13,026 [root] DEBUG: 4840: Hooked 69 out of 69 functions\n2025-08-31 06:41:13,026 [root] DEBUG: 4840: Syscall hook installed, syscall logging level 1\n2025-08-31 06:41:13,030 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Filtering Platform Connection\" /success:disable /failure:disable\n2025-08-31 06:41:13,040 [root] DEBUG: 4840: DLL loaded at 0x00007FFB7E4A0000: C:\\WINDOWS\\SYSTEM32\\Cabinet (0x2a000 bytes).\n2025-08-31 06:41:13,050 [root] DEBUG: InjectDllViaThread: Successfully injected Dll into process via RtlCreateUserThread.\n2025-08-31 06:41:13,052 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:13,054 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:13,057 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7E21000, size: 0x1000.\n2025-08-31 06:41:13,059 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7E11000, size: 0x1000.\n2025-08-31 06:41:13,061 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7E01000, size: 0x1000.\n2025-08-31 06:41:13,068 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7DF1000, size: 0x3000.\n2025-08-31 06:41:13,070 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7DE1000, size: 0x1000.\n2025-08-31 06:41:13,072 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7DD1000, size: 0x1000.\n2025-08-31 06:41:13,074 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7DC1000, size: 0x1000.\n2025-08-31 06:41:13,076 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7DB1000, size: 0x1000.\n2025-08-31 06:41:13,078 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7DA1000, size: 0x1000.\n2025-08-31 06:41:13,078 [root] DEBUG: 4840: AllocationHandler: 
Adding allocation to tracked region list: 0x00007FF5E7D91000, size: 0x1000.\n2025-08-31 06:41:13,088 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Other Object Access Events\" /success:disable /failure:disable\n2025-08-31 06:41:13,097 [root] DEBUG: 4840: AllocationHandler: Adding allocation to tracked region list: 0x00007FF5E7D81000, size: 0x1000.\n2025-08-31 06:41:13,153 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Sensitive Privilege Use\" /success:disable /failure:disable\n2025-08-31 06:41:13,210 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Non Sensitive Privilege Use\" /success:disable /failure:disable\n2025-08-31 06:41:13,263 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Other Privilege Use Events\" /success:disable /failure:disable\n2025-08-31 06:41:13,333 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"RPC Events\" /success:enable /failure:enable\n2025-08-31 06:41:13,390 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Audit Policy Change\" /success:enable /failure:enable\n2025-08-31 06:41:13,448 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Authentication Policy Change\" /success:enable /failure:enable\n2025-08-31 06:41:13,527 [root] DEBUG: 4840: caller_dispatch: Added region at 0x00007FF731FE0000 to tracked regions list (combase::CoCreateInstance returns to 0x00007FF731FE53EA, thread 4324).\n2025-08-31 06:41:13,549 [root] DEBUG: 4840: YaraScan: Scanning 0x00007FF731FE0000, size 0x2a2000\n2025-08-31 06:41:13,553 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"MPSSVC Rule-Level Policy Change\" /success:disable /failure:disable\n2025-08-31 06:41:13,578 [root] DEBUG: 4840: ProcessImageBase: Main module image at 0x00007FF731FE0000 unmodified (entropy change 0.000000e+00)\n2025-08-31 06:41:13,658 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\1112.ini\n2025-08-31 06:41:13,667 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:13,687 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:13,838 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Filtering Platform Policy Change\" /success:disable /failure:disable\n2025-08-31 06:41:13,885 [root] DEBUG: Loader: Injecting process 1112 with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:13,952 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Other Policy Change Events\" /success:disable /failure:enable\n2025-08-31 06:41:14,146 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"User Account Management\" /success:enable /failure:enable\n2025-08-31 06:41:14,213 [root] DEBUG: 1112: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:14,213 [root] DEBUG: 1112: Disabling sleep skipping.\n2025-08-31 06:41:14,215 [root] DEBUG: 1112: Interactive desktop enabled.\n2025-08-31 06:41:14,217 [root] DEBUG: 1112: Dropped file limit defaulting to 100.\n2025-08-31 06:41:14,219 [root] DEBUG: 1112: Services hook set enabled\n2025-08-31 06:41:14,235 [root] DEBUG: 1112: YaraInit: Compiled rules loaded from 
existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:14,275 [root] DEBUG: 1112: Monitor initialised: 64-bit capemon loaded in process 1112 at 0x00007FFB53590000, thread 10128, image base 0x00007FF6E25C0000, stack from 0x00000073893F5000-0x0000007389400000\n2025-08-31 06:41:14,277 [root] DEBUG: 1112: Commandline: C:\\WINDOWS\\system32\\svchost.exe -k DcomLaunch -p\n2025-08-31 06:41:14,300 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Computer Account Management\" /success:enable /failure:enable\n2025-08-31 06:41:14,336 [root] DEBUG: 1112: Hooked 69 out of 69 functions\n2025-08-31 06:41:14,338 [root] INFO: Loaded monitor into process with pid 1112\n2025-08-31 06:41:14,338 [root] DEBUG: InjectDllViaThread: Successfully injected Dll into process via RtlCreateUserThread.\n2025-08-31 06:41:14,340 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:14,376 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:14,464 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Security Group Management\" /success:enable /failure:enable\n2025-08-31 06:41:14,609 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Distribution Group Management\" /success:enable /failure:enable\n2025-08-31 06:41:14,704 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Application Group Management\" /success:enable /failure:enable\n2025-08-31 06:41:14,870 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Other Account Management Events\" /success:enable /failure:enable\n2025-08-31 06:41:14,985 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Directory Service Access\" /success:enable /failure:enable\n2025-08-31 06:41:15,086 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Directory Service Changes\" /success:enable /failure:enable\n2025-08-31 06:41:15,183 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Directory Service Replication\" /success:disable /failure:enable\n2025-08-31 06:41:15,268 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Detailed Directory Service Replication\" /success:disable /failure:disable\n2025-08-31 06:41:15,351 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Credential Validation\" /success:enable /failure:enable\n2025-08-31 06:41:15,478 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Kerberos Service Ticket Operations\" /success:enable /failure:enable\n2025-08-31 06:41:15,571 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Other Account Logon Events\" /success:enable /failure:enable\n2025-08-31 06:41:15,669 [modules.auxiliary.evtx] DEBUG: Enabling advanced logging -> auditpol /set /subcategory:\"Kerberos Authentication Service\" /success:enable /failure:enable\n2025-08-31 06:41:15,788 [modules.auxiliary.evtx] DEBUG: Wiping Application\n2025-08-31 06:41:15,905 [modules.auxiliary.evtx] DEBUG: Wiping HardwareEvents\n2025-08-31 06:41:16,020 [modules.auxiliary.evtx] DEBUG: Wiping Internet Explorer\n2025-08-31 06:41:16,119 [modules.auxiliary.evtx] DEBUG: Wiping Key Management Service\n2025-08-31 06:41:16,226 [modules.auxiliary.evtx] DEBUG: Wiping 
OAlerts\n2025-08-31 06:41:16,354 [modules.auxiliary.evtx] DEBUG: Wiping Security\n2025-08-31 06:41:16,485 [modules.auxiliary.evtx] DEBUG: Wiping Setup\n2025-08-31 06:41:16,663 [modules.auxiliary.evtx] DEBUG: Wiping System\n2025-08-31 06:41:16,772 [modules.auxiliary.evtx] DEBUG: Wiping Windows PowerShell\n2025-08-31 06:41:16,898 [modules.auxiliary.evtx] DEBUG: Wiping Microsoft-Windows-Sysmon/Operational\n2025-08-31 06:41:19,941 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 8972: C:\\WINDOWS\\system32\\BackgroundTaskHost.exe, ImageBase: 0x00007FF77A6A0000\n2025-08-31 06:41:19,943 [root] INFO: Announced 64-bit process name: backgroundTaskHost.exe pid: 8972\n2025-08-31 06:41:19,943 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\8972.ini\n2025-08-31 06:41:20,014 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:20,020 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:20,065 [root] DEBUG: Loader: Injecting process 8972 (thread 6536) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:20,068 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:20,068 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:20,073 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:20,082 [root] INFO: Announced 64-bit process name: backgroundTaskHost.exe pid: 8972\n2025-08-31 06:41:20,082 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\8972.ini\n2025-08-31 06:41:20,082 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:20,088 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:20,105 [root] DEBUG: Loader: Injecting process 8972 (thread 6536) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:20,107 [root] DEBUG: InjectDllViaIAT: This image has already been patched.\n2025-08-31 06:41:20,107 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:20,111 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:21,346 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 9832: C:\\Windows\\System32\\RuntimeBroker.exe, ImageBase: 0x00007FF788F90000\n2025-08-31 06:41:21,346 [root] INFO: Announced 64-bit process name: RuntimeBroker.exe pid: 9832\n2025-08-31 06:41:21,348 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\9832.ini\n2025-08-31 06:41:21,350 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:21,356 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:21,387 [root] DEBUG: Loader: Injecting process 9832 (thread 5212) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:21,387 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:21,389 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:21,393 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:21,397 [root] INFO: Announced 64-bit process name: RuntimeBroker.exe pid: 9832\n2025-08-31 06:41:21,399 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\9832.ini\n2025-08-31 06:41:21,399 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:21,403 
[lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:21,423 [root] DEBUG: Loader: Injecting process 9832 (thread 5212) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:21,425 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:21,425 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:21,429 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:21,464 [root] DEBUG: 9832: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:21,464 [root] DEBUG: 9832: Interactive desktop enabled.\n2025-08-31 06:41:21,466 [root] DEBUG: 9832: Dropped file limit defaulting to 100.\n2025-08-31 06:41:21,472 [root] DEBUG: 9832: Disabling sleep skipping.\n2025-08-31 06:41:21,474 [root] DEBUG: 9832: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:21,492 [root] DEBUG: 9832: YaraScan: Scanning 0x00007FF788F90000, size 0x20000\n2025-08-31 06:41:21,494 [root] DEBUG: 9832: Monitor initialised: 64-bit capemon loaded in process 9832 at 0x00007FFB53590000, thread 5212, image base 0x00007FF788F90000, stack from 0x00000014ABF34000-0x00000014ABF40000\n2025-08-31 06:41:21,496 [root] DEBUG: 9832: Commandline: C:\\Windows\\System32\\RuntimeBroker.exe -Embedding\n2025-08-31 06:41:21,524 [root] DEBUG: 9832: hook_api: LdrpCallInitRoutine export address 0x00007FFB9ED72980 obtained via GetFunctionAddress\n2025-08-31 06:41:21,587 [root] WARNING: b'Unable to place hook on LockResource'\n2025-08-31 06:41:21,587 [root] DEBUG: 9832: set_hooks: Unable to hook LockResource\n2025-08-31 06:41:21,611 [root] DEBUG: 9832: Hooked 616 out of 617 functions\n2025-08-31 06:41:21,613 [root] DEBUG: 9832: Syscall hook installed, syscall logging level 1\n2025-08-31 06:41:21,617 [root] INFO: Loaded monitor into process with pid 9832\n2025-08-31 06:41:21,621 [root] DEBUG: 9832: caller_dispatch: Added region at 0x00007FF788F90000 to tracked regions list (ntdll::NtSetInformationThread returns to 0x00007FF788F990F2, thread 5212).\n2025-08-31 06:41:21,623 [root] DEBUG: 9832: YaraScan: Scanning 0x00007FF788F90000, size 0x20000\n2025-08-31 06:41:21,625 [root] DEBUG: 9832: ProcessImageBase: Main module image at 0x00007FF788F90000 unmodified (entropy change 0.000000e+00)\n2025-08-31 06:41:21,844 [root] INFO: Announced starting service \"b'Winmgmt'\"\n2025-08-31 06:41:21,846 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\708.ini\n2025-08-31 06:41:21,848 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:21,854 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:21,886 [root] DEBUG: Loader: Injecting process 708 with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:21,890 [root] DEBUG: Loader: Copied config file C:\\qstr2giq\\dll\\708.ini to system path C:\\708.ini\n2025-08-31 06:41:21,937 [root] DEBUG: Loader: Failed to open process, PPLinject launch failed\n2025-08-31 06:41:21,937 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,080 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 1160: \\\\?\\C:\\Windows\\System32\\SecurityHealthHost.exe, ImageBase: 0x00007FF63CDC0000\n2025-08-31 06:41:22,082 [root] INFO: Announced 64-bit process name: SecurityHealthHost.exe pid: 
1160\n2025-08-31 06:41:22,084 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\1160.ini\n2025-08-31 06:41:22,091 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:22,097 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:22,135 [root] DEBUG: Loader: Injecting process 1160 (thread 1164) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,145 [root] DEBUG: InjectDllViaIAT: Failed to allocate region in target process for new import table.\n2025-08-31 06:41:22,147 [root] DEBUG: Error 5 (0x5) - InjectDllViaQueuedAPC: Failed to allocate buffer in target: Access is denied.\n2025-08-31 06:41:22,149 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,161 [root] INFO: Announced 64-bit process name: SecurityHealthHost.exe pid: 1160\n2025-08-31 06:41:22,161 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\1160.ini\n2025-08-31 06:41:22,163 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:22,168 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:22,206 [root] DEBUG: Loader: Injecting process 1160 (thread 1164) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,218 [root] DEBUG: InjectDllViaIAT: Failed to allocate region in target process for new import table.\n2025-08-31 06:41:22,220 [root] DEBUG: Error 5 (0x5) - InjectDllViaQueuedAPC: Failed to allocate buffer in target: Access is denied.\n2025-08-31 06:41:22,220 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,846 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 4188: C:\\WINDOWS\\system32\\backgroundTaskHost.exe, ImageBase: 0x00007FF77A6A0000\n2025-08-31 06:41:22,848 [root] INFO: Announced 64-bit process name: backgroundTaskHost.exe pid: 4188\n2025-08-31 06:41:22,850 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\4188.ini\n2025-08-31 06:41:22,850 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:22,856 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:22,898 [root] DEBUG: Loader: Injecting process 4188 (thread 7080) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,900 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:22,904 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,914 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:22,922 [root] INFO: Announced 64-bit process name: backgroundTaskHost.exe pid: 4188\n2025-08-31 06:41:22,924 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\4188.ini\n2025-08-31 06:41:22,924 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:22,931 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:22,955 [root] DEBUG: Loader: Injecting process 4188 (thread 7080) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,957 [root] DEBUG: InjectDllViaIAT: This image has already been patched.\n2025-08-31 06:41:22,957 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:22,963 [lib.api.process] 
INFO: Injected into 64-bit \n2025-08-31 06:41:23,441 [root] INFO: Restarting WMI Service\n2025-08-31 06:41:23,695 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 3904: C:\\WINDOWS\\system32\\backgroundTaskHost.exe, ImageBase: 0x00007FF77A6A0000\n2025-08-31 06:41:23,695 [root] INFO: Announced 64-bit process name: backgroundTaskHost.exe pid: 3904\n2025-08-31 06:41:23,695 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\3904.ini\n2025-08-31 06:41:23,697 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:23,705 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:23,725 [root] DEBUG: Loader: Injecting process 3904 (thread 3616) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:23,727 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:23,727 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:23,731 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:23,736 [root] INFO: Announced 64-bit process name: backgroundTaskHost.exe pid: 3904\n2025-08-31 06:41:23,738 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\3904.ini\n2025-08-31 06:41:23,738 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:23,745 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:23,762 [root] DEBUG: Loader: Injecting process 3904 (thread 3616) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:23,762 [root] DEBUG: InjectDllViaIAT: This image has already been patched.\n2025-08-31 06:41:23,762 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:23,766 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:23,790 [root] DEBUG: 3904: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:23,792 [root] DEBUG: 3904: Interactive desktop enabled.\n2025-08-31 06:41:23,792 [root] DEBUG: 3904: Dropped file limit defaulting to 100.\n2025-08-31 06:41:23,798 [root] DEBUG: 3904: Disabling sleep skipping.\n2025-08-31 06:41:23,800 [root] DEBUG: 3904: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:23,818 [root] DEBUG: 3904: YaraScan: Scanning 0x00007FF77A6A0000, size 0xb000\n2025-08-31 06:41:23,818 [root] DEBUG: 3904: Monitor initialised: 64-bit capemon loaded in process 3904 at 0x00007FFB53590000, thread 3616, image base 0x00007FF77A6A0000, stack from 0x0000004746DC5000-0x0000004746DD0000\n2025-08-31 06:41:23,818 [root] DEBUG: 3904: Commandline: \"C:\\WINDOWS\\system32\\backgroundTaskHost.exe\" -ServerName:WindowsBackup.AppXnn6fnh4raxtmg45ba8k2f4ykb7n0k3y4.mca\n2025-08-31 06:41:23,841 [root] DEBUG: 3904: hook_api: LdrpCallInitRoutine export address 0x00007FFB9ED72980 obtained via GetFunctionAddress\n2025-08-31 06:41:23,903 [root] WARNING: b'Unable to place hook on LockResource'\n2025-08-31 06:41:23,905 [root] DEBUG: 3904: set_hooks: Unable to hook LockResource\n2025-08-31 06:41:23,918 [root] DEBUG: 3904: Hooked 616 out of 617 functions\n2025-08-31 06:41:23,918 [root] DEBUG: 3904: Syscall hook installed, syscall logging level 1\n2025-08-31 06:41:23,924 [root] INFO: Loaded monitor into process with pid 3904\n2025-08-31 06:41:23,924 [root] DEBUG: 3904: caller_dispatch: Added region at 
0x00007FF77A6A0000 to tracked regions list (kernel32::SetUnhandledExceptionFilter returns to 0x00007FF77A6A1381, thread 3616).\n2025-08-31 06:41:23,926 [root] DEBUG: 3904: YaraScan: Scanning 0x00007FF77A6A0000, size 0xb000\n2025-08-31 06:41:23,926 [root] DEBUG: 3904: ProcessImageBase: Main module image at 0x00007FF77A6A0000 unmodified (entropy change 0.000000e+00)\n2025-08-31 06:41:24,191 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 2816: C:\\WINDOWS\\system32\\wbem\\wmiprvse.exe, ImageBase: 0x00007FF7ACE10000\n2025-08-31 06:41:24,193 [root] INFO: Announced 64-bit process name: WmiPrvSE.exe pid: 2816\n2025-08-31 06:41:24,193 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\2816.ini\n2025-08-31 06:41:24,195 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:25,583 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:25,607 [root] DEBUG: Loader: Injecting process 2816 (thread 9144) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:25,607 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:25,607 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:25,611 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:25,617 [root] INFO: Announced 64-bit process name: WmiPrvSE.exe pid: 2816\n2025-08-31 06:41:25,617 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\2816.ini\n2025-08-31 06:41:25,617 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:25,623 [root] DEBUG: package modules.packages.edge does not support configure, ignoring\n2025-08-31 06:41:25,623 [root] WARNING: configuration error for package modules.packages.edge: error importing data.packages.edge: No module named 'data.packages'\n2025-08-31 06:41:25,623 [lib.core.compound] INFO: C:\\Users\\ALEXDA~1\\AppData\\Local\\Temp already exists, skipping creation\n2025-08-31 06:41:25,631 [lib.api.process] INFO: Successfully executed process from path \"C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\" with arguments \"\"googel.com\"\" with pid 4308\n2025-08-31 06:41:25,633 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\4308.ini\n2025-08-31 06:41:25,633 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:25,641 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:25,658 [root] DEBUG: Loader: Injecting process 4308 (thread 9184) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:25,658 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:25,660 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:25,664 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:26,221 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:26,237 [root] DEBUG: Loader: Injecting process 2816 (thread 9144) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:26,239 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:26,239 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:26,243 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:26,271 [root] DEBUG: 2816: Python path set to 
'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:26,271 [root] DEBUG: 2816: Interactive desktop enabled.\n2025-08-31 06:41:26,271 [root] DEBUG: 2816: Dropped file limit defaulting to 100.\n2025-08-31 06:41:26,273 [root] DEBUG: 2816: Disabling sleep skipping.\n2025-08-31 06:41:26,275 [root] DEBUG: 2816: Services hook set enabled\n2025-08-31 06:41:26,281 [root] DEBUG: 2816: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:26,311 [root] DEBUG: 2816: Monitor initialised: 64-bit capemon loaded in process 2816 at 0x00007FFB53590000, thread 9144, image base 0x00007FF7ACE10000, stack from 0x0000007618140000-0x0000007618150000\n2025-08-31 06:41:26,313 [root] DEBUG: 2816: Commandline: C:\\WINDOWS\\system32\\wbem\\wmiprvse.exe -secured -Embedding\n2025-08-31 06:41:26,358 [root] DEBUG: 2816: Hooked 69 out of 69 functions\n2025-08-31 06:41:26,358 [root] INFO: Loaded monitor into process with pid 2816\n2025-08-31 06:41:26,372 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9AD80000: C:\\WINDOWS\\SYSTEM32\\kernel.appcore (0x1a000 bytes).\n2025-08-31 06:41:26,374 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9C300000: C:\\WINDOWS\\System32\\bcryptPrimitives (0x99000 bytes).\n2025-08-31 06:41:26,378 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DE20000: C:\\WINDOWS\\System32\\clbcatq (0xa8000 bytes).\n2025-08-31 06:41:26,382 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\6656.ini\n2025-08-31 06:41:26,382 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:26,386 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:26,404 [root] DEBUG: Loader: Injecting process 6656 with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:26,412 [root] DEBUG: 6656: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:26,412 [root] DEBUG: 6656: Disabling sleep skipping.\n2025-08-31 06:41:26,414 [root] DEBUG: 6656: Interactive desktop enabled.\n2025-08-31 06:41:26,414 [root] DEBUG: 6656: Dropped file limit defaulting to 100.\n2025-08-31 06:41:26,416 [root] DEBUG: 6656: Services hook set enabled\n2025-08-31 06:41:26,420 [root] DEBUG: 6656: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:26,435 [root] DEBUG: 6656: Monitor initialised: 64-bit capemon loaded in process 6656 at 0x00007FFB53590000, thread 10016, image base 0x00007FF6E25C0000, stack from 0x0000007C539F5000-0x0000007C53A00000\n2025-08-31 06:41:26,435 [root] DEBUG: 6656: Commandline: C:\\WINDOWS\\system32\\svchost.exe -k netsvcs -p\n2025-08-31 06:41:26,469 [root] DEBUG: 6656: Hooked 69 out of 69 functions\n2025-08-31 06:41:26,471 [root] INFO: Loaded monitor into process with pid 6656\n2025-08-31 06:41:26,471 [root] DEBUG: InjectDllViaThread: Successfully injected Dll into process via RtlCreateUserThread.\n2025-08-31 06:41:26,473 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:26,477 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:27,677 [lib.api.process] INFO: Successfully resumed \n2025-08-31 06:41:27,747 [root] DEBUG: 4308: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:27,749 [root] DEBUG: 4308: Interactive desktop enabled.\n2025-08-31 06:41:27,749 [root] DEBUG: 4308: Dropped file limit defaulting to 
100.\n2025-08-31 06:41:27,761 [root] DEBUG: 4308: Edge-specific hook-set enabled.\n2025-08-31 06:41:27,769 [root] DEBUG: 4308: Disabling sleep skipping.\n2025-08-31 06:41:27,771 [root] DEBUG: 4308: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:27,789 [root] DEBUG: 4308: Monitor initialised: 64-bit capemon loaded in process 4308 at 0x00007FFB53590000, thread 9184, image base 0x00007FF7D2850000, stack from 0x000000F42D3F5000-0x000000F42D400000\n2025-08-31 06:41:27,791 [root] DEBUG: 4308: Commandline: \"C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\" \"googel.com\"\n2025-08-31 06:41:27,816 [root] DEBUG: 4308: Hooked 2 out of 2 functions\n2025-08-31 06:41:27,816 [root] DEBUG: 4308: Syscall hook installed, syscall logging level 1\n2025-08-31 06:41:27,820 [root] INFO: Loaded monitor into process with pid 4308\n2025-08-31 06:41:27,822 [root] DEBUG: 4308: DLL loaded at 0x00007FFB9C300000: C:\\WINDOWS\\System32\\bcryptprimitives (0x99000 bytes).\n2025-08-31 06:41:27,882 [root] DEBUG: 4308: InstrumentationCallback: Added region at 0x00007FFB73CB0000 to tracked regions list (thread 9184).\n2025-08-31 06:41:27,882 [root] DEBUG: 4308: DLL loaded at 0x00007FFB918E0000: C:\\WINDOWS\\SYSTEM32\\version (0xb000 bytes).\n2025-08-31 06:41:27,892 [root] DEBUG: 4308: DLL loaded at 0x00007FFB9EB00000: C:\\WINDOWS\\System32\\shcore (0xeb000 bytes).\n2025-08-31 06:41:27,914 [root] DEBUG: 4308: DLL loaded at 0x00007FFB9C4F0000: C:\\WINDOWS\\System32\\wintypes (0x168000 bytes).\n2025-08-31 06:41:27,914 [root] DEBUG: 4308: DLL loaded at 0x00007FFB9D3B0000: C:\\WINDOWS\\System32\\SHELL32 (0x715000 bytes).\n2025-08-31 06:41:27,922 [root] DEBUG: 4308: DLL loaded at 0x00007FFB99C80000: C:\\WINDOWS\\SYSTEM32\\windows.storage (0x845000 bytes).\n2025-08-31 06:41:27,931 [root] DEBUG: 4308: DLL loaded at 0x00007FFB9EB00000: C:\\WINDOWS\\System32\\SHCORE (0xeb000 bytes).\n2025-08-31 06:41:27,969 [root] DEBUG: 4308: DLL loaded at 0x00007FFB9AD80000: C:\\WINDOWS\\SYSTEM32\\kernel.appcore (0x1a000 bytes).\n2025-08-31 06:41:28,526 [root] DEBUG: 2816: DLL loaded at 0x00007FFB90EC0000: C:\\WINDOWS\\system32\\wbem\\wbemprox (0x12000 bytes).\n2025-08-31 06:41:28,544 [root] DEBUG: 2816: DLL loaded at 0x00007FFB8EBA0000: C:\\WINDOWS\\system32\\wbem\\wbemsvc (0x15000 bytes).\n2025-08-31 06:41:28,594 [root] DEBUG: 2816: DLL loaded at 0x00007FFB97280000: C:\\WINDOWS\\system32\\wbem\\wmiutils (0x1f000 bytes).\n2025-08-31 06:41:28,632 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9BDF0000: C:\\WINDOWS\\SYSTEM32\\powrprof (0x4e000 bytes).\n2025-08-31 06:41:28,633 [root] DEBUG: 2816: DLL loaded at 0x00007FFB8DB90000: C:\\WINDOWS\\SYSTEM32\\framedynos (0x4f000 bytes).\n2025-08-31 06:41:28,634 [root] DEBUG: 2816: DLL loaded at 0x00007FFB57210000: C:\\WINDOWS\\system32\\wbem\\cimwin32 (0x1bb000 bytes).\n2025-08-31 06:41:28,637 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9BDD0000: C:\\WINDOWS\\SYSTEM32\\UMPDC (0x14000 bytes).\n2025-08-31 06:41:28,652 [root] DEBUG: 2816: DLL loaded at 0x00007FFB547A0000: C:\\WINDOWS\\SYSTEM32\\TpmCoreProvisioning (0x196000 bytes).\n2025-08-31 06:41:28,653 [root] DEBUG: 2816: DLL loaded at 0x00007FFB91530000: C:\\Windows\\System32\\wbem\\Win32_TPM (0x17000 bytes).\n2025-08-31 06:41:28,663 [root] DEBUG: 2816: DLL loaded at 0x00007FFB96380000: C:\\WINDOWS\\SYSTEM32\\tbs (0x18000 bytes).\n2025-08-31 06:41:28,685 [root] INFO: Process with pid 4308 appears to have terminated\n2025-08-31 06:41:28,709 [root] DEBUG: 2816: DLL loaded at 
0x00007FFB992A0000: C:\\WINDOWS\\SYSTEM32\\wtsapi32 (0x16000 bytes).\n2025-08-31 06:41:28,717 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9BD40000: C:\\WINDOWS\\SYSTEM32\\WINSTA (0x67000 bytes).\n2025-08-31 06:41:28,735 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9BAB0000: C:\\WINDOWS\\SYSTEM32\\cfgmgr32 (0x5f000 bytes).\n2025-08-31 06:41:28,736 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9BB40000: C:\\WINDOWS\\SYSTEM32\\DEVOBJ (0x2d000 bytes).\n2025-08-31 06:41:28,746 [root] DEBUG: 2816: DLL loaded at 0x000001F5AF420000: C:\\WINDOWS\\SYSTEM32\\WMI (0x3000 bytes).\n2025-08-31 06:41:28,755 [root] DEBUG: 2816: DLL loaded at 0x00007FFB95A30000: C:\\WINDOWS\\SYSTEM32\\wmiclnt (0x13000 bytes).\n2025-08-31 06:41:28,768 [root] DEBUG: 2816: DLL loaded at 0x00007FFB91EF0000: C:\\WINDOWS\\SYSTEM32\\NETAPI32 (0x1a000 bytes).\n2025-08-31 06:41:28,787 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9A760000: C:\\WINDOWS\\SYSTEM32\\SAMCLI (0x1b000 bytes).\n2025-08-31 06:41:28,797 [root] DEBUG: 2816: DLL loaded at 0x00007FFB91370000: C:\\WINDOWS\\SYSTEM32\\SRVCLI (0x29000 bytes).\n2025-08-31 06:41:28,812 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9A780000: C:\\WINDOWS\\SYSTEM32\\NETUTILS (0xd000 bytes).\n2025-08-31 06:41:28,821 [root] DEBUG: 2816: DLL loaded at 0x00007FFB89380000: C:\\WINDOWS\\SYSTEM32\\LOGONCLI (0x46000 bytes).\n2025-08-31 06:41:28,838 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99100000: C:\\WINDOWS\\SYSTEM32\\SCHEDCLI (0xd000 bytes).\n2025-08-31 06:41:28,846 [root] DEBUG: 2816: DLL loaded at 0x00007FFB94010000: C:\\WINDOWS\\SYSTEM32\\WKSCLI (0x1b000 bytes).\n2025-08-31 06:41:28,854 [root] DEBUG: 2816: DLL loaded at 0x00007FFB96C00000: C:\\WINDOWS\\SYSTEM32\\DSROLE (0xb000 bytes).\n2025-08-31 06:41:32,212 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:41:32,214 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:41:32,220 [root] DEBUG: 2816: DLL loaded at 0x00007FFB993F0000: C:\\WINDOWS\\SYSTEM32\\dxcore (0x47000 bytes).\n2025-08-31 06:41:32,226 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,230 [root] DEBUG: 2816: DLL loaded at 0x00007FFB993F0000: C:\\WINDOWS\\SYSTEM32\\dxcore (0x47000 bytes).\n2025-08-31 06:41:32,234 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9C660000: C:\\WINDOWS\\System32\\WINTRUST (0x7f000 bytes).\n2025-08-31 06:41:32,238 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9B5B0000: C:\\WINDOWS\\SYSTEM32\\MSASN1 (0x13000 bytes).\n2025-08-31 06:41:32,240 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,247 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:41:32,251 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:41:32,259 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,263 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,301 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:41:32,305 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 
06:41:32,313 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,317 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,323 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:41:32,327 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:41:32,333 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,337 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,368 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:41:32,370 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:41:32,378 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,382 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,388 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:41:32,391 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:41:32,397 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:32,401 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:41:56,452 [root] INFO: Added new file to list with pid None and path C:\\Users\\Alex Dan\\AppData\\Local\\Microsoft\\Windows\\Explorer\\iconcache_idx.db\n2025-08-31 06:41:56,927 [root] INFO: Added new file to list with pid None and path C:\\Users\\Alex Dan\\AppData\\Local\\microsoft\\Edge\\user data\\Default\\edge profile.ico\n2025-08-31 06:41:56,931 [root] INFO: Added new file to list with pid None and path C:\\Users\\Alex Dan\\AppData\\Local\\Microsoft\\Windows\\Explorer\\iconcache_48.db\n2025-08-31 06:41:56,955 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 4612: \\\\?\\C:\\Windows\\System32\\SecurityHealthHost.exe, ImageBase: 0x00007FF79B710000\n2025-08-31 06:41:56,957 [root] INFO: Announced 64-bit process name: SecurityHealthHost.exe pid: 4612\n2025-08-31 06:41:56,957 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\4612.ini\n2025-08-31 06:41:56,961 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:56,969 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:57,064 [root] DEBUG: Loader: Injecting process 4612 (thread 1068) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:57,074 [root] DEBUG: InjectDllViaIAT: Failed to allocate region in target process for new import table.\n2025-08-31 06:41:57,074 [root] DEBUG: Error 5 (0x5) - InjectDllViaQueuedAPC: Failed to allocate buffer in target: Access is denied.\n2025-08-31 06:41:57,076 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:57,086 [root] INFO: Announced 64-bit process name: SecurityHealthHost.exe pid: 4612\n2025-08-31 
06:41:57,086 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\4612.ini\n2025-08-31 06:41:57,087 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:57,095 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:57,113 [root] DEBUG: Loader: Injecting process 4612 (thread 1068) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:57,123 [root] DEBUG: InjectDllViaIAT: Failed to allocate region in target process for new import table.\n2025-08-31 06:41:57,125 [root] DEBUG: Error 5 (0x5) - InjectDllViaQueuedAPC: Failed to allocate buffer in target: Access is denied.\n2025-08-31 06:41:57,125 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:58,057 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 6836: C:\\WINDOWS\\system32\\DllHost.exe, ImageBase: 0x00007FF6C0730000\n2025-08-31 06:41:58,058 [root] INFO: Announced 64-bit process name: dllhost.exe pid: 6836\n2025-08-31 06:41:58,059 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\6836.ini\n2025-08-31 06:41:58,060 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:58,068 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:58,087 [root] DEBUG: Loader: Injecting process 6836 (thread 6868) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:58,088 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:58,089 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:58,092 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:58,097 [root] INFO: Announced 64-bit process name: dllhost.exe pid: 6836\n2025-08-31 06:41:58,098 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\6836.ini\n2025-08-31 06:41:58,098 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:41:58,106 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:41:58,125 [root] DEBUG: Loader: Injecting process 6836 (thread 6868) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:58,126 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:41:58,126 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:41:58,130 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:41:58,186 [root] DEBUG: 6836: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:41:58,187 [root] DEBUG: 6836: Interactive desktop enabled.\n2025-08-31 06:41:58,188 [root] DEBUG: 6836: Dropped file limit defaulting to 100.\n2025-08-31 06:41:58,198 [root] DEBUG: 6836: Disabling sleep skipping.\n2025-08-31 06:41:58,207 [root] DEBUG: 6836: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:41:58,224 [root] DEBUG: 6836: YaraScan: Scanning 0x00007FF6C0730000, size 0xb000\n2025-08-31 06:41:58,225 [root] DEBUG: 6836: Monitor initialised: 64-bit capemon loaded in process 6836 at 0x00007FFB53590000, thread 6868, image base 0x00007FF6C0730000, stack from 0x0000000845DA5000-0x0000000845DB0000\n2025-08-31 06:41:58,225 [root] DEBUG: 6836: Commandline: C:\\WINDOWS\\system32\\DllHost.exe 
/Processid:{AB8902B4-09CA-4BB6-B78D-A8F59079A8D5}\n2025-08-31 06:41:58,253 [root] DEBUG: 6836: hook_api: LdrpCallInitRoutine export address 0x00007FFB9ED72980 obtained via GetFunctionAddress\n2025-08-31 06:41:58,314 [root] WARNING: b'Unable to place hook on LockResource'\n2025-08-31 06:41:58,314 [root] DEBUG: 6836: set_hooks: Unable to hook LockResource\n2025-08-31 06:41:58,338 [root] DEBUG: 6836: Hooked 616 out of 617 functions\n2025-08-31 06:41:58,338 [root] DEBUG: 6836: Syscall hook installed, syscall logging level 1\n2025-08-31 06:41:58,344 [root] INFO: Loaded monitor into process with pid 6836\n2025-08-31 06:41:58,346 [root] DEBUG: 6836: caller_dispatch: Added region at 0x00007FF6C0730000 to tracked regions list (ntdll::NtAllocateVirtualMemory returns to 0x00007FF6C0731112, thread 6868).\n2025-08-31 06:41:58,346 [root] DEBUG: 6836: YaraScan: Scanning 0x00007FF6C0730000, size 0xb000\n2025-08-31 06:41:58,348 [root] DEBUG: 6836: ProcessImageBase: Main module image at 0x00007FF6C0730000 unmodified (entropy change 0.000000e+00)\n2025-08-31 06:42:03,731 [root] DEBUG: 6656: DLL loaded at 0x00007FFB8D730000: C:\\WINDOWS\\system32\\wbem\\ncprov (0x20000 bytes).\n2025-08-31 06:42:13,472 [root] DEBUG: 4840: DLL loaded at 0x00007FFB8DA60000: C:\\WINDOWS\\System32\\WSCAPI (0x46000 bytes).\n2025-08-31 06:42:13,473 [root] DEBUG: 4840: DLL loaded at 0x00007FFB575D0000: C:\\WINDOWS\\System32\\wscui.cpl (0x1a000 bytes).\n2025-08-31 06:42:13,474 [root] DEBUG: 4840: DLL loaded at 0x00007FFB56600000: C:\\WINDOWS\\System32\\wscinterop (0x3a000 bytes).\n2025-08-31 06:42:13,533 [root] DEBUG: 4840: DLL loaded at 0x00007FFB8DB90000: C:\\WINDOWS\\System32\\framedynos (0x4f000 bytes).\n2025-08-31 06:42:13,535 [root] DEBUG: 4840: DLL loaded at 0x00007FFB521E0000: C:\\WINDOWS\\System32\\werconcpl (0xc5000 bytes).\n2025-08-31 06:42:13,548 [root] DEBUG: 4840: DLL loaded at 0x00007FFB99650000: C:\\WINDOWS\\System32\\wer (0xf2000 bytes).\n2025-08-31 06:42:13,572 [root] DEBUG: 4840: DLL loaded at 0x00007FFB56A20000: C:\\WINDOWS\\System32\\hcproviders (0x16000 bytes).\n2025-08-31 06:42:13,586 [root] DEBUG: 4840: DLL loaded at 0x00007FFB96D70000: C:\\WINDOWS\\SYSTEM32\\wevtapi (0x4c000 bytes).\n2025-08-31 06:42:13,595 [root] DEBUG: 4840: DLL loaded at 0x00007FFB461C0000: C:\\Windows\\System32\\IEProxy (0xc2000 bytes).\n2025-08-31 06:42:41,368 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 6664: C:\\WINDOWS\\system32\\DllHost.exe, ImageBase: 0x00007FF6C0730000\n2025-08-31 06:42:41,370 [root] INFO: Announced 64-bit process name: dllhost.exe pid: 6664\n2025-08-31 06:42:41,376 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\6664.ini\n2025-08-31 06:42:41,378 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:42:41,384 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:42:41,408 [root] DEBUG: Loader: Injecting process 6664 (thread 6888) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:41,410 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:42:41,410 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:41,414 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:42:41,418 [root] INFO: Announced 64-bit process name: dllhost.exe pid: 6664\n2025-08-31 06:42:41,418 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\6664.ini\n2025-08-31 06:42:41,420 [lib.api.process] INFO: Option 
'interactive' with value '1' sent to monitor\n2025-08-31 06:42:41,426 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:42:41,442 [root] DEBUG: Loader: Injecting process 6664 (thread 6888) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:41,444 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:42:41,444 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:41,447 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:42:41,479 [root] DEBUG: 6664: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:42:41,479 [root] DEBUG: 6664: Interactive desktop enabled.\n2025-08-31 06:42:41,479 [root] DEBUG: 6664: Dropped file limit defaulting to 100.\n2025-08-31 06:42:41,485 [root] DEBUG: 6664: Disabling sleep skipping.\n2025-08-31 06:42:41,489 [root] DEBUG: 6664: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:42:41,505 [root] DEBUG: 6664: YaraScan: Scanning 0x00007FF6C0730000, size 0xb000\n2025-08-31 06:42:41,505 [root] DEBUG: 6664: Monitor initialised: 64-bit capemon loaded in process 6664 at 0x00007FFB53590000, thread 6888, image base 0x00007FF6C0730000, stack from 0x000000D305B24000-0x000000D305B30000\n2025-08-31 06:42:41,507 [root] DEBUG: 6664: Commandline: C:\\WINDOWS\\system32\\DllHost.exe /Processid:{338B40F9-9D68-4B53-A793-6B9AA0C5F63B}\n2025-08-31 06:42:41,526 [root] DEBUG: 6664: hook_api: LdrpCallInitRoutine export address 0x00007FFB9ED72980 obtained via GetFunctionAddress\n2025-08-31 06:42:41,572 [root] WARNING: b'Unable to place hook on LockResource'\n2025-08-31 06:42:41,572 [root] DEBUG: 6664: set_hooks: Unable to hook LockResource\n2025-08-31 06:42:41,584 [root] DEBUG: 6664: Hooked 616 out of 617 functions\n2025-08-31 06:42:41,586 [root] DEBUG: 6664: Syscall hook installed, syscall logging level 1\n2025-08-31 06:42:41,592 [root] INFO: Loaded monitor into process with pid 6664\n2025-08-31 06:42:46,744 [root] INFO: Process with pid 6664 appears to have terminated\n2025-08-31 06:42:54,578 [root] INFO: Announced starting service \"b'WaaSMedicSvc'\"\n2025-08-31 06:42:57,284 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 6428: C:\\WINDOWS\\system32\\wbem\\wmiprvse.exe, ImageBase: 0x00007FF7ACE10000\n2025-08-31 06:42:57,286 [root] INFO: Announced 64-bit process name: WmiPrvSE.exe pid: 6428\n2025-08-31 06:42:57,288 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\6428.ini\n2025-08-31 06:42:57,292 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:42:58,530 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:42:58,574 [root] DEBUG: Loader: Injecting process 6428 (thread 5512) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:58,576 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:42:58,578 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:58,584 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:42:58,592 [root] INFO: Announced 64-bit process name: WmiPrvSE.exe pid: 6428\n2025-08-31 06:42:58,594 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\6428.ini\n2025-08-31 06:42:58,596 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:42:59,707 
[lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:42:59,737 [root] DEBUG: Loader: Injecting process 6428 (thread 5512) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:59,739 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:42:59,739 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:42:59,745 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:42:59,806 [root] DEBUG: 6428: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:42:59,808 [root] DEBUG: 6428: Interactive desktop enabled.\n2025-08-31 06:42:59,808 [root] DEBUG: 6428: Dropped file limit defaulting to 100.\n2025-08-31 06:42:59,812 [root] DEBUG: 6428: Disabling sleep skipping.\n2025-08-31 06:42:59,814 [root] DEBUG: 6428: Services hook set enabled\n2025-08-31 06:42:59,824 [root] DEBUG: 6428: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:42:59,840 [root] DEBUG: 6428: Monitor initialised: 64-bit capemon loaded in process 6428 at 0x00007FFB53590000, thread 5512, image base 0x00007FF7ACE10000, stack from 0x000000D194E70000-0x000000D194E80000\n2025-08-31 06:42:59,842 [root] DEBUG: 6428: Commandline: C:\\WINDOWS\\system32\\wbem\\wmiprvse.exe -secured -Embedding\n2025-08-31 06:42:59,881 [root] DEBUG: 6428: Hooked 69 out of 69 functions\n2025-08-31 06:42:59,882 [root] INFO: Loaded monitor into process with pid 6428\n2025-08-31 06:42:59,903 [root] DEBUG: 6428: DLL loaded at 0x00007FFB9AD80000: C:\\WINDOWS\\SYSTEM32\\kernel.appcore (0x1a000 bytes).\n2025-08-31 06:42:59,905 [root] DEBUG: 6428: DLL loaded at 0x00007FFB9C300000: C:\\WINDOWS\\System32\\bcryptPrimitives (0x99000 bytes).\n2025-08-31 06:42:59,911 [root] DEBUG: 6428: DLL loaded at 0x00007FFB9DE20000: C:\\WINDOWS\\System32\\clbcatq (0xa8000 bytes).\n2025-08-31 06:42:59,921 [root] DEBUG: 6428: DLL loaded at 0x00007FFB90EC0000: C:\\WINDOWS\\system32\\wbem\\wbemprox (0x12000 bytes).\n2025-08-31 06:42:59,943 [root] DEBUG: 6428: DLL loaded at 0x00007FFB8EBA0000: C:\\WINDOWS\\system32\\wbem\\wbemsvc (0x15000 bytes).\n2025-08-31 06:42:59,987 [root] DEBUG: 6428: DLL loaded at 0x00007FFB97280000: C:\\WINDOWS\\system32\\wbem\\wmiutils (0x1f000 bytes).\n2025-08-31 06:43:00,024 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55A50000: C:\\WINDOWS\\system32\\wbem\\NetAdapterCim (0xbb000 bytes).\n2025-08-31 06:43:00,040 [root] DEBUG: 6428: DLL loaded at 0x00007FFB8E5E0000: C:\\WINDOWS\\SYSTEM32\\miutils (0x61000 bytes).\n2025-08-31 06:43:00,042 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56FA0000: C:\\WINDOWS\\system32\\wmitomi (0x33000 bytes).\n2025-08-31 06:43:00,115 [root] DEBUG: 6428: DLL loaded at 0x00007FFB9BAB0000: C:\\WINDOWS\\SYSTEM32\\cfgmgr32 (0x5f000 bytes).\n2025-08-31 06:43:00,115 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56460000: C:\\WINDOWS\\SYSTEM32\\NetSetupApi (0x2a000 bytes).\n2025-08-31 06:43:00,135 [root] DEBUG: 6428: DLL loaded at 0x00007FFB9CB80000: C:\\WINDOWS\\System32\\NSI (0xa000 bytes).\n2025-08-31 06:43:00,137 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:00,137 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:00,388 [root] DEBUG: 6428: DLL loaded at 0x00007FFB9A7C0000: C:\\WINDOWS\\SYSTEM32\\IPHLPAPI (0x33000 bytes).\n2025-08-31 06:43:00,505 [root] DEBUG: 
6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:00,505 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:01,592 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:01,592 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:01,783 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:43:01,785 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:43:01,793 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,797 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,805 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:43:01,807 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:43:01,815 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,819 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,864 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:43:01,866 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:43:01,876 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,880 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,884 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:43:01,888 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:43:01,896 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,900 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,939 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:43:01,943 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:43:01,947 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,951 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,959 [root] DEBUG: 2816: DLL loaded at 0x00007FFB994A0000: C:\\WINDOWS\\SYSTEM32\\dxgi (0x12e000 bytes).\n2025-08-31 06:43:01,963 [root] DEBUG: 2816: DLL loaded at 0x00007FFB99440000: C:\\WINDOWS\\SYSTEM32\\directxdatabasehelper (0x5d000 bytes).\n2025-08-31 06:43:01,971 [root] DEBUG: 2816: DLL loaded at 0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:01,975 [root] DEBUG: 2816: DLL loaded at 
0x00007FFB9DF60000: C:\\WINDOWS\\System32\\setupapi (0x486000 bytes).\n2025-08-31 06:43:02,655 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:02,655 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:03,707 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:03,707 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:04,753 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:04,755 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:05,808 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:05,810 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:06,874 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:06,876 [root] DEBUG: 6428: DLL loaded at 0x00007FFB55740000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:07,941 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:07,943 [root] DEBUG: 6428: DLL loaded at 0x00007FFB516F0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:09,021 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:09,021 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:10,092 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:10,094 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:11,204 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:11,204 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:12,288 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:12,290 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:13,365 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:13,367 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:14,433 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:14,435 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:15,504 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:15,504 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:17,588 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 
bytes).\n2025-08-31 06:43:17,590 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:19,681 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:19,681 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:21,451 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 9048: C:\\Windows\\System32\\RuntimeBroker.exe, ImageBase: 0x00007FF788F90000\n2025-08-31 06:43:21,453 [root] INFO: Announced 64-bit process name: RuntimeBroker.exe pid: 9048\n2025-08-31 06:43:21,455 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\9048.ini\n2025-08-31 06:43:21,457 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:43:21,468 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:43:21,498 [root] DEBUG: Loader: Injecting process 9048 (thread 6556) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:21,498 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:43:21,500 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:21,504 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:43:21,508 [root] INFO: Announced 64-bit process name: RuntimeBroker.exe pid: 9048\n2025-08-31 06:43:21,510 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\9048.ini\n2025-08-31 06:43:21,538 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:43:21,556 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:43:21,575 [root] DEBUG: Loader: Injecting process 9048 (thread 6556) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:21,577 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:43:21,577 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:21,583 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:43:21,625 [root] DEBUG: 9048: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:43:21,626 [root] DEBUG: 9048: Interactive desktop enabled.\n2025-08-31 06:43:21,628 [root] DEBUG: 9048: Dropped file limit defaulting to 100.\n2025-08-31 06:43:21,633 [root] DEBUG: 9048: Disabling sleep skipping.\n2025-08-31 06:43:21,637 [root] DEBUG: 9048: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:43:21,655 [root] DEBUG: 9048: YaraScan: Scanning 0x00007FF788F90000, size 0x20000\n2025-08-31 06:43:21,659 [root] DEBUG: 9048: Monitor initialised: 64-bit capemon loaded in process 9048 at 0x00007FFB53590000, thread 6556, image base 0x00007FF788F90000, stack from 0x000000EC834A4000-0x000000EC834B0000\n2025-08-31 06:43:21,659 [root] DEBUG: 9048: Commandline: C:\\Windows\\System32\\RuntimeBroker.exe -Embedding\n2025-08-31 06:43:21,693 [root] DEBUG: 9048: hook_api: LdrpCallInitRoutine export address 0x00007FFB9ED72980 obtained via GetFunctionAddress\n2025-08-31 06:43:21,750 [root] WARNING: b'Unable to place hook on LockResource'\n2025-08-31 06:43:21,752 [root] DEBUG: 9048: set_hooks: Unable to hook LockResource\n2025-08-31 06:43:21,754 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI 
(0xf000 bytes).\n2025-08-31 06:43:21,756 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:21,772 [root] DEBUG: 9048: Hooked 616 out of 617 functions\n2025-08-31 06:43:21,774 [root] DEBUG: 9048: Syscall hook installed, syscall logging level 1\n2025-08-31 06:43:21,782 [root] INFO: Loaded monitor into process with pid 9048\n2025-08-31 06:43:21,786 [root] DEBUG: 9048: caller_dispatch: Added region at 0x00007FF788F90000 to tracked regions list (ntdll::NtAllocateVirtualMemoryEx returns to 0x00007FF788F990F2, thread 6556).\n2025-08-31 06:43:21,786 [root] DEBUG: 9048: YaraScan: Scanning 0x00007FF788F90000, size 0x20000\n2025-08-31 06:43:21,789 [root] DEBUG: 9048: ProcessImageBase: Main module image at 0x00007FF788F90000 unmodified (entropy change 0.000000e+00)\n2025-08-31 06:43:22,274 [root] INFO: Process with pid 9832 appears to have terminated\n2025-08-31 06:43:22,832 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:22,834 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:23,922 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:23,924 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:24,293 [root] INFO: Process with pid 3904 appears to have terminated\n2025-08-31 06:43:25,028 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:25,030 [root] DEBUG: 6428: DLL loaded at 0x00007FFB515D0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:26,113 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:26,115 [root] DEBUG: 6428: DLL loaded at 0x00007FFB528F0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:27,164 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:27,167 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:28,177 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 8420: C:\\WINDOWS\\system32\\DllHost.exe, ImageBase: 0x00007FF6C0730000\n2025-08-31 06:43:28,179 [root] INFO: Announced 64-bit process name: dllhost.exe pid: 8420\n2025-08-31 06:43:28,181 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\8420.ini\n2025-08-31 06:43:28,187 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:43:28,203 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:43:28,230 [root] DEBUG: Loader: Injecting process 8420 (thread 1100) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:28,232 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:43:28,234 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:28,240 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:43:28,244 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:28,246 [root] INFO: Announced 64-bit process name: dllhost.exe pid: 8420\n2025-08-31 06:43:28,246 [root] DEBUG: 6428: 
DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:28,250 [lib.api.process] INFO: Monitor config for : C:\\qstr2giq\\dll\\8420.ini\n2025-08-31 06:43:28,250 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor\n2025-08-31 06:43:28,266 [lib.api.process] INFO: 64-bit DLL to inject is C:\\qstr2giq\\dll\\lkbYcofh.dll, loader C:\\qstr2giq\\bin\\gwHcFnLW.exe\n2025-08-31 06:43:28,288 [root] DEBUG: Loader: Injecting process 8420 (thread 1100) with C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:28,290 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-31 06:43:28,292 [root] DEBUG: Successfully injected DLL C:\\qstr2giq\\dll\\lkbYcofh.dll.\n2025-08-31 06:43:28,296 [lib.api.process] INFO: Injected into 64-bit \n2025-08-31 06:43:28,325 [root] DEBUG: 8420: Python path set to 'C:\\Users\\Alex Dan\\AppData\\Local\\Programs\\Python\\Python310-32'.\n2025-08-31 06:43:28,325 [root] DEBUG: 8420: Interactive desktop enabled.\n2025-08-31 06:43:28,327 [root] DEBUG: 8420: Dropped file limit defaulting to 100.\n2025-08-31 06:43:28,335 [root] DEBUG: 8420: Disabling sleep skipping.\n2025-08-31 06:43:28,337 [root] DEBUG: 8420: YaraInit: Compiled rules loaded from existing file C:\\qstr2giq\\data\\yara\\capemon.yac\n2025-08-31 06:43:28,357 [root] INFO: Process with pid 6836 appears to have terminated\n2025-08-31 06:43:28,357 [root] DEBUG: 8420: YaraScan: Scanning 0x00007FF6C0730000, size 0xb000\n2025-08-31 06:43:28,361 [root] DEBUG: 8420: Monitor initialised: 64-bit capemon loaded in process 8420 at 0x00007FFB53590000, thread 1100, image base 0x00007FF6C0730000, stack from 0x000000564FDC4000-0x000000564FDD0000\n2025-08-31 06:43:28,363 [root] DEBUG: 8420: Commandline: C:\\WINDOWS\\system32\\DllHost.exe /Processid:{AB8902B4-09CA-4BB6-B78D-A8F59079A8D5}\n2025-08-31 06:43:28,392 [root] DEBUG: 8420: hook_api: LdrpCallInitRoutine export address 0x00007FFB9ED72980 obtained via GetFunctionAddress\n2025-08-31 06:43:28,452 [root] WARNING: b'Unable to place hook on LockResource'\n2025-08-31 06:43:28,454 [root] DEBUG: 8420: set_hooks: Unable to hook LockResource\n2025-08-31 06:43:28,472 [root] DEBUG: 8420: Hooked 616 out of 617 functions\n2025-08-31 06:43:28,474 [root] DEBUG: 8420: Syscall hook installed, syscall logging level 1\n2025-08-31 06:43:28,480 [root] INFO: Loaded monitor into process with pid 8420\n2025-08-31 06:43:28,485 [root] DEBUG: 8420: caller_dispatch: Added region at 0x00007FF6C0730000 to tracked regions list (ntdll::NtSetInformationThread returns to 0x00007FF6C0731112, thread 1100).\n2025-08-31 06:43:28,487 [root] DEBUG: 8420: YaraScan: Scanning 0x00007FF6C0730000, size 0xb000\n2025-08-31 06:43:28,489 [root] DEBUG: 8420: ProcessImageBase: Main module image at 0x00007FF6C0730000 unmodified (entropy change 0.000000e+00)\n2025-08-31 06:43:30,309 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:30,311 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:31,394 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:31,396 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:32,525 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:32,527 [root] DEBUG: 6428: DLL loaded at 
0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:33,637 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:33,639 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:34,704 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:34,706 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:35,776 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:35,776 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:36,895 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:36,897 [root] DEBUG: 6428: DLL loaded at 0x00007FFB85290000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:38,978 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:38,980 [root] DEBUG: 6428: DLL loaded at 0x00007FFB56890000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:41,054 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:41,057 [root] DEBUG: 6428: DLL loaded at 0x00007FFB528F0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:42,130 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:42,132 [root] DEBUG: 6428: DLL loaded at 0x00007FFB528F0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:43,218 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:43,222 [root] DEBUG: 6428: DLL loaded at 0x00007FFB528F0000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:43,859 [root] DEBUG: 4840: DLL loaded at 0x00007FFB51940000: C:\\Windows\\System32\\Windows.CloudStore.Schema.DesktopShell (0xb3000 bytes).\n2025-08-31 06:43:43,896 [root] DEBUG: 4840: caller_dispatch: Added region at 0x000000001A680000 to tracked regions list (ntdll::NtOpenFile returns to 0x000000001A9C7070, thread 8052).\n2025-08-31 06:43:43,896 [root] DEBUG: 4840: DumpPEsInRange: Scanning range 0x000000001A680000 - 0x000000001A6807EB.\n2025-08-31 06:43:43,898 [root] DEBUG: 4840: ScanForDisguisedPE: Size too small: 0x7eb bytes\n2025-08-31 06:43:43,912 [lib.common.results] INFO: Uploading file C:\\iyeYqm\\CAPE\\4840_369004343431082025 to CAPE\\eb41d613e52f63da4d1436d810bc8bbd80ec3ffb03f835fb3e81dc485b280163; Size is 2027; Max size: 100000000\n2025-08-31 06:43:44,320 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:44,322 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:44,361 [root] DEBUG: 4840: DumpMemory: Payload successfully created: C:\\iyeYqm\\CAPE\\4840_369004343431082025 (size 2027 bytes)\n2025-08-31 06:43:44,365 [root] DEBUG: 4840: DumpRegion: Dumped entire allocation from 0x000000001A680000, size 4096 bytes.\n2025-08-31 06:43:44,371 [root] DEBUG: 4840: ProcessTrackedRegion: Dumped region at 
0x000000001A680000.\n2025-08-31 06:43:44,375 [root] DEBUG: 4840: YaraScan: Scanning 0x000000001A680000, size 0x7eb\n2025-08-31 06:43:45,420 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:45,422 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:46,491 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:46,493 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:47,572 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:47,574 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:48,643 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:48,644 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:49,708 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:49,709 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:50,808 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:50,810 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:51,920 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:51,922 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:53,038 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:53,039 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:54,127 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:54,131 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:55,198 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:55,198 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:56,272 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:56,274 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:57,343 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:57,345 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51850000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:58,407 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:43:58,409 [root] DEBUG: 6428: DLL loaded at 0x00007FFB51710000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:43:59,486 [root] DEBUG: 6428: DLL loaded at 
0x00007FFB970F0000: C:\WINDOWS\SYSTEM32\WINNSI (0xf000 bytes).
2025-08-31 06:43:59,488 [root] DEBUG: 6428: DLL loaded at 0x00007FFB570C0000: C:\WINDOWS\SYSTEM32\NetSetupEngine (0xe1000 bytes).
[... this WINNSI / NetSetupEngine DLL-load pair from PID 6428 repeats roughly once per second for the remainder of the trace (NetSetupEngine is reported at several different load addresses over time); the repeats are omitted from the excerpt below ...]
2025-08-31 06:44:12,700 [root] DEBUG: 4840: DLL loaded at 0x00007FFB8A150000: \\?\C:\Windows\System32\SecurityHealthProxyStub (0x2d000 bytes).
2025-08-31 06:44:12,751 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 3744: \\?\C:\Windows\System32\SecurityHealthHost.exe, ImageBase: 0x00007FF7ACCE0000
2025-08-31 06:44:12,753 [root] INFO: Announced 64-bit process name: SecurityHealthHost.exe pid: 3744
2025-08-31 06:44:12,755 [lib.api.process] INFO: Monitor config for : C:\qstr2giq\dll\3744.ini
2025-08-31 06:44:12,759 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor
2025-08-31 06:44:12,773 [lib.api.process] INFO: 64-bit DLL to inject is C:\qstr2giq\dll\lkbYcofh.dll, loader C:\qstr2giq\bin\gwHcFnLW.exe
2025-08-31 06:44:12,804 [root] DEBUG: Loader: Injecting process 3744 (thread 9072) with C:\qstr2giq\dll\lkbYcofh.dll.
2025-08-31 06:44:12,814 [root] DEBUG: InjectDllViaIAT: Failed to allocate region in target process for new import table.
2025-08-31 06:44:12,816 [root] DEBUG: Error 5 (0x5) - InjectDllViaQueuedAPC: Failed to allocate buffer in target: Access is denied.
2025-08-31 06:44:12,818 [root] DEBUG: Successfully injected DLL C:\qstr2giq\dll\lkbYcofh.dll.
[... the announce/config/inject sequence for pid 3744 runs once more (06:44:12,826-877) with the same IAT and APC failures ...]
2025-08-31 06:44:50,661 [root] DEBUG: 2816: DLL loaded at 0x00007FFB8C2B0000: C:\WINDOWS\System32\Win32_DeviceGuard (0x20000 bytes).
2025-08-31 06:44:50,667 [root] DEBUG: 2816: DLL loaded at 0x00007FFB8E5E0000: C:\WINDOWS\SYSTEM32\miutils (0x61000 bytes).
2025-08-31 06:44:50,669 [root] DEBUG: 2816: DLL loaded at 0x00007FFB56FA0000: C:\WINDOWS\system32\wmitomi (0x33000 bytes).
2025-08-31 06:44:50,700 [root] DEBUG: 2816: DLL loaded at 0x00007FFB90EC0000: C:\WINDOWS\system32\wbem\wbemprox (0x12000 bytes).
2025-08-31 06:44:52,318 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 9952: C:\WINDOWS\system32\backgroundTaskHost.exe, ImageBase: 0x00007FF77A6A0000
2025-08-31 06:44:52,319 [root] INFO: Announced 64-bit process name: backgroundTaskHost.exe pid: 9952
2025-08-31 06:44:52,321 [lib.api.process] INFO: Monitor config for : C:\qstr2giq\dll\9952.ini
2025-08-31 06:44:52,325 [lib.api.process] INFO: Option 'interactive' with value '1' sent to monitor
2025-08-31 06:44:52,341 [lib.api.process] INFO: 64-bit DLL to inject is C:\qstr2giq\dll\lkbYcofh.dll, loader C:\qstr2giq\bin\gwHcFnLW.exe
2025-08-31 06:44:52,416 [root] DEBUG: Loader: Injecting process 9952 (thread 8392) with C:\qstr2giq\dll\lkbYcofh.dll.
2025-08-31 06:44:52,418 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.
2025-08-31 06:44:52,420 [root] DEBUG: Successfully injected DLL C:\qstr2giq\dll\lkbYcofh.dll.
2025-08-31 06:44:52,426 [lib.api.process] INFO: Injected into 64-bit 
[... a second announce/inject pass for pid 9952 follows immediately and reports "InjectDllViaIAT: This image has already been patched." ...]
2025-08-31 06:44:58,449 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 2352: C:\WINDOWS\system32\DllHost.exe, ImageBase: 0x00007FF6C0730000
2025-08-31 06:44:58,451 [root] INFO: Announced 64-bit process name: dllhost.exe pid: 2352
[... the announce/config/inject sequence runs twice for pid 2352; both passes report "InjectDllViaIAT: Successfully patched IAT." ...]
2025-08-31 06:44:58,687 [root] DEBUG: 2352: Python path set to 'C:\Users\Alex Dan\AppData\Local\Programs\Python\Python310-32'.
2025-08-31 06:44:58,689 [root] DEBUG: 2352: Interactive desktop enabled.
2025-08-31 06:44:58,691 [root] DEBUG: 2352: Dropped file limit defaulting to 100.
2025-08-31 06:44:58,705 [root] DEBUG: 2352: Disabling sleep skipping.
2025-08-31 06:44:58,750 [root] DEBUG: 2352: YaraInit: Compiled rules loaded from existing file C:\qstr2giq\data\yara\capemon.yac
2025-08-31 06:44:58,770 [root] DEBUG: 2352: YaraScan: Scanning 0x00007FF6C0730000, size 0xb000
2025-08-31 06:44:58,772 [root] DEBUG: 2352: Monitor initialised: 64-bit capemon loaded in process 2352 at 0x00007FFB53590000, thread 2368, image base 0x00007FF6C0730000, stack from 0x00000094850F4000-0x0000009485100000
2025-08-31 06:44:58,774 [root] DEBUG: 2352: Commandline: C:\WINDOWS\system32\DllHost.exe /Processid:{AB8902B4-09CA-4BB6-B78D-A8F59079A8D5}
2025-08-31 06:44:58,793 [root] DEBUG: 2352: hook_api: LdrpCallInitRoutine export address 0x00007FFB9ED72980 obtained via GetFunctionAddress
2025-08-31 06:44:58,825 [root] INFO: Process with pid 8420 appears to have terminated
2025-08-31 06:44:58,861 [root] WARNING: b'Unable to place hook on LockResource'
2025-08-31 06:44:58,863 [root] DEBUG: 2352: set_hooks: Unable to hook LockResource
2025-08-31 06:44:58,883 [root] DEBUG: 2352: Hooked 616 out of 617 functions
2025-08-31 06:44:58,884 [root] DEBUG: 2352: Syscall hook installed, syscall logging level 1
2025-08-31 06:44:58,892 [root] INFO: Loaded monitor into process with pid 2352
2025-08-31 06:44:58,896 [root] DEBUG: 2352: caller_dispatch: Added region at 0x00007FF6C0730000 to tracked regions list (ntdll::NtAllocateVirtualMemory returns to 0x00007FF6C0731112, thread 2368).
2025-08-31 06:44:58,900 [root] DEBUG: 2352: YaraScan: Scanning 0x00007FF6C0730000, size 0xb000
2025-08-31 06:44:58,902 [root] DEBUG: 2352: ProcessImageBase: Main module image at 0x00007FF6C0730000 unmodified (entropy change 0.000000e+00)
2025-08-31 06:45:04,861 [root] DEBUG: 6656: api-cap: CoCreateInstance hook disabled due to count: 5000
2025-08-31 06:45:10,525 [root] INFO: Announced starting service "b'WaaSMedicSvc'"
[... between 06:45:11 and 06:45:57 capemon is injected in the same way (announce, monitor config, IAT patch, plus a duplicate pass reporting "This image has already been patched") into each newly created process: backgroundTaskHost.exe pids 1120, 9348, 5856, 10680, 11000, 11260, 11172, 10276 and 10352, WidgetService.exe pid 7112, dllhost.exe pid 5504, RuntimeBroker.exe pid 1568 and WMIADAP.exe pid 7232; monitor-initialisation blocks identical in form to the pid 2352 block above ("Hooked 616 out of 617 functions", "Syscall hook installed, syscall logging level 1") are logged for pids 7112, 5504, 1568 and 7232. Interleaved with these are: ...]
2025-08-31 06:45:14,372 [root] DEBUG: 2816: api-cap: NtSetInformationFile hook disabled due to count: 5000
2025-08-31 06:45:22,219 [root] INFO: Process with pid 9048 appears to have terminated
2025-08-31 06:45:35,380 [root] DEBUG: 6656: api-cap: CoGetClassObject hook disabled due to count: 5000
2025-08-31 06:46:27,963 [root] INFO: Analysis timeout hit, terminating analysis
2025-08-31 06:46:27,965 [root] DEBUG: 1112: Terminate Event: Attempting to dump process 1112
2025-08-31 06:46:27,965 [lib.api.process] INFO: Terminate event set for 
2025-08-31 
06:46:27,969 [root] DEBUG: 1112: DoProcessDump: Skipping process dump as code is identical on disk.\n2025-08-31 06:46:27,979 [root] DEBUG: 1112: Terminate Event: monitor shutdown complete for process 1112\n2025-08-31 06:46:27,979 [lib.api.process] INFO: Termination confirmed for \n2025-08-31 06:46:27,981 [root] INFO: Terminate event set for process 1112\n2025-08-31 06:46:27,983 [lib.api.process] INFO: Terminate event set for \n2025-08-31 06:46:27,985 [root] DEBUG: 2816: Terminate Event: Attempting to dump process 2816\n2025-08-31 06:46:27,991 [root] DEBUG: 2816: DoProcessDump: Skipping process dump as code is identical on disk.\n2025-08-31 06:46:27,999 [root] DEBUG: 2816: Terminate Event: Shutdown complete for process 2816 but failed to inform analyzer.\n2025-08-31 06:46:28,773 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 2784: C:\\WINDOWS\\system32\\DllHost.exe, ImageBase: 0x00007FF6C0730000\n2025-08-31 06:46:28,787 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:28,792 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:29,873 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:29,875 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:30,938 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:30,942 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:32,015 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:32,017 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:33,005 [lib.api.process] INFO: Termination confirmed for \n2025-08-31 06:46:33,006 [root] INFO: Terminate event set for process 2816\n2025-08-31 06:46:33,006 [root] INFO: Terminating process 2816 before shutdown\n2025-08-31 06:46:33,006 [root] INFO: Waiting for process 2816 to exit\n2025-08-31 06:46:33,092 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:33,096 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:34,013 [root] INFO: Waiting for process 2816 to exit\n2025-08-31 06:46:34,158 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:34,163 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:35,028 [root] INFO: Waiting for process 2816 to exit\n2025-08-31 06:46:35,276 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:35,278 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:36,043 [root] INFO: Waiting for process 2816 to exit\n2025-08-31 06:46:36,365 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:36,367 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:37,058 
[root] INFO: Waiting for process 2816 to exit\n2025-08-31 06:46:37,435 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:37,437 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:38,073 [root] INFO: Waiting for process 2816 to exit\n2025-08-31 06:46:38,521 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:38,525 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:39,090 [lib.api.process] INFO: Successfully terminated \n2025-08-31 06:46:39,090 [root] INFO: Waiting for process 2816 to exit\n2025-08-31 06:46:39,604 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:39,606 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:40,104 [lib.api.process] INFO: Terminate event set for \n2025-08-31 06:46:40,108 [root] DEBUG: 6656: Terminate Event: Attempting to dump process 6656\n2025-08-31 06:46:40,110 [root] DEBUG: 6656: DoProcessDump: Skipping process dump as code is identical on disk.\n2025-08-31 06:46:40,118 [lib.api.process] INFO: Termination confirmed for \n2025-08-31 06:46:40,119 [root] INFO: Terminate event set for process 6656\n2025-08-31 06:46:40,119 [root] DEBUG: 6656: Terminate Event: monitor shutdown complete for process 6656\n2025-08-31 06:46:40,121 [lib.api.process] INFO: Terminate event set for \n2025-08-31 06:46:40,123 [root] DEBUG: 6428: Terminate Event: Attempting to dump process 6428\n2025-08-31 06:46:40,129 [root] DEBUG: 6428: DoProcessDump: Skipping process dump as code is identical on disk.\n2025-08-31 06:46:40,135 [root] DEBUG: 6428: Terminate Event: Shutdown complete for process 6428 but failed to inform analyzer.\n2025-08-31 06:46:40,692 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:40,698 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:42,770 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:42,772 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57750000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:43,755 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 1192: C:\\WINDOWS\\system32\\wbem\\wmiprvse.exe, ImageBase: 0x00007FF7ACE10000\n2025-08-31 06:46:44,846 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:44,850 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57320000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:45,137 [lib.api.process] INFO: Termination confirmed for \n2025-08-31 06:46:45,137 [root] INFO: Terminate event set for process 6428\n2025-08-31 06:46:45,137 [root] INFO: Terminating process 6428 before shutdown\n2025-08-31 06:46:45,137 [root] INFO: Waiting for process 6428 to exit\n2025-08-31 06:46:45,903 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:45,905 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57320000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:46,153 [root] INFO: Waiting for 
process 6428 to exit\n2025-08-31 06:46:46,992 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:46,996 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57320000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:47,168 [root] INFO: Waiting for process 6428 to exit\n2025-08-31 06:46:48,086 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:48,088 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57320000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:48,183 [root] INFO: Waiting for process 6428 to exit\n2025-08-31 06:46:49,198 [root] INFO: Waiting for process 6428 to exit\n2025-08-31 06:46:49,247 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:49,249 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57320000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:50,214 [root] INFO: Waiting for process 6428 to exit\n2025-08-31 06:46:50,428 [root] DEBUG: 6428: DLL loaded at 0x00007FFB970F0000: C:\\WINDOWS\\SYSTEM32\\WINNSI (0xf000 bytes).\n2025-08-31 06:46:50,429 [root] DEBUG: 6428: DLL loaded at 0x00007FFB57320000: C:\\WINDOWS\\SYSTEM32\\NetSetupEngine (0xe1000 bytes).\n2025-08-31 06:46:51,228 [lib.api.process] INFO: Successfully terminated \n2025-08-31 06:46:51,230 [root] INFO: Waiting for process 6428 to exit\n2025-08-31 06:46:51,497 [root] DEBUG: 1112: CreateProcessHandler: Injection info set for new process 4360: C:\\WINDOWS\\system32\\wbem\\wmiprvse.exe, ImageBase: 0x00007FF7ACE10000\n2025-08-31 06:46:52,245 [lib.api.process] INFO: Terminate event set for \n2025-08-31 06:46:52,247 [root] DEBUG: 7112: Terminate Event: Attempting to dump process 7112\n2025-08-31 06:46:52,255 [root] DEBUG: 7112: DoProcessDump: Skipping process dump as code is identical on disk.\n2025-08-31 06:46:55,477 [root] INFO: Added new file to list with pid None and path C:\\Users\\Alex Dan\\AppData\\Local\\Microsoft\\GameDVR\\KnownGameList.bin\n2025-08-31 06:46:55,532 [root] DEBUG: 1112: DLL loaded at 0x00007FFB9BB70000: C:\\WINDOWS\\system32\\DPAPI (0xa000 bytes).\n2025-08-31 06:46:57,262 [lib.api.process] INFO: Termination confirmed for \n2025-08-31 06:46:57,264 [root] INFO: Terminate event set for process 7112\n2025-08-31 06:46:57,276 [lib.api.process] INFO: Terminate event set for \n2025-08-31 06:46:57,278 [root] DEBUG: 1568: Terminate Event: Attempting to dump process 1568\n2025-08-31 06:46:57,282 [root] DEBUG: 1568: DoProcessDump: Skipping process dump as code is identical on disk.\n2025-08-31 06:47:02,289 [lib.api.process] INFO: Termination confirmed for \n2025-08-31 06:47:02,293 [root] INFO: Terminate event set for process 1568\n2025-08-31 06:47:02,293 [root] INFO: Terminating process 1568 before shutdown\n2025-08-31 06:47:02,293 [root] INFO: Waiting for process 1568 to exit\n2025-08-31 06:47:03,305 [root] INFO: Waiting for process 1568 to exit\n2025-08-31 06:47:04,320 [root] INFO: Waiting for process 1568 to exit\n2025-08-31 06:47:05,335 [root] INFO: Waiting for process 1568 to exit\n2025-08-31 06:47:06,340 [root] INFO: Waiting for process 1568 to exit\n2025-08-31 06:47:07,355 [root] INFO: Waiting for process 1568 to exit\n2025-08-31 06:47:08,373 [lib.api.process] INFO: Successfully terminated \n2025-08-31 06:47:08,373 [root] INFO: Waiting for process 1568 to exit\n2025-08-31 06:47:09,385 [lib.api.process] INFO: Terminate event set for \n2025-08-31 
06:47:09,387 [root] DEBUG: 7232: Terminate Event: Attempting to dump process 7232\n2025-08-31 06:47:09,392 [root] DEBUG: 7232: DoProcessDump: Skipping process dump as code is identical on disk.\n2025-08-31 06:47:09,400 [lib.api.process] INFO: Termination confirmed for \n2025-08-31 06:47:09,401 [root] INFO: Terminate event set for process 7232\n2025-08-31 06:47:09,403 [root] DEBUG: 7232: Terminate Event: monitor shutdown complete for process 7232\n2025-08-31 06:47:09,403 [root] INFO: Terminating process 7232 before shutdown\n2025-08-31 06:47:09,407 [root] INFO: Waiting for process 7232 to exit\nComment: so the error msg itself is from\n```\n[QemuScreenshots]\n# Enable or disable the use of QEMU as screenshot capture [yes/no].\n# screenshots_linux and screenshots_windows must be disabled\nenabled = no\n```\n\nbut you have it `off`, which is strange. We can see proper init and start of the Pillow-based screenshots, but no screenshots are uploaded for some reason, and the logs don't say why. So my `guess` is that the library is not installed under the proper user, or it is the wrong version of Python + library\nComment: Name: Pillow\nVersion: 9.5.0\nSummary: Python Imaging Library (Fork)\nHome-page: https://python-pillow.org\nAuthor: Jeffrey A. Clark (Alex)\nAuthor-email: aclark@aclark.net\nLicense: HPND\nLocation: c:\\users\\alex dan\\appdata\\local\\programs\\python\\python310-32\\lib\\site-packages\nRequires:\nRequired-by:\n\n\nPython --version\nPython 3.10.6\nComment: If it's helpful, I can send you a new analysis.log for a new task with the same configs listed in the main post. I think the one I sent was during testing with different modifications, I don't recall tbh.\nComment: Nah, the versions are fine. Those are correct. Sadly, I don't see where the issue is.\nComment: Just thinking out loud\u2014when an analysis is submitted, a randomly named folder gets created on C:\\. Inside it, there\u2019s a folder called shots. If I manually drop screenshots in there, shouldn\u2019t they be sent to the result server at the end of the analysis? Or are they handled differently?\nComment: Yes, they should\nComment: Unfortunately, dropped files were not pulled back to the result server.\nThe thing is, screenshot functionality was working fine before; I'm not sure at which point it stopped working, but around that time I changed some configs (not related to screenshots) and it broke.\nI will try to revert some changes, and I might revert everything to the default configs and check.\n\nWill update here.\nComment: yep, it would be very interesting to spot what is wrong\nComment: After reverting to the default configs, screenshots worked. BTW I kept machinery_screenshots **off**. If I understand correctly it uses the machinery (proxmox) to take screenshots; it was **on** earlier, so it might be the cause.\n\nAnyway, I'll close this issue for now and will apply my configs one by one; if screenshots fail again I will report here which option caused this.\n\nThank you!\nComment: Glad that you solved it. Not proxmox, qemu/kvm", + "Title: Do not get Behavioural Analysis\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nI should get behavioural analysis.\n\n# Current Behavior\n\nI do not get behavioural information.\nThe problem has existed since installation, for any sample.\n\n# Failure Information (for bugs)\n\n## Context\nThe only config files I changed were cuckoo.conf and kvm.conf.\n\nThe snapshot runs as expected when a sample is sent.\nIt seems that I can't communicate to the host from my VM, although I can ping the host from the VM. 
Result server ip is correct.\nAt first I thought it had something to do with my VM so I created a new one but the issue persisted.\n\n## Failure Logs\nProcessor Logs\nAug 26 18:40:42 mantsrv3 poetry[2005]: 2025-08-26 18:40:42,933 [root] INFO: Processing analysis data for Task #35\nAug 26 18:40:43 mantsrv3 poetry[4028]: 2025-08-26 18:40:43,978 [Task 35] [modules.processing.behavior] WARNING: Analysis results folder does not exist at path \"/opt/CAPEv2/storage/analyses/35/logs\"\nAug 26 18:40:43 mantsrv3 poetry[4028]: 2025-08-26 18:40:43,991 [Task 35] [modules.processing.suricata] WARNING: Suricata: Failed to find usable Suricata log file\nAug 26 18:40:43 mantsrv3 poetry[4028]: 2025-08-26 18:40:43,994 [Task 35] [lib.cuckoo.core.plugins] INFO: Logs folder doesn't exist, maybe something with with analyzer folder, any change?\nAug 26 18:40:46 mantsrv3 poetry[4028]: 2025-08-26 18:40:46,280 [Task 35] [dev_utils.mongodb] INFO: attempting to delete calls for 1 tasks\nAug 26 18:40:46 mantsrv3 poetry[2005]: 2025-08-26 18:40:46,408 [root] INFO: Reports generation completed for Task #2435 \n\ncuckoo logs\n2025-08-26 18:36:10,816 [lib.cuckoo.core.machinery_manager] INFO: Task #35: found useable machine win10 (arch=x64, platform=windows)\n2025-08-26 18:36:10,816 [lib.cuckoo.core.scheduler] INFO: Task #35: Processing task\n2025-08-26 18:36:10,868 [lib.cuckoo.core.analysis_manager] INFO: Task #35: File already exists at '/opt/CAPEv2/storage/binaries/3dde38fa1f49b57f64f1176cda0e51f20eaf1cf5d455223cf4e679481a4a982e'\n2025-08-26 18:36:10,869 [lib.cuckoo.core.analysis_manager] INFO: Task #35: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_nb7aej19/firefox_installer.exe'\n2025-08-26 18:36:17,197 [lib.cuckoo.core.analysis_manager] INFO: Task #35: Enabled route 'none'.\n2025-08-26 18:36:17,197 [modules.auxiliary.Mitmdump] INFO: Mitmdump module loaded\n2025-08-26 18:36:17,197 [modules.auxiliary.QemuScreenshots] INFO: QEMU screenshots module loaded\n2025-08-26 18:36:17,216 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 5085 (interface=virbr1, host=192.168.100.60, dump path=/opt/CAPEv2/storage/analyses/35/dump.pcap)\n2025-08-26 18:36:17,267 [lib.cuckoo.core.guest] INFO: Task #35: Starting analysis on guest (id=win10, ip=192.168.100.60)\n2025-08-26 18:36:17,293 [lib.cuckoo.core.guest] INFO: Task #35: Guest is running CAPE Agent 0.20 (id=win10, ip=192.168.100.60)\n2025-08-26 18:36:19,079 [lib.cuckoo.core.guest] INFO: Task #35: Uploading script files to guest (id=win10, ip=192.168.100.60)\n2025-08-26 18:40:39,947 [lib.cuckoo.core.guest] INFO: Task #35: End of analysis reached! (id=win10, ip=192.168.100.60)\n2025-08-26 18:40:40,758 [lib.cuckoo.core.analysis_manager] INFO: Task #35: Completed analysis successfully.\n2025-08-26 18:40:40,765 [lib.cuckoo.core.analysis_manager] INFO: Task #35: analysis procedure completed\n\n\nI deeply appreciate any help concerning my issue!\nComment: hey, hm if result server is correct, then it should be something wrong with the VM. 
does your VM has hardcoded ip address?\ndid your try to run `cuckoo.py -d`?\nComment: Yes my ip is hardcoded and static.\n\nRunning cuckoo.py -d doesnt show anything suspicious i guess:\n```text\n2025-08-27 11:56:37,280 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[kvm] with max_machines_count=10 \n2025-08-27 11:56:37,280 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited \n2025-08-27 11:56:37,307 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10-2\n2025-08-27 11:56:37,318 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\n2025-08-27 11:56:37,374 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\n2025-08-27 11:56:37,376 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks 11:57:24 [72/922]\n^[[B^[[B^[[B2025-08-27 11:57:17,788 [lib.cuckoo.core.machinery_manager] INFO: Task #36: found useable machine win10 (arch=x64, platfor\nm=windows) \n2025-08-27 11:57:17,788 [lib.cuckoo.core.scheduler] INFO: Task #36: Processing task \n2025-08-27 11:57:17,847 [lib.cuckoo.core.analysis_manager] INFO: Task #36: File already exists at '/opt/CAPEv2/storage/binaries/3dde38\nfa1f49b57f64f1176cda0e51f20eaf1cf5d455223cf4e679481a4a982e' \n2025-08-27 11:57:17,848 [lib.cuckoo.core.analysis_manager] INFO: Task #36: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_onlki97i/\nfirefox_installer.exe' \n2025-08-27 11:57:17,848 [lib.cuckoo.common.abstracts] DEBUG: Starting machine win10-2 \n2025-08-27 11:57:17,854 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10-2 \n2025-08-27 11:57:17,866 [lib.cuckoo.common.abstracts] DEBUG: Using snapshot snapshot1 for virtual machine win10-2 \n2025-08-27 11:57:24,014 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10-2 \n2025-08-27 11:57:24,048 [lib.cuckoo.core.resultserver] DEBUG: Task #36: The associated machine IP is 192.168.100.60 \n2025-08-27 11:57:24,073 [lib.cuckoo.core.analysis_manager] INFO: Task #36: Enabled route 'none'. \n2025-08-27 11:57:24,074 [modules.auxiliary.Mitmdump] INFO: Mitmdump module loaded \n2025-08-27 11:57:24,074 [modules.auxiliary.QemuScreenshots] INFO: QEMU screenshots module loaded \n/usr/bin/tcpdump \n2025-08-27 11:57:24,093 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 15873 (interface=virbr1, host=192.168.100.60, dump \npath=/opt/CAPEv2/storage/analyses/36/dump.pcap) \n2025-08-27 11:57:24,094 [lib.cuckoo.core.plugins] DEBUG: Started auxiliary module: Sniffer \n2025-08-27 11:57:24,097 [lib.cuckoo.common.objects] DEBUG: file type set using basic heuristics for: /tmp/cuckoo-tmp/upload_onlki97i/f\nirefox_installer.exe \n2025-08-27 11:57:24,229 [lib.cuckoo.core.guest] INFO: Task #36: Starting analysis on guest (id=win10, ip=192.168.100.60)\n2025-08-27 11:57:24,259 [lib.cuckoo.core.guest] INFO: Task #36: Guest is running CAPE Agent 0.20 (id=win10, ip=192.168.100.60)\n2025-08-27 11:57:24,325 [lib.cuckoo.core.guest] DEBUG: Task #36: Uploading analyzer to guest (id=win10, ip=192.168.100.60, size=173486\n91)\n2025-08-27 11:57:26,089 [lib.cuckoo.core.guest] INFO: Task #36: Uploading script files to guest (id=win10, ip=192.168.100.60)\n2025-08-27 11:57:31,161 [lib.cuckoo.core.guest] DEBUG: Task #36: Analysis is still running (id=win10, ip=192.168.100.60)\n2025-08-27 12:01:46,961 [lib.cuckoo.core.guest] INFO: Task #36: End of analysis reached! 
(id=win10, ip=192.168.100.60)\n2025-08-27 12:01:47,043 [lib.cuckoo.core.plugins] DEBUG: Stopped auxiliary module: Sniffer\n2025-08-27 12:01:47,164 [lib.cuckoo.core.resultserver] DEBUG: Task #36: Stopped tracking machine 192.168.100.60\n2025-08-27 12:01:47,165 [lib.cuckoo.common.abstracts] DEBUG: Stopping machine win10-2\n2025-08-27 12:01:47,165 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10-2\n2025-08-27 12:01:47,723 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10-2\n2025-08-27 12:01:47,763 [lib.cuckoo.core.analysis_manager] INFO: Task #36: Completed analysis successfully.\n2025-08-27 12:01:47,770 [lib.cuckoo.core.analysis_manager] INFO: Task #36: analysis procedure completed\n```\nComment: can you please check the analysis.log inside of the analysis folder: `storage/analysis/36/analysis.log`, something like that, and post the full log please\nComment: There is no such log, from what I understand that is the problem:\n```\n/opt/CAPEv2/storage/analyses/36$ ls\nbinary dump.pcap files.json reports scripts selfextracted\n```\n\nComment: that means that you have issues with your VM. \n\nTo debug that, use this:\n\n```\nHow to debug analyzer and any script that executes inside of the virtual machine\nEnsure that you stopped all required services, e.g. systemctl stop cape.service\nStart cape as CAPE_DBG=1 python3 cuckoo.py -d\nAdd a new task, DISABLE human interaction emulation checkbox, set very high timeout like 1000\nOnce the task starts, core will upload the analyzer folder to the virtual machine\nAttach to virtual machine (virt-manager)\nStart analyzer.py by hand in cmd.exe with admin privileges:\nEx: c:\\windows\\py.exe c:\\tmp\\analyzer.py\nYou will see what fails, and if you don't, add more debugging lines or attach pdb or any other tool that you like\n```\nComment: I did that and I got this \n\n\"Image\"\nComment: what is the content of the config file inside of that folder? What is the config content of your machinery?\nComment: kvm.conf is as follows:\n```\n[kvm]\nmachines = win10 \n \ninterface = virbr1 \ndsn = qemu:///system \n\n[win10]\nlabel = win10-2\nplatform = windows\nip = 192.168.100.60\narch = x64\n```\n\nThe analysis.conf is:\n\n\"Image\"\n\n\"Image\"\nComment: the IP is clearly there. Maybe you have a bad char after the resultserver ip, like a newline? Check that with a hex editor, for example. Because it looks fine, I have no idea what could be wrong\nComment: I checked it for any additional characters; nothing came up.\nI guess I will move on to reinstalling cape (kvm doesn't need reinstallation right?)\nTo reinstall cape do I just rerun the script?\n\nThanks for all your help and quick replies!\nComment: kvm doesn't need it. The issue is inside of the VM, I would say. I guess it would be good to add IP validation on startup to see if there is any bad char\nComment: I tried reinstalling cape and retyping config files - but the issue persists.\n\nCould the problem somehow be that I'm using Ubuntu Server?\nAlso I downloaded the agent from inside the VM from here https://github.com/kevoreilly/CAPEv2/blob/master/agent/agent.py could this cause a mixup?\nComment: Now I'm thinking, could the firewall interfere somehow and block communication?\nComment: Most of us in production are using Ubuntu Server. It doesn't matter how you got the agent, as long as it is that file.\n\nYes, the firewall could block. 
But it would block everything instead of just a part, as it all goes to the same port\nComment: In the ufw logs I found instances of this:\n2025-08-28T19:06:21.316957+00:00 dserpsrv3 kernel: [UFW BLOCK] IN=virbr1 OUT= MAC=52:54:00:9e:67:db:52:54:00:f7:03:43:08:00 SRC=192.168.100.50 DST=192.168.100.1 LEN=52 TOS=0x00 PREC=0x00 TTL=128 ID=11198 DF PROTO=TCP SPT=49762 DPT=2042 WINDOW=64240 RES=0x00 SYN URGP=0\n\nThat confirms that the firewall is behind the problem, correct?\n\nComment: That is part of the problem for sure\nComment: Allowing traffic through virbr1 solved the issue.\n\nThanks so much for your help!\n\nComment: Glad that you solved it. I have a question for you: did you enable the firewall? By default it is off. Just to get an idea of the root of the issue, so maybe in the future I can make a default trouble checker\nComment: Yes I enabled the firewall.\n\nLooking back I should have known the cause was there, but I thought since I could ping from inside the VM there should be no problem with it.\nComment: OK, well, it is still good to add a check for that. Glad that you solved your issue. Let us know if you have any other", + "Title: Crash during detonation of ransomware via capev2, no crash when detonating manually on same machine\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nThe sample should be able to run fully unobstructed and run the commands that are required without crashing\n\n# Current Behavior\n\nThe sample crashes at a powershell command that works. 
\n\ni can run the command manually on the same machine without issues, and the sample runs properly and fully when i run it manually within the same machine.\n\non 7-22-2025 i recall it detonating properly without problems.\n\nthe error i get from the sample is:\n\n```\nUnhandled Exception: System.TypeInitializationException: The type initializer for 'System.Management.Automation.AmsiUtils' threw an exception. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.\n at Microsoft.Win32.NativeMethods.EnumProcessModules(SafeProcessHandle handle, IntPtr modules, Int32 size, Int32& needed)\n at System.Diagnostics.NtProcessManager.GetModuleInfos(Int32 processId, Boolean firstModuleOnly)\n at System.Diagnostics.NtProcessManager.GetFirstModuleInfo(Int32 processId)\n at System.Diagnostics.Process.get_MainModule()\n at System.Management.Automation.PsUtils.GetMainModule(Process targetProcess)\n at System.Management.Automation.AmsiUtils.Init()\n at System.Management.Automation.AmsiUtils.CheckAmsiInit()\n at System.Management.Automation.AmsiUtils..cctor()\n --- End of inner exception stack trace ---\n at System.Management.Automation.AmsiUtils.Init()\n at System.Management.Automation.Runspaces.EarlyStartup.<>c.b__0_1()\n at System.Threading.Tasks.Task.InnerInvoke()\n at System.Threading.Tasks.Task.Execute()\n at System.Threading.Tasks.Task.ExecutionContextCallback(Object obj)\n at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\n at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\n at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot)\n at System.Threading.Tasks.Task.ExecuteEntry(Boolean bPreventDoubleExecution)\n at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()\n at System.Threading.ThreadPoolWorkQueue.Dispatch()\n at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()\n```\n\n## Steps to Reproduce\n\nNot sure, but i can send the sample.\n\n## Context\n\nBehavior crashing:\n[powershell.exe](javascript:show_tab('process_4864');) 4864 \"powershell\" $logs = Get-WinEvent -ListLog * | Where-Object {$_.RecordCount} | Select-Object -ExpandProperty LogName ; ForEach ( $l in $logs | Sort | Get-Unique ) {[System.Diagnostics.Eventing.Reader ...(truncated)\n[WerFault.exe](javascript:show_tab('process_372');) 372 -u -p 4864 -s 1336\n\nSo this exact same sample worked previously, the only difference is rebuilt new machines and updated capev2 to the latest version \n\n## Failure Logs\n\n```\nUnhandled Exception: System.TypeInitializationException: The type initializer for 'System.Management.Automation.AmsiUtils' threw an exception. ---> System.AccessViolationException: Attempted to read or write protected memory. 
This is often an indication that other memory is corrupt.\n at Microsoft.Win32.NativeMethods.EnumProcessModules(SafeProcessHandle handle, IntPtr modules, Int32 size, Int32& needed)\n at System.Diagnostics.NtProcessManager.GetModuleInfos(Int32 processId, Boolean firstModuleOnly)\n at System.Diagnostics.NtProcessManager.GetFirstModuleInfo(Int32 processId)\n at System.Diagnostics.Process.get_MainModule()\n at System.Management.Automation.PsUtils.GetMainModule(Process targetProcess)\n at System.Management.Automation.AmsiUtils.Init()\n at System.Management.Automation.AmsiUtils.CheckAmsiInit()\n at System.Management.Automation.AmsiUtils..cctor()\n --- End of inner exception stack trace ---\n at System.Management.Automation.AmsiUtils.Init()\n at System.Management.Automation.Runspaces.EarlyStartup.<>c.b__0_1()\n at System.Threading.Tasks.Task.InnerInvoke()\n at System.Threading.Tasks.Task.Execute()\n at System.Threading.Tasks.Task.ExecutionContextCallback(Object obj)\n at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\n at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)\n at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot)\n at System.Threading.Tasks.Task.ExecuteEntry(Boolean bPreventDoubleExecution)\n at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()\n at System.Threading.ThreadPoolWorkQueue.Dispatch()\n at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()\n```\nim not seeing any errors in the cuckoo logs\n\nLet me know if you need more information.\nComment: updated to latest version, but stills errors out due to protected memory\nComment: Please share the sample or hash.\nComment: Is there a way to make it private / send it privately?\nComment: yes, see email at the bottom https://github.com/kevoreilly/CAPEv2?tab=security-ov-file#readme\nComment: Sent!\nComment: I can't see any detonation issues here!\n\n\"Image\"\n\n\"Image\"\nComment: Interesting, I'll see if it fixes itself if i reinstall.\n\nwould it be safe for me to save my configs and just run the capev2.sh again? or should i nuke the whole installation first then rerun?\nComment: its fine, is what i do with configs. but the issue i would bet is inside of VM instead of cape server side, as it says can't access some memory inside of the VM\nComment: So, i have been playing with cpu features and my machines dont present themselves as virtualized , the sample also detects the vms as physical, i have a feeling this could be related\n\ni'll report back , thanks for clarifying!\nComment: you are welcome. let us know what you find. i know this is frustrating, but each case is unique \nComment: Yep, it looks like that was the culprit, removed the feature and now it goes through. \n\ni guess they handle memory differently for physical machines...\n\nso just to be sure, this isnt inherently an issue with capev2 , but more so how the sample interacts with the vm based on the CPU and virtualization status?\nComment: Kevin uses hardened VM, is why we patch KVM/qemu on source level and there is no issue with access to memory\nComment: Do any of you use ``` \"\"``` flag?\n\nthats what was giving me issues, it started working after removal -- not sure if its taken into account in the patching. When you say its why you patch in kvm/qemu, you mean in the kvm-qemu.sh? 
or on your own in a separate process etc.\n\n\nComment: Nah, we disable that on the command line at the bottom\nComment: I think my blog posts are linked in the docs", + "Title: Improve logging\nBody: Print the full traceback instead of only the last occurring error (which is often caused by previous ones).\r\nAs an example, errors thrown by machineries are stripped of their details before being printed to the user (as seen on #2682).\nComment: thank you", + "Title: \"CuckooCriticalError: Error initializing machines\" on physical machinery without any debug log indication\nBody: Hello,\nI'm struggling with the setup of CAPEv2 using `physical` machinery.\nWhen CAPE tries to start, it abruptly stops with a `CuckooCriticalError: Error initializing machines`. There is no debug log before that crash that seems to be helpful in fixing it.\n\n```\nuser@capev2:/opt/CAPEv2$ sudo -u cape -g cape /etc/poetry/bin/poetry run python cuckoo.py -d\n\n .-----------------.\n | Cuckoo Sandbox? |\n | OH NOES! |\\ '-.__.-'\n '-----------------' \\ /oo |--.--,--,--.\n \\_.-'._i__i__i_.'\n \"\"\"\"\"\"\"\"\"\n\n Cuckoo Sandbox 2.4-CAPE\n www.cuckoosandbox.org\n Copyright (c) 2010-2015\n\n CAPE: Config and Payload Extraction\n github.com/kevoreilly/CAPEv2\n\n2025-08-25 09:15:40,352 [root] DEBUG: Importing modules...\n2025-08-25 09:15:40,355 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.ImageChops.difference'\n2025-08-25 09:15:40,355 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.ImageDraw'\n2025-08-25 09:15:40,357 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.Image'\n2025-08-25 09:15:40,939 [root] DEBUG: Missed file extra/msft-public-ips.csv. 
Get a fresh copy from https://www.microsoft.com/en-us/download/details.aspx?id=53602\n2025-08-25 09:15:41,076 [root] DEBUG: Imported \"auxiliary\" modules:\n2025-08-25 09:15:41,077 [root] DEBUG: \t |-- [...long list stripped, full output below...]\n2025-08-25 09:15:41,082 [root] DEBUG: Imported \"processing\" modules:\n2025-08-25 09:15:41,082 [root] DEBUG: \t |-- [...long list stripped, full output below...]\n2025-08-25 09:15:41,083 [root] DEBUG: Imported \"signatures\" modules:\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- [...long list stripped, full output below...]\n2025-08-25 09:15:41,170 [root] DEBUG: Imported \"reporting\" modules:\n2025-08-25 09:15:41,170 [root] DEBUG: \t |-- BinGraph\n2025-08-25 09:15:41,170 [root] DEBUG: \t `-- JsonDump\n2025-08-25 09:15:41,170 [root] DEBUG: Imported \"feeds\" modules:\n2025-08-25 09:15:41,170 [root] DEBUG: \t `-- AbuseCH_SSL\n2025-08-25 09:15:41,170 [root] DEBUG: Imported \"machinery\" modules:\n2025-08-25 09:15:41,170 [root] DEBUG: \t `-- Physical\n2025-08-25 09:15:41,170 [root] DEBUG: Checking for locked tasks...\n/usr/bin/tcpdump\n2025-08-25 09:15:41,186 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[physical] with max_machines_count=10\n2025-08-25 09:15:41,186 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\nHost AIB-GDECI1-P has MAC 0a:00:27:00:00:05\n2025-08-25 09:15:41,390 [root] CRITICAL: CuckooCriticalError: Error initializing machines\nuser@capev2:/opt/CAPEv2$ \n```\n\n# Expected Behavior\n\nCAPE starts, or at least outputs a meaningful error to help debug its configuration.\n\n# Current Behavior\n\nCAPE outputs no error in the debug logs and stops abruptly with \"CuckooCriticalError: Error initializing machines\" \n\n# Failure Information (for bugs)\n\n## Steps to Reproduce\n\n1. Set up two networks:\n - an internet access network (192.168.100.x/24, gateway 192.168.100.1)\n - a detonation network (10.0.0.x/24)\n2. Set up three machines:\n - a machine running FOG Project connected on both networks (192.168.100.20 & 10.0.0.1)\n - a detonation machine with network boot enabled connected to the detonation network only (network boot uses DHCP, Windows has static 10.0.0.10)\n - a machine running CAPEv2 (installed through `cape2.sh base`) connected to both networks (192.168.100.25 & 10.0.0.2)\n3. Write config files as follows:\n```ini\n# /opt/CAPEv2/conf/cuckoo.conf\n[cuckoo]\nmachinery = physical\nmachinery_screenshots = on\nfreespace = 15000\n\n[resultserver]\nip = 0.0.0.0\nport = 2042\n```\n```\n# /opt/CAPEv2/conf/physical.conf\n[physical]\nmachines = physical01\ninterface = enp7s0\ntype = pure\n\n[fog]\nhostname = 10.0.0.1\napikey = [redacted]\nuser_apikey = [redacted]\n\n[physical01]\nlabel = physical01\nplatform = windows\nip = 10.0.0.10\nresultserver_ip = 10.0.0.2\nresultserver_port = 2042\narch = x86\n```\n4. Try to start CAPEv2\n\n## Context\n\nPlease provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions. Operating system version, bitness, installed software versions, test sample details/hash/binary (if applicable).\n\n| Question | Answer\n|------------------|--------------------\n| Git commit | f41ba50535a8ae20137f49ecd1dc3447159155ba\n| OS version | Ubuntu 22.04.5 LTS\n\n## Failure Logs\n\n
\n Show logs\n\n```\nuser@capev2:/opt/CAPEv2$ sudo -u cape -g cape /etc/poetry/bin/poetry run python cuckoo.py -d\n\n .-----------------.\n | Cuckoo Sandbox? |\n | OH NOES! |\\ '-.__.-'\n '-----------------' \\ /oo |--.--,--,--.\n \\_.-'._i__i__i_.'\n \"\"\"\"\"\"\"\"\"\n\n Cuckoo Sandbox 2.4-CAPE\n www.cuckoosandbox.org\n Copyright (c) 2010-2015\n\n CAPE: Config and Payload Extraction\n github.com/kevoreilly/CAPEv2\n\n2025-08-25 09:15:40,352 [root] DEBUG: Importing modules...\n2025-08-25 09:15:40,355 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.ImageChops.difference'\n2025-08-25 09:15:40,355 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.ImageDraw'\n2025-08-25 09:15:40,357 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.Image'\n2025-08-25 09:15:40,939 [root] DEBUG: Missed file extra/msft-public-ips.csv. Get a fresh copy from https://www.microsoft.com/en-us/download/details.aspx?id=53602\n2025-08-25 09:15:41,076 [root] DEBUG: Imported \"auxiliary\" modules:\n2025-08-25 09:15:41,077 [root] DEBUG: \t |-- AzSniffer\n2025-08-25 09:15:41,081 [root] DEBUG: \t |-- Mitmdump\n2025-08-25 09:15:41,082 [root] DEBUG: \t |-- PolarProxy\n2025-08-25 09:15:41,082 [root] DEBUG: \t |-- QEMUScreenshots\n2025-08-25 09:15:41,082 [root] DEBUG: \t `-- Sniffer\n2025-08-25 09:15:41,082 [root] DEBUG: Imported \"processing\" modules:\n2025-08-25 09:15:41,082 [root] DEBUG: \t |-- CAPE\n2025-08-25 09:15:41,082 [root] DEBUG: \t |-- AnalysisInfo\n2025-08-25 09:15:41,082 [root] DEBUG: \t |-- Autoruns\n2025-08-25 09:15:41,082 [root] DEBUG: \t |-- BehaviorAnalysis\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- Debug\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- HollowsHunter\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- NetworkAnalysis\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- ProcessMemory\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- script_log_processing\n2025-08-25 09:15:41,083 [root] DEBUG: \t `-- UrlAnalysis\n2025-08-25 09:15:41,083 [root] DEBUG: Imported \"signatures\" modules:\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- AntiAnalysisTLSSection\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- ClamAV\n2025-08-25 09:15:41,083 [root] DEBUG: \t |-- KnownVirustotal\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- BadCerts\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- BadSSLCerts\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- ZeusP2P\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- ZeusURL\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- BinaryTriggeredYARA\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- AthenaHttp\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- DirtJumper\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- Drive\n2025-08-25 09:15:41,084 [root] DEBUG: \t |-- Drive2\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- Madness\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- HTMLPhisher_2\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FamilyProxyBack\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPAAntiAnalysis\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPACollection\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPAcommunication\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPACompiler\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPADataManipulation\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPAExecutable\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPAHostInteration\n2025-08-25 09:15:41,085 [root] DEBUG: \t |-- FlareCAPAcommunication\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- FlareCAPALib\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- 
FlareCAPALinking\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- FlareCAPALoadCode\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- FlareCAPAMalwareFamily\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- FlareCAPANursery\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- FlareCAPAPersistence\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- FlareCAPARuntime\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- FlareCAPATargeting\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- ThreatFox\n2025-08-25 09:15:41,086 [root] DEBUG: \t |-- Log4j\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- MimicsExtension\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkCountryDistribution\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkMultipleDirectIPConnections\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkCnCHTTP\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkHTTPPOST\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkIPEXE\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkDGA\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkDGAFraunhofer\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkDynDNS\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkExcessiveUDP\n2025-08-25 09:15:41,087 [root] DEBUG: \t |-- NetworkHTTP\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkICMP\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkIRC\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkOpenProxy\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkP2P\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkQuestionableHost\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkQuestionableHttpPath\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkQuestionableHttpsPath\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- NetworkSMTP\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- TorGateway\n2025-08-25 09:15:41,088 [root] DEBUG: \t |-- BuildLangID\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- ResourceLangID\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- overlay\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- PackerUnknownPESectionName\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- ASPackPacked\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- AspireCryptPacked\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- BedsProtectorPacked\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- ConfuserPacked\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- EnigmaPacked\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- PackerEntropy\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- MPressPacked\n2025-08-25 09:15:41,089 [root] DEBUG: \t |-- NatePacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- NsPacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- SmartAssemblyPacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- SpicesPacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- ThemidaPacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- ThemidaPackedSection\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- TitanPacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- UPXCompressed\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- VMPPacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- YodaPacked\n2025-08-25 09:15:41,090 [root] DEBUG: \t |-- PDF_Annot_URLs_Checker\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- Polymorphic\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- PunchPlusPlusPCREs\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- Procmem_Yara\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- CheckIP\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- Authenticode\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- InvalidAuthenticodeSignature\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- 
DotNetAnomaly\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- Static_Java\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- Static_PDF\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- ContainsPEOverlay\n2025-08-25 09:15:41,091 [root] DEBUG: \t |-- PEAnomaly\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- PECompileTimeStomping\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- StaticPEPDBPath\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- RATConfig\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- VersionInfoAnomaly\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- StealthNetwork\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- SuricataAlert\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- suspiciousHRML_Body\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- suspiciousHTML_Filename\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- suspiciousHTML_Title\n2025-08-25 09:15:41,092 [root] DEBUG: \t |-- VolDevicetree1\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolHandles1\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolLdrModules1\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolLdrModules2\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolMalfind1\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolMalfind2\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolModscan1\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolSvcscan1\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolSvcscan2\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- VolSvcscan3\n2025-08-25 09:15:41,093 [root] DEBUG: \t |-- WHOIS_Create\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- DisableDriverViaBlocklist\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- DisableDriverViaHVCIDisallowedImages\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- DisableHypervisorProtectedCodeIntegrity\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- PendingFileRenameOperations\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- AccessesMailslot\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- AccessesNetlogonRegkey\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- AccessesPublicFolder\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- AccessesSysvol\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- WritesSysvol\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- AddsAdminUser\n2025-08-25 09:15:41,094 [root] DEBUG: \t |-- AddsUser\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- OverwritesAdminPassword\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- anomalous_deletefile\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- AntiAnalysisDetectFile\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- AntiAnalysisDetectReg\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- QihooDetectLibs\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- AhnlabDetectLibs\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- AvastDetectLibs\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- BitdefenderDetectLibs\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- BullguardDetectLibs\n2025-08-25 09:15:41,095 [root] DEBUG: \t |-- ModifiesAttachmentManager\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- AntiAVDetectFile\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- AntiAVDetectReg\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- EmsisoftDetectLibs\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- QurbDetectLibs\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- AntiAVServiceStop\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- AntiAVSRP\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- AntiAVWhitespace\n2025-08-25 09:15:41,096 [root] DEBUG: \t |-- antidebug_addvectoredexceptionhandler\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- APIOverrideDetectLibs\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- 
antidebug_checkremotedebuggerpresent\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- antidebug_debugactiveprocess\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- AntiDBGDevices\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- antidebug_gettickcount\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- antidebug_guardpages\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- antidebug_ntcreatethreadex\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- BullguardDetectLibs\n2025-08-25 09:15:41,097 [root] DEBUG: \t |-- antidebug_ntsetinformationthread\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- antidebug_outputdebugstring\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- antidebug_setunhandledexceptionfilter\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- AntiDBGWindows\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- AntiEmuWinDefend\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- WineDetectReg\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- WineDetectFunc\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- AntiSandboxCheckUserdomain\n2025-08-25 09:15:41,098 [root] DEBUG: \t |-- AntiCuckoo\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- CuckooDetectFiles\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- CuckooCrash\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- AntiSandboxForegroundWindow\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- FortinetDetectFiles\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- SandboxJoeAnubisDetectFiles\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- HookMouse\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- MouseMovementDetect\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- AntiSandboxRestart\n2025-08-25 09:15:41,099 [root] DEBUG: \t |-- SandboxieDetectLibs\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- AntisandboxSboxieMutex\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- AntiSandboxSboxieObjects\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- AntiSandboxScriptTimer\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- AntiSandboxSleep\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- SunbeltDetectFiles\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- SunbeltDetectLibs\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- AntiSandboxSuspend\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- ThreatTrackDetectFiles\n2025-08-25 09:15:41,100 [root] DEBUG: \t |-- Unhook\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- BochsDetectKeys\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- AntiVMDirectoryObjects\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- AntiVMBios\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- AntiVMCPU\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- DiskInformation\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- SetupAPIDiskInformation\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- AntiVMDiskReg\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- AntiVMSCSI\n2025-08-25 09:15:41,101 [root] DEBUG: \t |-- AntiVMServices\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- AntiVMSystem\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- HyperVDetectKeys\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- AntiVMChecksAvailableMemory\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- NetworkAdapters\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- ParallelsDetectKeys\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- DetectVirtualizationViaRecentFiles\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- VBoxDetectDevices\n2025-08-25 09:15:41,102 [root] DEBUG: \t |-- VBoxDetectFiles\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VBoxDetectKeys\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VBoxDetectLibs\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VBoxDetectProvname\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- 
VBoxDetectWindow\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VMwareDetectDevices\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VMwareDetectEvent\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VMwareDetectFiles\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VMwareDetectKeys\n2025-08-25 09:15:41,103 [root] DEBUG: \t |-- VMwareDetectLibs\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- VMwareDetectMutexes\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- VPCDetectFiles\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- VPCDetectKeys\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- VPCDetectMutex\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- XenDetectKeys\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- APISpamming\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- api_uuidfromstringa\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- AsyncRatMutex\n2025-08-25 09:15:41,104 [root] DEBUG: \t |-- GulpixBehavior\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- KetricanRegkeys\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- OkrumMutexes\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- Cridex\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- Geodo\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- Prinimalka\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- SpyEyeMutexes\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- ZeusMutexes\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- BCDEditCommand\n2025-08-25 09:15:41,105 [root] DEBUG: \t |-- BitcoinOpenCL\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- AccessesPrimaryPartition\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- Bootkit\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- DirectHDDAccess\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- EnumeratesPhysicalDrives\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- PhysicalDriveAccess\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- PotentialOverWriteMBR\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- SuspiciousIoctlSCSIPassthough\n2025-08-25 09:15:41,106 [root] DEBUG: \t |-- SuspiciusIOControlCodes\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- Ruskill\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- BrowserAddon\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- ChromiumBrowserExtensionDirectory\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- BrowserHelperObject\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- BrowserNeeded\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- ModifyProxy\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- BrowserScanbox\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- BrowserSecurity\n2025-08-25 09:15:41,107 [root] DEBUG: \t |-- browser_startpage\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- FirefoxDisablesProcessPerTab\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- IEDisablesProcessPerTab\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- OdbcconfBypass\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- RegSrv32SquiblydooDLLLoad\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- SquiblydooBypass\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- SquiblytwoBypass\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- BypassFirewall\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- ChecksUACStatus\n2025-08-25 09:15:41,108 [root] DEBUG: \t |-- UACBypassCMSTP\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- UACBypassCMSTPCOM\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- UACBypassDelegateExecuteSdclt\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- UACBypassEventvwr\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- UACBypassFodhelper\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- UACBypassWindowsBackup\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- CAPEExtractedContent\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- 
CarberpMutexes\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- ClearsLogs\n2025-08-25 09:15:41,109 [root] DEBUG: \t |-- ClickfraudCookies\n2025-08-25 09:15:41,110 [root] DEBUG: \t |-- ClickfraudVolume\n2025-08-25 09:15:41,110 [root] DEBUG: \t |-- CmdlineObfuscation\n2025-08-25 09:15:41,110 [root] DEBUG: \t |-- CmdlineSwitches\n2025-08-25 09:15:41,110 [root] DEBUG: \t |-- CmdlineTerminate\n2025-08-25 09:15:41,110 [root] DEBUG: \t |-- CommandLineForFilesWildCard\n2025-08-25 09:15:41,110 [root] DEBUG: \t |-- CommandLineHTTPLink\n2025-08-25 09:15:41,111 [root] DEBUG: \t |-- CommandLineLongString\n2025-08-25 09:15:41,111 [root] DEBUG: \t |-- CommandLineReversedHTTPLink\n2025-08-25 09:15:41,111 [root] DEBUG: \t |-- LongCommandline\n2025-08-25 09:15:41,111 [root] DEBUG: \t |-- PowershellRenamedCommandLine\n2025-08-25 09:15:41,111 [root] DEBUG: \t |-- SystemAccountDiscoveryCMD\n2025-08-25 09:15:41,111 [root] DEBUG: \t |-- SystemCurrentlyLoggedinUserCMD\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- SystemInfoDiscoveryCMD\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- SystemInfoDiscoveryPWSH\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- SystemNetworkDiscoveryCMD\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- SystemNetworkDiscoveryPWSH\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- SystemUserDiscoveryCMD\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- CompilesDotNetCode\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- QueriesComputerName\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- QueriesUserName\n2025-08-25 09:15:41,112 [root] DEBUG: \t |-- CopiesSelf\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- CreatesExe\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- CreatesLargeKey\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- CreatesNullValue\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- AccessWindowsPasswordsVault\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- CredWiz\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- EnablesWDigest\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- VaultCmd\n2025-08-25 09:15:41,113 [root] DEBUG: \t |-- DumpLSAViaWindowsErrorReporting\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- FileCredentialStoreAccess\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- FileCredentialStoreWrite\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- KerberosCredentialAccessViaRubeus\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- LsassCredentialDumping\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- RegistryCredentialDumping\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- RegistryCredentialStoreAccess\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- RegistryLSASecretsAccess\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- ComsvcsCredentialDump\n2025-08-25 09:15:41,114 [root] DEBUG: \t |-- CriticalProcess\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- CryptGenKey\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- CryptominingStratumCommand\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- MINERS\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- CVE_2014_6332\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- CVE2015_2419_JS\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- CVE_2016_0189\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- CVE_2016_7200\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- CypherITMutexes\n2025-08-25 09:15:41,115 [root] DEBUG: \t |-- DarkCometRegkeys\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DatopLoader\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DeadConnect\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DeadLink\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DebugsSelf\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- 
DecoyDocument\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DecoyImage\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DeepFreezeMutex\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DeletesExecutedFiles\n2025-08-25 09:15:41,116 [root] DEBUG: \t |-- DeletesExecutedFiles\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DeletesSelf\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DeletesShadowCopies\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DeletesSystemStateBackup\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DEPBypass\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DEPDisable\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DisablesAppLaunch\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DisablesAutomaticAppTermination\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DisablesAppVirtualiztion\n2025-08-25 09:15:41,117 [root] DEBUG: \t |-- DisablesBackups\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisablesBrowserWarn\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisablesContextMenus\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisablesCPLDisplay\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisablesCrashdumps\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisablesMappedDrivesAutodisconnect\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisablesEventLogging\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisableFolderOptions\n2025-08-25 09:15:41,118 [root] DEBUG: \t |-- DisablesNotificationCenter\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesPowerOptions\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesRestoreDefaultState\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisableRunCommand\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesSecurity\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesSmartScreen\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesSPDY\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesStartMenuSearch\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesSystemRestore\n2025-08-25 09:15:41,119 [root] DEBUG: \t |-- DisablesUAC\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- DisablesWER\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- DisablesWFP\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- AddWindowsDefenderExclusions\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- DisablesWindowsDefender\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- DisablesWindowsDefenderDISM\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- DisablesWindowsDefenderLogging\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- RemovesWindowsDefenderContextMenu\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- WindowsDefenderPowerShell\n2025-08-25 09:15:41,120 [root] DEBUG: \t |-- DisablesWindowsFileProtection\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- DisablesWindowsUpdate\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- DisablesWindowsFirewall\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- DllLoadUncommonFileTypes\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- DocScriptEXEDrop\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- AdfindDomainEnumeration\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- DomainEnumerationCommands\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- AndromutMutexes\n2025-08-25 09:15:41,121 [root] DEBUG: \t |-- DownloaderCabby\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- GuLoaderAPIs\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- PhorpiexMutexes\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- ProtonBotMutexes\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- DriverFilterManager\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- DriverLoad\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- Dropper\n2025-08-25 09:15:41,122 
[root] DEBUG: \t |-- EXEDropper_JS\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- dynamic_function_loading\n2025-08-25 09:15:41,122 [root] DEBUG: \t |-- DLLArchiveExecution\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- LNKArchiveExecution\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- ScriptArchiveExecution\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- EncryptedIOC\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- Excel4MacroUrls\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- Crash\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- ProcessCreationSuspiciousLocation\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- exploit_getbasekerneladdress\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- exploit_gethaldispatchtable\n2025-08-25 09:15:41,123 [root] DEBUG: \t |-- ExploitHeapspray\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- EscalatePrivilegeViaNTLMRelay\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- SpoolerAccess\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- SpoolerSvcStart\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- KoadicAPIs\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- KoadicNetworkActivity\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- Modiloader_APIs\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- MappedDrivesUAC\n2025-08-25 09:15:41,124 [root] DEBUG: \t |-- SystemMetrics\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- Generic_Phish\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- HidesRecycleBinIcon\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- HTTP_Request\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- ApocalypseStealerFileBehavior\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- ArkeiFiles\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- AzorultMutexes\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- BitcoinWallet\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- BrowserStealer\n2025-08-25 09:15:41,125 [root] DEBUG: \t |-- InfostealerBrowserPassword\n2025-08-25 09:15:41,126 [root] DEBUG: \t |-- CookiesStealer\n2025-08-25 09:15:41,126 [root] DEBUG: \t |-- CryptBotFiles\n2025-08-25 09:15:41,126 [root] DEBUG: \t |-- CryptBotNetwork\n2025-08-25 09:15:41,126 [root] DEBUG: \t |-- EchelonFiles\n2025-08-25 09:15:41,126 [root] DEBUG: \t |-- FTPStealer\n2025-08-25 09:15:41,126 [root] DEBUG: \t |-- IMStealer\n2025-08-25 09:15:41,126 [root] DEBUG: \t |-- KeyLogger\n2025-08-25 09:15:41,127 [root] DEBUG: \t |-- EmailStealer\n2025-08-25 09:15:41,127 [root] DEBUG: \t |-- MassLoggerArtifacts\n2025-08-25 09:15:41,127 [root] DEBUG: \t |-- MassLoggerFiles\n2025-08-25 09:15:41,127 [root] DEBUG: \t |-- MassLoggerVersion\n2025-08-25 09:15:41,127 [root] DEBUG: \t |-- PoullightFiles\n2025-08-25 09:15:41,127 [root] DEBUG: \t |-- PurpleWaveMutexes\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- PurpleWaveNetworkAcivity\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- QuilClipperMutexes\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- QuilClipperNetworkBehavior\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- QulabFiles\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- QulabMutexes\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- RaccoonInfoStealerMutex\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- raccoon\n2025-08-25 09:15:41,128 [root] DEBUG: \t |-- CapturesScreenshot\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- vidar\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- InjectionCRT\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- InjectionExplorer\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- InjectionExtension\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- InjectionNetworkTraffic\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- InjectionRUNPE\n2025-08-25 09:15:41,129 
[root] DEBUG: \t |-- InjectionRWX\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- injection_themeinitapihook\n2025-08-25 09:15:41,129 [root] DEBUG: \t |-- ThreadManipulationRemoteProcess\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- Internet_Dropper\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- EscalatePrivilegeViaNamedPipe\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- IPC_NamedPipe\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- JS_Phish\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- JS_SuspiciousRedirect\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- LOLBAS_EvadeExecutionViaASPNetCompiler\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- LOLBAS_EvadeExecutionViaDeviceCredentialDeployment\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- LOLBAS_EvadeExecutionViaFilterManagerControl\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- LOLBAS_EvadeExecutionViaIntelGFXDownloadWrapper\n2025-08-25 09:15:41,130 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaAppVLP\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaCDB\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaInternetExplorerExporter\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaOpenSSH\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaPcalua\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaPesterPSModule\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaRunExeHelperUtility\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaScriptRunner\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryViaTTDinject\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteBinaryVisualStudioLiveShare\n2025-08-25 09:15:41,131 [root] DEBUG: \t |-- LOLBAS_ExecuteMsiexecViaExplorer\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_ExecutePSViaSyncappvpublishingserver\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_ExecuteRemoteMSIViaDevinit\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_ExecuteSuspiciousPowerShellViaRunscripthelper\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_ExecuteSuspiciousPowerShellViaSQLPS\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_IndirectCommandExecutionViaConsoleWindowHost\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_PerformMaliciousActivitiesViaHeadlessBrowser\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_RegisterDLLViaCertOC\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_RegisterDLLViaMSIEXEC\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_RegisterDLLViaOdbcconf\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- LOLBAS_ScriptletProxyExecutionViaPubprn\n2025-08-25 09:15:41,132 [root] DEBUG: \t |-- malicious_dynamic_function_loading\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- EncryptPCInfo\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- EnryptDataAgentTeslaHTTP\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- EnryptDataAgentTeslaHTTPT2\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- EnryptDataNanoCore\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- MartiansIE\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- MartiansOffice\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- ReadsMemoryRemoteProcess\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- MimicsAgent\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- MimicsFiletime\n2025-08-25 09:15:41,133 [root] DEBUG: \t |-- MimicsIcon\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- MasqueradesProcessName\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- MimikatzModules\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- 
QuilMinerNetworkBehavior\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- AMSIBypassViaCOMRegistry\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- AccessAutoLogonsViaRegistry\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- AccessBootKeyViaRegistry\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- CreateSuspiciousLNKFiles\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- CredentialAccessViaWindowsCredentialHistory\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- DLLHijackingViaMicrosoftExchange\n2025-08-25 09:15:41,134 [root] DEBUG: \t |-- DLLHijackingViaWaaSMedicSvcCOMTypeLib\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- ExecuteFileDownloadedViaOpenSSH\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- ExecuteSafeModeFromSuspiciousProcess\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- ExecuteScriptsViaMicrosoftManagementConsole\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- ExecuteSuspiciousProcessesViaWindowsMSSQLService\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- ExecutionFromSelfExtractingArchive\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- IPAddressDiscoveryViaTrustedProgram\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- LoadDLLViaControlPanel\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- MSOfficeCMDRCE\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- MountCopyToWebDavShare\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- NetworkConnectionViaSuspiciousProcess\n2025-08-25 09:15:41,135 [root] DEBUG: \t |-- PotentialLocationDiscoveryViaUnusualProcess\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- PotentialProtocolTunnelingViaLegitUtilities\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- PotentialProtocolTunnelingViaQEMU\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- StoreExecutableRegistry\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- SuspiciousExecutionViaDotnetRemoting\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- SuspiciousExecutionViaMicrosoftExchangeTransportAgent\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- SuspiciousJavaExecutionViaWinScripts\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- SuspiciousScheduledTaskCreationviaMasqueradedXMLFile\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- UsesRestartManagerForSuspiciousActivities\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- ModifiesCerts\n2025-08-25 09:15:41,136 [root] DEBUG: \t |-- DotNetCLRUsageLogKnob\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- Modifies_HostFile\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- ModifiesOEMInformation\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- ModifySecurityCenterWarnings\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- ModifiesUACNotify\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- ModifiesDesktopWallpaper\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- ZoneID\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- move_file_on_reboot\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- Multiple_UA\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- NetworkAnomaly\n2025-08-25 09:15:41,137 [root] DEBUG: \t |-- NetworkBIND\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSArchive\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSFreeWebHosting\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSGeneric\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSInteractsh\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSOpenSource\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSPasteSite\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSPayload\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSServiceInterface\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- 
NetworkCnCHTTPSSocialMedia\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSTelegram\n2025-08-25 09:15:41,138 [root] DEBUG: \t |-- NetworkCnCHTTPSTempStorageSite\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkCnCHTTPSTempURLDNS\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkCnCHTTPSURLShortenerSite\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkCnCHTTPSUserAgent\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkCnCSMTPSExfil\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkCnCSMTPSGeneric\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkDNSBlockChain\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkDNSIDN\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkDNSOpenNIC\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkDNSPasteSite\n2025-08-25 09:15:41,139 [root] DEBUG: \t |-- NetworkDNSReverseProxy\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkDNSSuspiciousQueryType\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkDNSTempFileService\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkDNSTempURLDNS\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkDNSTunnelingRequest\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkDNSURLShortener\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkDOHTLS\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- Suspicious_TLD\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkDocumentHTTP\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- ExplorerHTTP\n2025-08-25 09:15:41,140 [root] DEBUG: \t |-- NetworkFakeUserAgent\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- LegitDomainAbuse\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- NetworkDocumentFile\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- NetworkEXE\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- Tor\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- TorHiddenService\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- Office_Code_Page\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- OfficeAddinLoading\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- OfficeCOMLoad\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- OfficeDotNetLoad\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- OfficeMSHTMLLoad\n2025-08-25 09:15:41,141 [root] DEBUG: \t |-- OfficePerfKey\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficeVBLLoad\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficeWMILoad\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficeCVE201711882\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficeCVE201711882Network\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficeCVE202140444\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficeCVE202140444M2\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficeFlashLoad\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- OfficePostScript\n2025-08-25 09:15:41,142 [root] DEBUG: \t |-- Office_Macro\n2025-08-25 09:15:41,143 [root] DEBUG: \t |-- ChangesTrustCenter_settings\n2025-08-25 09:15:41,143 [root] DEBUG: \t |-- DisablesVBATrustAccess\n2025-08-25 09:15:41,143 [root] DEBUG: \t |-- OfficeMacroAutoExecution\n2025-08-25 09:15:41,143 [root] DEBUG: \t |-- OfficeMacroIOC\n2025-08-25 09:15:41,143 [root] DEBUG: \t |-- OfficeMacroMaliciousPredition\n2025-08-25 09:15:41,143 [root] DEBUG: \t |-- OfficeMacroSuspicious\n2025-08-25 09:15:41,144 [root] DEBUG: \t |-- RTFASLRBypass\n2025-08-25 09:15:41,144 [root] DEBUG: \t |-- RTFAnomalyCharacterSet\n2025-08-25 09:15:41,144 [root] DEBUG: \t |-- RTFAnomalyVersion\n2025-08-25 09:15:41,144 [root] DEBUG: \t |-- RTFEmbeddedContent\n2025-08-25 09:15:41,144 [root] DEBUG: \t |-- RTFEmbeddedOfficeFile\n2025-08-25 09:15:41,144 [root] 
DEBUG: \t |-- RTFExploitStatic\n2025-08-25 09:15:41,144 [root] DEBUG: \t |-- OfficeSecurity\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- OfficeAnamalousFeature\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- OfficeDDECommand\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- OfficeSuspiciousProcesses\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- OfficeWriteEXE\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- ArmadilloMutex\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- ArmadilloRegKey\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- ADS\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- PersistenceViaAutodialDLLRegistry\n2025-08-25 09:15:41,145 [root] DEBUG: \t |-- Autorun\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- Autorun_scheduler\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- PersistenceSafeBoot\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- PersistenceBootexecute\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- PersistenceRegistryScript\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- PersistenceIFEO\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- PersistenceSilentProcessExit\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- PersistenceRDPRegistry\n2025-08-25 09:15:41,146 [root] DEBUG: \t |-- PersistenceRDPShadowing\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PersistenceService\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PersistenceShimDatabase\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PowerpoolMutexes\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PowerShellNetworkConnection\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PowerShellScriptBlockLogging\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PowershellCommandSuspicious\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PowershellDownload\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PowershellRenamed\n2025-08-25 09:15:41,147 [root] DEBUG: \t |-- PowershellRequest\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- PowershellReversed\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- PowershellVariableObfuscation\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- PreventsSafeboot\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- CmdlineProcessDiscovery\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- CreateToolhelp32SnapshotProcessModuleEnumeration\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- EnumeratesRunningProcesses\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- ProcessInterest\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- ProcessNeeded\n2025-08-25 09:15:41,148 [root] DEBUG: \t |-- MassDataEncryption\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- CryptoMixMutexes\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- DharmaMutexes\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- RansomwareDMALocker\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- RansomwareExtensions\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- RansomwareFileModifications\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- RansomwareFiles\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- FonixMutexes\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- GandCrabMutexes\n2025-08-25 09:15:41,149 [root] DEBUG: \t |-- GermanWiperMutexes\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- MedusaLockerMutexes\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- MedusaLockerRegkeys\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- RansomwareMessage\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- NemtyMutexes\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- NemtyNetworkActivity\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- NemtyNote\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- NemtyRegkeys\n2025-08-25 09:15:41,150 [root] DEBUG: \t |-- PYSAMutexes\n2025-08-25 09:15:41,150 
[root] DEBUG: \t |-- RansomwareRadamant\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- RansomwareRecyclebin\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- RevilMutexes\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- RevilRegkey\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- SatanMutexes\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- SnakeRansomMutexes\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- sodinokibi\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- StopRansomMutexes\n2025-08-25 09:15:41,151 [root] DEBUG: \t |-- StopRansomwareCMD\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- StopRansomwareRegistry\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- RansomwareSTOPDJVU\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- BeebusMutexes\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- BlackNETMutexes\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- BlackRATAPIs\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- BlackRATMutexes\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- BlackRATNetworkActivity\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- BlackRATRegistryKeys\n2025-08-25 09:15:41,152 [root] DEBUG: \t |-- CRATMutexes\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- DCRatAPIs\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- DCRatFiles\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- DCRatMutex\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- FynloskiMutexes\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- KaraganyEventObjects\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- KaraganyFiles\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- LimeRATMutexes\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- LimeRATRegkeys\n2025-08-25 09:15:41,153 [root] DEBUG: \t |-- LodaRATFileBehavior\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- LuminosityRAT\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- ModiRATBehavior\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- NanocoreRAT\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- netwire\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- NjratRegkeys\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- ObliquekRATFiles\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- ObliquekRATMutexes\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- ObliquekRATNetworkActivity\n2025-08-25 09:15:41,154 [root] DEBUG: \t |-- OrcusRAT\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- ParallaxMutexes\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- PcClientMutexes\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- PlugxMutexes\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- PoisonIvyMutexes\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- QuasarMutexes\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- RatsnifMutexes\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- SennaMutexes\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- SpynetRat\n2025-08-25 09:15:41,155 [root] DEBUG: \t |-- TrochilusRATAPIs\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- VenomRAT\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- WarzoneRATFiles\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- WarzoneRATRegkeys\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- XpertRATFiles\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- XpertRATMutexes\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- XtremeMutexes\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- ReadsSelf\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- Recon_Beacon\n2025-08-25 09:15:41,156 [root] DEBUG: \t |-- Fingerprint\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- InstalledApps\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- SystemInfo\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- Accesses_RecycleBin\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- RemcosFiles\n2025-08-25 09:15:41,157 [root] 
DEBUG: \t |-- RemcosMutexes\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- RemcosRegkeys\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- RemcosShellCodeDynamicWrapperX\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- RDPTCPKey\n2025-08-25 09:15:41,157 [root] DEBUG: \t |-- UsesRDPClip\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- UsesRemoteDesktopSession\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- RemovesNetworkingIcon\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- RemovesPinnedPrograms\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- RemovesSecurityAndMaintenanceIcon\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- RemovesStartMenuDefaults\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- RemovesUsernameStartMenu\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- RemovesZoneIdADS\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- SpicyHotPotBehavior\n2025-08-25 09:15:41,158 [root] DEBUG: \t |-- ScriptCreatedProcess\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- ScriptNetworkActvity\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- SuspiciousJSScript\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- JavaScriptTimer\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- Secure_Login_Phish\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- SecurityXploded_Modules\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- GetClipboardData\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- SetsAutoconfigURL\n2025-08-25 09:15:41,159 [root] DEBUG: \t |-- InstallsWinpcap\n2025-08-25 09:15:41,160 [root] DEBUG: \t |-- SpoofsProcname\n2025-08-25 09:15:41,160 [root] DEBUG: \t |-- CreatesAutorunInf\n2025-08-25 09:15:41,160 [root] DEBUG: \t |-- StackPivot\n2025-08-25 09:15:41,160 [root] DEBUG: \t |-- StackPivotFileCreated\n2025-08-25 09:15:41,160 [root] DEBUG: \t |-- StackPivotProcessCreate\n2025-08-25 09:15:41,160 [root] DEBUG: \t |-- StealingClipboardData\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthChildProc\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthFile\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthHiddenExtension\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthHiddenReg\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthHideNotifications\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthSystemProcName\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthTimeout\n2025-08-25 09:15:41,161 [root] DEBUG: \t |-- StealthWebHistory\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- Hidden_Window\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- sysinternals_psexec\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- sysinternals_tools\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- LanguageCheckReg\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- QueriesKeyboardLayout\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- QueriesLocaleAPI\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- TampersETW\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- LSATampering\n2025-08-25 09:15:41,162 [root] DEBUG: \t |-- TampersPowerShellLogging\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- Flame\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- TerminatesRemoteProcess\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- TerritorialDisputeSIGs\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- TrickBotTaskDelete\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- TrickBotMutexes\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- FleerCivetMutexes\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- LokibotMutexes\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- UrsnifBehavior\n2025-08-25 09:15:41,163 [root] DEBUG: \t |-- UpatreFiles\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- UpatreMutexes\n2025-08-25 09:15:41,164 [root] 
DEBUG: \t |-- UserEnum\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- ADFind\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- UsesMSProtocol\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- Virus\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- NeshtaFiles\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- NeshtaMutexes\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- NeshtaRegKeys\n2025-08-25 09:15:41,164 [root] DEBUG: \t |-- RenamerMutexes\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- Webmail_Phish\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- OWAWebShellFiles\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- WebShellFiles\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- WebShellProcesses\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- PersistsDotNetDevUtility\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- SpwansDotNetDevUtiliy\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- AltersWindowsUtility\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- DotNETCSCBuild\n2025-08-25 09:15:41,165 [root] DEBUG: \t |-- MavInjectLolbin\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- MultipleExplorerInstances\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- OverwritesAccessibilityUtility\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- PotentialLateralMovementViaSMBEXEC\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- PotentialWebShellViaScreenConnectServer\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- ScriptToolExecuted\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- SuspiciousCertutilUse\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- SuspiciousCommandTools\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- SuspiciousMpCmdRunUse\n2025-08-25 09:15:41,166 [root] DEBUG: \t |-- SuspiciousPingUse\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesMicrosoftHTMLHelpExecutable\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesPowerShellCopyItem\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesWindowsUtilities\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesWindowsUtilitiesAppCmd\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesWindowsUtilitiesCSVDELDFIDE\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesWindowsUtilitiesCipher\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesWindowsUtilitiesClickOnce\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesWindowsUtilitiesCurl\n2025-08-25 09:15:41,167 [root] DEBUG: \t |-- UsesWindowsUtilitiesDSQuery\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- UsesWindowsUtilitiesEsentutl\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- UsesWindowsUtilitiesFinger\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- UsesWindowsUtilitiesMode\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- UsesWindowsUtilitiesNTDSutil\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- UsesWindowsUtilitiesNltest\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- UsesWindowsUtilitiesScheduler\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- UsesWindowsUtilitiesXcopy\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- WMICCommandSuspicious\n2025-08-25 09:15:41,168 [root] DEBUG: \t |-- WiperZeroedBytes\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- ScrconsWMIScriptConsumer\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- WMICreateProcess\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- WMIScriptProcess\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- Win32ProcessCreate\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- AllapleMutexes\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- LinuxDeletesFiles\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- LinuxDropsFiles\n2025-08-25 09:15:41,169 [root] DEBUG: \t |-- LinuxReadsFiles\n2025-08-25 09:15:41,169 [root] DEBUG: \t `-- LinuxWritesFiles\n2025-08-25 
09:15:41,170 [root] DEBUG: Imported \"reporting\" modules:\n2025-08-25 09:15:41,170 [root] DEBUG: \t |-- BinGraph\n2025-08-25 09:15:41,170 [root] DEBUG: \t `-- JsonDump\n2025-08-25 09:15:41,170 [root] DEBUG: Imported \"feeds\" modules:\n2025-08-25 09:15:41,170 [root] DEBUG: \t `-- AbuseCH_SSL\n2025-08-25 09:15:41,170 [root] DEBUG: Imported \"machinery\" modules:\n2025-08-25 09:15:41,170 [root] DEBUG: \t `-- Physical\n2025-08-25 09:15:41,170 [root] DEBUG: Checking for locked tasks...\n/usr/bin/tcpdump\n2025-08-25 09:15:41,186 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[physical] with max_machines_count=10\n2025-08-25 09:15:41,186 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\nHost AIB-GDECI1-P has MAC 0a:00:27:00:00:05\n2025-08-25 09:15:41,390 [root] CRITICAL: CuckooCriticalError: Error initializing machines\nuser@capev2:/opt/CAPEv2$ \n```\n\nNote that these logs show that the FOGProject connection succeeded: the hostname and MAC address of one host (the only one on that specific FOG instance) are shown.\n\n
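The CRITICAL line above hides the underlying cause. As a minimal debugging sketch (not CAPE's actual code; the callable name and logger are illustrative assumptions), wrapping the machinery initialization call like this keeps the original exception and its traceback visible instead of only the generic "Error initializing machines" message:

```python
import logging
import traceback

log = logging.getLogger("machinery-debug")

def init_with_details(init_callable):
    """Run the (hypothetical) machinery init callable, but keep failure details."""
    try:
        return init_callable()
    except Exception as exc:
        # Log the concrete error and the full traceback before re-raising, so a
        # misconfigured FOG hostname or credentials shows up directly instead of
        # the stripped-down CRITICAL line.
        log.critical("Machinery initialization failed: %r", exc)
        traceback.print_exc()
        raise
```

As the closing comment on this issue notes, the failure here turned out to be a configuration error; surfacing the caught exception this way makes such cases obvious.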
\n\nNetwork configuration of CAPEv2 host : \n```\n1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000\n link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope host lo\n valid_lft forever preferred_lft forever\n inet6 ::1/128 scope host \n valid_lft forever preferred_lft forever\n2: enp1s0: mtu 1430 qdisc fq_codel state UP group default qlen 1000\n link/ether 52:54:00:57:1e:1a brd ff:ff:ff:ff:ff:ff\n inet 192.168.100.25/24 metric 100 brd 192.168.100.255 scope global dynamic enp1s0\n valid_lft 29529sec preferred_lft 29529sec\n inet6 fe80::5054:ff:fe57:1e1a/64 scope link \n valid_lft forever preferred_lft forever\n3: enp7s0: mtu 1500 qdisc fq_codel state UP group default qlen 1000\n link/ether 52:54:00:72:07:a3 brd ff:ff:ff:ff:ff:ff\n inet 10.0.0.2/8 brd 10.255.255.255 scope global enp7s0\n valid_lft forever preferred_lft forever\n inet6 fe80::5054:ff:fe72:7a3/64 scope link \n valid_lft forever preferred_lft forever\n```\n\nComment: hello. we only officially support KVM hypervisor. so you must find the solution by yourself or wait till someone from community who uses it helps you\nComment: Was indeed a configuration error. The machinery module is throwing a descriptive error which is catched and printed with details stripped out.", + "Title: Error screenshot and file not opening\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [X] I am running the latest version\n- [X] I did read the README!\n- [X] I checked the documentation and found no answer\n- [X] I checked to make sure that this issue has not already been filed\n- [X] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [X] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nNo error and screenshot\n\n# Current Behavior\nRight now I ran my task it successfully upload, i dont see the TXT getting opened. I am currently only trying with a txt file to see if everything is good. I get two error. 
Plus screenshot arent working\n\n# Failure Information (for bugs)\n\n2025-08-24 08:06:13,990 [lib.cuckoo.core.plugins] ERROR: import_package: modules.processing.CAPE - error: Error detecting the version of libcrypto\n## Steps to Reproduce\n2025-08-24 08:07:11,782 [lib.cuckoo.core.guest] INFO: Task #6: Started capturing screenshots for Sand10\n2025-08-24 08:07:11,783 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:07:12,791 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:07:13,801 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:07:14,810 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:07:15,817 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n## Context\n\nCAPE is hosted on Ubuntu desktop 24.04\nUsing proxmox directly so my machinery is proxmox\nMy guest is Windows 10 with python 3.12.10, with Pillow installed.\nRunning The agent with task scheduler highest priv, curl is saying running as admin true when query port 8000 of agent\n\n## Failure Logs\n\n2025-08-24 08:08:47,585 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:08:48,594 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:08:49,601 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:08:50,610 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n2025-08-24 08:08:51,619 [lib.cuckoo.core.analysis_manager] WARNING: Task #6: Failed to take screenshot of Sand10:\n\n2025-08-24 08:06:13,513 [root] DEBUG: Importing modules...\n2025-08-24 08:06:13,520 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.ImageChops.difference'\n2025-08-24 08:06:13,520 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.ImageDraw'\n2025-08-24 08:06:13,522 [modules.auxiliary.QemuScreenshots] DEBUG: Importing 'PIL.Image'\n2025-08-24 08:06:13,990 [lib.cuckoo.core.plugins] ERROR: import_package: modules.processing.CAPE - error: Error detecting the version of libcrypto\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/plugins.py\", line 100, in import_package\n import_plugin(name)\n File \"/opt/CAPEv2/lib/cuckoo/core/plugins.py\", line 53, in import_plugin\n module = __import__(name, globals(), locals(), [\"dummy\"])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/modules/processing/CAPE.py\", line 26, in \n from lib.cuckoo.common.integrations.file_extra_info import DuplicatesType, static_file_info\n File \"/opt/CAPEv2/lib/cuckoo/common/integrations/file_extra_info.py\", line 33, in \n from lib.cuckoo.common.integrations.parse_rdp import parse_rdp_file\n File \"/opt/CAPEv2/lib/cuckoo/common/integrations/parse_rdp.py\", line 15, in \n from certvalidator import CertificateValidator, ValidationContext\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/certvalidator/__init__.py\", line 9, in \n from .validate import validate_path, validate_tls_hostname, validate_usage\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/certvalidator/validate.py\", line 5, in \n from oscrypto import asymmetric\n File 
\"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/oscrypto/asymmetric.py\", line 19, in \n from ._asymmetric import _unwrap_private_key_info\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/oscrypto/_asymmetric.py\", line 27, in \n from .kdf import pbkdf1, pbkdf2, pkcs12_kdf\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/oscrypto/kdf.py\", line 9, in \n from .util import rand_bytes\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/oscrypto/util.py\", line 14, in \n from ._openssl.util import rand_bytes\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/oscrypto/_openssl/util.py\", line 6, in \n from ._libcrypto import libcrypto, libcrypto_version_info, handle_openssl_error\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/oscrypto/_openssl/_libcrypto.py\", line 9, in \n from ._libcrypto_cffi import (\n File \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.12/lib/python3.12/site-packages/oscrypto/_openssl/_libcrypto_cffi.py\", line 44, in \n raise LibraryNotFoundError('Error detecting the version of libcrypto')\noscrypto.errors.LibraryNotFoundError: Error detecting the version of libcrypto\nOPTIONAL! Missed dependency: poetry run pip install -U **git+https://github.com/CAPESandbox/httpreplay**\n\n\n\nComment: hey never saw that error. looks like 3rd party dependency is broken\nTry to install from inside of poetry shell: `pip install git+https://github.com/wbond/oscrypto.git@1547f535001ba568b239b8797465536759c742a3`", + "Title: Resolve \"The Poetry configuration is invalid\"\nBody: This commit fixes the following error when poetry install is run to install dependencies inside /opt/CAPEv2:\r\n\r\n```\r\nThe Poetry configuration is invalid:\r\n - Additional properties are not allowed ('requires-poetry' was unexpected)\r\n```", + "Title: Crash of Microsoft Office documents and some executables when analysis with capemon\nBody: ## About accounts on [capesandbox.com](https://capesandbox.com/)\n* Issues isn't the way to ask for account activation. 
Ping capesandbox in [Twitter](https://twitter.com/capesandbox) with your username\n\n## This is open source and you are getting __free__ support so be friendly!\n\n# Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [x] I did read the README!\n- [x] I checked the documentation and found no answer\n- [x] I checked to make sure that this issue has not already been filed\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [x] I have read and checked all configs (with all optional parts)\n\n# Expected Behavior\n\nSuccesfull analysis on some executables and Microsoft Office documents (docx, pptx).\n\n# Current Behavior\n\nWhen running analysis on some executables or Microsoft Office documents (like empty docx document), Capemon crashes the process leading to sample failure (or program running the sample failure).\n\nWhile trying manual detonation of DOCX document, the launcher of Microsoft Word was working fine - crash happened after the Word was launched.\n\n# Failure Information (for bugs)\n\nFrom the failure logs (and recent changes in capemon) I suspect some weird memory access issues.\n\nAlso while trying to debug the issue I found that `debug=2` in task options leads to empty behavioral analysis.\n\n## Steps to Reproduce\n\n1. Run CAPE analysis with empty Microsoft Word document.\n2. Wait for finish\n3. See on screenshots / behavioral analysis / analysis.log that execution of sample was terminated\n\nCurrent version of ChromeSetup available for download doesn't reproduce this issue.\nSome that I have from 2022-2024 are the only ones that have this issue. Here is one of them: [ChromeSetup.tar.gz](https://github.com/user-attachments/files/21938309/ChromeSetup.tar.gz)\n\n## Context\n\nPlease provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions. 
Operating system version, bitness, installed software versions, test sample details/hash/binary (if applicable).\n\n| Question | Answer\n|------------------|--------------------\n| Git commit | 5d15ec1364f52eaa8b1e329e0aa3e3c5864925ef\n| OS version | Ubuntu 24.04 on host, Windows 10 21H2 on guest\n| Office version | 2016 MSO (16.0.42666.1001)\n\nLast working version of capemon found at commit: 833a37effde090e85525a51b7b3dc80608cdd45e\n\n## Failure Logs\n\nSample: ChromeSetup.exe (md5: d1356f94daef39474623c97c8da64523), analysis.log, options=debug=2:\n\n```\n2025-08-21 15:25:44,502 [root] INFO: Added new file to list with pid None and path C:\\Program Files (x86)\\Google\\Temp\\GUM61F6.tmp\\GoogleUpdateSetup.exe\n2025-08-21 15:25:44,517 [root] DEBUG: 5608: InstrumentationCallback: Added region at 0x75A50000 to tracked regions list (thread 5568).\n2025-08-21 15:25:44,533 [root] DEBUG: 5608: CreateProcessHandler: Injection info set for new process 4448: C:\\Program Files (x86)\\Google\\Temp\\GUM61F6.tmp\\GoogleUpdate.exe, ImageBase: 0x00B10000\n2025-08-21 15:25:44,533 [root] INFO: Announced 32-bit process name: GoogleUpdate.exe pid: 4448\n2025-08-21 15:25:44,533 [lib.api.process] INFO: Monitor config for : C:\\7pb5kvup\\dll\\4448.ini\n2025-08-21 15:25:44,533 [lib.api.process] INFO: Option 'debug' with value '2' sent to monitor\n2025-08-21 15:25:44,877 [lib.api.process] INFO: 32-bit DLL to inject is C:\\7pb5kvup\\dll\\XeHYTmlK.dll, loader C:\\7pb5kvup\\bin\\NiHRsua.exe\n2025-08-21 15:25:44,892 [root] DEBUG: Loader: Injecting process 4448 (thread 4420) with C:\\7pb5kvup\\dll\\XeHYTmlK.dll.\n2025-08-21 15:25:44,892 [root] DEBUG: InjectDllViaIAT: Successfully patched IAT.\n2025-08-21 15:25:44,892 [root] DEBUG: Successfully injected DLL C:\\7pb5kvup\\dll\\XeHYTmlK.dll.\n2025-08-21 15:25:44,892 [lib.api.process] INFO: Injected into 32-bit \n2025-08-21 15:25:44,908 [root] DEBUG: 5608: DLL loaded at 0x74380000: C:\\Windows\\system32\\apphelp (0x9f000 bytes).\n...\n...\n...\n2025-08-21 15:25:45,394 [root] DEBUG: 4448: OpenProcessHandler: Image base for process 5608 (handle 0x490): 0x00690000.\n2025-08-21 15:25:45,394 [root] DEBUG: 4448: OpenProcessHandler: Injection info created for process 5540, handle 0x490: C:\\Windows\\System32\\SecurityHealthSystray.exe\n2025-08-21 15:25:45,394 [root] DEBUG: 4448: OpenProcessHandler: Injection info created for process 5776, handle 0x490: C:\\Windows\\System32\\smartscreen.exe\n2025-08-21 15:25:45,441 [root] DEBUG: 4448: CAPEExceptionFilter: Exception 0xc0000005 accessing 0xd16c0490 caught at RVA 0x6617a in capemon (expected in memory scans), passing to next handler.\n2025-08-21 15:25:45,441 [root] DEBUG: 4448: capemon.dll::0x7397617A 8b00 MOV EAX, [EAX]\n2025-08-21 15:25:45,441 [root] DEBUG: 4448: Exception Caught! PID: 4448 EIP: capemon.dll+0x6617a SEH: capemon.dll+0x177a30 7397617a, Fault Address: d16c0490, Esp: 006fcc2c, Exception Code: c0000005\nEAX 0xd16c0490 EBX 0x42 ECX 0x36 EDX 0x0 ESI 0x6fd138 EDI 0x73b2dd91\n ESP 0x6fcc2c EBP 0x6fd0ec\ncapemon.dll+0x3d229\nKERNELBASE.dll+0x11d52c\nKERNELBASE.dll+0xf2f53\nKERNELBASE.dll+0xf2ed7\ngoopdate.dll+0xfb8d\ngoopdate.dll+0xf8aa\ngoopdate.dll+0x1184b1\ngoopdate.dll+0x1180b9\ngoopdate.dll+0x117c4c\ngoopdate.dll+0x1179ad\ngoopdate.dll+0x117731\ngoopdate.dll+0xa2\n2025-08-21 15:25:45,441 [root] DEBUG: 4448: capemon.dll::0x7397617A 8b00 MOV EAX, [EAX]\n2025-08-21 15:25:45,441 [root] DEBUG: 4448: Exception Caught! 
PID: 4448 EIP: capemon.dll+0x6617a SEH: capemon.dll+0x177a30 7397617a, Fault Address: d16c0490, Esp: 006fcc2c, Exception Code: c0000005\nEAX 0xd16c0490 EBX 0x42 ECX 0x36 EDX 0x0 ESI 0x6fd138 EDI 0x73b2dd91\n ESP 0x6fcc2c EBP 0x6fd0ec\ncapemon.dll+0x3d229\nKERNELBASE.dll+0x11d52c\nKERNELBASE.dll+0xf2f53\nKERNELBASE.dll+0xf2ed7\ngoopdate.dll+0xfb8d\ngoopdate.dll+0xf8aa\ngoopdate.dll+0x1184b1\ngoopdate.dll+0x1180b9\ngoopdate.dll+0x117c4c\ngoopdate.dll+0x1179ad\ngoopdate.dll+0x117731\ngoopdate.dll+0xa2\n2025-08-21 15:25:45,441 [root] INFO: Process with pid 4448 has terminated\n2025-08-21 15:25:45,441 [root] DEBUG: 4448: NtTerminateProcess hook: Attempting to dump process 4448\n2025-08-21 15:25:45,441 [root] DEBUG: 4448: DoProcessDump: Skipping process dump as code is identical on disk.\n```\n\nNtTerminateProcess returned 0x8004ffff\n\nSample: empty.docx, analysis.log, options=debug=2:\n\n```\n2025-08-21 15:39:29,466 [root] DEBUG: 5564: Commandline: \"C:\\Program Files\\Microsoft Office\\Office16\\WINWORD.EXE\" \"C:\\Program Files\\Microsoft Office\\root\\Templates\\empty.docx\" /q\n2025-08-21 15:39:29,496 [root] WARNING: b'Unable to place hook on LockResource'\n2025-08-21 15:39:29,496 [root] DEBUG: 5564: set_hooks: Unable to hook LockResource\n2025-08-21 15:39:29,496 [root] DEBUG: 5564: Hooked 427 out of 428 functions\n2025-08-21 15:39:29,496 [root] DEBUG: 5564: Syscall hook installed, syscall logging level 1\n2025-08-21 15:39:29,512 [root] INFO: Loaded monitor into process with pid 5564\n...\n...\n...\n2025-08-21 15:39:30,660 [root] DEBUG: 5564: KERNELBASE.dll::0x00007FFA43044F99 0f1f440000 NOP DWORD [RAX+RAX+0x0]\n2025-08-21 15:39:30,660 [root] DEBUG: 5564: Exception Caught! PID: 5564 EIP: KERNELBASE.dll+0x34f99 7ffa43044f99, Fault Address: 3ae818ad30, Esp: 3ae818ab90, Exception Code: e06d7363\nRAX 0x3ae818ab08 RBX 0x7ffa234ce3b8 RCX 0x1ef223cd250 RDX 0x7ffa456f47b1 RSI 0x3ae818ad30 RDI 0x19930520\nR8 0x202 R9 0x0 R10 0x246 R11 0x202 R12 0x3ae818b1a0 R13 0x0 R14 0x20019 R15 0x492 RSP 0x3ae818ab90 RBP 0x3ae818ad50\nntdll.dll+0x78a3c\nntdll.dll+0x51276\ncapemon_x64.dll+0xb9dea\nntdll.dll+0x511a5\nKERNELBASE.dll+0x34f99\nVCRUNTIME140.dll+0x6720\nmso30win32client.dll+\n2025-08-21 15:39:30,660 [root] DEBUG: 5564: KERNELBASE.dll::0x00007FFA43044F99 0f1f440000 NOP DWORD [RAX+RAX+0x0]\n2025-08-21 15:39:30,660 [root] DEBUG: 5564: Exception Caught! 
PID: 5564 EIP: KERNELBASE.dll+0x34f99 7ffa43044f99, Fault Address: 00000000, Esp: 3ae8188700, Exception Code: e06d7363\nRAX 0x0 RBX 0x0 RCX 0x0 RDX 0x0 RSI 0x0 RDI 0x19930520\nR8 0x0 R9 0x0 R10 0x0 R11 0x0 R12 0x3ae81889b0 R13 0x0 R14 0x3ae8189828 R15 0x0 RSP 0x3ae8188700 RBP 0x3ae818ae00\nntdll.dll+0x78a3c\nntdll.dll+0x51276\ncapemon_x64.dll+0xb9dea\nntdll.dll+0x511a5\nKERNELBASE.dll+0x34f99\nVCRUNTIME140.dll+0x6720\nmso30win32client.dll+0x71f77\nmso30win32client.dll+0x7af05\nVCRUNTIME140.dll+0\n2025-08-21 15:39:30,676 [root] DEBUG: 5564: DLL loaded at 0x00007FFA40A20000: C:\\Windows\\SYSTEM32\\dwmapi (0x2f000 bytes).\n...\n...\n...\n2025-08-21 15:39:30,863 [root] DEBUG: 5564: api-rate-cap: ReadProcessMemory hook disabled due to rate\n2025-08-21 15:39:31,380 [lib.api.process] WARNING: failed to open process 5564\n2025-08-21 15:39:31,380 [lib.api.process] WARNING: failed to open process 5564\n2025-08-21 15:39:31,396 [lib.api.process] DEBUG: Failed getting image name for pid 5564\n2025-08-21 15:39:31,396 [lib.api.process] WARNING: failed to open process 5564\n2025-08-21 15:39:31,396 [lib.api.process] DEBUG: Failed getting image name for pid 5564\n2025-08-21 15:39:31,380 [lib.api.process] DEBUG: Failed getting exit code for \n2025-08-21 15:39:31,396 [root] INFO: Process with pid 5564 appears to have terminated\n2025-08-21 15:39:36,447 [root] INFO: Process list is empty, terminating analysis\n```\n\nComment: Fixed in 439dc0cf6cc2aa7cd4440f3c45d895c3cf861aa7, thanks!\nComment: Happy days \u2764\ufe0f ", + "Title: VMCloak\nBody: Dear,\n\nCan I create an analysis engine via VMCloak? Since I see that there is that possibility in Cuckoo.\n\nBest regards!\nComment: Idk\r\n\r\nEl jue, 21 ago 2025, 9:59, Loky85 ***@***.***> escribi\u00f3:\r\n\r\n> *Loky85* created an issue (kevoreilly/CAPEv2#2678)\r\n> \r\n>\r\n> Dear,\r\n>\r\n> Can I create an analysis engine via VMCloak? Since I see that there is\r\n> that possibility in Cuckoo.\r\n>\r\n> Best regards!\r\n>\r\n> \u2014\r\n> Reply to this email directly, view it on GitHub\r\n> , or unsubscribe\r\n> \r\n> .\r\n> You are receiving this because you are subscribed to this thread.Message\r\n> ID: ***@***.***>\r\n>\r\n\nComment: I have never tried, so unfortunately can't answer whether it might be possible. It is not our recommendation though.", + "Title: Revert \"Update Stealc.yar\"\nBody: Reverts kevoreilly/CAPEv2#2674", + "Title: Dashboard crashes with `KeyError: 'aggregations'` when Elasticsearch is used\nBody: \n\n- [x] I am running the latest version \n- [x] I did read the README! \n- [x] I checked the documentation and found no answer \n- [x] I checked to make sure that this issue has not already been filed \n- [x] I'm reporting the issue to the correct repository (for multi-repository projects) \n- [x] I have read and checked all configs (with all optional parts) \n\n# Expected Behavior\n\nThe CAPEv2 dashboard should load successfully on port 8000.\n\n# Current Behavior\n\nThe dashboard crashes with a `KeyError: 'aggregations'` when accessing the portal on port 8000.\n\n# Failure Information (for bugs)\n\n## Steps to Reproduce\n\n1. Set up CAPEv2 with Elasticsearch enabled in `reporting.conf` and MongoDB disabled \n2. Submit a sample via CLI (e.g., a URL) \u2014 this works fine and completes \n3. Access the dashboard at `http://localhost:8000` \n4. 
Dashboard crashes with traceback \n\n## Context\n\n\n| Question | Answer\n|------------------|--------------------\n\n| OS version | Ubuntu 22.04\n| Elasticsearch | Running locally on port 9200\n| CAPEv2 config | See below\n\n```ini\n[elasticsearchdb]\nenabled = yes\nsearchonly = no\nhost = 127.0.0.1\nport = 9200\n# The report data is indexed in the form of {{index-yyyy.mm.dd}}\n# so the below index configuration option is actually an index 'prefix'.\nindex = cuckoo\nusername =\npassword =\n# use_ssl =\n# verify_certs =\n\n```\nKeyError\nKeyError: 'aggregations'\n\nTraceback (most recent call last)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/contrib/staticfiles/handlers.py\", line 80, in __call__\nreturn self.application(environ, start_response)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/core/handlers/wsgi.py\", line 124, in __call__\nresponse = self.get_response(request)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/core/handlers/base.py\", line 140, in get_response\nresponse = self._middleware_chain(request)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/core/handlers/exception.py\", line 57, in inner\nresponse = response_for_exception(request, exc)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/core/handlers/exception.py\", line 140, in response_for_exception\nresponse = handle_uncaught_exception(\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/core/handlers/exception.py\", line 181, in handle_uncaught_exception\nreturn debug.technical_500_response(request, *exc_info)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django_extensions/management/technical_response.py\", line 40, in null_technical_500_response\nraise exc_value.with_traceback(tb)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/core/handlers/exception.py\", line 55, in inner\nresponse = get_response(request)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/core/handlers/base.py\", line 197, in _get_response\nresponse = wrapped_callback(request, *callback_args, **callback_kwargs)\nFile \"/home/cape/.cache/pypoetry/virtualenvs/capev2-t2x27zRb-py3.10/lib/python3.10/site-packages/django/views/decorators/http.py\", line 64, in inner\nreturn func(request, *args, **kwargs)\nFile \"/opt/CAPEv2/web/dashboard/views.py\", line 94, in index\nreport[\"top_detections\"] = top_detections()\nFile \"/opt/CAPEv2/web/web/../../lib/cuckoo/common/web_utils.py\", line 366, in top_detections\n\n if date_since:\n q[\"query\"][\"bool\"][\"must\"].append({\"range\": {\"info.started\": {\"gte\": date_since.isoformat()}}})\n\n res = es.search(index=get_analysis_index(), body=q)\n data = [{\"total\": r[\"doc_count\"], \"family\": r[\"key\"]} for r in res[\"aggregations\"][\"family\"][\"buckets\"]]\n else:\n data = []\n\n if data:\n data = list(data)\nKeyError: 'aggregations'\n\n```\nComment: We don't support ES, is on community. 
So fix it or use MongoDB.", + "Title: fix Interactive desktop\nBody: Fixed guac error page.\r\nFixed submission: the machine was always NONE, so the error page was shown. I changed db.view_machine to db.view_machine_by_label like here, and it worked again: https://github.com/kevoreilly/CAPEv2/blob/5d15ec1364f52eaa8b1e329e0aa3e3c5864925ef/web/submission/views.py#L787", + "Title: Update Stealc.yar\nBody: This should get rid of the FPs but not produce FNs.\nComment: I prefer to improve the signature patterns rather than add exclusions.\r\n\r\nPlease supply details and hashes for the FPs.\nComment: Agreed with this and had the same thought originally - although that might mean getting rid of the $snippet1 string. Example FP hashes:\r\n\r\n03f891ce963ca916c865b91da8e9cd47dc97406c2cbc7d019964fe5b61d7343f\r\n444ed825f744a8929d05f7900a8768e968e25489dfa0f73326833bc3bb5a55d9\r\n\r\n\"image\"\r\n\r\nHope this helps,\r\n", + "Title: Issues regarding pcap generation - pcap file is always empty\nBody: # Prerequisites\n\nPlease answer the following questions for yourself before submitting an issue.\n\n- [x] I am running the latest version\n- [X] I did read the README!\n- [X] I checked the documentation and found no answer\n- [X] I checked to make sure that this issue has not already been filed\n- [ ] I'm reporting the issue to the correct repository (for multi-repository projects)\n- [X] I have read and checked all configs (with all optional parts)\n\n\n# Expected Behavior\n\nThe PCAP file is generated correctly and contains packet dumps. Contacted hosts and DNS requests are displayed in the analysis report.\n\n# Current Behavior\n\nThe analysis runs and gets reported, but the network analysis is empty; downloading the pcap file brings up an empty dump file.\n\n\n## Steps to Reproduce\n\nPlease provide detailed steps for reproducing the issue.\n\n1. Start Analysis Task\n2. Check Reported Task\n3.
No Contacted Hosts or DNS Requests \n\n## Context\n\nI did check the docs more than once and can not find any issue.\nI've checked the troubleshooting section regarding pcap generation as well, checked the pcap permissions for cape but still won't work.\n\n```\n$ ll /usr/bin/tcpdump\n-rwxr-xr-x 1 root pcap 1273976 Feb 16 2025 /usr/bin/tcpdump*\n\n$ getent group pcap\npcap:x:1002:cape\n```\n\ntcpdump path is correct as well:\n\n```\n$ whereis tcpdump\ntcpdump: /usr/bin/tcpdump /usr/share/man/man8/tcpdump.8.gz\n```\n\n\nIn the analysis folder, there will always be a dump.pcap file, which is empty.\n\n\"enp6s18\" is the interface which provides internet connectivity. Which cape seems to be using for pcap generation.\n\nManually running tcpdump as cape user works fine as seen below, so this should not be an issue regarding permissions.\n\n```\n$ sudo -u cape tcpdump -i enp6s18\n[sudo] password for sandy:\ntcpdump: verbose output suppressed, use -v[v]... for full protocol decode\nlistening on enp6s18, link-type EN10MB (Ethernet), snapshot length 262144 bytes\n10:11:23.457190 IP madlen-sandbox.ssh > 10.10.88.1.55322: Flags [P.], seq 1874148051:1874148259, ack 3562662718, win 1431, length 208\n10:11:23.512504 IP 10.10.88.1.55322 > madlen-sandbox.ssh: Flags [.], ack 0, win 254, length 0\n10:11:23.539742 IP 10.10.88.1.55322 > madlen-sandbox.ssh: Flags [P.], seq 1:161, ack 208, win 253, length 160\n10:11:23.540064 IP madlen-sandbox.ssh > 10.10.88.1.55322: Flags [P.], seq 208:256, ack 161, win 1452, length 48\n10:11:23.550665 IP madlen-sandbox.53217 > dns9.quad9.net.domain: 32350+ [1au] PTR? 1.88.10.10.in-addr.arpa. (52)\n10:11:23.578182 IP dns9.quad9.net.domain > madlen-sandbox.53217: 32350 NXDomain- 0/0/1 (52)\n10:11:23.579691 IP madlen-sandbox.49724 > dns9.quad9.net.domain: 26458+ [1au] PTR? 5.195.168.192.in-addr.arpa. 
(55)\n10:11:23.595055 IP dns9.quad9.net.domain > madlen-sandbox.49724: 26458 NXDomain- 0/0/1 (55)\n```\n\n\n\n\n\n## Failure Logs\n\nCape Analysis Log:\n\n\n```\nCAPE: Config and Payload Extraction\ngithub.com/kevoreilly/CAPEv2\n\nXLMMacroDeobfuscator: pywin32 is not installed (only is required if you want to use MS Excel)\npip3 install certvalidator asn1crypto mscerts\n2025-08-19 09:42:23,788 [modules.processing.network] INFO: Loading maxmind database from /opt/CAPEv2/modules/processing/../../data/GeoLite2-Country.mmdb\n/usr/bin/tcpdump\n2025-08-19 09:42:24,172 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[proxmox] with max_machines_count=10\n2025-08-19 09:42:24,172 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\n2025-08-19 09:42:24,192 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\n2025-08-19 09:42:24,257 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\n2025-08-19 09:42:24,261 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\n2025-08-19 09:46:24,620 [lib.cuckoo.core.machinery_manager] INFO: Task #4: found useable machine SBX-Windows10-01 (arch=x86, platform=windows)\n2025-08-19 09:46:24,620 [lib.cuckoo.core.scheduler] INFO: Task #4: Processing task\n2025-08-19 09:46:24,875 [lib.cuckoo.core.analysis_manager] INFO: Task #4: File already exists at '/opt/CAPEv2/storage/binaries/4d70290367ad03399e17d5001842553fcd4d57e026eb330add0e3f28327d79a7'\n2025-08-19 09:46:24,876 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_kkwhui6f/ChromeSetup.exe'\n2025-08-19 09:46:28,758 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Enabled route 'internet'.\n2025-08-19 09:46:28,773 [modules.auxiliary.Mitmdump] INFO: Mitmdump module loaded\n2025-08-19 09:46:28,774 [modules.auxiliary.PolarProxy] INFO: PolarProxy module loaded\n2025-08-19 09:46:28,774 [modules.auxiliary.QemuScreenshots] INFO: QEMU screenshots module loaded\n/usr/bin/tcpdump\n2025-08-19 09:46:28,789 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 26677 (interface=enp6s18, host=192.168.99.20, dump path=/opt/CAPEv2/storage/analyses/4/dump.pcap)\n2025-08-19 09:46:30,085 [lib.cuckoo.core.guest] INFO: Task #4: Starting analysis on guest (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 09:46:58,196 [lib.cuckoo.core.guest] INFO: Task #4: Guest is running CAPE Agent 0.11 (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 09:47:02,270 [lib.cuckoo.core.guest] INFO: Task #4: Uploading script files to guest (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 09:47:11,585 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 7584 (parent 5872): ChromeSetup.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\ChromeSetup.exe\n2025-08-19 09:47:13,617 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 8604 (parent 7584): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7584_1487186110\\bin\\updater.exe\n2025-08-19 09:47:14,381 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 5516 (parent 8604): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7584_1487186110\\bin\\updater.exe\n2025-08-19 09:47:15,881 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 1188 (parent 768): svchost.exe, path C:\\Windows\\System32\\svchost.exe\n2025-08-19 09:48:08,918 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 7816 (parent 1188): taskhostw.exe, path C:\\Windows\\System32\\taskhostw.exe\n2025-08-19 09:48:58,604 
[lib.cuckoo.core.resultserver] INFO: Task 4: Process 5244 (parent 5212): explorer.exe, path C:\\Windows\\explorer.exe\n2025-08-19 09:49:05,053 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 8916 (parent 1188): taskhostw.exe, path C:\\Windows\\System32\\taskhostw.exe\n2025-08-19 09:49:18,536 [lib.cuckoo.core.guest] INFO: Task #4: Analysis completed successfully (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 09:49:18,682 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Disabled route 'internet'\n2025-08-19 09:49:21,141 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Completed analysis successfully.\n2025-08-19 09:49:21,147 [lib.cuckoo.core.analysis_manager] INFO: Task #4: analysis procedure completed\n```\n\n\nComment: Did you enable pcap in routing.conf?\r\n\r\nEl mar, 19 ago 2025, 12:49, Sk4hnt42 ***@***.***> escribi\u00f3:\r\n\r\n> *Sk4hnt42* created an issue (kevoreilly/CAPEv2#2673)\r\n> \r\n> Prerequisites\r\n>\r\n> Please answer the following questions for yourself before submitting an\r\n> issue.\r\n>\r\n> - I am running the latest version\r\n> - I did read the README!\r\n> - I checked the documentation and found no answer\r\n> - I checked to make sure that this issue has not already been filed\r\n> - I'm reporting the issue to the correct repository (for\r\n> multi-repository projects)\r\n> - I have read and checked all configs (with all optional parts)\r\n>\r\n> Expected Behavior\r\n>\r\n> PCAP File beeing generated correcty and contains packet dumps. Contacted\r\n> hosts and DNS Requests are beeing displayed in the analysis report.\r\n> Current Behavior\r\n>\r\n> Analysis runs, gets reported but Network analysis is empty, downloading\r\n> the pcap file brings up an empty dump file.\r\n> Steps to Reproduce\r\n>\r\n> Please provide detailed steps for reproducing the issue.\r\n>\r\n> 1. Start Analysis Task\r\n> 2. Check Reported Task\r\n> 3. No Contacted Hosts or DNS Requests\r\n>\r\n> Context\r\n>\r\n> I did check the docs more than once and can not find any issue.\r\n> I've checked the troubleshooting section regarding pcap generation as\r\n> well, checked the pcap permissions for cape but still won't work.\r\n>\r\n> $ ll /usr/bin/tcpdump\r\n> -rwxr-xr-x 1 root pcap 1273976 Feb 16 2025 /usr/bin/tcpdump*\r\n>\r\n> $ getent group pcap\r\n> pcap:x:1002:cape\r\n>\r\n> tcpdump path is correct as well:\r\n>\r\n> $ whereis tcpdump\r\n> tcpdump: /usr/bin/tcpdump /usr/share/man/man8/tcpdump.8.gz\r\n>\r\n> In the analysis folder, there will always be a dump.pcap file, which is\r\n> empty.\r\n>\r\n> \"enp6s18\" is the interface which provides internet connectivity. Which\r\n> cape seems to be using for pcap generation.\r\n>\r\n> Manually running tcpdump as cape user works fine as seen below, so this\r\n> should not be an issue regarding permissions.\r\n>\r\n> $ sudo -u cape tcpdump -i enp6s18\r\n> [sudo] password for sandy:\r\n> tcpdump: verbose output suppressed, use -v[v]... 
for full protocol decode\r\n> listening on enp6s18, link-type EN10MB (Ethernet), snapshot length 262144 bytes\r\n> 10:11:23.457190 IP madlen-sandbox.ssh > 10.10.88.1.55322: Flags [P.], seq 1874148051:1874148259, ack 3562662718, win 1431, length 208\r\n> 10:11:23.512504 IP 10.10.88.1.55322 > madlen-sandbox.ssh: Flags [.], ack 0, win 254, length 0\r\n> 10:11:23.539742 IP 10.10.88.1.55322 > madlen-sandbox.ssh: Flags [P.], seq 1:161, ack 208, win 253, length 160\r\n> 10:11:23.540064 IP madlen-sandbox.ssh > 10.10.88.1.55322: Flags [P.], seq 208:256, ack 161, win 1452, length 48\r\n> 10:11:23.550665 IP madlen-sandbox.53217 > dns9.quad9.net.domain: 32350+ [1au] PTR? 1.88.10.10.in-addr.arpa. (52)\r\n> 10:11:23.578182 IP dns9.quad9.net.domain > madlen-sandbox.53217: 32350 NXDomain- 0/0/1 (52)\r\n> 10:11:23.579691 IP madlen-sandbox.49724 > dns9.quad9.net.domain: 26458+ [1au] PTR? 5.195.168.192.in-addr.arpa. (55)\r\n> 10:11:23.595055 IP dns9.quad9.net.domain > madlen-sandbox.49724: 26458 NXDomain- 0/0/1 (55)\r\n>\r\n> Failure Logs\r\n>\r\n> Cape Analysis Log:\r\n>\r\n> CAPE: Config and Payload Extractiongithub.com/kevoreilly/CAPEv2\r\n>\r\n> XLMMacroDeobfuscator: pywin32 is not installed (only is required if you want to use MS Excel)\r\n> pip3 install certvalidator asn1crypto mscerts\r\n> 2025-08-19 09:42:23,788 [modules.processing.network] INFO: Loading maxmind database from /opt/CAPEv2/modules/processing/../../data/GeoLite2-Country.mmdb\r\n> /usr/bin/tcpdump\r\n> 2025-08-19 09:42:24,172 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[proxmox] with max_machines_count=10\r\n> 2025-08-19 09:42:24,172 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\r\n> 2025-08-19 09:42:24,192 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\r\n> 2025-08-19 09:42:24,257 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\r\n> 2025-08-19 09:42:24,261 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\r\n> 2025-08-19 09:46:24,620 [lib.cuckoo.core.machinery_manager] INFO: Task #4: found useable machine SBX-Windows10-01 (arch=x86, platform=windows)\r\n> 2025-08-19 09:46:24,620 [lib.cuckoo.core.scheduler] INFO: Task #4: Processing task\r\n> 2025-08-19 09:46:24,875 [lib.cuckoo.core.analysis_manager] INFO: Task #4: File already exists at '/opt/CAPEv2/storage/binaries/4d70290367ad03399e17d5001842553fcd4d57e026eb330add0e3f28327d79a7'\r\n> 2025-08-19 09:46:24,876 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_kkwhui6f/ChromeSetup.exe'\r\n> 2025-08-19 09:46:28,758 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Enabled route 'internet'.\r\n> 2025-08-19 09:46:28,773 [modules.auxiliary.Mitmdump] INFO: Mitmdump module loaded\r\n> 2025-08-19 09:46:28,774 [modules.auxiliary.PolarProxy] INFO: PolarProxy module loaded\r\n> 2025-08-19 09:46:28,774 [modules.auxiliary.QemuScreenshots] INFO: QEMU screenshots module loaded\r\n> /usr/bin/tcpdump\r\n> 2025-08-19 09:46:28,789 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 26677 (interface=enp6s18, host=192.168.99.20, dump path=/opt/CAPEv2/storage/analyses/4/dump.pcap)\r\n> 2025-08-19 09:46:30,085 [lib.cuckoo.core.guest] INFO: Task #4: Starting analysis on guest (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 09:46:58,196 [lib.cuckoo.core.guest] INFO: Task #4: Guest is running CAPE Agent 0.11 (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 09:47:02,270 [lib.cuckoo.core.guest] INFO: Task #4: Uploading 
script files to guest (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 09:47:11,585 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 7584 (parent 5872): ChromeSetup.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\ChromeSetup.exe\r\n> 2025-08-19 09:47:13,617 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 8604 (parent 7584): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7584_1487186110\\bin\\updater.exe\r\n> 2025-08-19 09:47:14,381 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 5516 (parent 8604): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7584_1487186110\\bin\\updater.exe\r\n> 2025-08-19 09:47:15,881 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 1188 (parent 768): svchost.exe, path C:\\Windows\\System32\\svchost.exe\r\n> 2025-08-19 09:48:08,918 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 7816 (parent 1188): taskhostw.exe, path C:\\Windows\\System32\\taskhostw.exe\r\n> 2025-08-19 09:48:58,604 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 5244 (parent 5212): explorer.exe, path C:\\Windows\\explorer.exe\r\n> 2025-08-19 09:49:05,053 [lib.cuckoo.core.resultserver] INFO: Task 4: Process 8916 (parent 1188): taskhostw.exe, path C:\\Windows\\System32\\taskhostw.exe\r\n> 2025-08-19 09:49:18,536 [lib.cuckoo.core.guest] INFO: Task #4: Analysis completed successfully (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 09:49:18,682 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Disabled route 'internet'\r\n> 2025-08-19 09:49:21,141 [lib.cuckoo.core.analysis_manager] INFO: Task #4: Completed analysis successfully.\r\n> 2025-08-19 09:49:21,147 [lib.cuckoo.core.analysis_manager] INFO: Task #4: analysis procedure completed\r\n>\r\n> \u2014\r\n> Reply to this email directly, view it on GitHub\r\n> , or unsubscribe\r\n> \r\n> .\r\n> You are receiving this because you are subscribed to this thread.Message\r\n> ID: ***@***.***>\r\n>\r\n\nComment: > Did you enable pcap in routing.conf?\n> \n> El mar, 19 ago 2025, 12:49, Sk4hnt42 ***@***.***> escribi\u00f3:\n> [\u2026](#)\n\nHi,\n\ni had not. I just activated it, restarted all services, but the issue persists.\nComment: can you post new log from logs/cuckoo.log now that you have activated it and restarted services. just generate new job and post output\nComment: > can you post new log from logs/cuckoo.log now that you have activated it and restarted services. just generate new job and post output\n\nOf course! 
Here it is.\n\n\n```\n2025-08-19 11:37:25,253 [modules.processing.network] INFO: Loading maxmind database from /opt/CAPEv2/modules/processing/../../data/GeoLite2-Country.mmdb\n2025-08-19 11:37:25,626 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[proxmox] with max_machines_count=10\n2025-08-19 11:37:25,627 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\n2025-08-19 11:37:25,650 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\n2025-08-19 11:37:25,700 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\n2025-08-19 11:37:25,705 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\n2025-08-19 11:37:25,738 [lib.cuckoo.core.machinery_manager] INFO: Task #6: found useable machine SBX-Windows10-01 (arch=x86, platform=windows)\n2025-08-19 11:37:25,738 [lib.cuckoo.core.scheduler] INFO: Task #6: Processing task\n2025-08-19 11:37:25,984 [lib.cuckoo.core.analysis_manager] INFO: Task #6: File already exists at '/opt/CAPEv2/storage/binaries/4d70290367ad03399e17d5001842553fcd4d57e026eb330add0e3f2>\n2025-08-19 11:37:25,985 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_9te_vs52/ChromeSetup.exe'\n2025-08-19 11:37:29,865 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Enabled route 'internet'.\n2025-08-19 11:37:29,875 [modules.auxiliary.Mitmdump] INFO: Mitmdump module loaded\n2025-08-19 11:37:29,875 [modules.auxiliary.PolarProxy] INFO: PolarProxy module loaded\n2025-08-19 11:37:29,876 [modules.auxiliary.QemuScreenshots] INFO: QEMU screenshots module loaded\n2025-08-19 11:37:29,888 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 29710 (interface=enp6s18, host=192.168.99.20, dump path=/opt/CAPEv2/storage/analyses/6/dump.pcap)\n2025-08-19 11:37:31,133 [lib.cuckoo.core.guest] INFO: Task #6: Starting analysis on guest (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 11:37:56,571 [lib.cuckoo.core.guest] INFO: Task #6: Guest is running CAPE Agent 0.11 (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 11:37:59,976 [lib.cuckoo.core.guest] INFO: Task #6: Uploading script files to guest (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 11:38:09,270 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 7620 (parent 4240): ChromeSetup.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\ChromeSetup.exe\n2025-08-19 11:38:11,406 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 8572 (parent 7620): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7620_948679524\\bin\\up>\n2025-08-19 11:38:12,072 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 8144 (parent 8572): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7620_948679524\\bin\\up>\n2025-08-19 11:38:13,651 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 1196 (parent 728): svchost.exe, path C:\\Windows\\System32\\svchost.exe\n2025-08-19 11:38:22,190 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5188 (parent 5128): explorer.exe, path C:\\Windows\\explorer.exe\n2025-08-19 11:38:25,069 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 872 (parent 728): svchost.exe, path C:\\Windows\\System32\\svchost.exe\n2025-08-19 11:38:27,320 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 10200 (parent 872): OpenWith.exe, path C:\\Windows\\System32\\OpenWith.exe\n2025-08-19 11:38:28,773 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 2996 (parent 872): dllhost.exe, path C:\\Windows\\System32\\dllhost.exe\n2025-08-19 11:38:35,468 
[lib.cuckoo.core.resultserver] INFO: Task 6: Process 3144 (parent 10200): msedge.exe, path C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\n2025-08-19 11:38:36,725 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 9816 (parent 872): BdeUISrv.exe, path C:\\Windows\\System32\\BdeUISrv.exe\n2025-08-19 11:38:41,164 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 9412 (parent 1196): taskhostw.exe, path C:\\Windows\\System32\\taskhostw.exe\n2025-08-19 11:38:41,599 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5276 (parent 872): dllhost.exe, path C:\\Windows\\System32\\dllhost.exe\n2025-08-19 11:39:01,644 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5312 (parent 872): mobsync.exe, path C:\\Windows\\System32\\mobsync.exe\n2025-08-19 11:39:10,791 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5828 (parent 728): svchost.exe, path C:\\Windows\\System32\\svchost.exe\n2025-08-19 11:39:52,057 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 7264 (parent 1196): MusNotification.exe, path C:\\Windows\\System32\\MusNotification.exe\n2025-08-19 11:39:55,682 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 1820 (parent 7264): MusNotificationUx.exe, path C:\\Windows\\System32\\MusNotificationUx.exe\n2025-08-19 11:39:57,084 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 1080 (parent 7264): MusNotifyIcon.exe, path C:\\Windows\\System32\\MusNotifyIcon.exe\n2025-08-19 11:40:19,234 [lib.cuckoo.core.guest] INFO: Task #6: Analysis completed successfully (id=SBX-Windows10-01, ip=192.168.99.20)\n2025-08-19 11:40:19,404 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Disabled route 'internet'\n2025-08-19 11:40:21,850 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Completed analysis successfully.\n2025-08-19 11:40:21,855 [lib.cuckoo.core.analysis_manager] INFO: Task #6: analysis procedure completed\n```\nComment: INFO: Started sniffer with PID 29710 (interface=enp6s18,\r\nhost=192.168.99.20, dump path=/opt/CAPEv2/storage/analyses/6/dump.pcap)\r\n\r\nEl mar, 19 ago 2025, 13:45, Sk4hnt42 ***@***.***> escribi\u00f3:\r\n\r\n> *Sk4hnt42* left a comment (kevoreilly/CAPEv2#2673)\r\n> \r\n>\r\n> can you post new log from logs/cuckoo.log now that you have activated it\r\n> and restarted services. just generate new job and post output\r\n>\r\n> Of course! 
Here it is.\r\n>\r\n> 2025-08-19 11:37:25,253 [modules.processing.network] INFO: Loading maxmind database from /opt/CAPEv2/modules/processing/../../data/GeoLite2-Country.mmdb\r\n> 2025-08-19 11:37:25,626 [lib.cuckoo.core.machinery_manager] INFO: Using MachineryManager[proxmox] with max_machines_count=10\r\n> 2025-08-19 11:37:25,627 [lib.cuckoo.core.scheduler] INFO: Creating scheduler with max_analysis_count=unlimited\r\n> 2025-08-19 11:37:25,650 [lib.cuckoo.core.machinery_manager] INFO: Loaded 1 machine\r\n> 2025-08-19 11:37:25,700 [lib.cuckoo.core.machinery_manager] INFO: max_vmstartup_count for BoundedSemaphore = 5\r\n> 2025-08-19 11:37:25,705 [lib.cuckoo.core.scheduler] INFO: Waiting for analysis tasks\r\n> 2025-08-19 11:37:25,738 [lib.cuckoo.core.machinery_manager] INFO: Task #6: found useable machine SBX-Windows10-01 (arch=x86, platform=windows)\r\n> 2025-08-19 11:37:25,738 [lib.cuckoo.core.scheduler] INFO: Task #6: Processing task\r\n> 2025-08-19 11:37:25,984 [lib.cuckoo.core.analysis_manager] INFO: Task #6: File already exists at '/opt/CAPEv2/storage/binaries/4d70290367ad03399e17d5001842553fcd4d57e026eb330add0e3f2>\r\n> 2025-08-19 11:37:25,985 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Starting analysis of FILE '/tmp/cuckoo-tmp/upload_9te_vs52/ChromeSetup.exe'\r\n> 2025-08-19 11:37:29,865 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Enabled route 'internet'.\r\n> 2025-08-19 11:37:29,875 [modules.auxiliary.Mitmdump] INFO: Mitmdump module loaded\r\n> 2025-08-19 11:37:29,875 [modules.auxiliary.PolarProxy] INFO: PolarProxy module loaded\r\n> 2025-08-19 11:37:29,876 [modules.auxiliary.QemuScreenshots] INFO: QEMU screenshots module loaded\r\n> 2025-08-19 11:37:29,888 [modules.auxiliary.sniffer] INFO: Started sniffer with PID 29710 (interface=enp6s18, host=192.168.99.20, dump path=/opt/CAPEv2/storage/analyses/6/dump.pcap)\r\n> 2025-08-19 11:37:31,133 [lib.cuckoo.core.guest] INFO: Task #6: Starting analysis on guest (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 11:37:56,571 [lib.cuckoo.core.guest] INFO: Task #6: Guest is running CAPE Agent 0.11 (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 11:37:59,976 [lib.cuckoo.core.guest] INFO: Task #6: Uploading script files to guest (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 11:38:09,270 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 7620 (parent 4240): ChromeSetup.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\ChromeSetup.exe\r\n> 2025-08-19 11:38:11,406 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 8572 (parent 7620): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7620_948679524\\bin\\up>\r\n> 2025-08-19 11:38:12,072 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 8144 (parent 8572): updater.exe, path C:\\Users\\Peter Silie\\AppData\\Local\\Temp\\Google7620_948679524\\bin\\up>\r\n> 2025-08-19 11:38:13,651 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 1196 (parent 728): svchost.exe, path C:\\Windows\\System32\\svchost.exe\r\n> 2025-08-19 11:38:22,190 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5188 (parent 5128): explorer.exe, path C:\\Windows\\explorer.exe\r\n> 2025-08-19 11:38:25,069 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 872 (parent 728): svchost.exe, path C:\\Windows\\System32\\svchost.exe\r\n> 2025-08-19 11:38:27,320 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 10200 (parent 872): OpenWith.exe, path C:\\Windows\\System32\\OpenWith.exe\r\n> 2025-08-19 11:38:28,773 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 
2996 (parent 872): dllhost.exe, path C:\\Windows\\System32\\dllhost.exe\r\n> 2025-08-19 11:38:35,468 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 3144 (parent 10200): msedge.exe, path C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\r\n> 2025-08-19 11:38:36,725 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 9816 (parent 872): BdeUISrv.exe, path C:\\Windows\\System32\\BdeUISrv.exe\r\n> 2025-08-19 11:38:41,164 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 9412 (parent 1196): taskhostw.exe, path C:\\Windows\\System32\\taskhostw.exe\r\n> 2025-08-19 11:38:41,599 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5276 (parent 872): dllhost.exe, path C:\\Windows\\System32\\dllhost.exe\r\n> 2025-08-19 11:39:01,644 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5312 (parent 872): mobsync.exe, path C:\\Windows\\System32\\mobsync.exe\r\n> 2025-08-19 11:39:10,791 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 5828 (parent 728): svchost.exe, path C:\\Windows\\System32\\svchost.exe\r\n> 2025-08-19 11:39:52,057 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 7264 (parent 1196): MusNotification.exe, path C:\\Windows\\System32\\MusNotification.exe\r\n> 2025-08-19 11:39:55,682 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 1820 (parent 7264): MusNotificationUx.exe, path C:\\Windows\\System32\\MusNotificationUx.exe\r\n> 2025-08-19 11:39:57,084 [lib.cuckoo.core.resultserver] INFO: Task 6: Process 1080 (parent 7264): MusNotifyIcon.exe, path C:\\Windows\\System32\\MusNotifyIcon.exe\r\n> 2025-08-19 11:40:19,234 [lib.cuckoo.core.guest] INFO: Task #6: Analysis completed successfully (id=SBX-Windows10-01, ip=192.168.99.20)\r\n> 2025-08-19 11:40:19,404 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Disabled route 'internet'\r\n> 2025-08-19 11:40:21,850 [lib.cuckoo.core.analysis_manager] INFO: Task #6: Completed analysis successfully.\r\n> 2025-08-19 11:40:21,855 [lib.cuckoo.core.analysis_manager] INFO: Task #6: analysis procedure completed\r\n>\r\n> \u2014\r\n> Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n\nComment: > INFO: Started sniffer with PID 29710 (interface=enp6s18,\n> host=192.168.99.20, dump path=/opt/CAPEv2/storage/analyses/6/dump.pcap)\n> \n> El mar, 19 ago 2025, 13:45, Sk4hnt42 ***@***.***> escribi\u00f3:\n> [\u2026](#)\n\nYes i've seen that, the file is also beeing created.\nBut it is empty.\nBut i know that there has been network activity which should be written to the pcap dump file.\nComment: can you verify this?\nhttps://github.com/kevoreilly/CAPEv2/issues/2571#issuecomment-3108659344\n\n+ so as you saying that you sure, did you start VM manually with utils/rooter_manager.py(see -h) and enable it internet to test if it works ?\nComment: > can you verify this? [#2571 (comment)](https://github.com/kevoreilly/CAPEv2/issues/2571#issuecomment-3108659344)\n> \n> * so as you saying that you sure, did you start VM manually with utils/rooter_manager.py(see -h) and enable it internet to test if it works ?\n\nI am not using libvirt, because cape is using proxmox as vm manager.\n\nInternet access for the Analysis machine works fine. \nComment: well no one of devs are using proxmon, so i can't help more here, my advice is to start tcpdump by yourself on host to see if you can see traffic, or you need to add some custom rules. 
we only officially support kvm\nComment: > well no one of devs are using proxmon, so i can't help more here, my advice is to start tcpdump by yourself on host to see if you can see traffic, or you need to add some custom rules. we only officially support kvm\n\nI see, thanks for your help.\n\nYes running tcpdump on the host as cape user works fine.\n\n```\nsandy@madlen-sandbox:~$ sudo -u cape tcpdump -i enp6s18\ntcpdump: verbose output suppressed, use -v[v]... for full protocol decode\nlistening on enp6s18, link-type EN10MB (Ethernet), snapshot length 262144 bytes\n21:13:58.607690 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 813228487:813228695, ack 2158976888, win 521, length 208\n21:13:58.610409 IP 10.10.6.240.52345 > madlen-sandbox.ssh: Flags [.], ack 208, win 1020, length 0\n21:13:58.654463 IP madlen-sandbox.41616 > dns9.quad9.net.domain: 14781+ [1au] AAAA? www.virustotal.com. (47)\n21:13:58.669820 IP dns9.quad9.net.domain > madlen-sandbox.41616: 14781 0/1/1 (137)\n21:13:58.670406 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [S], seq 565562669, win 64240, options [mss 1460,sackOK,TS val 3308019100 ecr 0,nop,wscale 7], length 0\n21:13:58.684427 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [S.], seq 4051756273, ack 565562670, win 65412, options [mss 960,sackOK,TS val 777585335 ecr 3308019100,nop,wscale 8], length 0\n21:13:58.684478 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [.], ack 1, win 502, options [nop,nop,TS val 3308019114 ecr 777585335], length 0\n21:13:58.685438 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [P.], seq 1:518, ack 1, win 502, options [nop,nop,TS val 3308019115 ecr 777585335], length 517\n21:13:58.698759 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [.], ack 518, win 254, options [nop,nop,TS val 777585350 ecr 3308019115], length 0\n21:13:58.700271 IP madlen-sandbox.46872 > dns9.quad9.net.domain: 27249+ [1au] PTR? 240.6.10.10.in-addr.arpa. (53)\n21:13:58.716330 IP dns9.quad9.net.domain > madlen-sandbox.46872: 27249 NXDomain- 0/0/1 (53)\n21:13:58.717742 IP madlen-sandbox.44492 > dns9.quad9.net.domain: 31232+ [1au] PTR? 5.195.168.192.in-addr.arpa. 
(55)\n21:13:58.720152 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [P.], seq 1:1897, ack 518, win 254, options [nop,nop,TS val 777585371 ecr 3308019115], length 1896\n21:13:58.720213 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [.], ack 1897, win 531, options [nop,nop,TS val 3308019149 ecr 777585371], length 0\n21:13:58.720240 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [.], seq 1897:2845, ack 518, win 254, options [nop,nop,TS val 777585371 ecr 3308019115], length 948\n21:13:58.720272 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [.], ack 2845, win 546, options [nop,nop,TS val 3308019149 ecr 777585371], length 0\n21:13:58.720329 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [P.], seq 2845:3793, ack 518, win 254, options [nop,nop,TS val 777585371 ecr 3308019115], length 948\n21:13:58.720351 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [.], ack 3793, win 552, options [nop,nop,TS val 3308019150 ecr 777585371], length 0\n21:13:58.720366 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [P.], seq 3793:4529, ack 518, win 254, options [nop,nop,TS val 777585372 ecr 3308019115], length 736\n21:13:58.720375 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [.], ack 4529, win 547, options [nop,nop,TS val 3308019150 ecr 777585372], length 0\n21:13:58.724914 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [P.], seq 518:598, ack 4529, win 552, options [nop,nop,TS val 3308019154 ecr 777585372], length 80\n21:13:58.725604 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [P.], seq 598:926, ack 4529, win 552, options [nop,nop,TS val 3308019155 ecr 777585372], length 328\n21:13:58.733209 IP dns9.quad9.net.domain > madlen-sandbox.44492: 31232 NXDomain- 0/0/1 (55)\n21:13:58.734712 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 208:496, ack 1, win 521, length 288\n21:13:58.735169 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 496:752, ack 1, win 521, length 256\n21:13:58.735483 IP madlen-sandbox.59956 > dns9.quad9.net.domain: 62969+ [1au] PTR? 138.88.54.34.in-addr.arpa. (54)\n21:13:58.737828 IP 10.10.6.240.52345 > madlen-sandbox.ssh: Flags [.], ack 752, win 1018, length 0\n21:13:58.738966 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [.], ack 926, win 253, options [nop,nop,TS val 777585390 ecr 3308019154], length 0\n21:13:58.828469 IP dns9.quad9.net.domain > madlen-sandbox.59956: 62969 1/0/1 PTR 138.88.54.34.bc.googleusercontent.com. 
(105)\n21:13:58.830158 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 752:1984, ack 1, win 521, length 1232\n21:13:58.830263 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 1984:3280, ack 1, win 521, length 1296\n21:13:58.830400 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 3280:4096, ack 1, win 521, length 816\n21:13:58.830524 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 4096:4352, ack 1, win 521, length 256\n21:13:58.830576 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 4352:4912, ack 1, win 521, length 560\n21:13:58.833426 IP 10.10.6.240.52345 > madlen-sandbox.ssh: Flags [.], ack 3280, win 1023, length 0\n21:13:58.833426 IP 10.10.6.240.52345 > madlen-sandbox.ssh: Flags [.], ack 4352, win 1019, length 0\n21:13:58.883855 IP 10.10.6.240.52345 > madlen-sandbox.ssh: Flags [.], ack 4912, win 1023, length 0\n21:13:58.907460 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 4912:5968, ack 1, win 521, length 1056\n21:13:58.960257 IP 10.10.6.240.52345 > madlen-sandbox.ssh: Flags [.], ack 5968, win 1019, length 0\n21:13:58.962936 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [.], seq 4529:5477, ack 926, win 253, options [nop,nop,TS val 777585614 ecr 3308019154], length 948\n21:13:58.963044 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [P.], seq 5477:5502, ack 926, win 253, options [nop,nop,TS val 777585614 ecr 3308019154], length 25\n21:13:58.963062 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [.], ack 5502, win 567, options [nop,nop,TS val 3308019392 ecr 777585614], length 0\n21:13:58.965525 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [F.], seq 926, ack 5502, win 567, options [nop,nop,TS val 3308019395 ecr 777585614], length 0\n21:13:58.983945 IP 138.88.54.34.bc.googleusercontent.com.https > madlen-sandbox.47696: Flags [F.], seq 5502, ack 927, win 253, options [nop,nop,TS val 777585635 ecr 3308019395], length 0\n21:13:58.983980 IP madlen-sandbox.47696 > 138.88.54.34.bc.googleusercontent.com.https: Flags [.], ack 5503, win 567, options [nop,nop,TS val 3308019413 ecr 777585635], length 0\n21:13:59.010996 IP madlen-sandbox.ssh > 10.10.6.240.52345: Flags [P.], seq 5968:7360, ack 1, win 521, length 1392\n21:13:59.069308 IP 10.10.6.240.52345 > madlen-sandbox.ssh: Flags [.], ack 7360, win 1023, length 0\n```", + "Title: Update Yara Rule for Amadey\nBody: Update Amadey Rule", + "Title: Bump pypdf from 5.2.0 to 6.0.0\nBody: Bumps [pypdf](https://github.com/py-pdf/pypdf) from 5.2.0 to 6.0.0.\n
Release notes (sourced from pypdf's releases) and changelog (sourced from pypdf's changelog):

**Version 6.0.0, 2025-08-11**
- Security (SEC): Limit decompressed size for FlateDecode filter (#3430)
- Deprecations (DEP): Drop Python 3.8 support (#3412)
- New Features (ENH): Move BlackIs1 functionality to tiff_header (#3421)
- Robustness (ROB): Skip Go-To actions without a destination (#3420)
- Developer Experience (DEV): Update code style related libraries (#3414); Update mypy to 1.17.0 (#3413); Stop testing on Python 3.8 and start testing on Python 3.14 (#3411)
- Maintenance (MAINT): Cleanup deprecations (#3424)

**Version 5.9.0, 2025-07-27**
- New Features (ENH): Automatically preserve links in added pages (#3298); Allow writing/updating all properties of an embedded file (#3374)
- Bug Fixes (BUG): Fix XMP handling dropping indirect references (#3392)
- Robustness (ROB): Deal with DecodeParms being empty list (#3388)
- Documentation (DOC): Document how to read and modify XMP metadata (#3383)

**Version 5.8.0, 2025-07-13**
- New Features (ENH): Implement flattening for writer (#3312)
- Bug Fixes (BUG): Unterminated object when using PdfWriter with incremental=True (#3345)

... (truncated)

Commits
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pypdf&package-manager=pip&previous-version=5.2.0&new-version=6.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
\nDependabot commands and options\n
\n\nYou can trigger Dependabot actions by commenting on this PR:\n- `@dependabot rebase` will rebase this PR\n- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it\n- `@dependabot merge` will merge this PR after your CI passes on it\n- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it\n- `@dependabot cancel merge` will cancel a previously requested merge and block automerging\n- `@dependabot reopen` will reopen this PR if it is closed\n- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually\n- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency\n- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)\n- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)\nYou can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/kevoreilly/CAPEv2/network/alerts).\n\n
", + "Title: Ideas - Features\nBody: Just writing them down here to not forget \ud83d\ude09 . Feel free to split / close etc. @kevoreilly @doomedraven \n\n* Update CAPEv2 README + docs to include a section on CAPE-parsers since they were moved there.\n\n* 'Bump the dump'. Any payloads or anything detected by CAPE itself (whether procdumps, etc..), should always be all the way at the top in case of multiple dumps. \n\nThese may or may not be interesting and/or harder to do:\n\n* On the main submission page of CAPE, it could be useful to display the 'current version' and the last version (date) as mentioned in the changelog. Might require pulling the info from Git or whichever. But could be handy to keep track.\n\n* Option for a 'Download all' button on Payloads / Dropped files tab.\n\n* When setting a bp or using dump on api options using CAPE debugger, and a dump is generated (in Payloads tab), one can infer the correct dump by reviewing the analysis log, e.g. see https://capesandbox.com/analysis/21104/ which I uploaded. Perhaps an extra row in Payloads could be useful that shows the specific bp or API option that generated the dump? Probably this idea can be scrapped as we can review the analysis log, but who knows.\n \nThat's all,\nComment: Hey nice, download all is already implemented ", + "Title: Add package to run JS with NodeJS\nBody: None\nComment: thank you\nComment: \ud83d\ude4f ", + "Title: Bump protobuf from 5.29.3 to 5.29.5\nBody: Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 5.29.3 to 5.29.5.\n
Commits
- f5de0a0 Updating version.json and repo version numbers to: 29.5
- 8563766 Merge pull request #21858 from shaod2/py-cp-29
- 05ba1a8 Add recursion depth limits to pure python
- 1ef3f01 Internal pure python fixes
- 69cca9b Remove fast-path check for non-clang compilers in MessageCreator. (#21612)
- 21fdb7a fix: contains check segfaults on empty map (#20446) (#20904)
- 03c50e3 Re-enable aarch64 tests. (#20853)
- 128f0aa Add volatile to featuresResolved (#20767)
- bdd49bb Merge pull request #20755 from protocolbuffers/29.x-202503192110
- c659468 Updating version.json and repo version numbers to: 29.5-dev
- Additional commits viewable in the compare view
\n\n\n[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=protobuf&package-manager=pip&previous-version=5.29.3&new-version=5.29.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)\n\nDependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.\n\n[//]: # (dependabot-automerge-start)\n[//]: # (dependabot-automerge-end)\n\n---\n\n
", + "Title: Update usages with expected location of 7zz binary\nBody: Replace the leftover code spots pointing to `/usr/bin/7z`.\r\n\r\n@doomedraven \r\nMy preferred fix:\r\n1. Set `7zz` binary relative location in only one place in one config file.\r\n2. Create a constant using `SEVENZIP_BIN = os.path.join(CUCKOO_ROOT, )` that can be imported where needed.\r\n\r\nLet me know if I should do that instead and I will make the changes to this PR.\nComment: thank you", + "Title: Azure machinery major updates\nBody: **New features:**\r\n1. VMSSs can now idle with zero VMs attached\r\n2. Faster VMSS instance deletions -> lower scale-down times\r\n\r\n**Fixes**\r\n1. Improved handling of IP conflicts when spot instances are evicted. Bug was preventing valid machines currently attached to the VMSS from being added to the DB, thus unable to run tasks.\r\n2. Minor logging typo fix and reduced `_thr_scale_machine_pool` call complexity\r\n\r\n\r\nThis PR removes the requirement to always have running instances on a VMSS. It does this by use of \"placeholder\" machines.\r\n- Previously, when a VMSS was at 0 capacity, CAPE would crash.\r\n- The placeholder machine is set in our DB to prevent that.\r\n- These machines are in a permanent \"locked\" state and can never be assigned tasks.\r\n- When initializing, if `initial_pool_size == 0`, we add a placeholder machine for that VMSS.\r\n- Removed when scaling-up and added when scaling down to zero.\r\n- **NOTE:** This means that when a VMSS is at 0 capacity and a task is submitted targeting it, it must first provision -> run -> wait for agent to be available before the task run begins. **This has been seen to take 1-3 minutes of time.** Efforts are being made to greatly reduce this, but be aware that it exists.\r\n\r\n\r\n**Undesirable changes:**\r\n1. Clumsy use of try/except to prevent sqlalchemy session issues\r\n2. No longer using `\"platform\"` with `filter_machines_to_task`. Caused many issues when running VMSS of win7/10/11 concurrently.\r\n\r\n\r\nThese are some pretty large changes, but should be nearly entirely backwards compatible. I have concerns over not filtering tasks by `\"platform\"` and would like feedback from anyone using Azure machinery if this is something they actively use for task submissions.\nComment: @cccs-mog @cccs-kevin These changes may/may not be of value for your use case. Have found it to be useful in reducing cost and quota load from less frequently used OS versions. If you have a chance to test this out or have ideas to improve, that would be greatly appreciated.\nComment: Hi @ChrisThibodeaux, might be able to do some limited testing. Will comment back in the near future. \nComment: thank you", + "Title: Reduce vivisect.base and vivisect.impemu cape-processor logs\nBody: Cape-processor logs are flooded with vivisect logs. These changes keep relevant `vivisect` logs and remove `vivisect.base` and `vivisect.impemu` logs below the `ERROR` level.\nComment: interesting, i had disable it completely \nComment: I found that the current settings had no impact on logs at `INFO` level or above. 
I'm not entirely sure how `NOTSET` works, but it seems to use a lower-hierarchy logger's level to set the one in capa.py \nComment: maybe better to set it to critical?\nComment: I can definitely make that change, not a problem.\r\n\r\nI found some of the `[vivisect]` logs to be useful, such as:\r\n```\r\n2025-08-11 15:39:37,257 [Task 894] [vivisect] INFO: Percentage of discovered executable surface area: 95.8% (427791 / 446464)\r\n2025-08-11 15:39:37,257 [Task 894] [vivisect] INFO: Xrefs/Blocks/Funcs: (31491 / 23131 / 1610)\r\n2025-08-11 15:39:37,257 [Task 894] [vivisect] INFO: Locs, Ops/Strings/Unicode/Nums/Ptrs/Vtables: (89714: 83169 / 623 / 782 / 1339 / 620 / 0)\r\n```\r\n\r\nHowever, leaving it at `INFO` _does_ still output more logs than are useful. In particular, `vivisect.parsers.pe` is still a pain and really overkill with logging.\r\n\r\nI agree on the swap to `CRITICAL` and have tested it locally.\nComment: We can mute individual subparsers if we want; that is not a big problem. So maybe we should add `logging.getLogger(\"vivisect.parsers.pe\").setLevel(logging.CRITICAL)`? Could you test it please?\r\n\nComment: No problem, I will test that out today\nComment: Thank you", "Title: Change citation information 's/is/in/'\nBody: None", "Title: fix volatility malfind output\nBody: None", "Title: Failing unserviceable task | failed_analysis when submitting any analysis\nBody: \n\n- [x] I am running the latest version Y\n- [x] I did read the README! Y\n- [x] I checked the documentation and found no answer Y\n- [x] I checked to make sure that this issue has not already been filed Y\n- [x] I'm reporting the issue to the correct repository (for multi-repository projects) Y\n- [x] I have read and checked all configs (with all optional parts) Y\n\n\n# Expected Behavior\n\nOnce a payload is submitted, CAPEv2 should start the VM and begin the analysis\n\n# Current Behavior\n\nOnce I submit a payload, CAPEv2 picks it up and tries to start the machine, to no avail, until it hits a timeout and shows an unserviceable task.\n\n# Failure Information (for bugs)\n\nCAPEv2 is able to shut down the machine if it's on, but it is unable to start it. And if it's off, it can't start it either.\n\n\nI'm able to use `virsh --connect qemu:///system list --all` as the cape user, and start the VM.\n\n\n\n## Steps to Reproduce\n\nPlease provide detailed steps for reproducing the issue.\n1. Submit a payload\n2. It takes a while\n3. 
Gets failed_analysis \n\n## Context\n\ncat /etc/os-release\nPRETTY_NAME=\"Ubuntu 24.04.2 LTS\"\nNAME=\"Ubuntu\"\nVERSION_ID=\"24.04\"\nVERSION=\"24.04.2 LTS (Noble Numbat)\"\n\npython3 cuckoo.py -v\nYou are running Cuckoo Sandbox 2.4-CAPE\n\nvirsh dumpxml win10 | grep '\ndomain type='kvm' id='2'\n\n\n## Config Files:\n\n**KVM.cof**\n[kvm]\nmachines = win10\ninterface = virbr0\ndsn = qemu:///system\n[win10]\nlabel = win10\nplatform = windows\nip = 192.168.122.105\ntags = win10,office2019,acrobat_reader_11\nsnapshot = Snapshot1\ninterface = virbr0\narch = x64\n\n[cuckoo2]\nlabel = cuckoo2\nplatform = windows\nip = 192.168.122.106\narch = x86\ntags = win11,office2016,acrobat_reader_11\nsnapshot = Snapshot1\nreserved = no\n\n## Failure Logs\n\nBelow are logs from **python3 cuckoo.py -d**\n2025-08-07 11:32:02,422 [lib.cuckoo.core.machinery_manager] INFO: Task #8: found useable machine win10 (arch=x64, platform=windows)\n2025-08-07 11:32:02,422 [lib.cuckoo.core.scheduler] INFO: Task #8: Processing task\n2025-08-07 11:32:02,426 [lib.cuckoo.core.analysis_manager] INFO: Task #8: Starting analysis of URL 'www.google.com'\n2025-08-07 11:32:02,426 [lib.cuckoo.common.abstracts] DEBUG: Starting machine win10\n2025-08-07 11:32:02,430 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n2025-08-07 11:32:02,434 [lib.cuckoo.common.abstracts] DEBUG: Using snapshot snapshot1 for virtual machine win10\n2025-08-07 11:32:02,912 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n2025-08-07 11:32:02,916 [lib.cuckoo.common.abstracts] DEBUG: Waiting 0 cuckooseconds for machine win10 to switch to status ['running']\n2025-08-07 11:32:03,916 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n2025-08-07 11:32:03,920 [lib.cuckoo.common.abstracts] DEBUG: Waiting 1 cuckooseconds for machine win10 to switch to status ['running']\n2025-08-07 11:32:04,920 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n2025-08-07 11:32:04,935 [lib.cuckoo.common.abstracts] DEBUG: Waiting 2 cuckooseconds for machine win10 to switch to status ['running']\n2025-08-07 11:32:05,150 [lib.cuckoo.core.scheduler] DEBUG: # Active analysis: 1; # Pending Tasks: 0; # Specific Pending Tasks: {}; # Available Machines: 0; # Available Specific Machines: {}; # Locked Machines: 1; # Specific Locked Machines: {'win10': 1, 'office2019': 1, 'acrobat_reader_11': 1, 'windows': 1}; # Total Machines: 1\n2025-08-07 11:32:05,935 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n2025-08-07 11:32:05,950 [lib.cuckoo.common.abstracts] DEBUG: Waiting 3 cuckooseconds for machine win10 to switch to status ['running']\n2025-08-07 11:32:06,951 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n2025-08-07 11:32:06,965 [lib.cuckoo.common.abstracts] DEBUG: Waiting 4 cuckooseconds for machine win10 to switch to status ['running']\n2025-08-07 11:32:07,966 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n\n**python3 cuckoo.py**\n2025-08-10 08:58:15,271 [lib.cuckoo.core.analysis_manager] INFO: Task #9: Starting analysis of URL 'google.com'\n2025-08-10 09:03:21,302 [lib.cuckoo.core.analysis_manager] ERROR: Task #9: Timeout hit while for machine win10 to change status\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in start_machine\n self.machinery.start(machine.label)\n File 
\"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 454, in start\n self._wait_status(label, self.RUNNING)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 357, in _wait_status\n raise CuckooMachineError(f\"Timeout hit while for machine {label} to change status\")\nlib.cuckoo.common.exceptions.CuckooMachineError: Timeout hit while for machine win10 to change status\n2025-08-10 09:03:21,350 [lib.cuckoo.core.analysis_manager] ERROR: Task #9: failure in AnalysisManager.run\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in start_machine\n self.machinery.start(machine.label)\n File \"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 454, in start\n self._wait_status(label, self.RUNNING)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 357, in _wait_status\n raise CuckooMachineError(f\"Timeout hit while for machine {label} to change status\")\nlib.cuckoo.common.exceptions.CuckooMachineError: Timeout hit while for machine win10 to change status\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 511, in run\n self.launch_analysis()\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 472, in launch_analysis\n success = self.perform_analysis()\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 456, in perform_analysis\n with self.machine_running(), self.result_server(), self.network_routing(), self.run_auxiliary():\n File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n ^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 335, in machine_running\n raise CuckooDeadMachine(self.machine.name) from e\nlib.cuckoo.core.analysis_manager.CuckooDeadMachine: win10 is dead!\n2025-08-10 09:03:21,803 [lib.cuckoo.core.scheduler] INFO: Task #9: Failing unserviceable task\n\n\n**These logs are from /var/log/libvirt/qemu/win10.log**\nchar device redirected to /dev/pts/3 (label charserial0)\n2025-08-07T11:31:24.398279Z qemu-system-x86_64: terminating on signal 15 from pid 3207 (/usr/sbin/virtqemud)\n2025-08-07 11:31:24.999+0000: shutting down, reason=destroyed\n\nComment: does it works for files? 
you using kvm or qemu?\nComment: No doesn't work for files.\nif I understood your question correctly, QEMU with KVM.\nvirsh dumpxml win10 | grep ' \ndomain type='kvm' id='2'\nComment: that is easier which machinery do you use kvm or qemu, aka which config file you modified qemu.conf or kvm.conf, but from output i guess it was kvm.conf.\n\nif not even files works, well you have wrong configuration of VMs related part, can you post config that you edited like kvm.conf or qemu.conf?\nComment: It's kvm.conf\n\ni just added it to the main post, and here it's again:\n\n**KVM.cof**\n[kvm]\nmachines = win10\ninterface = virbr0\ndsn = qemu:///system\n[win10]\nlabel = win10\nplatform = windows\nip = 192.168.122.105\ntags = win10,office2019,acrobat_reader_11\nsnapshot = Snapshot1\ninterface = virbr0\narch = x64\n\n[cuckoo2]\nlabel = cuckoo2\nplatform = windows\nip = 192.168.122.106\narch = x86\ntags = win11,office2016,acrobat_reader_11\nsnapshot = Snapshot1\n\n\nI'm using win10 machine.\n\nComment: It's in NAT mode for start. But that is not root of problem.\r\nIf you use the same IP range in cuckoomconf for resolserver then I don't\r\nsee any issue\r\n\r\nEl dom, 10 ago 2025, 9:44, LAKH ***@***.***> escribi\u00f3:\r\n\r\n> *LaKH-exe* left a comment (kevoreilly/CAPEv2#2662)\r\n> \r\n>\r\n> It's kvm.conf\r\n>\r\n> i just added it to the main post, and here it's again:\r\n>\r\n> *KVM.cof*\r\n> [kvm]\r\n> machines = win10\r\n> interface = virbr0\r\n> dsn = qemu:///system\r\n> [win10]\r\n> label = win10\r\n> platform = windows\r\n> ip = 192.168.122.105\r\n> tags = win10,office2019,acrobat_reader_11\r\n> snapshot = Snapshot1\r\n> interface = virbr0\r\n> arch = x64\r\n>\r\n> [cuckoo2]\r\n> label = cuckoo2\r\n> platform = windows\r\n> ip = 192.168.122.106\r\n> arch = x86\r\n> tags = win11,office2016,acrobat_reader_11\r\n> snapshot = Snapshot1\r\n>\r\n> \u2014\r\n> Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n\nComment: Yes it's in NAT mode.\n\n\nHere is the content of **cuckoo.conf** \n\n**cat cuckoo.conf | grep 'resultserver' -A10**\n[resultserver]\nip = 192.168.122.1\nforce_port = yes\n\n**ip add**\n4: virbr0: mtu 1500 qdisc htb state UP group default qlen 1000\n link/ether 52:54:00:49:93:0c brd ff:ff:ff:ff:ff:ff\n inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0\n valid_lft forever preferred_lft forever\n\n**ss -tulpn | grep 2042**\ntcp LISTEN 0 128 192.168.122.1:2042 0.0.0.0:* users:((\"python3\",pid=34556,fd=10))\n\n\n\n\nComment: So far everything looks correct, in one of the configs you should be able\r\nto disable inservible option, then restart cuckoo service, as from\r\ninformation that you provided everything looks correr\r\n\r\nEl dom, 10 ago 2025, 11:06, LAKH ***@***.***> escribi\u00f3:\r\n\r\n> *LaKH-exe* left a comment (kevoreilly/CAPEv2#2662)\r\n> \r\n>\r\n> Yes it's in NAT mode.\r\n>\r\n> Here is the content of *cuckoo.conf*\r\n>\r\n> *cat cuckoo.conf | grep 'resultserver' -A10*\r\n> [resultserver]\r\n> ip = 192.168.122.1\r\n> force_port = yes\r\n>\r\n> *ip add*\r\n> 4: virbr0: mtu 1500 qdisc htb state UP\r\n> group default qlen 1000\r\n> link/ether 52:54:00:49:93:0c brd ff:ff:ff:ff:ff:ff\r\n> inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0\r\n> valid_lft forever preferred_lft forever\r\n>\r\n> *ss -tulpn | grep 2042*\r\n> tcp LISTEN 0 128 192.168.122.1:2042 0.0.0.0:*\r\n> users:((\"python3\",pid=34556,fd=10))\r\n>\r\n> \u2014\r\n> 
Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n\nComment: > disable inservible option\n\nWhere can I find this? I searched the docs, looked in conf files but I didn't find such option.\nComment: Cuckoo.conf\r\n\r\n# Fail \"unserviceable\" tasks as they are queued.\r\n# Any task found that will never be analyzed based on the available\r\nanalysis machines\r\n# will have its status set to \"failed\".\r\nfail_unserviceable = on\r\n\r\nEl dom, 10 ago 2025, 13:36, LAKH ***@***.***> escribi\u00f3:\r\n\r\n> *LaKH-exe* left a comment (kevoreilly/CAPEv2#2662)\r\n> \r\n>\r\n> disable inservible option\r\n>\r\n> Where can I find this? I searched the docs, looked in conf files but I\r\n> didn't find such option.\r\n>\r\n> \u2014\r\n> Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n\nComment: Same issue. The thing is I'm watching the logs on /var/log/libvirt/qemu/win10.log, but libvirt is not receiving any request to change the status.\n\n**cuckoo.py -d**\n2025-08-10 12:56:04,778 [lib.cuckoo.core.scheduler] DEBUG: # Active analysis: 0; # Pending Tasks: 0; # Specific Pending Tasks: {}; # Available Machines: 1; # Available Specific Machines: {'acrobat_reader_11': 1, 'office2019': 1, 'win10': 1, 'windows': 1}; # Locked Machines: 0; # Specific Locked Machines: {}; # Total Machines: 1\n2025-08-10 12:56:10,955 [lib.cuckoo.core.machinery_manager] INFO: Task #12: found useable machine win10 (arch=x64, platform=windows)\n2025-08-10 12:56:10,955 [lib.cuckoo.core.scheduler] INFO: Task #12: Processing task\n2025-08-10 12:56:10,960 [lib.cuckoo.core.analysis_manager] INFO: Task #12: Starting analysis of URL 'www.google.com'\n2025-08-10 12:56:10,960 [lib.cuckoo.common.abstracts] DEBUG: Starting machine win10\n2025-08-10 12:56:10,963 [lib.cuckoo.common.abstracts] DEBUG: Getting status for win10\n2025-08-10 12:56:10,965 [lib.cuckoo.core.analysis_manager] ERROR: Task #12: Trying to start a virtual machine that has not been turned off win10\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in start_machine\n self.machinery.start(machine.label)\n File \"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 421, in start\n raise CuckooMachineError(msg)\nlib.cuckoo.common.exceptions.CuckooMachineError: Trying to start a virtual machine that has not been turned off win10\n2025-08-10 12:56:10,978 [lib.cuckoo.core.analysis_manager] ERROR: Task #12: failure in AnalysisManager.run\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 310, in machine_running\n self.machinery_manager.start_machine(self.machine)\n File \"/opt/CAPEv2/lib/cuckoo/core/machinery_manager.py\", line 305, in start_machine\n self.machinery.start(machine.label)\n File \"/opt/CAPEv2/modules/machinery/kvm.py\", line 37, in start\n super(KVM, self).start(label)\n File \"/opt/CAPEv2/lib/cuckoo/common/abstracts.py\", line 421, in start\n raise CuckooMachineError(msg)\nlib.cuckoo.common.exceptions.CuckooMachineError: Trying to start a virtual machine that has not been turned 
off win10\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 511, in run\n self.launch_analysis()\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 472, in launch_analysis\n success = self.perform_analysis()\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 456, in perform_analysis\n with self.machine_running(), self.result_server(), self.network_routing(), self.run_auxiliary():\n File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n ^^^^^^^^^^^^^^\n File \"/opt/CAPEv2/lib/cuckoo/core/analysis_manager.py\", line 335, in machine_running\n raise CuckooDeadMachine(self.machine.name) from e\nlib.cuckoo.core.analysis_manager.CuckooDeadMachine: win10 is dead!\n2025-08-10 12:56:11,970 [lib.cuckoo.core.scheduler] INFO: Task #12: Failing unserviceable task\n2025-08-10 12:56:14,801 [lib.cuckoo.core.scheduler] DEBUG: # Active analysis: 0; # Pending Tasks: 0; # Specific Pending Tasks: {}; # Available Machines: 0; # Available Specific Machines: {}; # Locked Machines: 0; # Specific Locked Machines: {}; # Total Machines: 0\n2025-08-10 12:56:24,823 [lib.cuckoo.core.scheduler] DEBUG: # Active analysis: 0; # Pending Tasks: 0; # Specific Pending Tasks: {}; # Available Machines: 0; # Available Specific Machines: {}; # Locked Machines: 0; # Specific Locked Machines: {}; # Total Machines: 0\n2025-08-10 12:56:34,829 [lib.cuckoo.core.scheduler] DEBUG: # Active analysis: 0; # Pending Tasks: 0; # Specific Pending Tasks: {}; # Available Machines: 0; # Available Specific Machines: {}; # Locked Machines: 0; # Specific Locked Machines: {}; # Total Machines: 0\n2025-08-10 12:56:44,838 [lib.cuckoo.core.scheduler] DEBUG: # Active analysis: 0; # Pending Tasks: 0; # Specific Pending Tasks: {}; # Available Machines: 0; # Available Specific Machines: {}; # Locked Machines: 0; # Specific Locked Machines: {}; # Total Machines: 0\n2025-08-10 12:56:54,843 [lib.cuckoo.core.scheduler] DEBUG: # Active analysis: 0; # Pending Tasks: 0; # Specific Pending Tasks: {}; # Available Machines: 0; # Available Specific Machines: {}; # Locked Machines: 0; # Specific Locked Machines: {}; # Total Machines\nComment: so `ERROR: Task https://github.com/kevoreilly/CAPEv2/pull/12: Trying to start a virtual machine that has not been turned off win10`\n\nthis says to me that VM snapshot is not in running state, could you provide details about VM ?\nComment: When I got the mentioned error, I had the VM on while monitoring libvert logs to see if cape is able to turn it off\n\n**virsh dominfo win10**\n```\nId: 2\nName: win10\nUUID: 86226fec-7ca6-4797-9e23-16b5d3048bae\nOS Type: hvm\nState: running\nCPU(s): 4\nCPU time: 4275.0s\nMax memory: 8290304 KiB\nUsed memory: 8290304 KiB\nPersistent: yes\nAutostart: disable\nManaged save: no\nSecurity model: apparmor\nSecurity DOI: 0\nSecurity label: libvirt-86226fec-7ca6-4797-9e23-16b5d3048bae (enforcing)\nMessages: tainted: running with undesirable elevated privileges\n\n```\n\n_More verbose info from_ **virsh dumpxml win10**\n\n```\n\n win10\n 86226fec-7ca6-4797-9e23-16b5d3048bae\n \n \n \n \n \n 8290304\n 8290304\n 4\n \n /machine\n \n \n hvm\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n destroy\n restart\n destroy\n \n \n \n \n \n /usr/bin/qemu-system-x86_64\n \n \n \n \n \n \n \n \n \n \n \n
[remainder of the dumpxml output not recoverable: the XML tags were stripped]\n```
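
The timeouts above ("Waiting N cuckooseconds for machine win10 to switch to status ['running']") and the later "Trying to start a virtual machine that has not been turned off" error both revolve around the domain and snapshot state that the KVM machinery queries through libvirt. Below is a minimal diagnostic sketch, not part of CAPE, assuming the libvirt-python bindings that kvm.py already relies on and the win10/Snapshot1/qemu:///system values from the kvm.conf posted above; it checks whether the configured snapshot was taken while the guest was running, which the KVM machinery generally expects so that restoring the snapshot leaves the VM in the running state.

```python
# Minimal diagnostic sketch (not part of CAPE): check the libvirt domain and
# snapshot state that the KVM machinery relies on. The names below are taken
# from the kvm.conf in this report; adjust them for your own setup.
import sys
import xml.etree.ElementTree as ET

import libvirt  # libvirt-python bindings, already used by CAPE's KVM machinery

DSN = "qemu:///system"   # dsn from kvm.conf
LABEL = "win10"          # machine label from kvm.conf
SNAPSHOT = "Snapshot1"   # snapshot name from kvm.conf


def main() -> int:
    conn = libvirt.open(DSN)
    try:
        dom = conn.lookupByName(LABEL)
        state, _reason = dom.state()
        print(f"Current domain state: {state} (libvirt.VIR_DOMAIN_RUNNING == {libvirt.VIR_DOMAIN_RUNNING})")

        snap = dom.snapshotLookupByName(SNAPSHOT)
        snap_state = ET.fromstring(snap.getXMLDesc()).findtext("state")
        print(f"Snapshot {SNAPSHOT!r} was taken while the guest was: {snap_state}")

        if snap_state != "running":
            # CAPE restores the snapshot and then waits for the guest to reach
            # the running state, so a snapshot of a powered-off guest can time
            # out exactly as in the logs above.
            print("Snapshot is not a running-state snapshot; recreate it with the VM powered on and the agent started.")
            return 1
        return 0
    finally:
        conn.close()


if __name__ == "__main__":
    sys.exit(main())
```

If the script reports a snapshot state other than running (or fails to find the snapshot by name), recreating the snapshot with the VM powered on and the agent started is usually the first thing to try.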