qapyq

qapyq (pronounced "CapPic") is an image viewer and AI-assisted editing tool that helps with curating datasets for generative AI models, finetunes and LoRAs.




Screenshot of qapyq with its 5 windows open.

  • Edit captions quickly with drag-and-drop support; select one-of-many; apply sorting and filtering rules
  • Quick cropping; image comparison; draw masks manually or apply automatic detection and segmentation
  • Transform tags using conditional rules; Multi-Edit and Focus Mode

Features

  • Image Viewer: Display and navigate images

    • Quick-starting desktop application built with Qt
    • Runs smoothly with tens of thousands of images
    • Modular interface that lets you place windows on different monitors
    • Open multiple tabs
    • Zoom/pan and fullscreen mode
    • Gallery with thumbnails and optional captions
    • Semantic image sorting with text prompts
    • Compare two images
    • Measure size, area and pixel distances
    • Slideshow
  • Image/Mask Editor: Prepare images for training

    • Crop and save parts of images
    • Scale images, optionally using AI upscale models
    • Dynamic save paths with template variables
    • Manually edit masks with multiple layers
    • Support for pressure-sensitive drawing pens
    • Record masking operations into macros
    • Automated masking
  • Captioning: Describe images with text

    • Edit captions manually with drag-and-drop support
    • Multi-Edit Mode for editing captions of multiple images simultaneously
    • Focus Mode where one keystroke adds a tag, saves the file and skips to the next image
    • Tag grouping, merging, sorting, filtering and replacement rules
    • Colored text highlighting
    • CLIP Token Counter
    • Automated captioning with support for grounding
    • Prompt presets
    • Multi-turn conversations with each answer saved to different entries in a .json file
    • Further refinement with LLMs
  • Stats/Filters: Summarize your data and get an overview

    • List all tags, image resolutions, masked regions, or sizes of concept folders
    • Filter images and create subsets
    • Combine and chain filters
    • Export the summaries as CSV
  • Batch Processing: Process whole folders at once

    • Flexible batch captioning, tagging and transformation
    • Batch scaling of images
    • Batch masking with user-defined macros
    • Batch cropping of images using your macros
    • Copy and move files, create symlinks, ZIP captions for backups
  • AI Assistance:

    • Support for state-of-the-art captioning and masking models
    • Model and sampling settings, GPU acceleration with CPU offload support
    • On-the-fly NF4 and INT8 quantization
    • Run inference locally and/or on multiple remote machines over SSH
    • Separate inference subprocess isolates potential crashes and allows complete VRAM cleanup
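The subprocess isolation mentioned above can be sketched in a few lines. This is a minimal illustration, not qapyq's actual code: the worker function and message protocol are hypothetical stand-ins for a real captioning model.

```python
import multiprocessing as mp

def _inference_worker(conn):
    # A real worker would load captioning/masking models here.
    # All (V)RAM is owned by this child process, so when it exits
    # the memory is released completely, and a crash inside it
    # cannot take the GUI process down.
    while True:
        task = conn.recv()
        if task is None:
            break
        conn.send(f"caption for {task!r}")  # placeholder model output

def run_isolated(image_path):
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=_inference_worker, args=(child_conn,))
    proc.start()
    parent_conn.send(image_path)
    result = parent_conn.recv()
    parent_conn.send(None)  # tell the worker to shut down
    proc.join()
    return result
```

Because the model lives in a separate process, "VRAM cleanup" reduces to letting that process exit; nothing in the parent process holds GPU memory.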

Supported Models

These are the supported architectures with links to the original models.
Find more specialized finetuned models on huggingface.co.

Setup

Requires Python 3.10 or later.

By default, prebuilt packages for CUDA 12.8 are installed. If you need a different CUDA version, change the URLs in requirements-pytorch.txt and requirements-flashattn.txt before running the setup script.
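As an example, if requirements-pytorch.txt points at PyTorch's wheel index (the exact lines in your file may differ, so check its actual contents), switching from CUDA 12.8 to CUDA 12.1 would look like:

```
# Before (CUDA 12.8):
--extra-index-url https://download.pytorch.org/whl/cu128
# After (CUDA 12.1):
--extra-index-url https://download.pytorch.org/whl/cu121
```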

  1. Git clone or download this repository.
  2. Run setup.sh on Linux, setup.bat on Windows.
    • Packages are installed into a virtual environment.

The setup script will ask you a couple of questions.
You can choose to install only the GUI and image-processing packages without AI assistance, or, when installing on a headless server for remote inference, only the backend.

If the setup scripts didn't work for you but you got it running manually, please raise an issue and share your solution.

Startup

  • Linux: run.sh
  • Windows: run.bat or run-console.bat

You can open files or folders directly in qapyq by associating the file types with the respective run script in your OS. For shortcuts, icons are available in the qapyq/res folder.
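On Linux, such a file association can be set up with a desktop entry. A minimal sketch, assuming an install path of /opt/qapyq; the icon filename is also an assumption, so check the res folder for the actual name:

```
[Desktop Entry]
Type=Application
Name=qapyq
Exec=/opt/qapyq/run.sh %f
Icon=/opt/qapyq/res/qapyq.png
Categories=Graphics;Viewer;
```

The %f field code passes the selected file's path to the run script.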

Update

If git was used to clone the repository, simply use git pull to update.
If the repository was downloaded as a zip archive, download it again and replace the installed files.

To update the installed packages in the virtual environment, run the setup script again.

New dependencies may be added over time. If the program fails to start or crashes after an update, run the setup script again to install the missing packages.

User Guide

More information is available in the Wiki.
Use the page index on the right side to find topics and navigate the Wiki.

How to setup and configure AI models: Model Setup

How to use qapyq: User Guide

How to caption with qapyq: Captioning

How to use qapyq's features in a workflow: Tips and Workflows

If you have questions, please ask in the Discussions.

Planned Features

  • Natural sorting of files
  • Gallery list view with captions
  • Summary and stats of captions and tags
  • Shortcuts and improved ease-of-use
  • AI-assisted mask editing
  • Overlays (difference images) for the comparison tool
  • Image resizing
  • Run inference on remote machines
  • Adapt new captioning and masking models
  • Possibly a plugin system for new tools
  • Integration with ComfyUI
  • Docs, Screenshots, Video Guides
