## Usage

- Once you've followed the setup instructions below, you can run the application
- Run the following from the repository's root:

  ```shell
  python -m t3 'Rekorde im Tierreich.gme' workdir
  ```

- This will translate `Rekorde im Tierreich.gme` and store the translated GME and intermediate files in the `workdir` directory
- Run `python -m t3 -h` to see all available options
- Alternatively, run the application from a Docker container (see instructions below)
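The CLI invocation above can also be assembled programmatically, which is handy for batch-translating several GME files. The helper below is a hypothetical sketch (not part of t3) that just builds the same argument list the Usage section shows:

```python
import shlex
import subprocess

def build_t3_command(gme_path: str, workdir: str) -> list[str]:
    # Hypothetical helper: builds the same invocation as the command above.
    return ["python", "-m", "t3", gme_path, workdir]

cmd = build_t3_command("Rekorde im Tierreich.gme", "workdir")
print(shlex.join(cmd))  # python -m t3 'Rekorde im Tierreich.gme' workdir
# To actually run it (requires the setup below):
# subprocess.run(cmd, check=True)
```

`shlex.join` quotes the filename containing spaces, so the printed line can be pasted straight into a shell.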
## Setup

- Clone this repo with submodules:

  ```shell
  git clone --recurse-submodules [email protected]:jtomori/t3.git
  ```

- Install dependencies:

  ```shell
  sudo apt install sox ffmpeg
  pip install numpy typing_extensions
  pip install -r requirements.txt
  ```

- Store the SeamlessExpressive models in the `SeamlessExpressive` folder in the repository's root
- Compile `libtiptoi.c`:

  ```shell
  gcc tip-toi-reveng/libtiptoi.c -o libtiptoi
  ```
## Docker

- GPU inference requires the NVIDIA Container Toolkit
- Build the image with:

  ```shell
  docker build -t t3 .
  ```

- Run the container with:

  ```shell
  docker run --runtime=nvidia --gpus all --volume ./SeamlessExpressive:/app/SeamlessExpressive --volume ./gme:/app/gme --volume ./workdir:/app/workdir --rm --name t3 t3 gme/name_of_file.gme workdir
  ```

- Make sure that the `gme`, `SeamlessExpressive`, and `workdir` directories are present in your current directory
- `workdir` will contain the translated GME file along with intermediate files and a CSV report
- Omit `--runtime=nvidia --gpus all` to perform CPU inference
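The `docker run` invocation above has several moving parts (GPU flags, three bind mounts, then the in-container arguments). As an illustrative sketch, the hypothetical helper below (not part of t3) assembles that argument list and makes the GPU/CPU switch explicit:

```python
def build_docker_run(gme_file: str, workdir: str = "workdir", gpu: bool = True) -> list[str]:
    # Hypothetical helper mirroring the docker run command shown above.
    cmd = ["docker", "run"]
    if gpu:
        # Omit these flags for CPU inference.
        cmd += ["--runtime=nvidia", "--gpus", "all"]
    # Bind-mount the three directories the container expects under /app.
    for name in ("SeamlessExpressive", "gme", "workdir"):
        cmd += ["--volume", f"./{name}:/app/{name}"]
    # Image name, then the arguments passed through to t3 inside the container.
    cmd += ["--rm", "--name", "t3", "t3", gme_file, workdir]
    return cmd

print(" ".join(build_docker_run("gme/name_of_file.gme")))
```

Calling `build_docker_run("gme/name_of_file.gme", gpu=False)` drops the two GPU flags, matching the CPU-inference note above.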
## Tests

- Make sure that the tests and checks pass:

  ```shell
  python tests.py
  ./checks.sh
  ```
## Changelog

- Finished setup for running GPU (or CPU) inference from a Docker container
- Initial release