Improve speed #40

@thewh1teagle

Description

Currently, kokoro-onnx inference takes ~2 s per generation on macOS (M1), and it uses only ~30% of the CPU / GPU (with the CoreML / CPU execution providers).

Things we tried:

  • Increasing the number of threads
  • Enabling PARALLEL execution mode

Related:

Metadata

Labels: feature (further information is requested)