Releases: roboflow/inference

v0.9.14

23 Feb 17:40
74032e7

🚀 Added

LMMs (GPT-4V and CogVLM) 🤝 workflows

Now, Roboflow workflows make LMM integration easy 💪. Just look at the demo! 🤯

[Demo video: lmms_in_workflows.mp4]

As always, we encourage you to visit the workflows docs 📖 and examples.

This is how to create a multi-functional app with workflows and LMMs. First, start the server:

inference server start

Then run the following script:

from inference_sdk import InferenceHTTPClient

LOCAL_CLIENT = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001", 
    api_key=ROBOFLOW_API_KEY,
)
FLEXIBLE_SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        { "type": "InferenceImage", "name": "image" },
        { "type": "InferenceParameter", "name": "open_ai_key" },
        { "type": "InferenceParameter", "name": "lmm_type" },
        { "type": "InferenceParameter", "name": "prompt" },
        { "type": "InferenceParameter", "name": "expected_output" },
    ],
    "steps": [     
        {
            "type": "LMM",
            "name": "step_1",
            "image": "$inputs.image",
            "lmm_type": "$inputs.lmm_type",
            "prompt": "$inputs.prompt",
            "json_output": "$inputs.expected_output",
            "remote_api_key": "$inputs.open_ai_key",
        },
    ],
    "outputs": [
        { "type": "JsonField", "name": "structured_output", "selector": "$steps.step_1.structured_output" },
        { "type": "JsonField", "name": "llm_output", "selector": "$steps.step_1.*" },
    ]   
}

response_gpt = LOCAL_CLIENT.infer_from_workflow(
    specification=FLEXIBLE_SPECIFICATION,
    images={
        "image": cars_image,
    },
    parameters={
        "open_ai_key": OPEN_AI_KEY,
        "lmm_type": "gpt_4v",
        "prompt": "You are supposed to act as object counting expert. Please provide number of **CARS** visible in the image",
        "expected_output": {
            "objects_count": "Integer value with number of objects",
        }
    }
)
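
Since lmm_type, prompt, and expected_output are plain workflow inputs, the same specification can be reused with a different model. Below is a minimal sketch of reading the structured output and switching to CogVLM - the "cog_vlm" identifier and the assumption that a locally hosted CogVLM ignores the remote API key are ours, so verify them against the workflows docs:

# Sketch only: key names mirror the "outputs" declared above; the exact
# nesting of the response (e.g. per-image lists) may differ.
print(response_gpt["structured_output"])

response_cogvlm = LOCAL_CLIENT.infer_from_workflow(
    specification=FLEXIBLE_SPECIFICATION,
    images={"image": cars_image},
    parameters={
        "open_ai_key": None,  # assumed to be ignored by a locally hosted CogVLM
        "lmm_type": "cog_vlm",  # assumed identifier - check the docs
        "prompt": "Please provide the number of **CARS** visible in the image",
        "expected_output": {"objects_count": "Integer value with number of objects"},
    },
)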

🔨 Fixed

  • turned off instant page so the cookbook page loads properly, by @onuralpszr in #275 (thanks for the contribution 🥇)
  • fixed a bug in workflows that broke cropping in multi-detection set-ups

Full Changelog: v0.9.13...v0.9.14

v0.9.13

16 Feb 16:04
9f0265a

🚀 Added

YOLO World 🤝 workflows

We've introduced the YOLO World model into workflows, making it trivially easy to use it like any other object-detection model ☺️

To try this out, install dependencies first:

pip install inference-sdk inference-cli

Start the server:

inference server start

And run the script:

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(api_url="http://127.0.0.1:9001", api_key="YOUR_API_KEY")

YOLO_WORLD = {
    "specification": {
        "version": "1.0",
        "inputs": [
            { "type": "InferenceImage", "name": "image" },
            { "type": "InferenceParameter", "name": "classes" },
            { "type": "InferenceParameter", "name": "confidence", "default_value": 0.003 },
        ],
        "steps": [
            {
                "type": "YoloWorld",
                "name": "step_1",
                "image": "$inputs.image",
                "class_names": "$inputs.classes",
                "confidence": "$inputs.confidence",
            },
        ],
        "outputs": [
            { "type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions" },
        ]   
    }
}

response = CLIENT.infer_from_workflow(
    specification=YOLO_WORLD["specification"],
    images={
        "image": frame,
    },
    parameters={
        "classes": ["yellow filling", "black hole"]  # each time you may specify different classes!
    }
)
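
Once the response arrives, consuming the detections can be as simple as the sketch below - it assumes each entry follows the standard Roboflow object-detection format (class, confidence, x, y, width, height); the exact nesting of the workflow response may differ:

# Sketch only: verify the response structure against the workflows docs.
for detection in response["predictions"]:
    print(detection["class"], detection["confidence"])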

Check the details in the documentation 📖 and discover usage examples.

🏆 Contributors

@PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.12...v0.9.13

v0.9.12

16 Feb 13:59
2e49f85

🚀 Added

inference cookbook

Visit our cookbook 🧑‍🍳


🔨 Fixed

In this release, we are fixing issues spotted in the YOLO World model released in v0.9.11, in particular:

  • a bug with hashing of YOLO World classes that, in some cases, made it impossible to run inference due to improper caching of CLIP embeddings
  • a bug in YOLO World pre-processing of colour channels that caused the model to misinterpret prompted colours

🏆 Contributors

@capjamesg (James Gallagher), @PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.11...v0.9.12

v0.9.12rc3

16 Feb 13:29
127776a
Pre-release

Fixed embeddings hashing

v0.9.12rc2

16 Feb 13:25
e6ecbb1
Pre-release

Fixed hashing of text embeddings

v0.9.12rc1

16 Feb 12:17
5945994
Pre-release

Release candidate with a fix to YOLO World pre-processing

v0.9.11

16 Feb 00:33
4ac428b

🚀 Added

YOLO World in inference

Have you heard about the YOLO World model? 🤔 If not, you will probably be interested to learn something about it! Our blog post 📰 may be a good starting point❗

The great news is that YOLO World is already integrated with inference. The model can perform zero-shot detection of the classes specified as an inference parameter. Thanks to that, you can start making videos like this right now 🚀

[Demo video: yellow-filling-output-1280x720.mp4]

Simply install dependencies:

pip install inference-sdk inference-cli

Start the server:

inference server start

And run inference against our HTTP server:

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
result = client.infer_from_yolo_world(
    inference_input=YOUR_IMAGE,
    class_names=["dog", "cat"],
)
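
From there you can hand the result to supervision for annotation. A minimal sketch continuing the snippet above, assuming a supervision version that exposes Detections.from_inference for this response format:

import cv2
import supervision as sv

image = cv2.imread("your_image.jpg")  # placeholder path
# If the client returns a list of responses (one per input), take the first.
detections = sv.Detections.from_inference(result)  # assumes a compatible supervision version
annotated = sv.BoundingBoxAnnotator().annotate(scene=image, detections=detections)
cv2.imwrite("annotated.jpg", annotated)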

Active Learning 🤝 workflows

Active Learning data collection made simple with workflows 🔥 Now, with just a little bit of configuration, you can start collecting data to improve your model over time. Just take a look at how easy it is:

[Demo video: active_learning_in_workflows.mp4]

Key features:

  • works with all models supported on the Roboflow platform, including ones from Roboflow Universe - making it trivial to use an off-the-shelf model during the project kick-off stage to collect a dataset while serving meaningful predictions
  • combines well with multiple workflows blocks - including DetectionsConsensus - making it possible to sample based on the predictions of a models ensemble 💥
  • the Active Learning block may use the project-level Active Learning config or define the Active Learning strategy directly in the block definition (refer to the Active Learning documentation 📖 for details on how to configure data collection)

See the documentation 📖 of the new ActiveLearningDataCollector to find detailed info.
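
For orientation, here is a sketch of what wiring the block into a specification might look like - the field names beyond type and name follow the patterns shown in the other workflow examples but are assumptions to verify against the block's documentation:

# Hypothetical step definition - verify field names against the docs.
ACTIVE_LEARNING_STEP = {
    "type": "ActiveLearningDataCollector",
    "name": "al_data_collection",
    "image": "$inputs.image",
    "predictions": "$steps.step_1.predictions",
    "target_dataset": "my_project",  # placeholder dataset name
}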

🌱 Changed

InferencePipeline now works with all models supported at Roboflow platform 🎆

For a long time, InferencePipeline worked only with object-detection models. This is no longer the case - from now on, other types of models supported on the Roboflow platform (including stubs, like my-project/0) work under InferencePipeline. No changes are required in existing code. Just pass the model_id of your model and the pipeline should work. Sinks suited for detection-only models were adjusted to ignore non-compliant prediction formats and produce warnings notifying about the incompatibility.
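
A minimal sketch of what this looks like in code (the model ID, video path, and API key are placeholders):

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="my-project/0",  # any supported model type - stubs included
    video_reference="path/to/video.mp4",  # placeholder path
    on_prediction=render_boxes,  # detection-oriented sink; now warns on incompatible predictions
    api_key="YOUR_API_KEY",
)
pipeline.start()
pipeline.join()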

🔨 Fixed

  • Bug in yolact model in #266

🏆 Contributors

@paulguerrie (Paul Guerrie), @probicheaux (Peter Robicheaux), @PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.10...v0.9.11

v0.9.10

13 Feb 18:41

🚀 Added

inference Benchmarking 🏃‍♂️

A new command has been added to inference-cli for benchmarking performance. Now you can test inference in different environments with different configurations and measure its performance. Look at us testing the speed and scalability of hosted inference on the Roboflow platform 🤯

[Demo video: scaling_of_hosted_roboflow_platform.mov]

Run your own benchmark with a simple command:

inference benchmark python-package-speed -m coco/3 

See the docs for more details.

🌱 Changed

  • Improved serialisation logic for requests and responses, which helps the Roboflow platform improve model monitoring

🔨 Fixed

  • bug #260 causing inference API instability in multiple-workers setups and when shuffling a large number of models - from now on, the API container should not raise strange HTTP 5xx errors due to model management
  • faulty logic for getting request_id, causing errors in the parallel-http container

🏆 Contributors

@paulguerrie (Paul Guerrie), @SolomonLake (Solomon Lake ), @robiscoding (Rob Miller) @PawelPeczek-Roboflow (Paweł Pęczek)

Full Changelog: v0.9.9...v0.9.10

v0.9.10rc3

12 Feb 19:42
ac39653
Pre-release

This is a pre-release version that mainly addresses some instabilities in the model manager.

What's Changed

Full Changelog: v0.9.9...v0.9.10rc3

v0.9.9

07 Feb 17:22
7900225

🚀 Added

Roboflow workflows 🤖

A new way to create ML pipelines without writing code. Declare the sequence of models and intermediate processing steps using a JSON config and execute it using an inference container (or the hosted Roboflow platform). No Python code needed! 🤯 Just watch our feature preview:

[Feature preview video: workflows_feature_preview.mp4]
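
To give a flavour of the format, here is a sketch of a minimal two-step specification (detect objects, then crop them) - the block and field names are modelled on documented examples but should be treated as assumptions to verify against the workflows docs:

# Sketch only - verify block names and selectors against the documentation.
SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        { "type": "InferenceImage", "name": "image" },
    ],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "detection",
            "image": "$inputs.image",
            "model_id": "coco/3",  # placeholder model
        },
        {
            "type": "Crop",
            "name": "cropping",
            "image": "$inputs.image",
            "detections": "$steps.detection.predictions",
        },
    ],
    "outputs": [
        { "type": "JsonField", "name": "crops", "selector": "$steps.cropping.crops" },
    ],
}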

Want to experiment more?

pip install inference-cli

inference server start --dev

Hit http://127.0.0.1:9001 in your browser, then click the Jump Into an Inference Enabled Notebook → button and open the notebook named workflows.ipynb.

We encourage you to read our documentation 📖 to discover the full potential of Roboflow workflows.

This feature is still under heavy development. Your feedback is needed to make it better!

Take inference to the cloud with one command 🚀

Yes, you got it right. The inference-cli package now provides a set of inference cloud commands to deploy the required infrastructure without effort.

Just:

pip install inference-cli

And, depending on your needs, use:

inference cloud deploy --provider aws --compute-type gpu
# or
inference cloud deploy --provider gcp --compute-type cpu

With the example posted here, we are just scratching the surface - visit our docs 📖 where more examples are presented.

🔥 YOLO-NAS is coming!

  • We plan to onboard YOLO-NAS onto the Roboflow platform. In this release we are introducing the foundational work to make that happen. Stay tuned!

supervision 🤝 inference

We've extended the capabilities of the inference infer command of the inference-cli package. Now it can run inference against images, directories of images, and videos; visualise predictions using supervision; and save them in a location of your choice.

What does it take to get your predictions?

pip install inference-cli

# start the server
inference server start 

# run inference
inference infer -i {PATH_TO_VIDEO} -m coco/3 -c bounding_boxes_tracing -o {OUTPUT_DIRECTORY} -D

There are plenty of configuration options that can alter the visualisation. You can use predefined configs (example: -c bounding_boxes_tracing) or create your own. See our docs 📖 to discover all options.

🌱 Changed

  • breaking: Pydantic 2: Inference now depends on pydantic>=2.
  • breaking: Default values of parameters (like confidence, iou_threshold, etc.) that were set for newer parts of inference (including inference HTTP container endpoints) were aligned with the more reasonable defaults used by the hosted Roboflow platform. This makes the experience of using inference consistent with the Roboflow platform. It will, however, alter the behaviour of the package for clients that do not specify their own parameter values when making predictions. Summary: confidence now defaults to 0.4 and iou_threshold to 0.3. We encourage clients using self-hosted containers to evaluate results on their end - see the sketch after this list for pinning values explicitly. Changes to be inspected here.
  • API calls to HTTP endpoints with Roboflow models now accept a disable_active_learning flag that prevents Active Learning from being active for a specific request (also shown in the sketch after this list)
  • Documentation 📖 was refreshed. The redesign is supposed to make the content easier to comprehend. We would love some feedback 🙏
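
A minimal sketch of pinning thresholds explicitly and opting out of Active Learning via inference-sdk - this assumes InferenceConfiguration exposes these fields, so verify against the SDK docs:

from inference_sdk import InferenceHTTPClient, InferenceConfiguration

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001", api_key="YOUR_API_KEY")
custom_configuration = InferenceConfiguration(
    confidence_threshold=0.5,      # override the new 0.4 default explicitly
    iou_threshold=0.5,             # override the new 0.3 default explicitly
    disable_active_learning=True,  # assumed field name - skips Active Learning
)
with client.use_configuration(custom_configuration):
    result = client.infer("image.jpg", model_id="coco/3")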

🔨 Fixed

  • breaking: Fixed issue #260 - a bug introduced in v0.9.3 that caused classification models with 10 or more classes to assign the wrong class name to predictions (despite maintaining correct class ids). Clients relying on class name instead of class_id of predictions were affected.
  • breaking: Fixed the typo coglvm -> cogvlm in the inference-sdk HTTP client method name prompt_cogvlm(...)

Full Changelog: v0.9.8...v0.9.9