@@ -0,0 +1,35 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

## [0.1.0] - 2025-11-25

### Added
- Initial release of CrewAI instrumentation
- Wrapper-based instrumentation for CrewAI workflows, agents, tasks, and tools
- Support for `Crew.kickoff()` → `Workflow` spans
- Support for `Task.execute_sync()` → `Step` spans
- Support for `Agent.execute_task()` → `AgentInvocation` spans
- Support for `BaseTool.run()` and `CrewStructuredTool.invoke()` → `ToolCall` spans
- Integration with `splunk-otel-util-genai` for standardized GenAI telemetry
- Proper trace context propagation using `contextvars`
- Rich span attributes for all CrewAI components
- Defensive instrumentation that doesn't break applications on errors

### Documentation
- Comprehensive README with usage examples
- Compositional instrumentation patterns (CrewAI + OpenAI + Vector Stores)
- Configuration and environment variable documentation

### Limitations
- Synchronous workflows only (async support planned for a future release)
- LLM calls not instrumented (use provider-specific instrumentation)

[Unreleased]: https://github.com/signalfx/splunk-otel-python-contrib/compare/v0.1.0...HEAD
[0.1.0]: https://github.com/signalfx/splunk-otel-python-contrib/releases/tag/v0.1.0

@@ -0,0 +1,198 @@
OpenTelemetry CrewAI Instrumentation
=====================================

|pypi|

.. |pypi| image:: https://badge.fury.io/py/splunk-otel-instrumentation-crewai.svg
:target: https://pypi.org/project/splunk-otel-instrumentation-crewai/

This library provides OpenTelemetry instrumentation for `CrewAI <https://github.com/joaomdmoura/crewAI>`_,
a framework for orchestrating autonomous AI agents.

Installation
------------

.. code-block:: bash

    pip install splunk-otel-instrumentation-crewai


Usage
-----

.. code-block:: python

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
    from crewai import Agent, Task, Crew

    # Instrument CrewAI
    CrewAIInstrumentor().instrument()

    # Create your crew
    agent = Agent(
        role="Research Analyst",
        goal="Provide accurate research",
        backstory="Expert researcher with attention to detail",
    )

    task = Task(
        description="Research the latest AI trends",
        expected_output="A comprehensive report on AI trends",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    # Run your crew - telemetry is automatically captured
    result = crew.kickoff()
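
The snippet above assumes a tracer provider and exporter are configured elsewhere in your
application. If you are starting from scratch, a minimal sketch using only the standard
OpenTelemetry SDK with a console exporter (nothing specific to this package) looks roughly like this:

.. code-block:: python

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor

    # Print finished spans to stdout; swap in an OTLP exporter for a real backend.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    CrewAIInstrumentor().instrument(tracer_provider=provider)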


What Gets Instrumented
-----------------------

This instrumentation captures:

- **Crews** → Mapped to ``Workflow`` spans
- **Tasks** → Mapped to ``Step`` spans
- **Agents** → Mapped to ``AgentInvocation`` spans
- **Tool Usage** → Mapped to ``ToolCall`` spans

All spans are properly nested with correct parent-child relationships and include
rich attributes about the operation.


Compositional Instrumentation
------------------------------

This instrumentation focuses on CrewAI's workflow orchestration. For complete observability, combine it with instrumentation for the underlying LLM and vector store clients:

**CrewAI Only**

.. code-block:: python

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor

    CrewAIInstrumentor().instrument()

Provides workflow structure but no LLM call details.

**CrewAI + OpenAI**

.. code-block:: python

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
    from opentelemetry.instrumentation.openai import OpenAIInstrumentor

    CrewAIInstrumentor().instrument()
    OpenAIInstrumentor().instrument()

Adds LLM call spans with token usage, model names, and latency metrics.

**Full Stack (CrewAI + OpenAI + Vector Store)**

.. code-block:: python

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
    from opentelemetry.instrumentation.openai import OpenAIInstrumentor
    from opentelemetry.instrumentation.chromadb import ChromaDBInstrumentor

    CrewAIInstrumentor().instrument()
    OpenAIInstrumentor().instrument()
    ChromaDBInstrumentor().instrument()

Provides complete RAG workflow visibility, including vector store operations.


Configuration
-------------

Environment Variables
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

    # Disable CrewAI's built-in telemetry (recommended)
    export CREWAI_DISABLE_TELEMETRY=true
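
The same flag can also be set from Python. A small sketch; note that ``CREWAI_DISABLE_TELEMETRY``
belongs to CrewAI itself, not to this package, so set it before CrewAI initializes its built-in
telemetry:

.. code-block:: python

    import os

    # Set before importing CrewAI so its built-in telemetry sees the flag.
    os.environ.setdefault("CREWAI_DISABLE_TELEMETRY", "true")

    from crewai import Agent, Crew, Task  # noqa: E402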


Instrumentation Options
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor

    # Basic instrumentation
    CrewAIInstrumentor().instrument()

    # With custom tracer provider
    CrewAIInstrumentor().instrument(tracer_provider=my_tracer_provider)

    # Uninstrumentation
    CrewAIInstrumentor().uninstrument()
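
``my_tracer_provider`` above can be any SDK ``TracerProvider``. As a sketch, one wired to an OTLP
exporter (this assumes the separate ``opentelemetry-exporter-otlp`` package is installed and an
endpoint such as the one in ``OTEL_EXPORTER_OTLP_ENDPOINT`` is reachable) could look like:

.. code-block:: python

    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor

    # The exporter endpoint falls back to OTEL_EXPORTER_OTLP_ENDPOINT
    # (e.g. http://localhost:4317).
    my_tracer_provider = TracerProvider()
    my_tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))

    CrewAIInstrumentor().instrument(tracer_provider=my_tracer_provider)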


Requirements
------------

- Python >= 3.9
- CrewAI >= 0.70.0
- OpenTelemetry API >= 1.38
- ``splunk-otel-util-genai`` >= 0.1.4


Trace Hierarchy Example
------------------------

.. code-block::

    Crew: Customer Support (Workflow)
    ├── Task: inquiry_resolution (Step)
    │   └── Agent: Senior Support Representative
    │       ├── LLM: gpt-4o-mini (via openai-instrumentation)
    │       └── Tool: docs_scrape
    └── Task: quality_assurance (Step)
        └── Agent: QA Specialist
            └── LLM: gpt-4o-mini (via openai-instrumentation)


Each span includes rich attributes:

- ``gen_ai.system`` = "crewai"
- ``gen_ai.operation.name`` = "invoke_workflow" | "invoke_agent" | "execute_tool"
- Framework-specific attributes (agent role, task description, tool names, etc.)
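
A quick way to inspect these attributes locally, without an observability backend, is the SDK's
in-memory exporter. A sketch (handy in tests; the attribute keys are those listed above):

.. code-block:: python

    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor
    from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

    from opentelemetry.instrumentation.crewai import CrewAIInstrumentor

    exporter = InMemorySpanExporter()
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(exporter))
    CrewAIInstrumentor().instrument(tracer_provider=provider)

    # ... build your crew and call crew.kickoff() here ...

    for span in exporter.get_finished_spans():
        print(span.name, span.attributes.get("gen_ai.operation.name"))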


Limitations
-----------

- **Async Support**: Currently supports synchronous workflows only. Async support
  (``kickoff_async()``) is planned for a future release.
- **LLM Calls**: Not instrumented here. Use provider-specific instrumentation
  (e.g., ``opentelemetry-instrumentation-openai``).


Contributing
------------

Contributions are welcome! Please ensure:

- All tests pass
- Code follows project style guidelines
- Instrumentation is defensive (catches exceptions)
- Documentation is updated


Links
-----

- `CrewAI Documentation <https://docs.crewai.com/>`_
- `OpenTelemetry Python <https://opentelemetry.io/docs/languages/python/>`_
- `Splunk GenAI Utilities <https://github.com/signalfx/splunk-otel-python-contrib>`_


License
-------

Apache-2.0

@@ -0,0 +1,7 @@
CREWAI_DISABLE_TELEMETRY=true
OPENAI_API_KEY=
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
PYTHONUNBUFFERED=1
OTEL_SERVICE_NAME=crewai-examples
DEEPEVAL_TELEMETRY_OPT_OUT="YES"
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
@@ -0,0 +1,35 @@
FROM python:3.12-slim

WORKDIR /app

# Install git for pip dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
    && rm -rf /var/lib/apt/lists/*

# Copy only the CrewAI instrumentation package and example
COPY instrumentation-genai/opentelemetry-instrumentation-crewai /app/opentelemetry-instrumentation-crewai

# Set working directory to examples
WORKDIR /app/opentelemetry-instrumentation-crewai/examples

# Install Python dependencies (including genai utils from PyPI)
RUN pip install --no-cache-dir -r requirements.txt

# Install local CrewAI instrumentation package
RUN pip install --no-cache-dir /app/opentelemetry-instrumentation-crewai

# Verify packages are installed correctly
RUN python3 -c "from opentelemetry.instrumentation.crewai import CrewAIInstrumentor; print('✓ CrewAI instrumentation available')" && \
    python3 -c "from opentelemetry.util.genai.handler import get_telemetry_handler; print('✓ GenAI handler available (from PyPI)')"

# Set default environment variables
ENV OTEL_PYTHON_LOG_CORRELATION=true \
    OTEL_PYTHON_LOG_LEVEL=info \
    OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
    PYTHONUNBUFFERED=1 \
    CREWAI_DISABLE_TELEMETRY=true

# Run the customer support example
CMD ["python3", "customer_support.py"]
