feat(tools): Update dashboards with jsonpath #241

Open · wants to merge 7 commits into base: main
17 changes: 15 additions & 2 deletions README.md
@@ -21,10 +21,21 @@ _The following features are currently available in MCP server. This list is for
### Dashboards

- **Search for dashboards:** Find dashboards by title or other metadata
- **Get dashboard by UID:** Retrieve full dashboard details using its unique identifier
- **Update or create a dashboard:** Modify existing dashboards or create new ones. _Note: Use with caution due to context window limitations; see [issue #101](https://github.com/grafana/mcp-grafana/issues/101)_
- **Get dashboard by UID:** Retrieve full dashboard details using its unique identifier. _Warning: Large dashboards can consume significant context window space._
- **Get dashboard summary:** Get a compact overview of a dashboard including title, panel count, panel types, variables, and metadata without the full JSON to minimize context window usage
- **Get dashboard property:** Extract specific parts of a dashboard using JSONPath expressions (e.g., `$.title`, `$.panels[*].title`) to fetch only needed data and reduce context window consumption
- **Update or create a dashboard:** Modify existing dashboards or create new ones. _Warning: Requires full dashboard JSON which can consume large amounts of context window space._
- **Patch dashboard:** Apply specific changes to a dashboard without requiring the full JSON, significantly reducing context window usage for targeted modifications
- **Get panel queries and datasource info:** Get the title, query string, and datasource information (including UID and type, if available) from every panel in a dashboard
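The JSONPath examples above (`$.title`, `$.panels[*].title`) can be illustrated on a toy dashboard document. This is a sketch only: it uses plain Python in place of the server's JSONPath evaluator, and the sample dashboard is invented.

```python
import json

# A trimmed-down stand-in for a dashboard document returned by Grafana.
dashboard = {
    "title": "Service Overview",
    "panels": [
        {"title": "CPU", "type": "timeseries"},
        {"title": "Memory", "type": "gauge"},
    ],
}

# $.title selects the top-level title.
assert dashboard["title"] == "Service Overview"

# $.panels[*].title selects every panel's title.
panel_titles = [p["title"] for p in dashboard["panels"]]
print(panel_titles)  # -> ['CPU', 'Memory']

# Either result is much smaller than the full serialized dashboard,
# which is the point of fetching only the property you need.
print(len(json.dumps(dashboard)))
```

Either expression returns a small slice of the document instead of the whole JSON, which is what keeps context usage low.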

#### Context Window Management

The dashboard tools now include several strategies to manage context window usage effectively ([issue #101](https://github.com/grafana/mcp-grafana/issues/101)):

- **Use `get_dashboard_summary`** for dashboard overview and planning modifications
- **Use `get_dashboard_property`** with JSONPath when you only need specific dashboard parts
- **Avoid `get_dashboard_by_uid`** unless you specifically need the complete dashboard JSON
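The size difference behind these recommendations can be sketched with an invented dashboard: the summary fields below mirror the ones listed for `get_dashboard_summary` (title, panel count, panel types, variables), but the exact summary shape the server returns is an assumption here.

```python
import json

# A hypothetical 30-panel dashboard, standing in for a real one.
dashboard = {
    "title": "Service Overview",
    "panels": [
        {
            "title": f"Panel {i}",
            "type": "timeseries",
            "targets": [{"expr": "rate(http_requests_total[5m])"}],
            "gridPos": {"h": 8, "w": 12, "x": 0, "y": i * 8},
        }
        for i in range(30)
    ],
    "templating": {"list": [{"name": "env"}, {"name": "region"}]},
}

# The kind of compact overview a summary tool aims for.
summary = {
    "title": dashboard["title"],
    "panelCount": len(dashboard["panels"]),
    "panelTypes": sorted({p["type"] for p in dashboard["panels"]}),
    "variables": [v["name"] for v in dashboard["templating"]["list"]],
}

full_size = len(json.dumps(dashboard))
summary_size = len(json.dumps(summary))
print(summary_size, "<", full_size)
assert summary_size < full_size
```

The summary stays roughly constant in size as panels are added, while the full JSON grows with every panel, query, and layout field.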

### Datasources

- **List and fetch datasource information:** View all configured datasources and retrieve detailed information about each.
@@ -148,6 +159,8 @@ Scopes define the specific resources that permissions apply to. Each action requ
| `get_dashboard_by_uid` | Dashboard | Get a dashboard by uid | `dashboards:read` | `dashboards:uid:abc123` |
| `update_dashboard` | Dashboard | Update or create a new dashboard | `dashboards:create`, `dashboards:write` | `dashboards:*`, `folders:*` or `folders:uid:xyz789` |
| `get_dashboard_panel_queries` | Dashboard | Get panel title, queries, datasource UID and type from a dashboard | `dashboards:read` | `dashboards:uid:abc123` |
| `get_dashboard_property` | Dashboard | Extract specific parts of a dashboard using JSONPath expressions | `dashboards:read` | `dashboards:uid:abc123` |
| `get_dashboard_summary` | Dashboard | Get a compact summary of a dashboard without full JSON | `dashboards:read` | `dashboards:uid:abc123` |
| `list_datasources` | Datasources | List datasources | `datasources:read` | `datasources:*` |
| `get_datasource_by_uid` | Datasources | Get a datasource by uid | `datasources:read` | `datasources:uid:prometheus-uid` |
| `get_datasource_by_name` | Datasources | Get a datasource by name | `datasources:read` | `datasources:*` or `datasources:uid:loki-uid` |
1 change: 1 addition & 0 deletions go.mod
@@ -16,6 +16,7 @@ require (
github.com/grafana/pyroscope/api v1.2.0
github.com/invopop/jsonschema v0.13.0
github.com/mark3labs/mcp-go v0.36.0
github.com/oliveagle/jsonpath v0.0.0-20180606110733-2e52cf6e6852
github.com/prometheus/client_golang v1.23.0
github.com/prometheus/common v0.65.0
github.com/prometheus/prometheus v0.305.0
2 changes: 2 additions & 0 deletions go.sum
@@ -219,6 +219,8 @@ github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
github.com/oliveagle/jsonpath v0.0.0-20180606110733-2e52cf6e6852 h1:Yl0tPBa8QPjGmesFh1D0rDy+q1Twx6FyU7VWHi8wZbI=
github.com/oliveagle/jsonpath v0.0.0-20180606110733-2e52cf6e6852/go.mod h1:eqOVx5Vwu4gd2mmMZvVZsgIqNSaW3xxRThUJ0k/TPk4=
github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
72 changes: 72 additions & 0 deletions tests/dashboards_test.py
@@ -1,3 +1,4 @@
import json
import pytest
from langevals import expect
from langevals_langevals.llm_boolean import (
@@ -42,3 +43,74 @@ async def test_dashboard_panel_queries_tool(model: str, mcp_client: ClientSessio
)
print("content", content)
expect(input=prompt, output=content).to_pass(panel_queries_checker)


@pytest.mark.parametrize("model", models)
@pytest.mark.flaky(max_runs=3)
async def test_dashboard_update_with_patch_operations(model: str, mcp_client: ClientSession):
"""Test that LLMs naturally use patch operations for dashboard updates"""
tools = await get_converted_tools(mcp_client)

# First, create a non-provisioned test dashboard by copying the demo dashboard
# 1. Get the demo dashboard JSON
demo_result = await mcp_client.call_tool("get_dashboard_by_uid", {"uid": "fe9gm6guyzi0wd"})
demo_data = json.loads(demo_result.content[0].text)
dashboard_json = demo_data["dashboard"]

# 2. Remove uid and id to create a new dashboard
if "uid" in dashboard_json:
del dashboard_json["uid"]
if "id" in dashboard_json:
del dashboard_json["id"]

# 3. Set a new title
    title = "Test Dashboard"
dashboard_json["title"] = title
dashboard_json["tags"] = ["python-integration-test"]

# 4. Create the dashboard in Grafana
create_result = await mcp_client.call_tool("update_dashboard", {
"dashboard": dashboard_json,
"folderUid": "",
"overwrite": False
})
create_data = json.loads(create_result.content[0].text)
created_dashboard_uid = create_data["uid"]

# 5. Update the dashboard title
updated_title = f"Updated {title}"
title_prompt = f"Update the title of the Test Dashboard to {updated_title}. Search for the dashboard by title first."

messages = [
Message(role="system", content="You are a helpful assistant"),
Message(role="user", content=title_prompt),
]

# 6. Search for the test dashboard
messages = await llm_tool_call_sequence(
model, messages, tools, mcp_client, "search_dashboards",
{"query": title}
)

# 7. Update the dashboard using patch operations
messages = await llm_tool_call_sequence(
model, messages, tools, mcp_client, "update_dashboard",
{
"uid": created_dashboard_uid,
"operations": [
{
"op": "replace",
"path": "$.title",
"value": updated_title
}
]
}
)

# 8. Final LLM response - just verify it completes successfully
response = await acompletion(model=model, messages=messages, tools=tools)
content = response.choices[0].message.content

# Test passes if we get here - the tool call sequence worked correctly
assert len(content) > 0, "LLM should provide a response after updating the dashboard"
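The `operations` payload in step 7 uses a JSONPath-addressed patch shape (`op`/`path`/`value`). A minimal sketch of how such a `replace` operation could be applied to a dashboard document — the helper below is hypothetical, not part of the MCP server, and only handles top-level `$.field` paths like the `$.title` used in the test:

```python
def apply_patch(dashboard: dict, operations: list[dict]) -> dict:
    """Apply 'replace' operations addressed by simple top-level JSONPath paths.

    Hypothetical helper for illustration: it only supports paths of the
    form "$.field", which is enough to mirror the test's title replacement.
    """
    for op in operations:
        if op["op"] == "replace" and op["path"].startswith("$."):
            key = op["path"][2:]  # "$.title" -> "title"
            dashboard[key] = op["value"]
        else:
            raise ValueError(f"unsupported operation: {op}")
    return dashboard


dash = {"title": "Test Dashboard", "tags": ["python-integration-test"]}
patched = apply_patch(
    dash,
    [{"op": "replace", "path": "$.title", "value": "Updated Test Dashboard"}],
)
print(patched["title"])  # -> Updated Test Dashboard
```

Because only the targeted fields travel in the request, a patch like this avoids round-tripping the full dashboard JSON through the model's context.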
