
Improved error logging - add more details including index/data stream #18465

@oshmyrko

Description


I have a Logstash pipeline that reads logs from Elasticsearch data streams and uploads them to an AWS S3 bucket.
Recently, I added one more data stream name to the index option in the input section and started getting errors saying the user does not have enough permissions. The confusing part is that the user does have enough permissions: when I run the pipeline separately for the new and the old data streams, everything works well.

Original log message

[2025-12-04T15:10:07,921][ERROR][logstash.inputs.elasticsearch.searchafter][my-logs][10655218...]
Tried search_after paginated search unsuccessfully
{:message=>"[403] {\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"action [indices:data/read/search] is unauthorized for user [my-logstash-user] with effective roles [eck_logstash_user_role,monitoring,writer], this action is granted by the index privileges [read,all]\"}],\"type\":\"security_exception\",\"reason\":\"action [indices:data/read/search] is unauthorized for user [my-logstash-user] with effective roles [eck_logstash_user_role,monitoring,writer], this action is granted by the index privileges [read,all]\"},\"status\":403}", :cause=>nil}

The message from above, formatted:

{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "action [indices:data/read/search] is unauthorized for user [my-logstash-user] with effective roles [eck_logstash_user_role,monitoring,writer], this action is granted by the index privileges [read,all]"
      }
    ],
    "type": "security_exception",
    "reason": "action [indices:data/read/search] is unauthorized for user [my-logstash-user] with effective roles [eck_logstash_user_role,monitoring,writer], this action is granted by the index privileges [read,all]"
  },
  "status": 403
}
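
For reference, whether the user actually holds the read privilege on each pattern can be verified outside Logstash with Elasticsearch's has-privileges API. A minimal sketch, assuming the same credentials and host the pipeline config below uses:

curl -u "$ECK_ES_USER:$ECK_ES_PASSWORD" \
  -H 'Content-Type: application/json' \
  "$ECK_ES_HOSTS/_security/user/_has_privileges" \
  -d '{
        "index": [
          { "names": ["svc1*", "svc2*", "svc3*", "svc4*"], "privileges": ["read"] }
        ]
      }'

The response breaks the check down per pattern (e.g. "svc4*": {"read": false}), which is exactly the detail the 403 above does not reveal.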

Logstash pipeline config:

input {
  elasticsearch {
    hosts    => ["${ECK_ES_HOSTS}"]
    user     => "${ECK_ES_USER}"
    password => "${ECK_ES_PASSWORD}"
    index    => "svc1*,svc2*,svc3*,svc4*"
    query    => '{ "query": { "bool": { "filter": [{ "range": { "@timestamp": { "gte": "now-1d/d", "lte": "now-1d/d" }}}, { "match_phrase": { "tags": "my" } }] }}, "sort": [{ "@timestamp": { "order": "asc" }}] }'
    size     => 5000
    schedule => "0 1 * * *"
  }
}

filter {
  mutate {
    remove_field => ["kubernetes"]
  }
}

output {
  s3 {
    id                  => "my-logs"
    bucket              => "my-logs"
    region              => "eu-central-1"
    prefix              => "raw/app=%{[app]}/year=%{+YYYY}/month=%{+MM}/day=%{+dd}"
    encoding            => "gzip"
    canned_acl          => "bucket-owner-full-control"
    rotation_strategy   => "size_and_time"
    size_file           => 268435456 # 256MB in bytes
    time_file           => 15
    codec               => "json_lines"
    temporary_directory => "${HOME}/data/my-logs"
  }
}

When I run this pipeline with index => "svc1*,svc2*,svc3*" and with index => "svc4*" separately, both runs work well, but when I specify all these data streams together, it fails with the error above.
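
One way to confirm the 403 comes from Elasticsearch itself rather than from the plugin is to issue the same multi-pattern search directly; a rough sketch with the same assumed credentials, with the query reduced to a match_all for brevity:

curl -u "$ECK_ES_USER:$ECK_ES_PASSWORD" \
  -H 'Content-Type: application/json' \
  "$ECK_ES_HOSTS/svc1*,svc2*,svc3*,svc4*/_search?size=1" \
  -d '{ "query": { "match_all": {} } }'

If this call also returns the security_exception, the combined pattern list is hitting something the role does not cover (for example, an index matched only by the wider set of wildcards), and the plugin is merely passing the error through.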

Please add more details, such as the index/data stream being queried, to the error message so that issues like this are easier to diagnose.
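
Purely as an illustration of what is being asked for (the :index key shown here is hypothetical, not an existing field), the failing request's index setting could be echoed into the log entry:

[2025-12-04T15:10:07,921][ERROR][logstash.inputs.elasticsearch.searchafter][my-logs][10655218...]
Tried search_after paginated search unsuccessfully
{:message=>"[403] {...}", :index=>"svc1*,svc2*,svc3*,svc4*", :cause=>nil}

That single extra field would make it obvious which index/data stream combination the failing search targeted.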

A similar request, but for Kibana: elastic/kibana#126255.
