---
title: Rehydration
description: Learn how to use Rehydration to pull archived logs and process them in Observability Pipelines.
disable_toc: false
private: true
further_reading:
- link: "/observability_pipelines/processors/"
tag: "Documentation"
text: "Learn more about processors"
- link: "/observability_pipelines/packs/"
tag: "Documentation"
text: "Learn more about Packs"
---

{{< callout
btn_hidden="true" header="false">}}
Rehydration is in Preview.
{{< /callout >}}

## Overview

Rehydration for Observability Pipelines enables you to pull archived logs from object storage and process them in Observability Pipelines, including with [Packs][1]. This gives you consistent access to historical context without having to rebuild workflows or modify ingestion pipelines.

Organizations often store large volumes of logs in cost-efficient, long-term archives to control spend and meet compliance requirements. However, historical data often becomes difficult to access when there is a security incident, audit request, or operational investigation. Retrieving archived logs from cold storage can be slow, manual, and disruptive, requiring ad-hoc scripts, decompression, or dedicated engineering effort. Rehydration for Observability Pipelines solves these issues.
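For context, retrieving archived logs by hand typically means a one-off script that downloads compressed objects, decompresses them, and parses each line. A minimal sketch of that manual pattern (the kind of ad-hoc work Rehydration replaces), assuming NDJSON archives compressed with gzip:

```python
import gzip
import io
import json


def read_archived_logs(compressed_bytes: bytes):
    """Decompress a gzipped NDJSON archive object and yield one parsed event per line."""
    with gzip.open(io.BytesIO(compressed_bytes), mode="rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)


# Example: simulate a small archive object in memory.
events = [
    {"ts": "2024-05-01T00:00:00Z", "msg": "login", "user": "a"},
    {"ts": "2024-05-01T00:01:00Z", "msg": "logout", "user": "a"},
]
archive = gzip.compress("\n".join(json.dumps(e) for e in events).encode("utf-8"))

restored = list(read_archived_logs(archive))
```

Scripts like this have to be rewritten per archive layout and compression format, which is the manual overhead described above.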

## How Rehydration works

Rehydration provides an automated workflow for retrieving and reprocessing archived logs stored in object stores, such as Amazon S3, Google Cloud Storage, and Azure Blob Storage. This helps you balance storage efficiency with fast access to historical data.

With Rehydration, you can:

### Retrieve archived logs on demand

Pull only the data you need for investigations, audits, troubleshooting, or pipeline testing, and eliminate long retrieval delays and manual extraction steps.

### Target specific time ranges or event slices

Specify the exact time frame or subset of events you need to prevent moving or processing data unnecessarily.
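As an illustration of time-range targeting, archives are commonly laid out under date-partitioned object prefixes. The `dt=YYYY/MM/DD/HH/` layout below is an assumption for illustration (real layouts depend on your archive configuration); a hypothetical helper that expands an investigation window into only the hourly prefixes worth fetching might look like:

```python
from datetime import datetime, timedelta


def hourly_prefixes(start: datetime, end: datetime, root: str = "archive/") -> list[str]:
    """Expand [start, end) into the hourly object-store prefixes covering that range.

    The dt=YYYY/MM/DD/HH/ layout is hypothetical; adjust to your archive's
    actual partitioning scheme.
    """
    prefixes = []
    current = start.replace(minute=0, second=0, microsecond=0)
    while current < end:
        prefixes.append(f"{root}dt={current:%Y/%m/%d/%H}/")
        current += timedelta(hours=1)
    return prefixes


# Example: a window from 14:30 to 17:00 expands to three hourly prefixes,
# so only those objects are listed and retrieved.
window = hourly_prefixes(datetime(2024, 5, 1, 14, 30), datetime(2024, 5, 1, 17, 0))
```

Scoping retrieval to computed prefixes like these is what avoids moving or processing data outside the range you actually need.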

### Process historical logs with Observability Pipelines

Rehydrated logs go through the same parsing, enrichment, normalization, and routing logic applied to live log streams.

This ensures:

- Consistent formatting and field extraction
- Reliable enrichment (for example, user, geo-IP, and cloud metadata)
- Uniform security and compliance controls
- Identical behavior across historical and real-time data
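To make "identical behavior across historical and real-time data" concrete, here is a minimal sketch of a normalization step applied unchanged to both a live and a rehydrated event. The field names and owner-lookup table are hypothetical, not Observability Pipelines processor syntax:

```python
def normalize(event: dict, service_owners: dict) -> dict:
    """Apply the same parsing and enrichment to any event, live or rehydrated."""
    out = dict(event)
    # Consistent field extraction: promote a nested HTTP status code if present.
    out["status"] = int(out.get("http", {}).get("status_code", out.get("status", 0)))
    # Enrichment: attach an owner from a lookup table (hypothetical metadata source).
    out["owner"] = service_owners.get(out.get("service"), "unknown")
    return out


owners = {"checkout": "payments-team"}
# The same raw shape arrives from a live stream and from a rehydrated archive;
# both pass through the identical function, so both come out identical.
live = normalize({"service": "checkout", "http": {"status_code": "503"}}, owners)
historical = normalize({"service": "checkout", "http": {"status_code": "503"}}, owners)
```

Because one code path handles both sources, field extraction, enrichment, and downstream controls cannot drift between live and historical data.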

### Route rehydrated data to any supported destination

You can send processed historical logs to SIEMs, data lakes, analytics platforms, or any Observability Pipelines destination.

### Eliminate manual handling

Rehydration provides a structured, predictable way to pull archived data back into your observability platform, so you don't have to use custom scripts, manual decompression, or ad-hoc retrieval processes.

## Further reading

{{< partial name="whats-next/whats-next.html" >}}

[1]: /observability_pipelines/packs/