OTel Correlation Guides #30354

Open
wants to merge 15 commits into base: master

16 changes: 13 additions & 3 deletions config/_default/menus/main.en.yaml
@@ -780,16 +780,26 @@ menu:
identifier: otel_explore
parent: opentelemetry_top_level
weight: 7
- name: Correlate Logs and Traces
- name: Logs and Traces
url: /opentelemetry/correlate/logs_and_traces/
identifier: otel_logs
parent: otel_explore
weight: 701
- name: Correlate RUM and Traces
- name: Metrics and Traces
url: /opentelemetry/correlate/metrics_and_traces/
identifier: otel_metrics_traces
parent: otel_explore
weight: 702
- name: RUM and Traces
url: /opentelemetry/correlate/rum_and_traces/
identifier: otel_rum
parent: otel_explore
weight: 702
weight: 703
- name: DBM and Traces
url: /opentelemetry/correlate/dbm_and_traces/
identifier: otel_dbm
parent: otel_explore
weight: 704
- name: Integrations
url: opentelemetry/integrations/
identifier: otel_integrations
93 changes: 86 additions & 7 deletions content/en/opentelemetry/correlate/_index.md
@@ -1,5 +1,6 @@
---
title: Correlate Data
title: Correlate OpenTelemetry Data
description: Learn how to correlate your OpenTelemetry traces, metrics, logs, and other telemetry in Datadog to get a unified view of your application's performance.
aliases:
- /opentelemetry/otel_logs/
further_reading:
@@ -10,13 +11,91 @@ further_reading:

## Overview

Link your telemetry data for full-stack observability:
Getting a unified view of your application's performance requires connecting its traces, metrics, logs, user interactions, and more. By correlating your OpenTelemetry data in Datadog, you can navigate between all related telemetry in a single view, allowing you to diagnose and resolve issues faster.

{{< whatsnext desc=" " >}}
{{< nextlink href="/opentelemetry/correlate/logs_and_traces/" >}}Connect Logs and Traces{{< /nextlink >}}
{{< nextlink href="/opentelemetry/correlate/rum_and_traces/" >}}Connect RUM and Traces{{< /nextlink >}}
{{< /whatsnext >}}

## Prerequisite: Unified service tagging

Datadog uses three standard tags to link telemetry together: `env`, `service`, and `version`.

To ensure your OpenTelemetry data is properly correlated, you must configure your application or system to use these tags by setting a standard set of OpenTelemetry resource attributes. Datadog automatically maps these attributes to the correct tags.

| OpenTelemetry Resource Attribute | Datadog Tag | Notes |
|----------------------------------|-------------|---------------------------------------------------------------------------------------------------------|
| `deployment.environment.name` | `env` | **Recommended**. Supported in Agent v7.58.0+ and Collector Exporter v0.110.0+. |
| `deployment.environment` | `env` | Use if you are running an Agent version older than v7.58.0 or a Collector Exporter older than v0.110.0. |
| `service.name` | `service` | |
| `service.version` | `version` | |

You can set these attributes in your application's environment variables, SDK, or in the OpenTelemetry Collector.

{{< tabs >}}
{{% tab "Environment Variables" %}}

Set the `OTEL_RESOURCE_ATTRIBUTES` environment variable with your service's information:

```sh
export OTEL_SERVICE_NAME="my-service"
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment.name=production,service.version=1.2.3"
```

{{% /tab %}}
{{% tab "SDK" %}}

Create a Resource with the required attributes and associate it with your TracerProvider in your application code.

Here's an example using the OpenTelemetry SDK for Python:

```python
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

resource = Resource(attributes={
"service.name": "<SERVICE>",
"deployment.environment.name": "<ENV>",
"service.version": "<VERSION>"
})
tracer_provider = TracerProvider(resource=resource)
```

{{% /tab %}}
{{% tab "Collector" %}}

Use the `resource` processor in your Collector configuration to set the resource attributes on your telemetry data:

```yaml
processors:
resource:
attributes:
- key: service.name
value: "my-service"
action: upsert
- key: deployment.environment.name
value: "production"
action: upsert
- key: service.version
value: "1.2.3"
action: upsert
...
```

{{% /tab %}}
{{< /tabs >}}

## Correlate telemetry

After unified service tagging is configured, you can connect your various telemetry streams. Select a guide below for platform-specific instructions.

- [Correlate logs and traces][1]
- [Correlate metrics and traces][2]
- [Correlate RUM and traces][3]
- [Correlate DBM and traces][4]

## Further reading

{{< partial name="whats-next/whats-next.html" >}}
{{< partial name="whats-next/whats-next.html" >}}

[1]: /opentelemetry/correlate/logs_and_traces
[2]: /opentelemetry/correlate/metrics_and_traces
[3]: /opentelemetry/correlate/rum_and_traces
[4]: /opentelemetry/correlate/dbm_and_traces
111 changes: 111 additions & 0 deletions content/en/opentelemetry/correlate/dbm_and_traces.md
@@ -0,0 +1,111 @@
---
title: Correlate OpenTelemetry Traces and DBM
further_reading:
- link: "/opentelemetry/otel_tracing/"
tag: "Documentation"
text: "Send OpenTelemetry Traces to Datadog"
---

## Overview

Correlate backend traces to detailed database performance data in Datadog Database Monitoring (DBM). This lets you link spans from your OpenTelemetry-instrumented application directly to query metrics and execution plans, so you can identify the exact queries that slow down your application.

## Requirements

Before you begin, ensure you have configured [unified service tagging][1]. This is required for all data correlation in Datadog.

## Setup

To correlate traces with DBM, you must:

1. **Instrument database spans**: Add specific OpenTelemetry attributes to your database spans to enable correlation with DBM.

2. **Configure trace ingestion path**: Enable the correct feature gate on your Collector or Agent to ensure database spans are properly processed for DBM.

### Step 1: Instrument your database spans

For DBM correlation to work, your database spans must include the following attributes.

| Attribute | Description | Example |
|----------------|-----------------------------------------------------------------------------------------------------|------------------------------------|
| `db.system` | **Required.** The database technology, such as `postgres`, `mysql`, or `sqlserver`. | `postgres` |
| `db.statement` | **Required.** The raw SQL query text. This is used for obfuscation and normalization. | `SELECT * FROM users WHERE id = ?` |
| `db.name` | The logical database or schema name being queried. | `user_accounts` |
| `span.type`    | **Required (Datadog-specific).** The type of span, such as `sql`, `postgres`, `mysql`, or `sql.query`. | `sql` |

#### Example

The method for adding these attributes depends on your setup. If you are using an OpenTelemetry auto-instrumentation library for your database client, see its documentation for configuration options. If you are manually creating spans with the OpenTelemetry SDK, you can set the attributes directly in your code. For more information, see the [OpenTelemetry documentation][4].

The following is a conceptual example of manual instrumentation using Python's OpenTelemetry SDK:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-app.instrumentation")

# When making a database call, create a span and set attributes
with tracer.start_as_current_span("postgres.query") as span:
# Set attributes required for DBM correlation
span.set_attribute("span.type", "sql")
span.set_attribute("db.system", "postgres")
span.set_attribute("db.statement", "SELECT * FROM users WHERE id = ?")
span.set_attribute("db.name", "user_accounts")

# Your actual database call would go here
# db_cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
```

### Step 2: Configure your ingest path

Depending on how you send traces to Datadog, you may need to enable specific feature gates to ensure database spans are processed correctly.

{{< tabs >}}
{{% tab "Datadog Agent (DDOT Collector)" %}}


If you are using the Datadog Helm chart (v3.107.0 or later), set the feature gate in your `values.yaml`:

```yaml
datadog:
otelCollector:
featureGates: datadog.EnableOperationAndResourceNameV2
```

{{% /tab %}}
{{% tab "OTel Collector" %}}

When starting the Collector, enable the `datadog.EnableOperationAndResourceNameV2` feature gate. This is available in Collector v0.118.0 and later.

```sh
otelcontribcol --config=config.yaml \
--feature-gates=datadog.EnableOperationAndResourceNameV2
```

{{% /tab %}}

{{% tab "Datadog Agent (OTLP Ingest)" %}}

In your Datadog Agent configuration, ensure the `DD_APM_FEATURES` environment variable includes `enable_operation_and_resource_name_logic_v2`.
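
For example, if you pass the Agent's configuration through environment variables, a minimal sketch (adapt the mechanism to your deployment method) might look like:

```sh
# Hypothetical example: expose the feature flag to the Agent process.
# If DD_APM_FEATURES already lists other features, append this one, comma-separated.
export DD_APM_FEATURES="enable_operation_and_resource_name_logic_v2"
```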

{{% /tab %}}

{{< /tabs >}}

### View correlated data in Datadog

After your application is sending traces, you can see the correlation in the APM Trace View:

1. Navigate to [**APM** > **Traces**][3].
2. Find and click on a trace from your instrumented service.
3. In the trace's flame graph, select a database span (for example, a span with `span.type: sql`).
4. In the details panel, click the **SQL Queries** tab. You should see Database Monitoring data, such as query metrics and execution plans, for the query executed in that span.

## Further reading

{{< partial name="whats-next/whats-next.html" >}}

[1]: /opentelemetry/correlate/#prerequisite-unified-service-tagging
[2]: /opentelemetry/integrations/host_metrics
[3]: https://app.datadoghq.com/apm/traces
[4]: https://opentelemetry.io/docs/languages/
79 changes: 79 additions & 0 deletions content/en/opentelemetry/correlate/metrics_and_traces.md
@@ -0,0 +1,79 @@
---
title: Correlate OpenTelemetry Traces and Metrics
further_reading:
- link: "/opentelemetry/otel_tracing/"
tag: "Documentation"
text: "Send OpenTelemetry Traces to Datadog"
---

## Overview

Correlating traces with host metrics allows you to pivot from a slow request directly to the CPU and memory metrics of the host or container it ran on. This helps you determine if resource contention was the root cause of a performance issue.

Correlation between traces and metrics relies on the following resource attributes:

- `host.name`: For correlating with host metrics (CPU, memory, disk).
- `container.id`: For correlating with container metrics.

## Requirements

Before you begin, ensure you have configured [unified service tagging][1]. This is required for all data correlation in Datadog.

## Setup

To correlate traces and metrics, you must:

1. **Collect host metrics**: Configure the OpenTelemetry Collector to collect and send host metrics to Datadog.

2. **Ensure consistent tagging**: Your traces and metrics must share a consistent `host.name` (for hosts) or `container.id` (for containers) attribute for Datadog to link them.

### 1. Collect host metrics

To collect system-level metrics from your infrastructure, enable the `hostmetrics` receiver in your OpenTelemetry Collector configuration. This receiver gathers metrics like CPU, memory, disk, and network usage.

Add the `hostmetrics` receiver to the `receivers` section of your Collector configuration and enable it in your `metrics` pipeline:


```yaml
receivers:
hostmetrics:
collection_interval: 10s
scrapers:
cpu:
memory:
disk:
...

service:
pipelines:
metrics:
receivers: [hostmetrics, ...]
processors: [...]
exporters: [...]
```

For the complete, working configuration, including Kubernetes-specific setup, see the [Host Metrics][2] documentation.

### 2. Ensure consistent host and container tagging

For correlation to work, the `host.name` (or `container.id`) attribute on your traces must match the corresponding attribute on the metrics collected by the `hostmetrics` receiver.
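
One way to keep this attribute consistent is to have the Collector set it in both pipelines. The following is a minimal sketch using the `resourcedetection` processor's `system` detector (assuming the processor is included in your Collector distribution); the elided pipeline entries stand for your existing receivers, processors, and exporters:

```yaml
processors:
  resourcedetection:
    # Detect host.name from the operating system so traces and metrics agree.
    detectors: [system]
    system:
      hostname_sources: [os]

service:
  pipelines:
    traces:
      processors: [resourcedetection, ...]
    metrics:
      processors: [resourcedetection, ...]
```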

## View correlated data in Datadog

After your application is sending traces and the Collector is sending host metrics, you can see the correlation in the APM Trace View.

1. Navigate to [**APM** > **Traces**][3].
2. Find and click on a trace from your instrumented service.
3. In the trace's flame graph, select a span that ran on the instrumented host.
4. In the details panel, click the **Infrastructure** tab. You should see the host metrics, like CPU and memory utilization, from the host that executed that part of the request.

This allows you to immediately determine if a spike in host metrics corresponds with the performance of a specific request.

## Further reading

{{< partial name="whats-next/whats-next.html" >}}

[1]: /opentelemetry/correlate/#prerequisite-unified-service-tagging
[2]: /opentelemetry/integrations/host_metrics
[3]: https://app.datadoghq.com/apm/traces
4 changes: 2 additions & 2 deletions content/en/opentelemetry/integrations/host_metrics.md
@@ -99,7 +99,7 @@ The metrics, mapped to Datadog metrics, are used in the following views:
- [Host default dashboards][8]
- [APM Trace view Host info][9]

**Note**: To correlate trace and host metrics, configure [Universal Service Monitoring attributes][10] for each service, and set the `host.name` resource attribute to the corresponding underlying host for both service and collector instances.
**Note**: To correlate trace and host metrics, configure [Unified Service Tagging attributes][10] for each service, and set the `host.name` resource attribute to the corresponding underlying host for both service and collector instances.

The following table shows which Datadog host metric names are associated with corresponding OpenTelemetry host metric names, and, if applicable, what math is applied to the OTel host metric to transform it to Datadog units during the mapping.

@@ -168,6 +168,6 @@ Value: 1153183744
[7]: https://app.datadoghq.com/infrastructure
[8]: /opentelemetry/collector_exporter/#out-of-the-box-dashboards
[9]: /tracing/trace_explorer/trace_view/?tab=hostinfo
[10]: /universal_service_monitoring/setup/
[10]: /opentelemetry/correlate/#prerequisite-unified-service-tagging

