When you run the script, you can see the traces printed to your console:

.. code-block:: sh
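
   # Hypothetical filename: this assumes the example script shown earlier was
   # saved as otel_example.py; adjust the name to match your file.
   python otel_example.py
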
Each span typically represents a single operation or unit of work.
Spans can be nested and have a parent-child relationship with other spans.
While a given span is active, newly-created spans inherit the active span's trace ID, options, and other attributes of its context.
A span without a parent is called the root span, and a trace comprises one root span and its descendants.

In this example, the OpenTelemetry Python library creates one trace containing three spans and prints it to STDOUT.
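
As a compact illustration of this nesting, here's a sketch (not the full example script) that assumes a tracer provider and console exporter have already been configured as shown earlier:

.. code-block:: python

   from opentelemetry import trace

   # Assumes trace.set_tracer_provider(...) and a console span exporter were
   # already set up, as in the example script above.
   tracer = trace.get_tracer(__name__)

   # "foo" has no parent, so it becomes the root span; "bar" and "baz" nest
   # under it and inherit its trace ID and context.
   with tracer.start_as_current_span("foo"):
       with tracer.start_as_current_span("bar"):
           with tracer.start_as_current_span("baz"):
               print("Hello from the innermost span")
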
Configure exporters to emit spans elsewhere
-------------------------------------------

The previous example does emit information about all spans, but the output is a bit hard to read.
In most cases, you can instead *export* this data to an application performance monitoring backend to be visualized and queried.
It's also common to aggregate span and trace information from multiple services into a single database, so that actions requiring multiple services can still all be visualized together.

This concept of aggregating span and trace information is known as distributed tracing. One such distributed tracing backend is Jaeger. The Jaeger project provides an all-in-one Docker container with a UI, database, and consumer.

Run the following command to start Jaeger:

.. code-block:: sh

   docker run -p 16686:16686 -p 6831:6831/udp jaegertracing/all-in-one

This command starts Jaeger locally on port 16686 and exposes the Jaeger Thrift agent on port 6831. You can visit Jaeger at http://localhost:16686.

After you spin up the backend, your application needs to export traces to this system. Although ``opentelemetry-sdk`` doesn't provide an exporter
for Jaeger, you can install it as a separate package with the following command:

.. code-block:: sh

   pip install opentelemetry-exporter-jaeger

After you install the exporter, update your code to import the Jaeger exporter and use that instead:
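
The following is a minimal sketch of that change. Module paths and class names for the Jaeger exporter (for example ``JaegerSpanExporter`` and ``BatchExportSpanProcessor``) have varied between OpenTelemetry releases, so treat these as assumptions and check the documentation for your installed version:

.. code-block:: python

   from opentelemetry import trace
   from opentelemetry.exporter import jaeger
   from opentelemetry.sdk.trace import TracerProvider
   from opentelemetry.sdk.trace.export import BatchExportSpanProcessor

   trace.set_tracer_provider(TracerProvider())

   # Send spans to the Jaeger agent started by the docker command above.
   # The service name below is an arbitrary example.
   jaeger_exporter = jaeger.JaegerSpanExporter(
       service_name="my-helloworld-service",
       agent_host_name="localhost",
       agent_port=6831,
   )

   trace.get_tracer_provider().add_span_processor(
       BatchExportSpanProcessor(jaeger_exporter)
   )

   tracer = trace.get_tracer(__name__)
   with tracer.start_as_current_span("foo"):
       with tracer.start_as_current_span("bar"):
           with tracer.start_as_current_span("baz"):
               print("Hello world from OpenTelemetry Python!")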

You can then visit the Jaeger UI, see your service under "services", and find your traces!

.. image:: images/jaeger_trace.png

Instrumentation example with Flask
------------------------------------

While the example in the previous section is great, it's very manual. The following are common actions you might want to track and include as part of your distributed tracing:

* HTTP responses from web services
* HTTP requests from clients
* Database calls

To track these common actions, OpenTelemetry has the concept of instrumentations. Instrumentations are packages designed to interface
with a specific framework or library, such as Flask and psycopg2. You can find a list of the currently curated extension packages in the `Contrib repository <https://github.com/open-telemetry/opentelemetry-python-contrib/tree/master/instrumentation>`_.

Instrument a basic Flask application that uses the ``requests`` library to send HTTP requests. First, install the instrumentation packages themselves:
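
The package names below follow the ``opentelemetry-instrumentation-*`` naming used in the Contrib repository; older releases used different names, so treat them as assumptions and confirm against the repository listing:

.. code-block:: sh

   pip install opentelemetry-instrumentation-flask
   pip install opentelemetry-instrumentation-requests

With those packages installed, a sketch of an instrumented application could look like the following, assuming the ``FlaskInstrumentor`` and ``RequestsInstrumentor`` classes those packages provide:

.. code-block:: python

   import requests
   from flask import Flask

   from opentelemetry.instrumentation.flask import FlaskInstrumentor
   from opentelemetry.instrumentation.requests import RequestsInstrumentor

   # Configure a tracer provider and exporter as in the earlier examples
   # before running this application.
   app = Flask(__name__)

   # Create server spans for incoming Flask requests and client spans for
   # outgoing calls made with the requests library.
   FlaskInstrumentor().instrument_app(app)
   RequestsInstrumentor().instrument()

   @app.route("/")
   def hello():
       # This outgoing request is traced as a child of the incoming request's span.
       requests.get("https://www.example.org/")
       return "hello"

   if __name__ == "__main__":
       app.run(port=5000)
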
A major feature of distributed tracing is the ability to correlate a trace across
multiple services. However, those services need to propagate information about a
trace from one service to the other.

To enable this propagation, OpenTelemetry has the concept of `propagators <https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/context/api-propagators.md>`_,
which provide a common method to encode and decode span information from a request and response, respectively.

By default, ``opentelemetry-python`` is configured to use the `W3C Trace Context <https://www.w3.org/TR/trace-context/>`_
HTTP headers for HTTP requests, but you can configure it to leverage different propagators. Here's
an example using Zipkin's `b3 propagation <https://github.com/openzipkin/b3-propagation>`_:

.. code-block:: python

   # Note: the import path for B3Format has moved between OpenTelemetry
   # releases; adjust it to match your installed version.
   from opentelemetry import propagators
   from opentelemetry.sdk.trace.propagation.b3_format import B3Format

   propagators.set_global_textmap(B3Format())

Add metrics
--------------

Spans are a great way to get detailed information about what your application is doing, but
what about a more aggregated perspective? OpenTelemetry provides support for metrics. Metrics are a time series
of values that express things such as CPU utilization, request count for an HTTP server, or a
business metric such as transactions.

You can annotate all metrics with labels. Labels are additional qualifiers that describe what
subdivision of the measurements the metric represents.

The following example emits metrics to your console, similar to the trace example:

The Prometheus server runs locally on port 8000. The instrumented code makes metrics available to Prometheus via the ``PrometheusMetricsExporter``.

Visit the Prometheus UI (http://localhost:9090) to view your metrics.

Use the OpenTelemetry Collector for traces and metrics
-------------------------------------------------------

Although it's possible to directly export your telemetry data to specific backends, you might have more complex use cases such as the following:

* A single telemetry sink shared by multiple services, to reduce overhead of switching exporters.
* Aggregating metrics or traces across multiple services, running on multiple hosts.

To enable a broad range of aggregation strategies, OpenTelemetry provides the `opentelemetry-collector <https://github.com/open-telemetry/opentelemetry-collector>`_.
The Collector is a flexible application that can consume trace and metric data and export to multiple other backends, including to another instance of the Collector.

Start the Collector locally to see how it works in practice. Write the following file:

.. code-block:: yaml

   receivers: [opencensus]
   exporters: [logging]

Then start the Docker container:

.. code-block:: sh
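
   # Image name and config filename are assumptions; substitute the Collector
   # image you use and the path of the file you wrote above. Port 55678 is the
   # default OpenCensus receiver port.
   docker run -p 55678:55678 \
       -v "$(pwd)/collector-config.yaml":/etc/collector-config.yaml \
       otel/opentelemetry-collector \
       --config=/etc/collector-config.yaml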