Commit c60aa69: Edits to getting started docs (open-telemetry#1368)
1 parent 2c1ee67

1 file changed: 59 additions, 61 deletions

docs/getting-started.rst

@@ -1,11 +1,11 @@
Getting Started with OpenTelemetry Python
=========================================

This guide walks you through instrumenting a Python application with ``opentelemetry-python``.

For more elaborate examples, see `examples <https://github.com/open-telemetry/opentelemetry-python/tree/master/docs/examples/>`_.

Hello world: emit a trace to your console
---------------------------------------------

To get started, install both the opentelemetry API and SDK:
@@ -18,21 +18,21 @@ To get started, install both the opentelemetry API and SDK:
The API package provides the interfaces required by the application owner, as well
as some helper logic to load implementations.

The SDK provides an implementation of those interfaces. The implementation is designed to be generic and extensible enough
that in many situations, the SDK is sufficient.

Once installed, you can use the packages to emit spans from your application. A span
represents an action within your application that you want to instrument, such as an HTTP request
or a database call. Once instrumented, you can extract helpful information such as
how long the action took. You can also add arbitrary attributes to the span that provide more insight for debugging.

The following example script emits a trace containing three named spans: "foo", "bar", and "baz":

.. literalinclude:: getting_started/tracing_example.py
    :language: python
    :lines: 15-

When you run the script you can see the traces printed to your console:

.. code-block:: sh
@@ -94,99 +94,97 @@ We can run it, and see the traces print to your console:
Each span typically represents a single operation or unit of work.
Spans can be nested, and have a parent-child relationship with other spans.
While a given span is active, newly-created spans inherit the active span's trace ID, options, and other attributes of its context.
A span without a parent is called the root span, and a trace consists of one root span and its descendants.

In this example, the OpenTelemetry Python library creates one trace containing three spans and prints it to STDOUT.
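The inheritance rule described above (new spans adopt the active span's trace ID) can be sketched with nothing but the standard library. This is a conceptual toy, not the OpenTelemetry SDK; the ``Span`` class here is hypothetical:

```python
import contextvars
import random

# Conceptual toy, not the OpenTelemetry SDK: the "active span" lives in a
# context variable, and each new span inherits its parent's trace ID.
_current_span = contextvars.ContextVar("current_span", default=None)

class Span:
    def __init__(self, name):
        self.name = name
        self.parent = _current_span.get()
        # A span without a parent is a root span and starts a new trace.
        self.trace_id = (
            self.parent.trace_id if self.parent else random.getrandbits(128)
        )

    def __enter__(self):
        self._token = _current_span.set(self)
        return self

    def __exit__(self, *exc):
        _current_span.reset(self._token)

with Span("foo") as foo:
    with Span("bar") as bar:
        with Span("baz") as baz:
            pass
```

All three toy spans end up sharing the root span's trace ID, which is the relationship the real SDK maintains for you.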
Configure exporters to emit spans elsewhere
-------------------------------------------

The previous example does emit information about all spans, but the output is a bit hard to read.
In most cases, you can instead *export* this data to an application performance monitoring backend to be visualized and queried.
It's also common to aggregate span and trace information from multiple services into a single database, so that actions requiring multiple services can still all be visualized together.

This concept of aggregating span and trace information is known as distributed tracing. One such distributed tracing backend is Jaeger. The Jaeger project provides an all-in-one Docker container with a UI, database, and consumer.

Run the following command to start Jaeger:

.. code-block:: sh

    docker run -p 16686:16686 -p 6831:6831/udp jaegertracing/all-in-one

This command starts Jaeger locally on port 16686 and exposes the Jaeger thrift agent on port 6831. You can visit Jaeger at http://localhost:16686.

After you spin up the backend, your application needs to export traces to this system. Although ``opentelemetry-sdk`` doesn't provide an exporter
for Jaeger, you can install it as a separate package with the following command:

.. code-block:: sh

    pip install opentelemetry-exporter-jaeger

After you install the exporter, update your code to import the Jaeger exporter and use that instead:

.. literalinclude:: getting_started/jaeger_example.py
    :language: python
    :lines: 15-

Finally, run the Python script:

.. code-block:: sh

    python jaeger_example.py

You can then visit the Jaeger UI, see your service under "services", and find your traces!
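For reference, the included ``jaeger_example.py`` follows roughly this shape. This is a sketch reconstructed from the Jaeger exporter API of that era; the service name is illustrative, module paths may differ across versions, and the actual file is authoritative:

```python
from opentelemetry import trace
from opentelemetry.exporter import jaeger
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

# Point the exporter at the thrift agent started above.
jaeger_exporter = jaeger.JaegerSpanExporter(
    service_name="my-helloworld-service",  # illustrative name
    agent_host_name="localhost",
    agent_port=6831,
)

# Batch spans in the background before sending them to Jaeger.
trace.get_tracer_provider().add_span_processor(
    BatchExportSpanProcessor(jaeger_exporter)
)

with tracer.start_as_current_span("foo"):
    with tracer.start_as_current_span("bar"):
        with tracer.start_as_current_span("baz"):
            print("Hello world from OpenTelemetry Python!")
```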

.. image:: images/jaeger_trace.png

Instrumentation example with Flask
------------------------------------

While the example in the previous section is great, it's very manual. The following are common actions you might want to track and include as part of your distributed tracing:

* HTTP responses from web services
* HTTP requests from clients
* Database calls

To track these common actions, OpenTelemetry has the concept of instrumentations. Instrumentations are packages designed to interface
with a specific framework or library, such as Flask and psycopg2. You can find a list of the currently curated extension packages in the `Contrib repository <https://github.com/open-telemetry/opentelemetry-python-contrib/tree/master/instrumentation>`_.

Instrument a basic Flask application that uses the requests library to send HTTP requests. First, install the instrumentation packages themselves:

.. code-block:: sh

    pip install opentelemetry-instrumentation-flask
    pip install opentelemetry-instrumentation-requests

The following small Flask application sends an HTTP request and also activates each instrumentation during its initialization:

.. literalinclude:: getting_started/flask_example.py
    :language: python
    :lines: 15-

Now run the script, hit the root URL (http://localhost:5000/) a few times, and watch your spans be emitted!

.. code-block:: sh

    python flask_example.py
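The included ``flask_example.py`` has roughly this shape. This is a sketch, assuming the instrumentor classes shipped by the two packages installed above, with the tracer provider and exporter setup from the earlier examples omitted for brevity:

```python
import flask
import requests

from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor

app = flask.Flask(__name__)
FlaskInstrumentor().instrument_app(app)  # span per inbound HTTP request
RequestsInstrumentor().instrument()      # span per outgoing requests call

@app.route("/")
def hello():
    # The outbound call becomes a child span of the inbound request's span.
    requests.get("https://www.example.org/")
    return "hello"

app.run(port=5000)
```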
Configure Your HTTP propagator (b3, Baggage)
-------------------------------------------------------

A major feature of distributed tracing is the ability to correlate a trace across
multiple services. However, those services need to propagate information about a
trace from one service to the other.

To enable this propagation, OpenTelemetry has the concept of `propagators <https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/context/api-propagators.md>`_,
which provide a common method to encode and decode span information from a request and response, respectively.

By default, ``opentelemetry-python`` is configured to use the `W3C Trace Context <https://www.w3.org/TR/trace-context/>`_
HTTP headers for HTTP requests, but you can configure it to leverage different propagators. Here's
an example using Zipkin's `b3 propagation <https://github.com/openzipkin/b3-propagation>`_:

.. code-block:: python
@@ -197,38 +195,38 @@ an example using Zipkin's `b3 propagation <https://github.com/openzipkin/b3-prop
    propagators.set_global_textmap(B3Format())
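For intuition about what a propagator encodes, here is a standard-library sketch of b3's single-header format, ``{trace_id}-{span_id}-{sampling_state}``. It is illustrative only, and not how the OpenTelemetry propagator is implemented:

```python
def encode_b3(trace_id: int, span_id: int, sampled: bool) -> str:
    # b3 single header: 32-hex-char trace ID, 16-hex-char span ID, sampling flag.
    return "%032x-%016x-%s" % (trace_id, span_id, "1" if sampled else "0")

def decode_b3(header: str) -> tuple:
    trace_id, span_id, flag = header.split("-")
    return int(trace_id, 16), int(span_id, 16), flag == "1"

header = encode_b3(0xC0FFEE, 0xBEEF, True)
```

A receiving service decodes the header and continues the trace with the same trace ID, which is exactly the correlation described at the start of this section.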
Add metrics
--------------

Spans are a great way to get detailed information about what your application is doing, but
what about a more aggregated perspective? OpenTelemetry provides support for metrics. Metrics are a time series
of values that express things such as CPU utilization, request count for an HTTP server, or a
business metric such as transactions.

You can annotate all metrics with labels. Labels are additional qualifiers that describe what
subdivision of the measurements the metric represents.

The following example emits metrics to your console, similar to the trace example:

.. literalinclude:: getting_started/metrics_example.py
    :language: python
    :lines: 15-

The sleep functions cause the script to take a while, but it eventually yields the following output:

.. code-block:: sh

    $ python metrics_example.py
    ConsoleMetricsExporter(data="Counter(name="requests", description="number of requests")", labels="(('environment', 'staging'),)", value=25)
    ConsoleMetricsExporter(data="Counter(name="requests", description="number of requests")", labels="(('environment', 'staging'),)", value=45)
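One way the cumulative 25 then 45 values above could arise is a counter keyed by its label set. This standard-library sketch is conceptual, not the SDK's implementation:

```python
from collections import defaultdict

class LabeledCounter:
    """Toy counter: each distinct label set gets its own running total."""

    def __init__(self, name):
        self.name = name
        self.series = defaultdict(int)

    def add(self, value, labels):
        # Normalize labels to a sorted tuple so key order doesn't matter.
        self.series[tuple(sorted(labels.items()))] += value

requests_counter = LabeledCounter("requests")
requests_counter.add(25, {"environment": "staging"})
requests_counter.add(20, {"environment": "staging"})
```

After the two ``add`` calls, the staging series holds 45, matching the second exported value in the output above.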
Use metrics with Prometheus
------------------------------

It's valuable to have a data store for metrics so you can visualize and query the data. A common solution is
`Prometheus <https://prometheus.io/>`_, which provides a server to scrape and store time series data.

Start by bringing up a Prometheus instance to scrape your application. Write the following configuration:

.. code-block:: yaml
@@ -239,43 +237,43 @@ Let's start by bringing up a Prometheus instance ourselves, to scrape our applic
      static_configs:
        - targets: ['localhost:8000']
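Only the tail of the configuration survives in this diff; a minimal ``prometheus.yml`` consistent with that fragment might look like the following, where the job name and scrape interval are illustrative assumptions:

```yaml
# Illustrative sketch; only the static_configs target is taken from the
# fragment above, the rest are assumed values.
scrape_configs:
  - job_name: 'otel-getting-started'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:8000']
```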
Then start a Docker container for the instance:

.. code-block:: sh

    # --net=host will not work properly outside of Linux.
    docker run --net=host -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus \
        --log.level=debug --config.file=/etc/prometheus/prometheus.yml

Install an exporter specific to Prometheus for your Python application:

.. code-block:: sh

    pip install opentelemetry-exporter-prometheus

Use that exporter instead of the ``ConsoleMetricsExporter``:

.. literalinclude:: getting_started/prometheus_example.py
    :language: python
    :lines: 15-

The ``PrometheusMetricsExporter`` serves your metrics locally on port 8000, where the Prometheus instance configured above scrapes them.
Visit the Prometheus UI (http://localhost:9090) to view your metrics.
Use the OpenTelemetry Collector for traces and metrics
--------------------------------------------------------

Although it's possible to directly export your telemetry data to specific backends, you might have more complex use cases such as the following:

* A single telemetry sink shared by multiple services, to reduce the overhead of switching exporters.
* Aggregating metrics or traces across multiple services, running on multiple hosts.

To enable a broad range of aggregation strategies, OpenTelemetry provides the `opentelemetry-collector <https://github.com/open-telemetry/opentelemetry-collector>`_.
The Collector is a flexible application that can consume trace and metric data and export to multiple other backends, including to another instance of the Collector.

Start the Collector locally to see how the Collector works in practice. Write the following file:

.. code-block:: yaml
@@ -299,7 +297,7 @@ To see how this works in practice, let's start the Collector locally. Write the
          receivers: [opencensus]
          exporters: [logging]
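Only the tail of the pipeline definition survives in this diff; a minimal Collector config consistent with those lines might look like the following, with the receiver and exporter settings as illustrative assumptions:

```yaml
# Illustrative sketch; only the pipeline's receivers/exporters lines are
# taken from the fragment above.
receivers:
  opencensus:
exporters:
  logging:
    loglevel: debug
service:
  pipelines:
    traces:
      receivers: [opencensus]
      exporters: [logging]
```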

Then start the Docker container:

.. code-block:: sh
@@ -314,7 +312,7 @@ Install the OpenTelemetry Collector exporter:
    pip install opentelemetry-exporter-otlp

Finally, execute the following script:

.. literalinclude:: getting_started/otlpcollector_example.py
    :language: python