content/guides/servicediscovery.md
## How to set it up

To use Service Discovery, you simply need to define the configuration templates for the images you want to monitor in a key-value store that the Agent can reach.
Here is the structure of a configuration template:

...

You also need to configure the Datadog Agents of the environment to enable service discovery using this store as a backend. To do so, edit the datadog.conf file to modify these options as needed:

    # For now only docker is supported so you just need to un-comment this line.
    # service_discovery_backend: docker
    #
    # Define which key/value store must be used to look for configuration templates.
    # Default is etcd. Consul is also supported.
    # sd_config_backend: etcd

    # Settings for connecting to the backend. These are the default, edit them if you run a different config.
    # sd_backend_host: 127.0.0.1
    # sd_backend_port: 4001

    # By default, the agent will look for the configuration templates under the
    # `/datadog/check_configs` key in the back-end.
    # If you wish otherwise, uncomment this option and modify its value.
    # sd_template_dir: /datadog/check_configs
### Running and configuring the Agent in a container

The above settings can be passed to the dd-agent container through the following environment variables:

    SD_BACKEND <-> service_discovery_backend
    SD_CONFIG_BACKEND <-> sd_config_backend
    SD_BACKEND_HOST <-> sd_backend_host
    SD_BACKEND_PORT <-> sd_backend_port
    SD_TEMPLATE_DIR <-> sd_template_dir
Available tags:

    datadog/docker-dd-agent:latest (has the Docker check preconfigured)
    datadog/docker-dd-agent:kubernetes (has the Docker and Kubernetes checks preconfigured)
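For example, the containerized Agent could be started with those variables set. This is a sketch: the API key and the etcd address below are placeholders, not values from this guide.

```shell
docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e API_KEY=<YOUR_API_KEY> \
  -e SD_BACKEND=docker \
  -e SD_CONFIG_BACKEND=etcd \
  -e SD_BACKEND_HOST=<ETCD_HOST> \
  -e SD_BACKEND_PORT=4001 \
  datadog/docker-dd-agent:latest
```

Mounting the Docker socket is what lets the Agent's Docker check and service discovery see the other containers on the host.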
### Setup NGINX

The default NGINX image doesn't have the stub_status_module enabled, so we first need to build an image (named `custom-nginx` here) that configures the /nginx_status endpoint.

...

Set up a configuration template in the form of a few keys in a key-value store the Agent can reach. Here is an example using etcd:
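A sketch of what those keys might look like as etcdctl commands. The instance parameter shown (`nginx_status_url`) is an assumption based on the nginx check, and the exact values are illustrative:

```shell
etcdctl mkdir /datadog/check_configs/custom-nginx
etcdctl set /datadog/check_configs/custom-nginx/check_names '["nginx"]'
etcdctl set /datadog/check_configs/custom-nginx/init_configs '[{}]'
etcdctl set /datadog/check_configs/custom-nginx/instances '[{"nginx_status_url": "http://%%host%%/nginx_status/"}]'
```

The three keys mirror the `check_names`, `init_configs`, and `instances` structure used throughout this guide.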
...

If the Agent is configured to use consul instead:

...
*Notice the format of template variables: `%%host%%`. For now host and port are supported on every platform. Kubernetes users can also use the `tags` variable that collects relevant tags like the pod name and node name from the Kubernetes API. Support for more variables and platforms is planned, and feature requests are welcome.*
Now every Agent will be able to detect an nginx instance running on its host and set up a check for it automatically. No need to restart the Agent every time the container starts or stops, and no other configuration file to modify.

### Template variables
To automate the resolution of parameters like the host IP address or its port, the agent uses template variables in this format: `%%variable%%`.

...

Let's take the example of the port variable: a rabbitmq container with the management ...
The default management port for the rabbitmq image is `15672` with index 4 in the list (starting from 0), so the template variable needs to look like `%%port_4%%`.

It is also possible, starting from version `5.8.3` of the agent, to use keys as a suffix in case a dictionary is expected. This is particularly useful to select an IP address for a container that has several networks attached. The format is the same: `%%variable_key%%`.

As an example, if the rabbitmq container mentioned above is available over two networks, `bridge` and `swarm`, using `%%host_swarm%%` will pick the IP address from the swarm network.

Note that for the `host` variable, if several networks are found and no key is passed, the agent attempts to use the default `bridge` network.

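The substitution rules above can be sketched in a few lines of Python. This is an illustrative re-implementation, not the agent's actual code; the port list matches the rabbitmq example, while the network addresses are made up:

```python
import re

def resolve_template(value, ports, hosts, default_net="bridge"):
    """Resolve %%variable%% placeholders: %%port%% is the last exposed
    port, %%port_N%% is index N in the sorted port list, %%host%% is
    the IP on the default bridge network, %%host_<net>%% the IP on <net>.
    (Illustrative sketch, not the Datadog agent's actual code.)"""
    def substitute(match):
        name, _, key = match.group(1).partition("_")
        if name == "port":
            return str(ports[int(key)] if key else ports[-1])
        if name == "host":
            return hosts[key or default_net]
        raise ValueError("unsupported template variable: %s" % name)
    return re.sub(r"%%(\w+?)%%", substitute, value)

# A sorted port list in which 15672 sits at index 4, as in the rabbitmq
# example above; the IP addresses are made up for illustration.
ports = [4369, 5671, 5672, 15671, 15672]
hosts = {"bridge": "172.17.0.3", "swarm": "10.0.0.7"}

print(resolve_template("http://%%host_swarm%%:%%port_4%%", ports, hosts))
# -> http://10.0.0.7:15672
```

With no key suffix, `%%host%%` falls back to the `bridge` entry, matching the default-network behavior described above.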
### Configuring multiple checks for the same image

Sometimes enabling several checks on a single container is needed. For instance, if you run a Java service that provides an HTTP API, using the HTTP check and the JMX integration at the same time makes perfect sense. To declare that in templates, simply add elements to the `check_names`, `init_configs`, and `instances` lists. These elements will be matched together based on their index in their respective lists.
In the previous example of the custom nginx image, adding http_check would look like this:

...
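As a hedged sketch of such a combined template, each of the three keys now holds a two-element list, matched by index (the http_check instance parameters are illustrative):

```shell
etcdctl set /datadog/check_configs/custom-nginx/check_names '["nginx", "http_check"]'
etcdctl set /datadog/check_configs/custom-nginx/init_configs '[{}, {}]'
etcdctl set /datadog/check_configs/custom-nginx/instances '[[{"nginx_status_url": "http://%%host%%/nginx_status/"}], [{"name": "nginx", "url": "http://%%host%%", "timeout": 1}]]'
```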

### Monitoring your custom container

Service discovery works with any image. One important note though: for the `%%port%%` variable to be interpolated, the current version needs the container to expose the targeted port. See the NGINX Dockerfile for reference.

### Image name format in the configuration store

Before version `5.8.3` of the agent, it was required to truncate the image name to its minimum: for the image `quay.io/coreos/etcd:latest`, the key in the configuration store needed to be `datadog/check_configs/etcd/...`.

To make configuration more precise, we now use the complete image identifier in the key, so the agent will look in `datadog/check_configs/quay.io/coreos/etcd:latest/...` and fall back to the old format if no template was found, to ensure backward compatibility.

#### Using a Docker label to specify the template path

In case you need to match different templates with containers running the same image, it is also possible, starting with `5.8.3`, to define explicitly which path the agent should look up in the configuration store to find a template, using the `com.datadoghq.sd.check.id` label.

For example, if a container has this label configured as `com.datadoghq.sd.check.id: foobar`, the agent will look for a configuration template in the store under the key `datadog/check_configs/foobar/...`.
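The label can be attached when the container is started; for instance (the image name here is illustrative):

```shell
docker run -d --label com.datadoghq.sd.check.id=foobar my-registry/my-nginx:v2
```

This lets two containers built from the same image resolve to different templates in the store.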

### Kubernetes users

...

Once the cluster is running, simply use the K/V store service IP address and port ...
Then write your configuration templates, and let the Agent detect your running pods and take care of re-configuring checks.

#### Examples

Following is an example of how to set up templates for an NGINX and PostgreSQL stack. The example uses etcd as the configuration store.
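A sketch of what the two templates could look like as etcd keys. The instance parameters are illustrative placeholders, not values from this guide:

```shell
etcdctl set /datadog/check_configs/custom-nginx/check_names '["nginx"]'
etcdctl set /datadog/check_configs/custom-nginx/init_configs '[{}]'
etcdctl set /datadog/check_configs/custom-nginx/instances '[{"nginx_status_url": "http://%%host%%/nginx_status/"}]'

etcdctl set /datadog/check_configs/postgres/check_names '["postgres"]'
etcdctl set /datadog/check_configs/postgres/init_configs '[{}]'
etcdctl set /datadog/check_configs/postgres/instances '[{"host": "%%host%%", "port": "%%port%%", "username": "datadog", "password": "<PASSWORD>"}]'
```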