articles/azure-monitor/insights/container-insights-agent-config.md
+66 -11 (66 additions, 11 deletions)
@@ -11,28 +11,28 @@ ms.service: azure-monitor
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
-ms.date: 07/12/2019
+ms.date: 08/14/2019
ms.author: magoedte
---

# Configure agent data collection for Azure Monitor for containers

-Azure Monitor for containers collects stdout, stderr, and environmental variables from container workloads deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) from the containerized agent. This agent can also collect time series data (also referred to as metrics) from Prometheus using the containerized agent without having to setup and manage a Prometheus server and database. You can configure agent data collection settings by creating a custom Kubernetes ConfigMaps to control this experience.
+Azure Monitor for containers uses its containerized agent to collect stdout, stderr, and environment variables from container workloads deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). The agent can also collect time series data (also referred to as metrics) from Prometheus without requiring you to set up and manage a Prometheus server and database. You can configure the agent's data collection settings by creating a custom Kubernetes ConfigMap to control this experience.

This article demonstrates how to create a ConfigMap and configure data collection based on your requirements.

>[!NOTE]
>Support for Prometheus is a feature in public preview at this time.
>

-## Configure your cluster with custom data collection settings
+## ConfigMap file settings overview

A template ConfigMap file is provided so that you can easily edit it with your customizations instead of creating it from scratch. Before you start, review the Kubernetes documentation about [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and familiarize yourself with how to create, configure, and deploy ConfigMaps. This allows you to filter stderr and stdout per namespace or across the entire cluster, and to filter environment variables for any container running across all pods/nodes in the cluster.

>[!IMPORTANT]
>The minimum agent version supported to collect stdout, stderr, and environment variables from container workloads is ciprod06142019 or later. The minimum agent version supported for scraping Prometheus metrics is ciprod07092019 or later. To verify your agent version, on the **Node** tab select a node, and in the properties pane note the value of the **Agent Image Tag** property.

-### Overview of configurable data collection settings
+### Data collection settings

The following are the settings that can be configured to control data collection.
@@ -46,21 +46,33 @@ The following are the settings that can be configured to control data collection
|`[log_collection_settings.stderr] exclude_namespaces =`|String |Comma-separated array |Array of Kubernetes namespaces for which stderr logs will not be collected. This setting is effective only if `log_collection_settings.stderr.enabled` is set to `true`. If not specified in ConfigMap, the default value is `exclude_namespaces = ["kube-system"]`. |
|`[log_collection_settings.env_var] enabled =`|Boolean | true or false | This setting controls whether environment variable collection is enabled. When set to `false`, no environment variables are collected for any container running across all pods/nodes in the cluster. If not specified in ConfigMap, the default value is `enabled = true`. |
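
As a rough illustration of how the keys in this table translate into the ConfigMap, here is a minimal sketch of the log collection section. It assumes the `log-data-collection-settings` key and TOML-style layout used by the template file linked later in this article; the values simply restate the documented defaults.

```
log-data-collection-settings: |-
   # Log data collection settings
   [log_collection_settings]
      [log_collection_settings.stdout]
         enabled = true
         # Collect stdout from all namespaces except kube-system (the documented default)
         exclude_namespaces = ["kube-system"]
      [log_collection_settings.stderr]
         enabled = true
         exclude_namespaces = ["kube-system"]
      [log_collection_settings.env_var]
         # Leave environment variable collection enabled for all containers
         enabled = true
```

Omitting any of these settings leaves the defaults from the table in effect.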

-##Overview of configurable Prometheus scraping settings
+### Prometheus scraping settings

-Active scraping of metrics from Prometheus are performed from one of two perspectives:
+Azure Monitor for containers provides a seamless experience to enable collection of Prometheus metrics through the scraping mechanisms shown in the following table. The metrics are collected through a set of settings specified in a single ConfigMap file, which is the same file used to configure collection of stdout, stderr, and environment variables from container workloads.
+
+Active scraping of metrics from Prometheus is performed from one of two perspectives:

* Cluster-wide - HTTP URL and discover targets from listed endpoints of a service, Kubernetes services such as kube-dns and kube-state-metrics, and pod annotations specific to an application. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus_data_collection_settings.cluster]*.
* Node-wide - HTTP URL and discover targets from listed endpoints of a service. Metrics collected in this context will be defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.
+When a URL is specified, Azure Monitor for containers only scrapes the endpoint. When a Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address, and then the resolved service is scraped.

|Scope | Key | Data type | Value | Description |
|------|-----|-----------|-------|-------------|
| Cluster-wide |||| Specify any one of the following three methods to scrape endpoints for metrics. |
||`urls`| String | Comma-separated array | HTTP endpoint (either an IP address or a valid URL path). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Azure Monitor for containers parameter and can be used instead of the node IP address. It must be all uppercase.) |
||`kubernetes_services`| String | Comma-separated array | An array of Kubernetes services to scrape metrics from, such as kube-state-metrics. For example, `kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics", "http://my-service-dns.my-namespace:9100/metrics"]`.|
||`monitor_kubernetes_pods`| Boolean | true or false | When set to `true` in the cluster-wide settings, the Azure Monitor for containers agent scrapes Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:`|
-||`prometheus.io/scrape`| Boolean | true or false | Enables scraping of the pod. |
-||`prometheus.io/scheme`| String | http or https | Defaults to scrapping over HTTP. If required, set to `https`. |
+||`prometheus.io/scrape`| Boolean | true or false | Enables scraping of the pod. `monitor_kubernetes_pods` must be set to `true`. |
+||`prometheus.io/scheme`| String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
||`prometheus.io/path`| String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path is not `/metrics`, define it with this annotation. |
||`prometheus.io/port`| String | 9102 | Specify a port to listen on. If the port is not set, it defaults to 9102. |
| Node-wide |`urls`| String | Comma-separated array | HTTP endpoint (either an IP address or a valid URL path). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Azure Monitor for containers parameter and can be used instead of the node IP address. It must be all uppercase.) |
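
To make the `prometheus.io/*` annotation rows above concrete, the following is a hypothetical pod specification showing where those annotations are placed. The pod name, image, and port are invented for this sketch, and it assumes `monitor_kubernetes_pods = true` is set in the cluster-wide section of the ConfigMap.

```
apiVersion: v1
kind: Pod
metadata:
  name: my-metrics-app                 # hypothetical application pod
  annotations:
    prometheus.io/scrape: "true"       # opt this pod in to scraping
    prometheus.io/scheme: "http"       # set to "https" if the metrics endpoint is secured
    prometheus.io/path: "/mymetrics"   # only needed if the path is not /metrics
    prometheus.io/port: "8000"         # only needed if the port is not 9102
spec:
  containers:
  - name: my-metrics-app
    image: myregistry/my-metrics-app:v1   # placeholder image
    ports:
    - containerPort: 8000
```
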
@@ -69,16 +81,59 @@ Active scraping of metrics from Prometheus are performed from one of two perspec

ConfigMap is a global list and there can be only one ConfigMap applied to the agent. You cannot have another ConfigMap overruling the collections.

-###Configure and deploy ConfigMaps
+## Configure and deploy ConfigMaps

Perform the following steps to configure and deploy your ConfigMap configuration file to your cluster.

1. [Download](https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/Kubernetes/container-azm-ms-agentconfig.yaml) the template ConfigMap yaml file and save it as container-azm-ms-agentconfig.yaml.
1. Edit the ConfigMap yaml file with your customizations.

    - To exclude specific namespaces for stdout log collection, configure the key/value pair by using the following example: `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.
    - To disable environment variable collection for a specific container, set the key/value pair `[log_collection_settings.env_var] enabled = true` to enable variable collection globally, and then follow the steps [here](container-insights-manage-agent.md#how-to-disable-environment-variable-collection-on-a-container) to complete the configuration for that specific container.
    - To disable stderr log collection cluster-wide, configure the key/value pair by using the following example: `[log_collection_settings.stderr] enabled = false`.
+
+    - The following examples demonstrate how to configure the ConfigMap file to collect metrics from a URL cluster-wide, from an agent's DaemonSet node-wide, and by specifying a pod annotation.
+
+      - Scrape Prometheus metrics from a specific URL across the cluster.
+
+        ```
+        prometheus-data-collection-settings: |-
+           # Custom Prometheus metrics data collection settings
+           [prometheus_data_collection_settings.cluster]
+               interval = "1m"   ## Valid time units are ns, us (or µs), ms, s, m, h.
+               fieldpass = ["metric_to_pass1", "metric_to_pass12"]   ## specify metrics to pass through
+               fielddrop = ["metric_to_drop"]   ## specify metrics to drop from collecting
+               urls = ["http://myurl:9101/metrics"]   ## An array of urls to scrape metrics from
+        ```
+
+      - Scrape Prometheus metrics from an agent's DaemonSet running in every node in the cluster.
+
+        ```
+        prometheus-data-collection-settings: |-
+           # Custom Prometheus metrics data collection settings
+           [prometheus_data_collection_settings.node]
+               interval = "1m"   ## Valid time units are ns, us (or µs), ms, s, m, h.
+               # Node level scrape endpoint(s). These metrics will be scraped from agent's DaemonSet running in every node in the cluster
+               urls = ["http://$NODE_IP/metrics"]
+        ```
+
+      - Scrape Prometheus metrics by specifying a pod annotation.
+
+        ```
+        prometheus-data-collection-settings: |-
+           # Custom Prometheus metrics data collection settings
+           [prometheus_data_collection_settings.cluster]
+               interval = "1m"   ## Valid time units are ns, us (or µs), ms, s, m, h
+               monitor_kubernetes_pods = true   ## The replicaset will scrape Kubernetes pods for the following prometheus annotations:
+               ## - prometheus.io/scrape:"true"       # Enable scraping for this pod
+               ## - prometheus.io/scheme:"http"       # If the metrics endpoint is secured, set this to `https`; otherwise it defaults to 'http'
+               ## - prometheus.io/path:"/mymetrics"   # If the metrics path is not /metrics, define it with this annotation
+               ## - prometheus.io/port:"8000"         # If the port is not 9102, use this annotation
+        ```

1. Create the ConfigMap by running the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
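
If it helps to picture what the `kubectl apply` command in step 3 submits, here is a rough skeleton of the ConfigMap file. The `container-azm-ms-agentconfig` name matches the confirmation message shown later in this article; the `kube-system` namespace and the `schema-version` value are assumptions based on the template file, and the two data keys hold the settings fragments shown in the earlier examples.

```
kind: ConfigMap
apiVersion: v1
metadata:
  name: container-azm-ms-agentconfig   # name echoed in the "configmap ... updated" message
  namespace: kube-system               # assumed agent namespace
data:
  schema-version: v1                   # assumed; an unsupported value is reported in the agent logs
  log-data-collection-settings: |-
    [log_collection_settings]
       # stdout, stderr, and env_var settings go here
  prometheus-data-collection-settings: |-
    [prometheus_data_collection_settings.cluster]
       # cluster-wide Prometheus scrape settings go here
    [prometheus_data_collection_settings.node]
       # node-wide Prometheus scrape settings go here
```

After applying it, you can check for configuration errors with the `kubectl logs` approach described in the next section.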
@@ -93,7 +148,7 @@ To verify the configuration was successfully applied, use the following command
config::unsupported/missing config schema version - 'v21' , using defaults
```

-Errors related to applying configuration changes for Prometheus are also available for review. Either from the logs from an agent pod using the same `kubectl logs` command or from live logs. Live logs shows errors similar to the following:
+Errors related to applying configuration changes for Prometheus are also available for review, either from the logs of an agent pod by using the same `kubectl logs` command or from live logs. Live logs show errors similar to the following:

```
2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
@@ -103,7 +158,7 @@ Errors prevent omsagent from parsing the file, causing it to restart and use the

## Applying updated ConfigMap

-If you have already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can simply edit the ConfigMap file you've previously used and then apply using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml`.
+If you have already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used and then apply it by using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml>`.

The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message similar to the following is displayed and includes the result: `configmap "container-azm-ms-agentconfig" updated`.

articles/azure-monitor/insights/container-insights-faq.md
+6 -3 (6 additions, 3 deletions)
@@ -8,7 +8,7 @@ editor: tysonn
ms.service: azure-monitor
ms.topic: article
ms.workload: infrastructure-services
-ms.date: 08/02/2019
+ms.date: 08/14/2019
ms.author: magoedte

---
@@ -29,7 +29,7 @@ If you are unable to see any data in the Log Analytics workspace at a certain ti

The ContainerInventory table contains information about both stopped and running containers. The table is populated by a workflow inside the agent that queries Docker for all the containers (running and stopped) and forwards that data to the Log Analytics workspace.

-## How do I resolve **Missing Subscription registration** error?
+## How do I resolve *Missing Subscription registration* error?

If you receive the error **Missing Subscription registration for Microsoft.OperationsManagement**, you can resolve it by registering the resource provider **Microsoft.OperationsManagement** in the subscription where the workspace is defined. The documentation for how to do this can be found [here](../../azure-resource-manager/resource-manager-register-provider-errors.md).
@@ -67,7 +67,7 @@ LogEntry : ({“Hello": "This example has multiple lines:","Docker/Moby": "will
```

-For a detailed look at the issue, review the following [github link](https://github.com/moby/moby/issues/22920).
+For a detailed look at the issue, review the following [GitHub link](https://github.com/moby/moby/issues/22920).

## How do I resolve Azure AD errors when I enable live logs?
@@ -82,6 +82,9 @@ If after you enable Azure Monitor for containers for an AKS cluster, you delete