
prometheus.yml Configuration File Explained


Configuration

Prometheus is configured via command-line flags and a configuration file. While the command-line flags configure immutable system parameters (such as storage locations, the amount of data to keep on disk and in memory, etc.), the configuration file defines everything related to scrape jobs and their instances, as well as which rule files to load.

To view all available command-line flags, run ./prometheus -h.

Prometheus can reload its configuration at runtime. If the new configuration is not well-formed, the changes will not be applied. A configuration reload is triggered by sending a SIGHUP to the Prometheus process or by sending an HTTP POST request to the /-/reload endpoint (when the --web.enable-lifecycle flag is enabled). This will also reload any configured rule files.

Configuration file

To specify which configuration file to load, use the --config.file flag.

The file is written in YAML format, defined by the scheme described below. Brackets indicate that a parameter is optional. For non-list parameters the value is set to the specified default.

Generic placeholders are defined as follows:

  • <boolean>: a boolean that can take the values true or false
  • <duration>: a duration matching the regular expression [0-9]+(ms|[smhdwy])
  • <filename>: a valid path in the current working directory
  • <host>: a valid string consisting of a hostname or IP followed by an optional port number
  • <int>: an integer value
  • <labelname>: a string matching the regular expression [a-zA-Z_][a-zA-Z0-9_]*
  • <labelvalue>: a string of unicode characters
  • <path>: a valid URL path
  • <scheme>: a string that can take the values http or https
  • <secret>: a regular string that is a secret, such as a password
  • <string>: a regular string
  • <tmpl_string>: a string which is template-expanded before usage

The other placeholders are specified separately.

A valid example configuration file can be found here.

The global configuration specifies parameters that are valid in all other configuration contexts. They also serve as defaults for other configuration sections.

global:
  # How frequently to scrape targets by default.
  [ scrape_interval: <duration> | default = 1m ]

  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]

  # How frequently to evaluate rules.
  [ evaluation_interval: <duration> | default = 1m ]

  # The labels to add to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    [ <labelname>: <labelvalue> ... ]

  # File to which PromQL queries are logged.
  # Reloading the configuration will reopen the file.
  [ query_log_file: <string> ]

# Rule files specifies a list of globs. Rules and alerts are read from
# all matching files.
rule_files:
  [ - <filepath_glob> ... ]

# A list of scrape configurations.
scrape_configs:
  [ - <scrape_config> ... ]

# Alerting specifies settings related to the Alertmanager.
alerting:
  alert_relabel_configs:
    [ - <relabel_config> ... ]
  alertmanagers:
    [ - <alertmanager_config> ... ]

# Settings related to the remote write feature.
remote_write:
  [ - <remote_write> ... ]

# Settings related to the remote read feature.
remote_read:
  [ - <remote_read> ... ]
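
To make the skeleton above concrete, here is a minimal sketch of a complete prometheus.yml; the job name, rule file path, target address and 15s intervals are illustrative values, not defaults:

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: example

rule_files:
  - "rules/*.yml"

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']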

<scrape_config>

A scrape_config section specifies a set of targets and parameters describing how to scrape them. In the general case, one scrape configuration specifies a single job. In advanced configurations, this may change.

Targets may be statically configured via the static_configs parameter or dynamically discovered using one of the supported service discovery mechanisms.

Additionally, relabel_configs allow advanced modifications to any target and its labels before scraping.

# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
#
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]

# honor_timestamps controls whether Prometheus respects the timestamps present
# in scraped data.
#
# If honor_timestamps is set to "true", the timestamps of the metrics exposed
# by the target will be used.
#
# If honor_timestamps is set to "false", the timestamps of the metrics exposed
# by the target will be ignored.
[ honor_timestamps: <boolean> | default = true ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]

# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: <filename> ]

# Configures the scrape request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# List of Azure service discovery configurations.
azure_sd_configs:
  [ - <azure_sd_config> ... ]

# List of Consul service discovery configurations.
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# List of DigitalOcean service discovery configurations.
digitalocean_sd_configs:
  [ - <digitalocean_sd_config> ... ]

# List of Docker Swarm service discovery configurations.
dockerswarm_sd_configs:
  [ - <dockerswarm_sd_config> ... ]

# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of EC2 service discovery configurations.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]

# List of file service discovery configurations.
file_sd_configs:
  [ - <file_sd_config> ... ]

# List of GCE service discovery configurations.
gce_sd_configs:
  [ - <gce_sd_config> ... ]

# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# List of Marathon service discovery configurations.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]

# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]

# List of OpenStack service discovery configurations.
openstack_sd_configs:
  [ - <openstack_sd_config> ... ]

# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]

# List of Triton service discovery configurations.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of labeled statically configured targets for this job.
static_configs:
  [ - <static_config> ... ]

# List of target relabel configurations.
relabel_configs:
  [ - <relabel_config> ... ]

# List of metric relabel configurations.
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Per-scrape limit on number of scraped samples that will be accepted.
# If more than this number of samples are present after metric relabelling
# the entire scrape will be treated as failed. 0 means no limit.
[ sample_limit: <int> | default = 0 ]

The <job_name> must be unique across all scrape configurations.
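
For example, a scrape config for a hypothetical node exporter job, combining a few of the options above with statically configured targets, could look like this sketch (the job name, addresses and label values are placeholders):

scrape_configs:
  - job_name: 'node'
    scrape_interval: 30s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: ['192.168.1.10:9100', '192.168.1.11:9100']
        labels:
          env: production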

<tls_config>

A tls_config allows configuring TLS connections.

# CA certificate to validate API server certificate with.
[ ca_file: <filename> ]

# Certificate and key files for client cert authentication to the server.
[ cert_file: <filename> ]
[ key_file: <filename> ]

# ServerName extension to indicate the name of the server.
# https://tools.ietf.org/html/rfc4366#section-3.1
[ server_name: <string> ]

# Disable validation of the server certificate.
[ insecure_skip_verify: <boolean> ]
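
As a sketch of how this might look inside a scrape job that requires mutual TLS (all file paths and the server name are placeholder values):

scheme: https
tls_config:
  # CA used to verify the exporter's certificate.
  ca_file: /etc/prometheus/certs/ca.crt
  # Client certificate and key presented to the exporter.
  cert_file: /etc/prometheus/certs/client.crt
  key_file: /etc/prometheus/certs/client.key
  server_name: exporter.example.internal
  insecure_skip_verify: false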

<azure_sd_config>

Azure SD configurations allow retrieving scrape targets from Azure VMs.

The following meta labels are available on targets during relabeling:

  • __meta_azure_machine_id: the machine ID
  • __meta_azure_machine_location: the location the machine runs in
  • __meta_azure_machine_name: the machine name
  • __meta_azure_machine_os_type: the machine operating system
  • __meta_azure_machine_private_ip: the machine's private IP
  • __meta_azure_machine_public_ip: the machine's public IP, if it exists
  • __meta_azure_machine_resource_group: the machine's resource group
  • __meta_azure_machine_tag_<tagname>: each tag value of the machine
  • __meta_azure_machine_scale_set: the name of the scale set which the VM belongs to (this value is only set if you are using a scale set)
  • __meta_azure_subscription_id: the subscription ID
  • __meta_azure_tenant_id: the tenant ID

See below for the configuration options for Azure discovery:

# The information to access the Azure API.
# The Azure environment.
[ environment: <string> | default = AzurePublicCloud ]

# The authentication method, either OAuth or ManagedIdentity.
# See https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
[ authentication_method: <string> | default = OAuth]
# The subscription ID. Always required.
subscription_id: <string>
# Optional tenant ID. Only required with authentication_method OAuth.
[ tenant_id: <string> ]
# Optional client ID. Only required with authentication_method OAuth.
[ client_id: <string> ]
# Optional client secret. Only required with authentication_method OAuth.
[ client_secret: <secret> ]

# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 300s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

<consul_sd_config>

Consul SD configurations allow retrieving scrape targets from Consul's Catalog API.

The following meta labels are available on targets during relabeling:

  • __meta_consul_address: the address of the target
  • __meta_consul_dc: the datacenter name for the target
  • __meta_consul_health: the health status of the service
  • __meta_consul_metadata_<key>: each node metadata key value of the target
  • __meta_consul_node: the node name defined for the target
  • __meta_consul_service_address: the service address of the target
  • __meta_consul_service_id: the service ID of the target
  • __meta_consul_service_metadata_<key>: each service metadata key value of the target
  • __meta_consul_service_port: the service port of the target
  • __meta_consul_service: the name of the service the target belongs to
  • __meta_consul_tagged_address_<key>: each node tagged address key value of the target
  • __meta_consul_tags: the list of tags of the target joined by the tag separator
# The information to access the Consul API. It is to be defined
# as the Consul documentation requires.
[ server: <host> | default = "localhost:8500" ]
[ token: <secret> ]
[ datacenter: <string> ]
[ scheme: <string> | default = "http" ]
[ username: <string> ]
[ password: <secret> ]

tls_config:
  [ <tls_config> ]

# A list of services for which targets are retrieved. If omitted, all services
# are scraped.
services:
  [ - <string> ]

# See https://www.consul.io/api/catalog.html#list-nodes-for-service to know more
# about the possible filters that can be used.

# An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list.
tags:
  [ - <string> ]

# Node metadata key/value pairs to filter nodes for a given service.
[ node_meta:
  [ <string>: <string> ... ] ]

# The string by which Consul tags are joined into the tag label.
[ tag_separator: <string> | default = , ]

# Allow stale Consul results (see https://www.consul.io/api/features/consistency.html). Will reduce load on Consul.
[ allow_stale: <boolean> | default = true ]

# The time after which the provided names are refreshed.
# On large setup it might be a good idea to increase this value because the catalog will change all the time.
[ refresh_interval: <duration> | default = 30s ]

Note that the IP address and port used to scrape the targets is assembled as <__meta_consul_address>:<__meta_consul_service_port>. However, in some Consul setups the relevant address is in __meta_consul_service_address. In those cases, you can use the relabel feature to replace the special __address__ label.
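
For illustration, a relabel rule that switches the scrape address to the service address while keeping the service port could look like the following sketch (it assumes the service address is actually set for the registered service):

relabel_configs:
  # Build __address__ from the Consul service address and service port.
  - source_labels: [__meta_consul_service_address, __meta_consul_service_port]
    separator: ':'
    target_label: __address__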

The relabeling phase is the preferred and more powerful way to filter services or nodes for a service based on arbitrary labels. For users with thousands of services it can be more efficient to use the Consul API directly, which has basic support for filtering nodes (currently by node metadata and a single tag).

<digitalocean_sd_config>

DigitalOcean SD configurations allow retrieving scrape targets from DigitalOcean's Droplet API. This service discovery uses the public IPv4 address by default, which can be changed with relabeling, as demonstrated in the Prometheus digitalocean-sd configuration file.

The following meta labels are available on targets during relabeling:

  • __meta_digitalocean_droplet_id: the ID of the droplet
  • __meta_digitalocean_droplet_name: the name of the droplet
  • __meta_digitalocean_image: the image name of the droplet
  • __meta_digitalocean_private_ipv4: the private IPv4 of the droplet
  • __meta_digitalocean_public_ipv4: the public IPv4 of the droplet
  • __meta_digitalocean_public_ipv6: the public IPv6 of the droplet
  • __meta_digitalocean_region: the region of the droplet
  • __meta_digitalocean_size: the size of the droplet
  • __meta_digitalocean_status: the status of the droplet
  • __meta_digitalocean_features: the comma-separated list of features of the droplet
  • __meta_digitalocean_tags: the comma-separated list of tags of the droplet
# Authentication information used to authenticate to the API server.
# Note that `basic_auth`, `bearer_token` and `bearer_token_file` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information, not currently supported by DigitalOcean.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional bearer token authentication information.
[ bearer_token: <secret> ]

# Optional bearer token file authentication information.
[ bearer_token_file: <filename> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# The port to scrape metrics from.
[ port: <int> | default = 80 ]

# The time after which the droplets are refreshed.
[ refresh_interval: <duration> | default = 60s ]

<dockerswarm_sd_config>

Docker Swarm SD configurations allow retrieving scrape targets from the Docker Swarm engine.

One of the following roles can be configured to discover targets:

services

The services role is used to discover Swarm services.

Available meta labels:

  • __meta_dockerswarm_service_id: the ID of the service
  • __meta_dockerswarm_service_name: the name of the service
  • __meta_dockerswarm_service_mode: the mode of the service
  • __meta_dockerswarm_service_endpoint_port_name: the name of the endpoint port, if available
  • __meta_dockerswarm_service_endpoint_port_publish_mode: the publish mode of the endpoint port
  • __meta_dockerswarm_service_label_<labelname>: each label of the service
  • __meta_dockerswarm_service_task_container_hostname: the container hostname of the target, if available
  • __meta_dockerswarm_service_task_container_image: the container image of the target
  • __meta_dockerswarm_service_updating_status: the status of the service, if available
  • __meta_dockerswarm_network_id: the ID of the network
  • __meta_dockerswarm_network_name: the name of the network
  • __meta_dockerswarm_network_ingress: whether the network is ingress
  • __meta_dockerswarm_network_internal: whether the network is internal
  • __meta_dockerswarm_network_label_<labelname>: each label of the network
  • __meta_dockerswarm_network_scope: the scope of the network

tasks

The tasks role is used to discover Swarm tasks.

Available meta labels:

  • __meta_dockerswarm_task_id: the ID of the task
  • __meta_dockerswarm_task_container_id: the container ID of the task
  • __meta_dockerswarm_task_desired_state: the desired state of the task
  • __meta_dockerswarm_task_label_<labelname>: each label of the task
  • __meta_dockerswarm_task_slot: the slot of the task
  • __meta_dockerswarm_task_state: the state of the task
  • __meta_dockerswarm_task_port_publish_mode: the publish mode of the task port
  • __meta_dockerswarm_service_id: the ID of the service
  • __meta_dockerswarm_service_name: the name of the service
  • __meta_dockerswarm_service_mode: the mode of the service
  • __meta_dockerswarm_service_label_<labelname>: each label of the service
  • __meta_dockerswarm_network_id: the ID of the network
  • __meta_dockerswarm_network_name: the name of the network
  • __meta_dockerswarm_network_ingress: whether the network is ingress
  • __meta_dockerswarm_network_internal: whether the network is internal
  • __meta_dockerswarm_network_label_<labelname>: each label of the network
  • __meta_dockerswarm_network_scope: the scope of the network
  • __meta_dockerswarm_node_id: the ID of the node
  • __meta_dockerswarm_node_hostname: the hostname of the node
  • __meta_dockerswarm_node_address: the address of the node
  • __meta_dockerswarm_node_availability: the availability of the node
  • __meta_dockerswarm_node_label_<labelname>: each label of the node
  • __meta_dockerswarm_node_platform_architecture: the architecture of the node
  • __meta_dockerswarm_node_platform_os: the operating system of the node
  • __meta_dockerswarm_node_role: the role of the node
  • __meta_dockerswarm_node_status: the status of the node

The __meta_dockerswarm_network_* meta labels are not populated for ports which are published with mode=host.

nodes

The nodes role is used to discover Swarm nodes.

Available meta labels:

  • __meta_dockerswarm_node_address: the address of the node
  • __meta_dockerswarm_node_availability: the availability of the node
  • __meta_dockerswarm_node_engine_version: the version of the node engine
  • __meta_dockerswarm_node_hostname: the hostname of the node
  • __meta_dockerswarm_node_id: the ID of the node
  • __meta_dockerswarm_node_label_<labelname>: each label of the node
  • __meta_dockerswarm_node_manager_address: the address of the manager component of the node
  • __meta_dockerswarm_node_manager_leader: the leadership status of the manager component of the node (true or false)
  • __meta_dockerswarm_node_manager_reachability: the reachability of the manager component of the node
  • __meta_dockerswarm_node_platform_architecture: the architecture of the node
  • __meta_dockerswarm_node_platform_os: the operating system of the node
  • __meta_dockerswarm_node_role: the role of the node
  • __meta_dockerswarm_node_status: the status of the node

See below for the configuration options for Docker Swarm discovery:

# Address of the Docker daemon.
host: <string>

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# Role of the targets to retrieve. Must be `services`, `tasks`, or `nodes`.
role: <string>

# The port to scrape metrics from, when `role` is nodes.
[ port: <int> | default = 80 ]

# The time after which the discovered targets are refreshed.
[ refresh_interval: <duration> | default = 60s ]

# Authentication information used to authenticate to the Docker daemon.
# Note that `basic_auth`, `bearer_token` and `bearer_token_file` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional bearer token authentication information.
[ bearer_token: <secret> ]

# Optional bearer token file authentication information.
[ bearer_token_file: <filename> ]

See this example Prometheus configuration file for a detailed example of configuring Prometheus for Docker Swarm.

<dns_sd_config>

A DNS-based service discovery configuration allows specifying a set of DNS domain names which are periodically queried to discover a list of targets. The DNS servers to be contacted are read from /etc/resolv.conf.

This service discovery method only supports basic DNS A, AAAA and SRV record queries, but not the advanced DNS-SD approach specified in RFC 6763.

During the relabeling phase, the meta label __meta_dns_name is available on each target and is set to the record name that produced the discovered target.

# A list of DNS domain names to be queried.
names:
  [ - <string> ]

# The type of DNS query to perform. One of SRV, A, or AAAA.
[ type: <string> | default = 'SRV' ]

# The port number used if the query type is not SRV.
[ port: <int>]

# The time after which the provided names are refreshed.
[ refresh_interval: <duration> | default = 30s ]
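
As an illustrative sketch (the domain name is hypothetical), an SRV-based discovery entry inside a scrape config could look like:

dns_sd_configs:
  - names:
      - '_prometheus._tcp.example.com'
    type: SRV
    refresh_interval: 30s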

<ec2_sd_config>

EC2 SD configurations allow retrieving scrape targets from AWS EC2 instances. The private IP address is used by default, but may be changed to the public IP address with relabeling.

The following meta labels are available on targets during relabeling:

  • __meta_ec2_ami: the EC2 Amazon Machine Image
  • __meta_ec2_architecture: the architecture of the instance
  • __meta_ec2_availability_zone: the availability zone in which the instance is running
  • __meta_ec2_instance_id: the EC2 instance ID
  • __meta_ec2_instance_lifecycle: the lifecycle of the EC2 instance, set only for 'spot' or 'scheduled' instances, absent otherwise
  • __meta_ec2_instance_state: the state of the EC2 instance
  • __meta_ec2_instance_type: the type of the EC2 instance
  • __meta_ec2_owner_id: the ID of the AWS account that owns the EC2 instance
  • __meta_ec2_platform: the Operating System platform, set to 'windows' on Windows servers, absent otherwise
  • __meta_ec2_primary_subnet_id: the subnet ID of the primary network interface, if available
  • __meta_ec2_private_dns_name: the private DNS name of the instance, if available
  • __meta_ec2_private_ip: the private IP address of the instance, if present
  • __meta_ec2_public_dns_name: the public DNS name of the instance, if available
  • __meta_ec2_public_ip: the public IP address of the instance, if available
  • __meta_ec2_subnet_id: comma-separated list of subnet IDs in which the instance is running, if available
  • __meta_ec2_tag_<tagkey>: each tag value of the instance
  • __meta_ec2_vpc_id: the ID of the VPC in which the instance is running, if available

See below for the configuration options for EC2 discovery:

# The information to access the EC2 API.

# The AWS region. If blank, the region from the instance metadata is used.
[ region: <string> ]

# Custom endpoint to be used.
[ endpoint: <string> ]

# The AWS API keys. If blank, the environment variables `AWS_ACCESS_KEY_ID`
# and `AWS_SECRET_ACCESS_KEY` are used.
[ access_key: <string> ]
[ secret_key: <secret> ]
# Named AWS profile used to connect to the API.
[ profile: <string> ]

# AWS Role ARN, an alternative to using AWS API keys.
[ role_arn: <string> ]

# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

# Filters can be used optionally to filter the instance list by other criteria.
# Available filter criteria can be found here:
# https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html
# Filter API documentation: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Filter.html
filters:
  [ - name: <string>
      values: <string>, [...] ]

The relabeling phase is the preferred and more powerful way to filter targets based on arbitrary labels. For users with thousands of instances it can be more efficient to use the EC2 API directly, which has support for filtering instances.
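
For illustration, an ec2_sd_configs entry restricted to running instances and scraping a hypothetical exporter port might look like the following sketch (region, port and filter values are examples; credentials are taken from the environment here):

ec2_sd_configs:
  - region: eu-west-1
    port: 9100
    refresh_interval: 120s
    filters:
      - name: instance-state-name
        values:
          - running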

<openstack_sd_config>

OpenStack SD configurations allow retrieving scrape targets from OpenStack Nova instances.

One of the following <openstack_role> types can be configured to discover targets:

hypervisor

The hypervisor role discovers one target per Nova hypervisor node. The target address defaults to the host_ip attribute of the hypervisor.

The following meta labels are available on targets during relabeling:

  • __meta_openstack_hypervisor_host_ip: the hypervisor node's IP address.
  • __meta_openstack_hypervisor_id: the hypervisor node's ID.
  • __meta_openstack_hypervisor_name: the hypervisor node's name.
  • __meta_openstack_hypervisor_state: the hypervisor node's state.
  • __meta_openstack_hypervisor_status: the hypervisor node's status.
  • __meta_openstack_hypervisor_type: the hypervisor node's type.

instance

The instance role discovers one target per network interface of a Nova instance. The target address defaults to the private IP address of the network interface.

The following meta labels are available on targets during relabeling:

  • __meta_openstack_address_pool: the pool of the private IP.
  • __meta_openstack_instance_flavor: the flavor of the OpenStack instance.
  • __meta_openstack_instance_id: the OpenStack instance ID.
  • __meta_openstack_instance_name: the OpenStack instance name.
  • __meta_openstack_instance_status: the status of the OpenStack instance.
  • __meta_openstack_private_ip: the private IP of the OpenStack instance.
  • __meta_openstack_project_id: the project (tenant) owning this instance.
  • __meta_openstack_public_ip: the public IP of the OpenStack instance.
  • __meta_openstack_tag_<tagkey>: each tag value of the instance.
  • __meta_openstack_user_id: the user account owning the tenant.

See below for the configuration options for OpenStack discovery:

# The information to access the OpenStack API.

# The OpenStack role of entities that should be discovered.
role: <openstack_role>

# The OpenStack Region.
region: <string>

# identity_endpoint specifies the HTTP endpoint that is required to work with
# the Identity API of the appropriate version. While it's ultimately needed by
# all of the identity services, it will often be populated by a provider-level
# function.
[ identity_endpoint: <string> ]

# username is required if using Identity V2 API. Consult with your provider's
# control panel to discover your account's username. In Identity V3, either
# userid or a combination of username and domain_id or domain_name are needed.
[ username: <string> ]
[ userid: <string> ]

# password for the Identity V2 and V3 APIs. Consult with your provider's
# control panel to discover your account's preferred method of authentication.
[ password: <secret> ]

# At most one of domain_id and domain_name must be provided if using username
# with Identity V3. Otherwise, either are optional.
[ domain_name: <string> ]
[ domain_id: <string> ]

# The project_id and project_name fields are optional for the Identity V2 API.
# Some providers allow you to specify a project_name instead of the project_id.
# Some require both. Your provider's authentication policies will determine
# how these fields influence authentication.
[ project_name: <string> ]
[ project_id: <string> ]

# The application_credential_id or application_credential_name fields are
# required if using an application credential to authenticate. Some providers
# allow you to create an application credential to authenticate rather than a
# password.
[ application_credential_name: <string> ]
[ application_credential_id: <string> ]

# The application_credential_secret field is required if using an application
# credential to authenticate.
[ application_credential_secret: <secret> ]

# Whether the service discovery should list all instances for all projects.
# It is only relevant for the 'instance' role and usually requires admin permissions.
[ all_tenants: <boolean> | default: false ]

# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

# The availability of the endpoint to connect to. Must be one of public, admin or internal.
[ availability: <string> | default = "public" ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

<file_sd_config>

File-based service discovery provides a more generic way to configure static targets and serves as an interface to plug in custom service discovery mechanisms.

It reads a set of files containing a list of zero or more <static_config>s. Changes to all defined files are detected via disk watches and applied immediately. Files may be provided in YAML or JSON format. Only changes resulting in well-formed target groups are applied.

The JSON file must contain a list of static configs, using this format:

[
  {
    "targets": [ "<host>", ... ],
    "labels": {
      "<labelname>": "<labelvalue>", ...
    }
  },
  ...
]

As a fallback, the file contents are also re-read periodically at the specified refresh interval.

Each target has a meta label __meta_filepath during the relabeling phase. Its value is set to the filepath from which the target was extracted.

There is a list of integrations with this discovery mechanism.

# Patterns for files from which target groups are extracted.
files:
  [ - <filename_pattern> ... ]

# Refresh interval to re-read the files.
[ refresh_interval: <duration> | default = 5m ]

Where <filename_pattern> may be a path ending in .json, .yml or .yaml. The last path segment may contain a single * that matches any character sequence, e.g. my/path/tg_*.json.
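
As a sketch, a file-based discovery section and a matching target file might look like the following (the job name, paths, targets and labels are hypothetical):

scrape_configs:
  - job_name: 'file-sd-example'
    file_sd_configs:
      - files:
          - 'targets/*.yml'
        refresh_interval: 5m

with a target file such as targets/web.yml containing:

- targets: ['10.0.0.5:9100', '10.0.0.6:9100']
  labels:
    team: web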

<gce_sd_config>

GCE SD configurations allow retrieving scrape targets from GCP GCE instances. The private IP address is used by default, but may be changed to the public IP address with relabeling.

The following meta labels are available on targets during relabeling:

  • __meta_gce_instance_id: the numeric ID of the instance
  • __meta_gce_instance_name: the name of the instance
  • __meta_gce_label_<labelname>: each GCE label of the instance
  • __meta_gce_machine_type: full or partial URL of the machine type of the instance
  • __meta_gce_metadata_<name>: each metadata item of the instance
  • __meta_gce_network: the network URL of the instance
  • __meta_gce_private_ip: the private IP address of the instance
  • __meta_gce_project: the GCP project in which the instance is running
  • __meta_gce_public_ip: the public IP address of the instance, if present
  • __meta_gce_subnetwork: the subnetwork URL of the instance
  • __meta_gce_tags: comma-separated list of instance tags
  • __meta_gce_zone: the GCE zone URL in which the instance is running

See below for the configuration options for GCE discovery:

# The information to access the GCE API.

# The GCP Project
project: <string>

# The zone of the scrape targets. If you need multiple zones use multiple
# gce_sd_configs.
zone: <string>

# Filter can be used optionally to filter the instance list by other criteria
# Syntax of this filter string is described here in the filter query parameter section:
# https://cloud.google.com/compute/docs/reference/latest/instances/list
[ filter: <string> ]

# Refresh interval to re-read the instance list
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

# The tag separator is used to separate the tags on concatenation
[ tag_separator: <string> | default = , ]

Credentials are discovered by the Google Cloud SDK default client by looking in the following places, preferring the first location found:

  1. a JSON file specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable
  2. a JSON file in the well-known path $HOME/.config/gcloud/application_default_credentials.json
  3. fetched from the GCE metadata server

If Prometheus is running within GCE, the service account associated with the instance it is running on should have at least read-only permissions to the compute resources. If running outside of GCE make sure to create an appropriate service account and place the credential file in one of the expected locations.

<kubernetes_sd_config>

Kubernetes SD configurations allow retrieving scrape targets from Kubernetes' REST API and always staying synchronized with the cluster state.

One of the following role types can be configured to discover targets:

node

The node role discovers one target per cluster node with the address defaulting to the Kubelet's HTTP port. The target address defaults to the first existing address of the Kubernetes node object in the address type order of NodeInternalIP, NodeExternalIP, NodeLegacyHostIP, and NodeHostName.

Available meta labels:

  • __meta_kubernetes_node_name: the name of the node object.
  • __meta_kubernetes_node_label_<labelname>: each label from the node object.
  • __meta_kubernetes_node_labelpresent_<labelname>: true for each label from the node object.
  • __meta_kubernetes_node_annotation_<annotationname>: each annotation from the node object.
  • __meta_kubernetes_node_annotationpresent_<annotationname>: true for each annotation from the node object.
  • __meta_kubernetes_node_address_<address_type>: the first address for each node address type, if it exists.

In addition, the instance label for the node will be set to the node name as retrieved from the API server.

service

The service role discovers a target for each service port of each service. This is generally useful for blackbox monitoring of a service. The address will be set to the Kubernetes DNS name of the service and the respective service port.

Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the service object.
  • __meta_kubernetes_service_annotation_<annotationname>: each annotation from the service object.
  • __meta_kubernetes_service_annotationpresent_<annotationname>: "true" for each annotation of the service object.
  • __meta_kubernetes_service_cluster_ip: the cluster IP address of the service. (Does not apply to services of type ExternalName)
  • __meta_kubernetes_service_external_name: the DNS name of the service. (Applies to services of type ExternalName)
  • __meta_kubernetes_service_label_<labelname>: each label from the service object.
  • __meta_kubernetes_service_labelpresent_<labelname>: true for each label of the service object.
  • __meta_kubernetes_service_name: the name of the service object.
  • __meta_kubernetes_service_port_name: the name of the service port for the target.
  • __meta_kubernetes_service_port_protocol: the protocol of the service port for the target.
  • __meta_kubernetes_service_type: the type of the service.

pod

The pod role discovers all pods and exposes their containers as targets. For each declared port of a container, a single target is generated. If a container has no specified ports, a port-free target per container is created, so that a port can be added manually via relabeling.

Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the pod object.
  • __meta_kubernetes_pod_name: the name of the pod object.
  • __meta_kubernetes_pod_ip: the pod IP of the pod object.
  • __meta_kubernetes_pod_label_<labelname>: each label from the pod object.
  • __meta_kubernetes_pod_labelpresent_<labelname>: true for each label from the pod object.
  • __meta_kubernetes_pod_annotation_<annotationname>: each annotation from the pod object.
  • __meta_kubernetes_pod_annotationpresent_<annotationname>: true for each annotation from the pod object.
  • __meta_kubernetes_pod_container_init: true if the container is an InitContainer
  • __meta_kubernetes_pod_container_name: the name of the container the target address points to.
  • __meta_kubernetes_pod_container_port_name: the name of the container port.
  • __meta_kubernetes_pod_container_port_number: the number of the container port.
  • __meta_kubernetes_pod_container_port_protocol: the protocol of the container port.
  • __meta_kubernetes_pod_ready: set to true or false for the pod's ready state.
  • __meta_kubernetes_pod_phase: set to Pending, Running, Succeeded, Failed or Unknown in the lifecycle.
  • __meta_kubernetes_pod_node_name: the name of the node the pod is scheduled onto.
  • __meta_kubernetes_pod_host_ip: the current host IP of the pod object.
  • __meta_kubernetes_pod_uid: the UID of the pod object.
  • __meta_kubernetes_pod_controller_kind: the object kind of the pod controller.
  • __meta_kubernetes_pod_controller_name: the name of the pod controller.

endpoints

The endpoints role discovers targets from listed endpoints of a service. For each endpoint address, one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod that are not bound to an endpoint port are discovered as targets as well.

Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the endpoints object.
  • __meta_kubernetes_endpoints_name: the name of the endpoints object.
  • For all targets discovered directly from the endpoints list (those not additionally inferred from underlying pods), the following labels are attached:
    • __meta_kubernetes_endpoint_hostname: the hostname of the endpoint.
    • __meta_kubernetes_endpoint_node_name: the name of the node hosting the endpoint.
    • __meta_kubernetes_endpoint_ready: set to true or false for the endpoint's ready state.
    • __meta_kubernetes_endpoint_port_name: the name of the endpoint port.
    • __meta_kubernetes_endpoint_port_protocol: the protocol of the endpoint port.
    • __meta_kubernetes_endpoint_address_target_kind: the kind of the endpoint address target.
    • __meta_kubernetes_endpoint_address_target_name: the name of the endpoint address target.
  • If the endpoints belong to a service, all labels of the role: service discovery are attached.
  • For all targets backed by a pod, all labels of the role: pod discovery are attached.

ingress

The ingress role discovers a target for each path of each ingress. This is generally useful for blackbox monitoring of an ingress. The address will be set to the host specified in the ingress spec.

Available meta labels:

  • __meta_kubernetes_namespace: the namespace of the ingress object.
  • __meta_kubernetes_ingress_name: the name of the ingress object.
  • __meta_kubernetes_ingress_label_<labelname>: each label from the ingress object.
  • __meta_kubernetes_ingress_labelpresent_<labelname>: true for each label from the ingress object.
  • __meta_kubernetes_ingress_annotation_<annotationname>: each annotation from the ingress object.
  • __meta_kubernetes_ingress_annotationpresent_<annotationname>: true for each annotation from the ingress object.
  • __meta_kubernetes_ingress_scheme: the protocol scheme of the ingress, https if TLS config is set. Defaults to http.
  • __meta_kubernetes_ingress_path: the path from the ingress spec. Defaults to /.

See below for the configuration options for Kubernetes discovery:

# The information to access the Kubernetes API.

# The API server addresses. If left empty, Prometheus is assumed to run inside
# of the cluster and will discover API servers automatically and use the pod's
# CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/.
[ api_server: <host> ]

# The Kubernetes role of entities that should be discovered.
# One of endpoints, service, pod, node, or ingress.
role: <string>

# Optional authentication information used to authenticate to the API server.
# Note that `basic_auth`, `bearer_token` and `bearer_token_file` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional bearer token authentication information.
[ bearer_token: <secret> ]

# Optional bearer token file authentication information.
[ bearer_token_file: <filename> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# Optional namespace discovery. If omitted, all namespaces are used.
namespaces:
  names:
    [ - <string> ]

# Optional label and field selectors to limit the discovery process to a subset of available resources. 
# See https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
# and https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ to learn more about the possible 
# filters that can be used. Endpoints role supports pod, service and endpoints selectors, other roles
# only support selectors matching the role itself (e.g. node role can only contain node selectors).

# Note: When making decision about using field/label selector make sure that this 
# is the best approach - it will prevent Prometheus from reusing single list/watch
# for all scrape configs. This might result in a bigger load on the Kubernetes API,
# because per each selector combination there will be additional LIST/WATCH. On the other hand,
# if you just want to monitor small subset of pods in large cluster it's recommended to use selectors.
# Decision, if selectors should be used or not depends on the particular situation.
[ selectors:
  [ - role: <string>
    [ label: <string> ]
    [ field: <string> ] ]]

See this example Prometheus configuration file for a detailed example of configuring Prometheus for Kubernetes.

You may wish to check out the third-party Prometheus Operator, which automates the Prometheus setup on top of Kubernetes.
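
As a sketch, a pod-role discovery combined with the widespread (but purely conventional, not built-in) prometheus.io/scrape annotation could look like:

scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opt in via the conventional annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        regex: 'true'
        action: keep
      # Record the pod's namespace and name as target labels.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod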

<marathon_sd_config>

Marathon SD configurations allow retrieving scrape targets using the Marathon REST API. Prometheus will periodically check the REST endpoint for currently running tasks and create a target group for every app that has at least one healthy task.

The following meta labels are available on targets during relabeling:

  • __meta_marathon_app: the name of the app (with slashes replaced by dashes)
  • __meta_marathon_image: the name of the Docker image used (if available)
  • __meta_marathon_task: the ID of the Mesos task
  • __meta_marathon_app_label_<labelname>: any Marathon labels attached to the app
  • __meta_marathon_port_definition_label_<labelname>: the port definition labels
  • __meta_marathon_port_mapping_label_<labelname>: the port mapping labels
  • __meta_marathon_port_index: the port index number (e.g. 1 for PORT1)

See below for the configuration options for Marathon discovery:

# List of URLs to be used to contact Marathon servers.
# You need to provide at least one server URL.
servers:
  - <string>

# Polling interval
[ refresh_interval: <duration> | default = 30s ]

# Optional authentication information for token-based authentication
# https://docs.mesosphere.com/1.11/security/ent/iam-api/#passing-an-authentication-token
# It is mutually exclusive with `auth_token_file` and other authentication mechanisms.
[ auth_token: <secret> ]

# Optional authentication information for token-based authentication
# https://docs.mesosphere.com/1.11/security/ent/iam-api/#passing-an-authentication-token
# It is mutually exclusive with `auth_token` and other authentication mechanisms.
[ auth_token_file: <filename> ]

# Sets the `Authorization` header on every request with the
# configured username and password.
# This is mutually exclusive with other authentication mechanisms.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file` and other authentication mechanisms.
# NOTE: The current version of DC/OS marathon (v1.11.0) does not support standard Bearer token authentication. Use `auth_token` instead.
[ bearer_token: <string> ]

# Sets the `Authorization` header on every request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token` and other authentication mechanisms.
# NOTE: The current version of DC/OS marathon (v1.11.0) does not support standard Bearer token authentication. Use `auth_token_file` instead.
[ bearer_token_file: <filename> ]

# TLS configuration for connecting to marathon servers
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

By default, every app listed in Marathon will be scraped by Prometheus. If not all of your services provide Prometheus metrics, you can use a Marathon label and Prometheus relabeling to control which instances will actually be scraped. See the Prometheus marathon-sd configuration file for a practical example of how to set up your Marathon app and your Prometheus configuration.

By default, all apps will show up as a single job in Prometheus (the one specified in the configuration file), which can also be changed using relabeling.

<nerve_sd_config>

Nerve SD configurations allow retrieving scrape targets from AirBnB's Nerve, which are stored in Zookeeper.

The following meta labels are available on targets during relabeling:

  • __meta_nerve_path: the full path to the endpoint node in Zookeeper
  • __meta_nerve_endpoint_host: the host of the endpoint
  • __meta_nerve_endpoint_port: the port of the endpoint
  • __meta_nerve_endpoint_name: the name of the endpoint
# The Zookeeper servers.
servers:
  - <host>
# Paths can point to a single service, or the root of a tree of services.
paths:
  - <string>
[ timeout: <duration> | default = 10s ]

<serverset_sd_config>

Serverset SD configurations allow retrieving scrape targets from Serversets which are stored in Zookeeper. Serversets are commonly used by Finagle and Aurora.

The following meta labels are available on targets during relabeling:

  • __meta_serverset_path: the full path to the serverset member node in Zookeeper
  • __meta_serverset_endpoint_host: the host of the default endpoint
  • __meta_serverset_endpoint_port: the port of the default endpoint
  • __meta_serverset_endpoint_host_<endpoint>: the host of the given endpoint
  • __meta_serverset_endpoint_port_<endpoint>: the port of the given endpoint
  • __meta_serverset_shard: the shard number of the member
  • __meta_serverset_status: the status of the member
# The Zookeeper servers.
servers:
  - <host>
# Paths can point to a single serverset, or the root of a tree of serversets.
paths:
  - <string>
[ timeout: <duration> | default = 10s ]

Serverset data must be in the JSON format; the Thrift format is not currently supported.

<triton_sd_config>

Triton SD configurations allow retrieving scrape targets from Container Monitor discovery endpoints.

One of the following <triton_role> types can be configured to discover targets:

container

The container role discovers one target per "virtual machine" owned by the account. These are SmartOS zones or lx/KVM/bhyve branded zones.

The following meta labels are available on targets during relabeling:

  • __meta_triton_groups: the list of groups belonging to the target, joined by commas
  • __meta_triton_machine_alias: the alias of the target container
  • __meta_triton_machine_brand: the brand of the target container
  • __meta_triton_machine_id: the UUID of the target container
  • __meta_triton_machine_image: the image type of the target container
  • __meta_triton_server_id: the UUID of the server on which the target container is running

cn

The cn role discovers one target per compute node (also known as a "server" or "global zone") making up the Triton infrastructure. The account must be a Triton operator and is currently required to own at least one container.

The following meta labels are available on targets during relabeling:

  • __meta_triton_machine_alias: the hostname of the target (requires triton-cmon 1.7.0 or newer)
  • __meta_triton_machine_id: the UUID of the target

See below for the configuration options for Triton discovery:

# The information to access the Triton discovery API.

# The account to use for discovering new targets.
account: <string>

# The type of targets to discover, can be set to:
# * "container" to discover virtual machines (SmartOS zones, lx/KVM/bhyve branded zones) running on Triton
# * "cn" to discover compute nodes (servers/global zones) making up the Triton infrastructure
[ role : <string> | default = "container" ]

# The DNS suffix which should be applied to target.
dns_suffix: <string>

# The Triton discovery endpoint (e.g. 'cmon.us-east-3b.triton.zone'). This is
# often the same value as dns_suffix.
endpoint: <string>

# A list of groups for which targets are retrieved, only supported when `role` == `container`.
# If omitted all containers owned by the requesting account are scraped.
groups:
  [ - <string> ... ]

# The port to use for discovery and metric scraping.
[ port: <int> | default = 9163 ]

# The interval which should be used for refreshing targets.
[ refresh_interval: <duration> | default = 60s ]

# The Triton discovery API version.
[ version: <int> | default = 1 ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

<static_config>

A static_config allows specifying a list of targets and a common label set for them. It is the canonical way to specify static targets in a scrape configuration.

# The targets specified by the static config.
targets:
  [ - '<host>' ]

# Labels assigned to all metrics scraped from the targets.
labels:
  [ <labelname>: <labelvalue> ... ]
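
For illustration, a static_config listing two hypothetical targets that share a common label:

static_configs:
  - targets: ['app1.example.com:8080', 'app2.example.com:8080']
    labels:
      env: staging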

<relabel_config>

Relabeling is a powerful tool to dynamically rewrite the label set of a target before it gets scraped. Multiple relabeling steps can be configured per scrape configuration. They are applied to the label set of each target in order of their appearance in the configuration file.

Initially, aside from the configured per-target labels, a target's job label is set to the job_name value of the respective scrape configuration. The __address__ label is set to the <host>:<port> address of the target. After relabeling, the instance label is set to the value of __address__ by default if it was not set during relabeling. The __scheme__ and __metrics_path__ labels are set to the scheme and metrics path of the target respectively. The __param_<name> label is set to the value of the first passed URL parameter called <name>.

Additional labels prefixed with __meta_ may be available during the relabeling phase. They are set by the service discovery mechanism that provided the target and vary between mechanisms.

Labels starting with __ will be removed from the label set after target relabeling is completed.

If a relabeling step needs to store a label value only temporarily (as the input to a subsequent relabeling step), use the __tmp label name prefix. This prefix is guaranteed never to be used by Prometheus itself.

# The source labels select values from existing labels. Their content is concatenated
# using the configured separator and matched against the configured regular expression
# for the replace, keep, and drop actions.
[ source_labels: '[' <labelname> [, ...] ']' ]

# Separator placed between concatenated source label values.
[ separator: <string> | default = ; ]

# Label to which the resulting value is written in a replace action.
# It is mandatory for replace actions. Regex capture groups are available.
[ target_label: <labelname> ]

# Regular expression against which the extracted value is matched.
[ regex: <regex> | default = (.*) ]

# Modulus to take of the hash of the source label values.
[ modulus: <int> ]

# Replacement value against which a regex replace is performed if the
# regular expression matches. Regex capture groups are available.
[ replacement: <string> | default = $1 ]

# Action to perform based on regex matching.
[ action: <relabel_action> | default = replace ]

<regex> is any valid RE2 regular expression. It is required for the replace, keep, drop, labelmap, labeldrop and labelkeep actions. The regex is anchored on both ends. To un-anchor the regex, use .*<regex>.*.

<relabel_action> determines the relabeling action to take:

  • replace: Match regex against the concatenated source_labels. Then, set target_label to replacement, with match group references (${1}, ${2}, …) in replacement substituted by their value. If regex does not match, no replacement takes place.
  • keep: Drop targets for which regex does not match the concatenated source_labels.
  • drop: Drop targets for which regex matches the concatenated source_labels.
  • hashmod: Set target_label to the modulus of a hash of the concatenated source_labels.
  • labelmap: Match regex against all label names. Then copy the values of the matching labels to label names given by replacement, with match group references (${1}, ${2}, …) in replacement substituted by their value.
  • labeldrop: Match regex against all label names. Any label that matches will be removed from the set of labels.
  • labelkeep: Match regex against all label names. Any label that does not match will be removed from the set of labels.

Care must be taken with labeldrop and labelkeep to ensure that metrics are still uniquely labeled once the labels are removed.
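
As an illustrative sketch, the following steps drop targets carrying a maintenance tag, copy a service-discovery meta label into a target label, and strip temporary labels (the Consul meta labels used here are the ones documented above; the tag value is hypothetical):

relabel_configs:
  # Drop any target whose Consul tag list contains "maintenance".
  - source_labels: [__meta_consul_tags]
    regex: '.*,maintenance,.*'
    action: drop
  # Copy the Consul service name into a "service" target label.
  - source_labels: [__meta_consul_service]
    target_label: service
  # Remove any leftover temporary labels set by earlier steps.
  - regex: '__tmp_.*'
    action: labeldrop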

<metric_relabel_configs>

Metric relabeling is applied to samples as the last step before ingestion. It has the same configuration format and actions as target relabeling. Metric relabeling does not apply to automatically generated time series such as up.

One use for this is to exclude time series that are too expensive to ingest.
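
For example, a sketch that drops a hypothetical high-cardinality histogram metric before ingestion:

metric_relabel_configs:
  # Drop every sample whose metric name matches the expensive histogram.
  - source_labels: [__name__]
    regex: 'http_request_duration_seconds_bucket'
    action: drop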

<alert_relabel_configs>

Alert relabeling is applied to alerts before they are sent to the Alertmanager. It has the same configuration format and actions as target relabeling. Alert relabeling is applied after external labels.

One use for this is ensuring that a HA pair of Prometheus servers with different external labels send identical alerts.
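
A common sketch for this HA use case is to drop the replica-identifying external label (here a hypothetical label named replica) before alerts are sent:

alerting:
  alert_relabel_configs:
    # Remove the per-replica label so both servers send identical alerts.
    - regex: 'replica'
      action: labeldrop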

<alertmanager_config>

An alertmanager_config section specifies the Alertmanager instances the Prometheus server sends alerts to. It also provides parameters to configure how to communicate with these Alertmanagers.

Alertmanagers may be statically configured via the static_configs parameter or dynamically discovered using one of the supported service discovery mechanisms.

In addition, relabel_configs allow selecting Alertmanagers from discovered entities and provide advanced modifications to the used API path, which is exposed through the __alerts_path__ label.

# Per-target Alertmanager timeout when pushing alerts.
[ timeout: <duration> | default = 10s ]

# The api version of Alertmanager.
[ api_version: <string> | default = v1 ]

# Prefix for the HTTP path alerts are pushed to.
[ path_prefix: <path> | default = / ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Sets the `Authorization` header on every request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]

# Sets the `Authorization` header on every request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: <filename> ]

# Configures the scrape request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# List of Azure service discovery configurations.
azure_sd_configs:
  [ - <azure_sd_config> ... ]

# List of Consul service discovery configurations.
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of EC2 service discovery configurations.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]

# List of file service discovery configurations.
file_sd_configs:
  [ - <file_sd_config> ... ]

# List of DigitalOcean service discovery configurations.
digitalocean_sd_configs:
  [ - <digitalocean_sd_config> ... ]

# List of Docker Swarm service discovery configurations.
dockerswarm_sd_configs:
  [ - <dockerswarm_sd_config> ... ]

# List of GCE service discovery configurations.
gce_sd_configs:
  [ - <gce_sd_config> ... ]

# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# List of Marathon service discovery configurations.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]

# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]

# List of OpenStack service discovery configurations.
openstack_sd_configs:
  [ - <openstack_sd_config> ... ]

# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]

# List of Triton service discovery configurations.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of labeled statically configured Alertmanagers.
static_configs:
  [ - <static_config> ... ]

# List of Alertmanager relabel configurations.
relabel_configs:
  [ - <relabel_config> ... ]
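
Putting it together, a minimal alerting section pointing at a single statically configured Alertmanager (hostname and port are placeholders) might look like:

alerting:
  alertmanagers:
    - scheme: http
      api_version: v1
      timeout: 10s
      static_configs:
        - targets: ['alertmanager.example.com:9093']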

<remote_write>

write_relabel_configs is relabeling applied to samples before sending them to the remote endpoint. Write relabeling is applied after external labels. This can be used to limit which samples are sent.

There is a small demo showing how to use this functionality.

# The URL of the endpoint to send samples to.
url: <string>

# Timeout for requests to the remote write endpoint.
[ remote_timeout: <duration> | default = 30s ]

# List of remote write relabel configurations.
write_relabel_configs:
  [ - <relabel_config> ... ]

# Name of the remote write config, which if specified must be unique among remote write configs. 
# The name will be used in metrics and logging in place of a generated value to help users distinguish between
# remote write configs.
[ name: <string> ]

# Sets the `Authorization` header on every remote write request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every remote write request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]

# Sets the `Authorization` header on every remote write request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: <filename> ]

# Configures the remote write request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configures the queue used to write to remote storage.
queue_config:
  # Number of samples to buffer per shard before we block reading of more
  # samples from the WAL. It is recommended to have enough capacity in each
  # shard to buffer several requests to keep throughput up while processing
  # occasional slow remote requests.
  [ capacity: <int> | default = 500 ]
  # Maximum number of shards, i.e. amount of concurrency.
  [ max_shards: <int> | default = 1000 ]
  # Minimum number of shards, i.e. amount of concurrency.
  [ min_shards: <int> | default = 1 ]
  # Maximum number of samples per send.
  [ max_samples_per_send: <int> | default = 100]
  # Maximum time a sample will wait in buffer.
  [ batch_send_deadline: <duration> | default = 5s ]
  # Initial retry delay. Gets doubled for every retry.
  [ min_backoff: <duration> | default = 30ms ]
  # Maximum retry delay.
  [ max_backoff: <duration> | default = 100ms ]
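
As an illustrative sketch (the endpoint URL is a placeholder), a remote_write entry that only forwards series from selected jobs might look like:

remote_write:
  - url: 'https://metrics.example.com/api/v1/write'
    name: example-remote
    remote_timeout: 30s
    write_relabel_configs:
      # Keep only series whose job label matches; everything else is not sent.
      - source_labels: [job]
        regex: 'node|prometheus'
        action: keep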

There is a list of integrations with this feature.

<remote_read>

# The URL of the endpoint to query from.
url: <string>

# Name of the remote read config, which if specified must be unique among remote read configs. 
# The name will be used in metrics and logging in place of a generated value to help users distinguish between
# remote read configs.
[ name: <string> ]

# An optional list of equality matchers which have to be
# present in a selector to query the remote read endpoint.
required_matchers:
  [ <labelname>: <labelvalue> ... ]

# Timeout for requests to the remote read endpoint.
[ remote_timeout: <duration> | default = 1m ]

# Whether reads should be made for queries for time ranges that
# the local storage should have complete data for.
[ read_recent: <boolean> | default = false ]

# Sets the `Authorization` header on every remote read request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every remote read request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]

# Sets the `Authorization` header on every remote read request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: <filename> ]

# Configures the remote read request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

There is a list of integrations with this feature.

