vmagent is a tiny but brave agent, which helps you collect metrics from various sources
and store them in VictoriaMetrics
or any other Prometheus-compatible storage system that supports the remote_write protocol.

While VictoriaMetrics provides an efficient solution to store and observe metrics, our users needed something fast
and RAM friendly to scrape metrics from Prometheus-compatible exporters into VictoriaMetrics.
Also, we found that users' infrastructures are snowflakes - no two are alike, so we decided to add more flexibility to
vmagent (like the ability to push metrics instead of pulling them). We did our best and plan to do even more.
- Can be used as a drop-in replacement for Prometheus for scraping targets such as node_exporter. See Quick Start for details.
- Can add, remove and modify labels (aka tags) via Prometheus relabeling. Can filter data before sending it to remote storage. See these docs for details.
- Accepts data via all the ingestion protocols supported by VictoriaMetrics:
  - Influx line protocol via
    `http://<vmagent>:8429/write`. See these docs.
  - Graphite plaintext protocol if the
    `-graphiteListenAddr` command-line flag is set. See these docs.
  - OpenTSDB telnet and http protocols if the
    `-opentsdbListenAddr` command-line flag is set. See these docs.
  - Prometheus remote write protocol via
    `http://<vmagent>:8429/api/v1/write`.
  - JSON lines import protocol via
    `http://<vmagent>:8429/api/v1/import`. See these docs.
  - Data in Prometheus exposition format. See these docs for details.
  - Arbitrary CSV data via
    `http://<vmagent>:8429/api/v1/import/csv`. See these docs.
- Can replicate collected metrics simultaneously to multiple remote storage systems.
- Works in environments with unstable connections to remote storage. If the remote storage is unavailable, the collected metrics
  are buffered at
  `-remoteWrite.tmpDataPath`. The buffered metrics are sent to remote storage as soon as the connection to remote storage is restored. The maximum disk usage for the buffer can be limited with `-remoteWrite.maxDiskUsagePerURL`.
- Uses lower amounts of RAM, CPU, disk IO and network bandwidth compared to Prometheus.
Download the vmutils-* archive from the releases page, unpack it
and pass the following flags to the
vmagent binary in order to start scraping Prometheus targets:

- `-promscrape.config` with the path to the Prometheus config file;
- `-remoteWrite.url` with the remote storage endpoint such as VictoriaMetrics. The
  `-remoteWrite.url` argument can be specified multiple times in order to replicate data concurrently to an arbitrary number of remote storage systems.
Example command line:
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write
If you only need to collect Influx data, then the following is sufficient:

/path/to/vmagent -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write

Then send Influx data to
http://vmagent-host:8429. See these docs for more details.
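As a quick smoke test, a sample in Influx line protocol can be pushed with curl (the host name, measurement and field names below are placeholders):

```
curl -d 'measurement,tag1=value1 field1=123' -X POST http://vmagent-host:8429/write
```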
vmagent is also available in docker images.

Run vmagent -help in order to see the full list of supported command-line flags with their descriptions.
IoT and Edge monitoring
vmagent can run and collect metrics in IoT and industrial networks with unreliable or scheduled connections to the remote storage.
It buffers the collected data in local files until the connection to remote storage becomes available and then sends the buffered
data to the remote storage. It retries sending the data to remote storage on any errors.
The maximum buffer size can be limited with the `-remoteWrite.maxDiskUsagePerURL` command-line flag.
vmagent works on various architectures from the IoT world - 32-bit arm, 64-bit arm, ppc64, 386, amd64.
See the corresponding Makefile rules for details.
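For instance, ARM binaries can be built from the repository root. The rule names below are assumptions based on common naming in the project's Makefile; check the Makefile itself for the exact targets:

```
make vmagent-linux-arm      # 32-bit ARM build
make vmagent-linux-arm64    # 64-bit ARM build
```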
Drop-in replacement for Prometheus
If you use Prometheus only for scraping metrics from various targets and forwarding these metrics to remote storage,
vmagent can replace such a Prometheus setup. Usually
vmagent requires lower amounts of RAM, CPU and network bandwidth compared to Prometheus for such a setup.
See these docs for details.
Replication and high availability
vmagent replicates the collected metrics among multiple remote storage instances configured via multiple
`-remoteWrite.url` command-line flags.
If a single remote storage instance is temporarily out of service, then the collected data remains available in the other remote storage instances.
vmagent buffers the collected data in files at
-remoteWrite.tmpDataPath until the remote storage becomes available again.
Then it sends the buffered data to the remote storage in order to prevent data gaps in the remote storage.
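For example, the following command (with placeholder host names) replicates all the collected data to two remote storage systems:

```
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml \
  -remoteWrite.url=https://victoria-metrics-host-1:8428/api/v1/write \
  -remoteWrite.url=https://victoria-metrics-host-2:8428/api/v1/write
```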
Relabeling and filtering
vmagent can add, remove or update labels on the collected data before sending it to remote storage. Additionally,
it can remove unwanted samples via Prometheus-like relabeling before sending the collected data to remote storage.
See these docs for details.
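For instance, a rule like the following (the regex is illustrative) drops all the metrics whose names start with go_ before they reach remote storage:

```yml
metric_relabel_configs:
- action: drop
  source_labels: [__name__]
  regex: 'go_.*'
```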
Splitting data streams among multiple systems
vmagent supports splitting the collected data between multiple destinations with the help of relabeling,
which is applied independently for each configured
`-remoteWrite.url` destination. For instance, it is possible to replicate or split
data among long-term remote storage, short-term remote storage and real-time analytical system built on top of Kafka.
Note that each destination can receive its own subset of the collected data thanks to per-destination relabeling via `-remoteWrite.urlRelabelConfig`.
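A sketch of such a setup, assuming hypothetical relabel config files long-term.yml and short-term.yml (each `-remoteWrite.urlRelabelConfig` applies to the `-remoteWrite.url` in the same position):

```
/path/to/vmagent -promscrape.config=/path/to/prometheus.yml \
  -remoteWrite.url=https://long-term-storage:8428/api/v1/write \
  -remoteWrite.urlRelabelConfig=/path/to/long-term.yml \
  -remoteWrite.url=https://short-term-storage:8428/api/v1/write \
  -remoteWrite.urlRelabelConfig=/path/to/short-term.yml
```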
Prometheus remote_write proxy
vmagent may be used as a proxy for Prometheus data sent via the Prometheus
remote_write protocol. It can accept data via the
`/api/v1/write` endpoint, apply relabeling and filtering and then proxy it to another remote storage system.
vmagent can be configured to encrypt the incoming
remote_write requests with the
`-tls*` command-line flags.
Additionally, Basic Auth can be enabled for the incoming
remote_write requests with the
`-httpAuth.*` command-line flags.
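A sketch of such a proxy setup. The exact flag names under the `-tls*` and `-httpAuth.*` prefixes are assumed here (`-tlsCertFile`, `-tlsKeyFile`, `-httpAuth.username`, `-httpAuth.password`); run vmagent -help to verify them:

```
/path/to/vmagent -remoteWrite.url=https://victoria-metrics-host:8428/api/v1/write \
  -tls -tlsCertFile=/path/to/cert.pem -tlsKeyFile=/path/to/key.pem \
  -httpAuth.username=scraper -httpAuth.password=secret
```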
How to collect metrics in Prometheus format
Pass the path to the Prometheus config file to the
`-promscrape.config` command-line flag.
vmagent takes into account the `global` and `scrape_configs`
sections from the Prometheus config file.
All the other sections are ignored, including the `remote_write` section.
Use the `-remoteWrite.*` command-line flags instead for configuring remote write settings.
The following scrape types in the `scrape_config` section are supported:
- `static_configs` - for scraping statically defined targets. See these docs for details.
- `file_sd_configs` - for scraping targets defined in external files aka file-based service discovery. See these docs for details.
- `kubernetes_sd_configs` - for scraping targets in Kubernetes (k8s). See kubernetes_sd_config for details.
- `ec2_sd_configs` - for scraping targets in Amazon EC2. See ec2_sd_config for details.
  vmagent doesn't support the `role_arn` config param yet.
- `gce_sd_configs` - for scraping targets in Google Compute Engine (GCE). See gce_sd_config for details.
  vmagent provides the following additional functionality for gce_sd_config:
  - if the `project` arg is missing, then
    vmagent uses the project for the instance where it runs;
  - if the `zone` arg is missing, then
    vmagent uses the zone for the instance where it runs;
  - if the `zone` arg equals to `"*"`, then
    vmagent discovers all the zones for the given project;
  - `zone` may contain an arbitrary number of zones, i.e.
    `zone: [us-east1-a, us-east1-b]`.
- `consul_sd_configs` - for scraping targets registered in Consul. See consul_sd_config for details.
- `dns_sd_configs` - for scraping targets discovered from DNS records (SRV, A and AAAA). See dns_sd_config for details.
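For example, a minimal -promscrape.config file combining statically defined targets with file-based discovery might look like this (job names, host names and paths are illustrative):

```yml
global:
  scrape_interval: 15s

scrape_configs:
- job_name: node
  static_configs:
  - targets: ['node-exporter-host:9100']
- job_name: dynamic
  file_sd_configs:
  - files: ['/path/to/targets/*.yml']
```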
File feature requests at our issue tracker if you need other service discovery mechanisms to be supported by vmagent.

vmagent also supports the following additional options in the scrape_config section:

- `disable_compression: true` - for disabling response compression on a per-job basis. By default
  vmagent requests compressed responses from scrape targets in order to save network bandwidth.
- `disable_keepalive: true` - for disabling HTTP keep-alive connections on a per-job basis. By default
  vmagent uses keep-alive connections to scrape targets in order to reduce overhead on connection re-establishing.
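Both options are set per scrape job, e.g. (job and target names are illustrative):

```yml
scrape_configs:
- job_name: low-bandwidth-targets
  disable_compression: true
  disable_keepalive: true
  static_configs:
  - targets: ['target-host:8080']
```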
vmagent doesn't support the
`refresh_interval` option in these scrape configs. Use the corresponding
`-promscrape.*CheckInterval` command-line flag instead. For example,
`-promscrape.fileSDCheckInterval=60s` would set the
`refresh_interval` for all the
`file_sd_configs` entries to 60s. Run
vmagent -help in order to see the default values for these flags.
Adding labels to metrics
Labels can be added to metrics via the following mechanisms:
- `global -> external_labels` section in the
  `-promscrape.config` file. These labels are added only to metrics scraped from targets configured in this file.
- `-remoteWrite.label` command-line flag. These labels are added to all the collected metrics before sending them to the configured remote storage.
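For example, a datacenter label (the label name and value here are illustrative) can be attached either in the scrape config:

```yml
global:
  external_labels:
    datacenter: dc-eu-1
```

or, for all the collected metrics regardless of their source, on the command line: `-remoteWrite.label=datacenter=dc-eu-1`.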
vmagent supports Prometheus relabeling.
Additionally, it provides the following extra actions:

- `replace_all`: replaces all the occurrences of
  `regex` in the values of `source_labels` with the
  `replacement` and stores the result in the `target_label`;
- `labelmap_all`: replaces all the occurrences of
  `regex` in all the label names with the `replacement`;
- `keep_if_equal`: keeps the entry if all the label values from `source_labels` are equal;
- `drop_if_equal`: drops the entry if all the label values from `source_labels` are equal.
The relabeling can be defined in the following places:
- `scrape_config -> relabel_configs` section in the
  `-promscrape.config` file. This relabeling is applied to target labels.
- `scrape_config -> metric_relabel_configs` section in the
  `-promscrape.config` file. This relabeling is applied to all the scraped metrics in the given scrape_config.
- `-remoteWrite.relabelConfig` file. This relabeling is applied to all the collected metrics before sending them to remote storage.
- `-remoteWrite.urlRelabelConfig` files. This relabeling is applied to metrics before sending them to the corresponding
  `-remoteWrite.url`.
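A sketch of the extra actions in use - replacing every dot in metric names with an underscore and keeping only entries where two labels match (the label names are illustrative):

```yml
metric_relabel_configs:
- action: replace_all
  source_labels: [__name__]
  regex: '\.'
  replacement: '_'
  target_label: __name__
- action: keep_if_equal
  source_labels: [instance, host]
```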
Read more about relabeling in the following articles:
- Life of a label
- Discarding targets and timeseries with relabeling
- Dropping labels at scrape time
- Extracting labels from legacy metric names
- relabel_configs vs metric_relabel_configs
vmagent exports various metrics in Prometheus exposition format at
http://vmagent-host:8429/metrics page. It is recommended to set up regular scraping of this page either via
vmagent itself or via Prometheus, so the exported metrics can be analyzed later.
vmagent also exports target statuses at
http://vmagent-host:8429/targets page in plaintext format.
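A minimal scrape config for collecting vmagent's own metrics could look like this (the host name is a placeholder):

```yml
scrape_configs:
- job_name: vmagent
  static_configs:
  - targets: ['vmagent-host:8429']
```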
It is recommended to increase the maximum number of open files in the system (see
`ulimit -n`) when scraping a big number of targets, since
vmagent establishes at least a single TCP connection per target.

If vmagent scrapes many unreliable targets, it can flood the error log with scrape errors. These errors can be suppressed by passing the
`-promscrape.suppressScrapeErrors` command-line flag to
vmagent. The most recent scrape error per each target can be observed at the http://vmagent-host:8429/targets page.
It is recommended to increase the `-remoteWrite.queues` command-line flag value if
vmagent collects more than 100K samples per second and the
`vmagent_remotewrite_pending_data_bytes` metric exported at the
http://vmagent-host:8429/metrics page constantly grows.
vmagent buffers scraped data in the
`-remoteWrite.tmpDataPath` directory until it is sent to
`-remoteWrite.url`. The directory can grow large when remote storage is unavailable for extended periods of time and if
`-remoteWrite.maxDiskUsagePerURL` isn't set. If you don't want to send all the data from the directory to remote storage, simply stop
vmagent and delete the directory.
If you see
`skipping duplicate scrape target with identical labels` errors when scraping Kubernetes pods, then it is likely these pods listen on multiple ports. Just add the following relabeling rule to the
`relabel_configs` section in order to filter out targets with unneeded ports:

```yml
- action: keep_if_equal
  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_container_port_number]
```
How to build from sources
It is recommended using binary releases -
vmagent is located in the
vmutils-* archives there.

To build vmagent from sources:

- Install Go. The minimum supported version is Go 1.13.
- Run `make vmagent` from the root folder of the repository. It builds the
  vmagent binary and puts it into the bin folder.

To build a production binary with docker:

- Install docker.
- Run `make vmagent-prod` from the root folder of the repository. It builds the
  vmagent-prod binary and puts it into the bin folder.
Building docker images
Run `make package-vmagent`. It builds the
victoriametrics/vmagent:<PKG_TAG> docker image locally.
<PKG_TAG> is an auto-generated image tag, which depends on the source code in the repository.
The <PKG_TAG> may be manually set via
`PKG_TAG=foobar make package-vmagent`.
By default the image is built on top of the alpine image. It is possible to build the package on top of any other base image
by setting it via the
ROOT_IMAGE environment variable. For example, the following command builds the image on top of the scratch image:
ROOT_IMAGE=scratch make package-vmagent
vmagent provides handlers for collecting the following Go profiles:
- Memory profile. It can be collected with the following command:
curl -s http://<vmagent-host>:8429/debug/pprof/heap > mem.pprof
- CPU profile. It can be collected with the following command:
curl -s http://<vmagent-host>:8429/debug/pprof/profile > cpu.pprof
The command for collecting CPU profile waits for 30 seconds before returning.
The collected profiles may be analyzed with go tool pprof.
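For example, the collected profiles can be inspected either interactively or via the pprof web UI:

```
go tool pprof mem.pprof              # interactive mode; try `top` and `list`
go tool pprof -http=:8080 cpu.pprof  # web UI with flame graph
```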