Below are approved public case studies and talks from VictoriaMetrics users. Join our community Slack channel and feel free to ask for references, reviews, and additional case studies from real VictoriaMetrics users there.


See slides and video from the Remote Write Storage Wars talk at PromCon 2019. VictoriaMetrics is compared to Thanos, Cortex and M3DB in the talk.


COLOPL is a Japanese game development company. It started using VictoriaMetrics after evaluating the following remote storage solutions for Prometheus:

  • Cortex
  • Thanos
  • M3DB
  • VictoriaMetrics

See slides and video from the Large-scale, super-load system monitoring platform built with VictoriaMetrics talk at Prometheus Meetup Tokyo #3.

Wix.com is the leading web development platform.

We needed to redesign our metrics infrastructure from the ground up after the move to Kubernetes. A few approaches/designs were tried before we settled on the one that works great: a Prometheus instance in every datacenter with 2 hours of retention for local storage, remote-writing into an HA pair of single-node VictoriaMetrics instances.
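For illustration, here is a minimal sketch of what such a Prometheus remote-write setup could look like (the hostnames are hypothetical; single-node VictoriaMetrics accepts Prometheus remote write on port 8428 at /api/v1/write):

```yaml
# prometheus.yml fragment: each datacenter's Prometheus keeps ~2h of data
# locally and remote-writes every sample to both VictoriaMetrics instances,
# forming the HA pair described above.
remote_write:
  - url: http://vm-a.example.internal:8428/api/v1/write
  - url: http://vm-b.example.internal:8428/api/v1/write
```

Because both instances receive identical writes, either one can keep serving reads while the other is down.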


  • The number of active time series per VictoriaMetrics instance is 20M.
  • The total number of time series per VictoriaMetrics instance is 400M+.
  • Ingestion rate per VictoriaMetrics instance is 800K data points per second.
  • The average time series churn rate is ~3M per day.
  • The average query rate is ~1K per minute (mostly alert queries).
  • Query duration: median is ~70ms, 99th percentile is ~2 seconds.
  • Retention: 6 months.

Alternatives that we’ve played with before choosing VictoriaMetrics: federated Prometheus, Cortex, IronDB and Thanos. The points that were critical to us when choosing a central TSDB, in order of importance:

  • At least 3 months’ worth of history.
  • Raw data, no aggregation, no sampling.
  • High query speed.
  • Clean fail state for HA (multi-node clusters may return partial data resulting in false alerts).
  • Enough headroom/scaling capacity for future growth, up to 100M active time series.
  • Ability to split DB replicas per workload. Alert queries go to one replica, user queries go to another (speed for users, effective cache).

Optimizing for those points and our specific workload, VictoriaMetrics proved to be the best option. As icing on the cake, we got the PromQL extensions: default 0 and histogram are my favorite ones, for example. What we especially like is that a lot of TSDB parameters are easily available via config options, which makes the TSDB easy to tune for a specific use case. Also worth noting are the great community in the Slack channel and, of course, the maintainer support.
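As a hedged illustration of those two extensions, here is a hypothetical vmalert-style rules file (the metric names and rules are invented, not Wix's actual configuration):

```yaml
# rules.yml sketch: MetricsQL extensions used in vmalert-compatible rules.
groups:
  - name: metricsql-extensions
    rules:
      # `default 0` substitutes 0 for gaps in the series, so the alert
      # does not flap when a target briefly reports no samples.
      - alert: HighErrorRate
        expr: (sum(rate(app_errors_total[5m])) default 0) > 10
      # `histogram()` aggregates values across series into
      # VictoriaMetrics-style histogram buckets.
      - record: app:request_duration:histogram
        expr: histogram(app_request_duration_seconds)
```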

Alex Ulstein, Head of Monitoring, Wix.com

Wedos is the biggest Czech hosting provider. We have our own private data center that holds only our servers and technologies. A second data center, where the servers will be cooled in an oil bath, is under construction. We started using the cluster version of VictoriaMetrics to store Prometheus metrics from all our infrastructure after receiving positive references from friends who successfully use VictoriaMetrics.


  • The number of active time series: 5M.
  • Ingestion rate: 170K data points per second.
  • Query duration: median is ~2ms, 99th percentile is ~50ms.

We like the configuration simplicity and zero maintenance of VictoriaMetrics: we installed it once and forgot about it. It works out of the box without any issues.


Synthesio is the leading social intelligence tool for social media monitoring & social analytics.

We fully migrated from Metrictank to VictoriaMetrics.


  • Single-node setup.
  • Active time series: 5 million.
  • Total number of datapoints: 1.25 trillion.
  • Ingestion rate: 550K datapoints per second.
  • Disk usage: 150 GB.
  • Index size: 3 GB.
  • Query duration, 99th percentile: 147ms.
  • Churn rate: 100 new time series per hour.

MHI Vestas Offshore Wind

The mission of MHI Vestas Offshore Wind is to co-develop offshore wind as an economically viable and sustainable energy resource to benefit future generations.

MHI Vestas Offshore Wind is using VictoriaMetrics to ingest and visualize sensor data from offshore wind turbines. The very efficient storage and the ability to backfill were key in choosing VictoriaMetrics. MHI Vestas Offshore Wind is running the cluster version of VictoriaMetrics on Kubernetes, deployed via the Helm charts, so that capacity can be scaled up as the solution is rolled out.
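A minimal sketch of how such a deployment could be scaled through Helm values, assuming the community victoria-metrics-cluster chart (exact value names may differ between chart versions):

```yaml
# values.yaml sketch: each cluster component scales independently.
vmstorage:
  replicaCount: 3          # add storage nodes as sensor data volume grows
  persistentVolume:
    size: 500Gi
vminsert:
  replicaCount: 2          # stateless ingestion frontends
vmselect:
  replicaCount: 2          # stateless query frontends
```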

Numbers with the current limited rollout:

  • Active time series: 270K
  • Ingestion rate: 70K/sec
  • Total number of datapoints: 850 billion
  • Data size on disk: 800 GiB
  • Retention time: 3 years


Dreamteam successfully uses single-node VictoriaMetrics in multiple environments.


  • Active time series: from 350K to 725K.
  • Total number of time series: from 100M to 320M.
  • Total number of datapoints: from 120 billion to 155 billion.
  • Retention: 3 months.

In the production environment, VictoriaMetrics runs on 2 M5 EC2 instances in “HA” mode, managed by Terraform and an Ansible TF module. 2 Prometheus instances write to both VMs, with 2 Promxy replicas acting as a load balancer for reads.
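A hedged sketch of what such a Promxy configuration could look like (the hostnames are hypothetical): since both VictoriaMetrics nodes receive the same writes, they belong in a single server group, and Promxy merges their data, filling gaps left by either node:

```yaml
# promxy config sketch: the two single-node VictoriaMetrics instances
# hold the same data, so they form one server group of replicas.
promxy:
  server_groups:
    - static_configs:
        - targets:
            - vm-a.example.internal:8428
            - vm-b.example.internal:8428
```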


Brandwatch is the world’s pioneering digital consumer intelligence suite, helping over 2,000 of the world’s most admired brands and agencies to make insightful, data-driven business decisions.

The engineering department at Brandwatch had been using InfluxDB to store application metrics for many years, and when the End-of-Life of InfluxDB version 1.x was announced, we decided to re-evaluate our whole metrics collection and storage stack.

Main goals for the new metrics stack were:

  • improved performance
  • lower maintenance
  • support for native clustering in open source version
  • the less the metrics shipping had to change, the better
  • achieving longer data retention would be great but not critical

We initially looked at CrateDB and TimescaleDB, which both turned out to have limitations or requirements in their open source versions that made them unfit for our use case. Prometheus was also considered, but switching from push to pull metrics was a big change we did not want to include in an already significant migration.

Once we found VictoriaMetrics it solved the following problems:

  • it is very lightweight, so we can now run virtual machines instead of dedicated hardware machines for metrics storage
  • it has a very short startup time, and any possible gaps in data can easily be filled in by using Promxy
  • we could continue using Telegraf as our metrics agent and ship identical metrics to both InfluxDB and VictoriaMetrics during a migration period (migration just about to start); see the sketch after this list
  • compression is really good, so we can store more metrics; if we need to extend our retention period beyond what a single virtual machine's disks allow, we can spin up new VictoriaMetrics instances for new data, keep read-only nodes with the older data, and aggregate all the data from VictoriaMetrics with Promxy
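Here is a hedged sketch of the dual-shipping setup mentioned above, as a Telegraf configuration fragment (the URLs are hypothetical; VictoriaMetrics accepts the InfluxDB line protocol, so a second influxdb output is enough):

```toml
# telegraf.conf fragment: ship identical metrics to both backends
# during the migration period.
[[outputs.influxdb]]
  urls = ["http://influxdb.example.internal:8086"]
  database = "telegraf"

[[outputs.influxdb]]
  # VictoriaMetrics single-node listens on port 8428 and accepts
  # InfluxDB line protocol writes; it has no databases to create.
  urls = ["http://victoriametrics.example.internal:8428"]
  skip_database_creation = true
```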

High availability is handled the same way as with InfluxDB: by running parallel single nodes of VictoriaMetrics.


  • active time series: up to 25 million
  • ingestion rate: ~300,000 datapoints per second
  • total number of datapoints: 380 billion and growing
  • total number of entries in inverted index: 575 million and growing
  • daily time series churn rate: ~550,000
  • data size on disk: ~660 GB and growing
  • index size on disk: ~9.3 GB and growing
  • average datapoint size on disk: ~1.75 bytes

Query rates are insignificant as we have concentrated on data ingestion so far.

Anders Bomberg, Monitoring and Infrastructure Team Lead, Brandwatch