A modern load testing tool, using Go and JavaScript

"like unit testing, for performance"

k6 is a modern load testing tool, building on Load Impact's years of experience. It provides a clean, approachable JavaScript scripting API, distributed and cloud execution, and orchestration via a REST API.


This section covers metrics management in k6: which metrics k6 collects automatically (built-in metrics), and what custom metrics you can make k6 collect.

Built-in metrics

The built-in metrics are the ones you can see output to stdout when you run the simplest possible k6 test, e.g. k6 run github.com/loadimpact/k6/samples/http_get.js which will output something like the below:

All the http_req_... lines and the ones after them are built-in metrics that get written to stdout at the end of a test.

The following six built-in metrics will always be collected by k6:

| Metric name | Description |
| --- | --- |
| vus | Current number of active virtual users. |
| vus_max | Max possible number of virtual users (VU resources are preallocated, to ensure performance will not be affected when scaling up the load level). |
| iterations | The aggregate number of times the VUs in the test have executed the JS script (the default function). Or, if the test is not using a JS script but accessing a single URL, the number of times the VUs have requested that URL. |
| data_received | The amount of received data. |
| data_sent | The amount of data sent. |
| checks | Number of failed checks. |

HTTP-specific built-in metrics

There are also built-in metrics that will only be generated when/if HTTP requests are made:

| Metric name | Value type | Description |
| --- | --- | --- |
| http_reqs | | How many HTTP requests k6 has generated, in total. |
| http_req_blocked | float | Time spent blocked (waiting for a free TCP connection slot) before initiating the request. |
| http_req_looking_up | float | Time spent looking up the remote host name in DNS. |
| http_req_connecting | float | Time spent establishing a TCP connection to the remote host. |
| http_req_tls_handshaking | | Time spent handshaking a TLS session with the remote host. |
| http_req_sending | float | Time spent sending data to the remote host. |
| http_req_waiting | float | Time spent waiting for a response from the remote host (a.k.a. "time to first byte", or "TTFB"). |
| http_req_receiving | float | Time spent receiving response data from the remote host. |
| http_req_duration | float | Total time for the request. It's equal to http_req_sending + http_req_waiting + http_req_receiving (i.e. how long the remote server took to process the request and respond, without the initial DNS lookup/connection times). |
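To make that composition concrete, here is a plain-JavaScript sketch (not the k6 API; the timing values are made up for illustration) of how http_req_duration relates to its three components:

```javascript
// Hypothetical per-request timings in milliseconds; in a real k6 script
// these values would come from res.timings on an HTTP Response object.
const timings = { sending: 2, waiting: 80, receiving: 3 };

// http_req_duration = http_req_sending + http_req_waiting + http_req_receiving
const duration = timings.sending + timings.waiting + timings.receiving;

console.log(duration); // 85, excluding DNS lookup and connection time
```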

Accessing HTTP timings from a script

If you want to access the timing information from an individual HTTP request, the built-in HTTP timing metrics are also available in the HTTP Response object:

import http from "k6/http";

export default function() {
  var res = http.get("http://httpbin.org");
  console.log("Response time was " + String(res.timings.duration) + " ms");
}

In the above snippet, res is an HTTP Response object containing:

  • res.body (string containing the HTTP response body)
  • res.headers (object containing header-name/header-value pairs)
  • res.status (integer containing HTTP response code received from server)
  • res.timings (object containing HTTP timing information for the request, in ms)
    • res.timings.blocked = http_req_blocked
    • res.timings.looking_up = http_req_looking_up
    • res.timings.connecting = http_req_connecting
    • res.timings.sending = http_req_sending
    • res.timings.waiting = http_req_waiting
    • res.timings.receiving = http_req_receiving
    • res.timings.duration = http_req_duration

Custom metrics

You can also create your own metrics, that are reported at the end of a load test, just like HTTP timings:

import http from "k6/http";
import { Trend } from "k6/metrics";

var myTrend = new Trend("waiting_time");

export default function() {
  var r = http.get("https://httpbin.org");
  myTrend.add(r.timings.waiting);
}

The above code will:

  • create a Trend metric named “waiting_time”, referred to in the code using the variable name myTrend
  • add the waiting time (time to first byte) of each request to the Trend metric

Custom metrics will be reported at the end of a test. Here is how the output might look:

Metric types

All metrics (both the built-in ones and the custom ones) have a type. There are four different metrics types, and they are: Counter, Gauge, Rate and Trend.

Counter (cumulative metric)

import { Counter } from "k6/metrics";

var myCounter = new Counter("my_counter");

export default function() {
  myCounter.add(1);
  myCounter.add(2);
}

The above code will generate the following output:

The value of my_counter will be 3 (if you run it one single iteration - i.e. without specifying --iterations or --duration).

Note that there is currently no way of accessing the value of any custom metric from within JavaScript. Note also that counters that have value zero (0) at the end of a test are a special case - they will NOT be printed to the stdout summary.
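The aggregation rule itself is just a running sum. As a plain-JavaScript sketch (not the k6 API):

```javascript
// A Counter accumulates every value passed to add().
let counter = 0;
for (const v of [1, 2]) {
  counter += v; // mirrors myCounter.add(v)
}
console.log(counter); // 3
```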

Gauge (keep the latest value only)

import { Gauge } from "k6/metrics";

var myGauge = new Gauge("my_gauge");

export default function() {
  myGauge.add(3);
  myGauge.add(1);
  myGauge.add(2);
}

The above code will result in output like this:

The value of my_gauge will be 2 at the end of the test. As with the Counter metric above, a Gauge with value zero (0) will NOT be printed to the stdout summary at the end of the test.
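The gauge semantics can be sketched in plain JavaScript (not the k6 API): each added value simply replaces the previous one.

```javascript
// A Gauge keeps only the most recently added value.
let gauge;
for (const v of [3, 1, 2]) {
  gauge = v; // each add() overwrites the previous value
}
console.log(gauge); // 2 - the last value added, not a sum
```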

Trend (collect trend statistics (min/max/avg/percentiles) for a series of values)

import { Trend } from "k6/metrics";

var myTrend = new Trend("my_trend");

export default function() {
  myTrend.add(1);
  myTrend.add(2);
}

The above code will make k6 print output like this:

A trend metric is really a container that holds a set of sample values, and which we can ask to output statistics (min, max, average, median or percentiles) about those samples. By default, k6 will print average, min, max, median, 90th percentile and 95th percentile.
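The statistics a Trend reports can be sketched in plain JavaScript (not the k6 API; the sample values are made up for illustration):

```javascript
// A Trend stores all samples and reports statistics over them.
const samples = [100, 200, 300, 400];
const min = Math.min(...samples);
const max = Math.max(...samples);
const avg = samples.reduce((sum, v) => sum + v, 0) / samples.length;
console.log(min, max, avg); // 100 400 250
```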

Rate (keeps track of percentage of values in a series that are non-zero)

import { Rate } from "k6/metrics";

var myRate = new Rate("my_rate");

export default function() {
  myRate.add(true);
  myRate.add(false);
}

The above code will make k6 print output like this:

The value of my_rate at the end of the test will be 50%, indicating that half of the values added to the metric were non-zero.
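The rate calculation can be sketched in plain JavaScript (not the k6 API):

```javascript
// A Rate tracks the fraction of added values that are non-zero/truthy.
const values = [true, false]; // mirrors myRate.add(true); myRate.add(false)
const nonZero = values.filter(Boolean).length;
const rate = nonZero / values.length;
console.log(rate); // 0.5, i.e. 50%
```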


Note that custom metrics are only collected from VU threads at the end of a VU iteration, which means that for long-running scripts you may not see any custom metrics until a while into the test.

Metric graphs in Load Impact Insights

If you use Load Impact Insights, it can draw all the metrics as shown on the screenshot below. By default it shows only very basic metrics, but you can add both system and custom metrics as more graphs, or as different metrics on the same graph.

Updated 10 months ago

