Results output


By default, k6 will print runtime information and general results to stdout while the test is running, and a summary after the test has ended.

k6 can also emit more granular result data using dedicated output plugins. Currently, these include JSON, CSV, InfluxDB, Apache Kafka, StatsD, Datadog, and Load Impact Insights outputs.

Below we will explain what is sent to stdout, and how each of these output plugins works.

To learn more about the metrics k6 collects, see the Metrics management page.

Runtime output to stdout

Typical stdout output from a k6 test run


By default, k6 sends its output to stdout only. When started, it displays a very tasteful ASCII splash screen with the k6 logo and version information, plus details about the test and the active options. Let's go through the items one by one:

  • execution: local means that this k6 instance is not being used to control another k6 instance (distributed execution).
  • output: - means that output is sent to stdout only.
  • script: group.js (js) shows which script we are running. The (js) at the end indicates that k6 thinks this file contains JavaScript code (that should be executed by the VUs).
  • duration: 0s, iterations: 1 means that each VU in the test will perform one single script iteration (calling the default function once), and there is no time limit set.
  • vus: 1, max: 1 means we simulate 1 VU (virtual user) and allocate resources for a "max" of 1 VU (so we can't scale up the load level in this case).
  • done [==============] 800ms / 800ms is the progress bar that updates while the test is running, indicating how far the test has come and how much time has passed.
  • █ my user scenario is the name of a group we have created in our JS script.
  • █ front page is the name of a sub-group that was created inside the previously mentioned group ("my user scenario").
  • ✓ 100.00% - status code is 200 is the result from a check() that was executed inside the "front page" group. Note how this check result is indented, to indicate that it belongs to the "front page" group. The "front page" group name, in turn, is indented to indicate it belongs to its parent group ("my user scenario").
  • █ features page is another group that belongs to the parent group "my user scenario".
  • ✓ 100.00% - status code is 200 and ✓ 100.00% - h1 message is correct are two more checks that belong to the "features page" group.
  • checks................: 100.00% tells us the percentage of our checks that passed.

And then comes the HTTP timing information. Several metrics are reported here, with averages, percentiles, etc. for each of them:

  • http_req_blocked The time VUs spent waiting to be allocated a TCP connection from the connection pool.
  • http_req_connecting The time VUs spent performing TCP handshakes (setting up TCP connections to the remote host).
  • http_req_looking_up The time spent performing DNS lookups.
  • http_req_sending The time spent transmitting HTTP requests to the remote host.
  • http_req_waiting The time spent waiting for a response to come back from the remote host (after having sent a request).
  • http_req_receiving The time spent receiving a reply from the remote host.
  • http_req_duration Total time for the request. It's equal to http_req_sending + http_req_waiting + http_req_receiving (i.e. how long the remote server took to process the request and respond, excluding the initial DNS lookup/connection times).

All of these are metrics of the Trend type, which means you can extract max, min, percentile, average values from them. On stdout they are printed like this:

http_req_duration.....: avg=46.32ms, max=46.32ms, med=46.32ms, min=46.32ms, p90=46.32ms, p95=46.32ms
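These Trend aggregates can be reproduced from raw sample values. Here is a minimal sketch in Python (the sample values are made up, and the nearest-rank percentile used here may differ slightly from k6's internal algorithm):

```python
# Sketch: compute Trend-style aggregates (avg, min, med, max, p90, p95)
# from a list of raw samples. Sample values below are illustrative only.
def percentile(sorted_vals, p):
    """Nearest-rank style percentile on an already-sorted list."""
    idx = int(round((p / 100) * (len(sorted_vals) - 1)))
    return sorted_vals[idx]

def trend_summary(samples):
    s = sorted(samples)
    return {
        "avg": sum(s) / len(s),
        "min": s[0],
        "med": percentile(s, 50),
        "max": s[-1],
        "p90": percentile(s, 90),
        "p95": percentile(s, 95),
    }

print(trend_summary([46.32, 50.0, 44.1, 48.7]))
```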

After the HTTP timing metrics, there will be a few final lines of output:

  • http_reqs........: 2 The total number of HTTP requests made during the whole load test.
  • iterations........: 1 The total number of times all VUs in the test managed to run through the default() function.
  • vus.................: 1 How many VUs the test was configured to simulate.
  • vus_max........: 1 The number of pre-allocated VU slots the test was configured for (vus_max allows you to scale up the number of VUs in the test to max that number).

JSON output

You can also make k6 output detailed statistics in JSON format by using the --out/-o option for k6 run, like this:

k6 run --out json=my_test_result.json script.js

The JSON file will contain lines like these:

{"type":"Point","data":{"time":"2017-05-09T14:34:45.239531499+02:00","value":459.865729,"tags":{"group":"::my group::json","method":"GET","status":"200","url":"https://httpbin.org/get"}},"metric":"http_req_duration"}

Each line will either contain information about a metric, or log a data point (sample) for a metric. Lines consist of three items:

  • "type" - can have the values "Metric" or "Point" where "Metric" means the line is declaring a metric, and "Point" is an actual data point (sample) for a metric.
  • "data" - is a dictionary that contains lots of stuff, varying depending on the "type" above.
  • "metric" - the name of the metric.
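A line like the example above can be picked apart with a few lines of Python (a sketch; the field names come straight from the format described here):

```python
import json

def parse_k6_json_line(line):
    """Split one line of k6 JSON output into its three items."""
    entry = json.loads(line)
    return entry["type"], entry["metric"], entry["data"]

# The http_req_duration sample line shown above:
line = ('{"type":"Point","data":{"time":"2017-05-09T14:34:45.239531499+02:00",'
        '"value":459.865729,"tags":{"group":"::my group::json","method":"GET",'
        '"status":"200","url":"https://httpbin.org/get"}},"metric":"http_req_duration"}')

kind, metric, data = parse_k6_json_line(line)
print(kind, metric, data["value"])
```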

"type": "Metric"

This line contains information about the nature of a metric. Here, "data" will contain the following:

  • "type" - the metric type ("gauge", "rate", "counter" or "trend")
  • "contains" - information on the type of data collected (can e.g. be "time" for timing metrics)
  • "tainted" - has this metric caused a threshold to fail?
  • "threshold" - are there any thresholds attached to this metric?
  • "submetrics" - any derived metrics created as a result of adding a threshold using tags.

"type": "Point"

This line contains actual data samples. Here, "data" will contain these fields:

  • "time" - timestamp when the sample was collected
  • "value" - the actual data sample; time values are in milliseconds
  • "tags" - dictionary with tagname-tagvalue pairs that can be used when filtering results data

Processing JSON output

We recommend using jq to process the k6 JSON output. jq is a lightweight and flexible command-line JSON processor.

You can quickly create filters to return data points for a particular metric:

jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200")' myscript-output.json

And calculate aggregated values for any metric:

# average
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s 'add/length'

# min
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s min

# max
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s max

For more advanced cases, check out the jq Manual.
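If jq is not available, the same filtering and aggregation can be done with a short Python script. This is a sketch; in practice you would pass it the lines of your output file (e.g. open("myscript-output.json")) instead of the inline samples:

```python
import json

def duration_values(lines, metric="http_req_duration"):
    """Collect the values of Point samples for a given metric (2xx responses only)."""
    values = []
    for line in lines:
        entry = json.loads(line)
        if entry["type"] != "Point" or entry["metric"] != metric:
            continue
        if entry["data"]["tags"].get("status", "").startswith("2"):
            values.append(entry["data"]["value"])
    return values

# Made-up sample lines in the k6 JSON output format:
sample = [
    '{"type":"Point","metric":"http_req_duration","data":{"value":459.87,"tags":{"status":"200"}}}',
    '{"type":"Point","metric":"http_req_duration","data":{"value":120.5,"tags":{"status":"500"}}}',
    '{"type":"Point","metric":"http_reqs","data":{"value":1,"tags":{"status":"200"}}}',
]
vals = duration_values(sample)
print("avg:", sum(vals) / len(vals), "min:", min(vals), "max:", max(vals))
```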

CSV output

New in version 0.26.0

It's also possible to output the results to a CSV file:

k6 run --out csv=my_test_result.csv script.js
K6_CSV_FILENAME="my_test_result.csv" k6 run --out csv script.js

This format is more compact and efficient than the JSON output, so it's a good choice if JSON file size is an issue.

Furthermore, it's possible to configure the interval at which metrics are written out to disk with K6_CSV_SAVE_INTERVAL (defaults to 1s).
For example: k6 run --out csv=file_name=somefile.csv,save_interval=2s script.js or K6_CSV_FILENAME=somefile.csv K6_CSV_SAVE_INTERVAL=2s k6 run --out csv script.js.

The first line of the output is a header containing the column names.
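Because the file starts with a header row, Python's csv.DictReader can consume it directly. A sketch using made-up rows; the actual column names are whatever appears in your file's header line:

```python
import csv
import io

# Hypothetical rows -- the real column names are taken from the header
# line of your k6 CSV file; csv.DictReader picks them up automatically.
sample = io.StringIO(
    "metric_name,timestamp,metric_value\n"
    "http_req_duration,1595325560,120.5\n"
    "http_req_duration,1595325561,98.3\n"
)

rows = list(csv.DictReader(sample))
durations = [float(r["metric_value"]) for r in rows
             if r["metric_name"] == "http_req_duration"]
print("avg:", sum(durations) / len(durations))
```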


InfluxDB output

Detailed statistics can also be sent directly to an InfluxDB instance:

k6 run --out influxdb=http://localhost:8086/k6 script.js

The above will make k6 connect to an InfluxDB instance listening to port 8086 on localhost, and insert all test results data into a database named "k6" (which will be created if it doesn't exist).

Then you can use some other tool like Grafana to visualize the data.

Read more in this Tutorial about using k6 with InfluxDB and Grafana.

Apache Kafka output

You can also push the emitted metrics to Apache Kafka. You can configure the broker (or multiple ones), topic and message format directly from the command line, like this:
k6 run --out kafka=brokers=broker_host:8000,topic=k6 script.js
or if you want multiple brokers:
k6 run --out kafka=brokers={broker1,broker2},topic=k6,format=json script.js

You can also specify the message format k6 will use. By default, it will be the same as the JSON output, but you can also use the InfluxDB line protocol for direct "consumption" by InfluxDB:
k6 run --out kafka=brokers=someBroker,topic=someTopic,format=influxdb script.js

You can even modify some of the format settings such as tagsAsFields:
k6 run --out kafka=brokers=someBroker,topic=someTopic,format=influxdb,influxdb.tagsAsFields={url,myCustomTag} script.js

To know more about this integration, read the Integrating k6 with Apache Kafka blog post.

StatsD output

k6 can also push the metrics to a StatsD service like:

k6 run --out statsd script.js

The following options can be configured:

  • K6_STATSD_ADDR: address of the StatsD service; currently only UDP is supported. The default value is localhost:8125.
  • K6_STATSD_NAMESPACE: the namespace used as a prefix for all the metric names. The default value is k6.
  • K6_STATSD_PUSH_INTERVAL: configure how often data batches are sent. The default value is 1s.
  • K6_STATSD_BUFFER_SIZE: the buffer size. The default value is 20.

Datadog output

You can also store your k6 metrics into the Datadog platform.

According to Datadog: "The easiest way to get your custom application metrics into Datadog is to send them to DogStatsD". You can run the service as a Docker container like:

docker run \
            -e DD_API_KEY=<YOUR_DATADOG_API_KEY> \
            -p 8125:8125/udp \
            datadog/dogstatsd:latest
Once you have the DogStatsD service running, you can run your load test and push the metrics like:

k6 run --out datadog script.js

The Datadog plugin works like the StatsD plugin, but Datadog has a concept of metric tags, the key-value metadata pairs that will allow you to distinguish between requests for different URLs, response statuses, different groups, etc. These options can be configured:


  • K6_DATADOG_ADDR: address of the DogStatsD service; currently only UDP is supported. The default value is localhost:8125.
  • K6_DATADOG_NAMESPACE: the namespace used as a prefix for all the metric names. The default value is k6.
  • K6_DATADOG_PUSH_INTERVAL: configure how often data batches are sent. The default value is 1s.
  • K6_DATADOG_BUFFER_SIZE: the buffer size. The default value is 20.
  • K6_DATADOG_TAG_BLACKLIST: a comma-separated list of tags that should NOT be sent to Datadog. All other metric tags that k6 emits will be sent. The default value is empty.

For an expanded tutorial, check out the blog post: "How to send k6 metrics to Datadog".

Load Impact Insights output

You can also stream your test results in real time to the Load Impact cloud.

Load Impact Insights provides support to automatically interpret and visualize your results.

K6_CLOUD_TOKEN=<LoadImpact token> k6 run --out cloud script.js


Starting with v0.18.0, K6CLOUD_TOKEN has been renamed to K6_CLOUD_TOKEN. The old spelling still works in v0.18.0, but a deprecation message will be printed to the terminal.

After running the command, the console shows the URL to access your test results.

You can read more about Load Impact Insights in its documentation.

Multiple outputs

You can simultaneously send the emitted metrics to several outputs by using the CLI --out flag multiple times, for example:
k6 run --out json=test.json --out influxdb=http://localhost:8086/k6
