Results output

Overview

By default, k6 will print runtime information and general results to stdout while the test is running, and a summary after the test has ended.

k6 may also output more granular result data using special output plugins. Currently, there are three such plugins that can output data:

  • A JSON plugin that writes data in JSON format to a file
  • An InfluxDB plugin that writes data points to an InfluxDB instance
  • A Load Impact plugin that streams your test results to the Load Impact cloud platform

Below we will explain what is sent to stdout, and how each of these output plugins works.

Runtime output to stdout

[Image: typical stdout output from a k6 test run]

By default, k6 sends its output to stdout only. When started up, it will display a very tasteful ASCII splash screen with the k6 logo and version information, plus details about the test and the active options. We will go through these items one by one:

  • execution: local The test is being executed locally; this k6 instance is not controlling other k6 instances (distributed execution).
  • output: - No output plugin is active, so results are sent to stdout only.
  • script: group.js (js) Shows what script we are running. The (js) at the end indicates that k6 thinks this file contains JavaScript code (that should be executed by the VUs).
  • duration: 0s, iterations: 1 Each VU in the test will perform a single script iteration (calling the default function once), and there is no time limit set.
  • vus: 1, max: 1 Simulate 1 VU (virtual user) and allocate resources for a "max" of 1 VU (meaning the load level can't be scaled up in this case).
  • web ui: http://127.0.0.1:6565/ There is a basic command-and-control web UI built into k6, accessible at this address. The same address also serves the built-in REST API that can be used to control and query k6.
  • done [==============] 800ms / 800ms This is the progress bar that updates while the test is running, indicating how far the test has progressed and how much time has passed.
  • █ my user scenario is the name of a group we have created in our JS script (a script sketch reproducing this structure follows this list).
  • █ front page is the name of a sub-group that was created inside the previously mentioned group ("my user scenario").
  • ✓ 100.00% - status code is 200 is the result from a check() that was executed inside the "front page" group. Note how this check result is indented, to indicate that it belongs to the "front page" group. The "front page" group name, in turn, is indented to indicate it belongs to its parent group ("my user scenario").
  • █ features page is another group that belongs to the parent group "my user scenario".
  • ✓ 100.00% - status code is 200 and ✓ 100.00% - h1 message is correct are two more checks that belong to the "features page" group.
  • checks................: 100.00% tells us the percentage of our checks that passed.
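
For reference, here is a minimal script sketch that would produce the group and check structure shown above (the URLs and the "Features" text are placeholder assumptions, not taken from the test shown):

import http from "k6/http";
import { group, check } from "k6";

export default function() {
    // Top-level group, shown as "█ my user scenario" in the output
    group("my user scenario", function() {
        // Sub-group, indented under its parent in the output
        group("front page", function() {
            let res = http.get("https://example.com/");
            // Reported as "✓ 100.00% - status code is 200"
            check(res, {
                "status code is 200": (r) => r.status === 200,
            });
        });
        group("features page", function() {
            let res = http.get("https://example.com/features");
            check(res, {
                "status code is 200": (r) => r.status === 200,
                "h1 message is correct": (r) => r.body.indexOf("Features") !== -1,
            });
        });
    });
}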

And then comes the HTTP timing information. Several metrics are reported here, each with average, min, max, median and percentile values:

  • http_req_blocked The time VUs spent waiting to be allocated a TCP connection from the connection pool.
  • http_req_connecting The time VUs spent performing TCP handshakes (setting up TCP connections to the remote host).
  • http_req_duration Total time for the request, excluding time spent blocked (http_req_blocked), doing DNS lookup (http_req_looking_up) and establishing the TCP connection (http_req_connecting).
  • http_req_looking_up The time spent performing DNS lookups.
  • http_req_sending The time spent transmitting HTTP requests to the remote host.
  • http_req_waiting The time spent waiting for a response to come back from the remote host (after having sent a request).
  • http_req_receiving The time spent receiving a reply from the remote host.

All of these are metrics of the Trend type, which means you can extract max, min, median, percentile and average values from them. On stdout they are printed like this:

http_req_duration.....: avg=46.32ms, max=46.32ms, med=46.32ms, min=46.32ms, p90=46.32ms, p95=46.32ms

After the HTTP timing metrics, there will be a few final lines of output:

  • http_reqs.........: 2 The total number of HTTP requests made during the whole load test.
  • iterations........: 1 The total number of times the VUs in the test ran through the default() function.
  • vus...............: 1 How many VUs the test was configured to simulate.
  • vus_max...........: 1 The number of pre-allocated VU slots the test was configured for (vus_max allows you to scale the number of VUs in the test up to that number).
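
The vus, duration and iterations values shown in the summary simply echo the test configuration. As an example, they can be set on the command line with the standard k6 run flags (a hypothetical 30-second test with 10 VUs):

k6 run --vus 10 --duration 30s script.js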

JSON output

You can also make k6 output detailed statistics in JSON format by using the --out/-o option for k6 run, like this:

k6 run --out json=my_test_result.json script.js

The JSON file will contain lines like these:

{"type":"Metric","data":{"type":"gauge","contains":"default","tainted":null,"thresholds":[],"submetrics":null},"metric":"vus"}
{"type":"Point","data":{"time":"2017-05-09T14:34:45.625742514+02:00","value":5,"tags":null},"metric":"vus"}
{"type":"Metric","data":{"type":"trend","contains":"time","tainted":null,"thresholds":["avg\u003c1000"],"submetrics":null},"metric":"http_req_duration"}
{"type":"Point","data":{"time":"2017-05-09T14:34:45.239531499+02:00","value":459.865729,"tags":{"group":"::my group::json","method":"GET","status":"200","url":"https://httpbin.org/get"}},"metric":"http_req_duration"}

Each line will either contain information about a metric, or log a data point (sample) for a metric. Lines consist of three items:

  • "type" - can have the values "Metric" or "Point" where "Metric" means the line is declaring a metric, and "Point" is an actual data point (sample) for a metric.
  • "data" - is a dictionary that contains lots of stuff, varying depending on the "type" above.
  • "metric" - the name of the metric.

"type": "Metric"

This line contains information about the nature of a metric. Here, "data" will contain the following:

  • "type" - the metric type ("gauge", "rate", "counter" or "trend")
  • "contains" - information on the type of data collected (can e.g. be "time" for timing metrics)
  • "tainted" - has this metric caused a threshold to fail?
  • "threshold" - are there any thresholds attached to this metric?
  • "submetrics" - any derived metrics created as a result of adding a threshold using tags.

"type": "Point"

This line contains actual data samples. Here, "data" will contain these fields:

  • "time" - timestamp when the sample was collected
  • "value" - the actual data sample
  • "tags" - dictionary with tagname-tagvalue pairs that can be used when filtering results data

Processing JSON output

We recommend using jq to process the k6 JSON output. jq is a lightweight and flexible command-line JSON processor.

You can quickly create filters that return the data points for a particular metric:

jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200")' myscript-output.json

And calculate an aggregated value of any metric.

# average
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s 'add/length'

# min
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s min

# max
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s max
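
The same pattern extends to percentiles. For example, a rough 95th percentile can be computed by sorting the values and indexing into the array (no interpolation, so the result is approximate):

# p95 (approximate)
jq '. | select(.type=="Point" and .metric == "http_req_duration" and .data.tags.status >= "200") | .data.value' myscript-output.json | jq -s 'sort | .[(length*0.95|floor)]'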

For more advanced cases, check out the jq Manual.

InfluxDB output

Detailed statistics can also be sent directly to an InfluxDB instance:

k6 run --out influxdb=http://localhost:8086/k6 script.js

The above will make k6 connect to an InfluxDB instance listening on port 8086 on localhost, and insert all test results data into a database named "k6" (which will be created if it doesn't exist). You can then use another tool, such as Grafana, to visualize the data.
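
If you don't already have an InfluxDB instance available, one quick way to get one locally is with Docker; a sketch, assuming Docker is installed and using an InfluxDB 1.x image (the exact tag is an assumption):

docker run -d --name influxdb -p 8086:8086 influxdb:1.8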

To learn more about the metrics k6 collects, see the Metrics management page.

Load Impact Insights output

You can also stream your test results in real time to the Load Impact cloud.

Load Impact Insights automatically interprets and visualizes your results.

K6CLOUD_TOKEN=<LoadImpact token> k6 run --out cloud script.js

After running the command, the console shows the URL to access your test results.

You can read more about Load Impact Insights.
