Why include load testing in CI
A load test on merge is worth ten perf reviews in standup. Three concrete wins:
- Catch regressions early. An ORM change that removes an index, a new N+1 in a resolver, an accidental sync-in-loop: all obvious in a 60-second load run, and otherwise invisible until production.
- Build a latency history for free. Store each run's JSON output as a CI artifact. Over a few weeks you have a per-commit time-series of p95 without wiring up a separate observability platform.
- Turn performance into a shared responsibility. When a failing build blocks merge on latency, the PR author fixes it, same as any other test.
Installing the Clobbr CLI
The CLI ships on npm. Install it globally in your CI job, or use npx to avoid the install step:
npm install -g @clobbr/cli
# or, without the global install:
npx @clobbr/cli --help

GitHub Actions example
Drop this into .github/workflows/load-test.yml:
name: load-test

on:
  pull_request:
  push:
    branches: [main]

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install Clobbr CLI
        run: npm install -g @clobbr/cli
      - name: Run load test
        env:
          API_URL: ${{ secrets.STAGING_URL }}
          API_TOKEN: ${{ secrets.STAGING_TOKEN }}
        run: |
          clobbr \
            --url "$API_URL/v1/users" \
            --iterations 200 \
            --parallel \
            --headers "Authorization: Bearer $API_TOKEN" \
            --output json \
            --success-rate 99 \
            > load-test.json
      - name: Upload result
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: load-test-result
          path: load-test.json

The --success-rate flag fails the step if fewer than 99% of requests succeed. Add a follow-up step to parse load-test.json and enforce a p95 threshold too.
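That follow-up step could gate on latency with jq. A minimal sketch, assuming the report exposes a numeric top-level p95 field and a 300 ms limit; check the actual JSON shape your CLI version emits and adjust the jq path accordingly:

```shell
# Sample report standing in for Clobbr's JSON output
# (the "p95" field name is an assumption; adapt to your schema).
echo '{"p95": 240, "successRate": 100}' > load-test.json

# Fail the build if p95 exceeds 300 ms.
P95=$(jq -r '.p95' load-test.json)
echo "p95: ${P95} ms (limit: 300 ms)"
# awk handles the float comparison portably in POSIX sh
if awk -v p="$P95" 'BEGIN { exit !(p > 300) }'; then
  echo "p95 latency over threshold" >&2
  exit 1
fi
```

Swap the sample-report line for the real artifact produced by the previous step, and the 300 ms constant for whatever budget your endpoint has.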
GitLab CI example
The equivalent .gitlab-ci.yml stanza:
load-test:
  image: node:20
  stage: test
  script:
    - npm install -g @clobbr/cli
    - |
      clobbr \
        --url "$STAGING_URL/v1/users" \
        --iterations 200 \
        --parallel \
        --headers "Authorization: Bearer $STAGING_TOKEN" \
        --output json \
        --success-rate 99 \
        > load-test.json
  artifacts:
    when: always
    paths:
      - load-test.json
    expire_in: 30 days

Failing the build on threshold breach
Two common strategies:
- Built-in success rate flag: --success-rate 99 fails the run if fewer than 99% of requests return a 2xx. Simplest, good first line of defense.
- Parse JSON output for latency: export with --output json, grab the p95 or p99 with jq, compare against a threshold, and exit 1 if over. This lets you enforce latency gates specifically.
For per-commit drift detection, store each run's JSON as a CI artifact and diff the percentile numbers over time. A small script in your ops repo can alert you when p95 moves by more than, say, 20% versus the last green main-branch run.
// FAQ
Frequently asked questions
Which CI providers does Clobbr CLI support?
Should I run load tests on every pull request?
How do I fail the build on regression?
Export with --output json, parse p95 or p99 in the next step, and exit 1 if it's over your limit.
Where should I run the load test from?
Does this replace my existing monitoring or APM?
Can I run GraphQL or authenticated load tests in CI?
// Related
Related pages
REST API load testing
The same config used in CI: verbs, headers, payloads, percentile stats.
GraphQL API load testing
GraphQL auto-detection, per-operation stats, and CI wiring.
Blog: How to Load Test a GraphQL API
Full tutorial that ends in a CI workflow running on every pull request.
Compare: Clobbr vs k6
k6 is another CI-friendly option. Here's how the two compare.
// Ship it
Put load testing in your pipeline
Install @clobbr/cli, copy a workflow from parsecph/clobbr-ci-examples, merge. That's the whole setup. Desktop app optional; use it to design the test, then ship it to CI.
Lifetime license · macOS, Windows, CLI · no subscription · Included on Setapp