Clobbr

// Use case

Load test your API in CI/CD with the Clobbr CLI

Catch performance regressions the way you catch bugs: with a failing build. The Clobbr CLI takes the same config as the app, runs headless, and gates your pipeline on success-rate or p95 thresholds. No account, no cloud.

Download the Clobbr API speed test app on the App Store · Download it on the Microsoft Store · Get the Clobbr CLI from npm · View Clobbr on GitHub

Why include load testing in CI

A load test on merge is worth ten perf reviews in standup. Three concrete wins:

  • Catch regressions early. An ORM change that removes an index, a new N+1 in a resolver, an accidental sync-in-loop: all obvious in a 60-second load run, invisible otherwise until production.
  • Build a latency history for free. Store each run's JSON output as a CI artifact. Over a few weeks you have a per-commit time-series of p95 without wiring up a separate observability platform.
  • Turn performance into a shared responsibility. When a failing build blocks merge on latency, the PR author fixes it, same as any other test.

Installing the Clobbr CLI

The CLI ships on npm. Install it globally in your CI job, or use npx to avoid the install step:

npm install -g @clobbr/cli
# or, without the global install:
npx @clobbr/cli --help

GitHub Actions example

Drop this into .github/workflows/load-test.yml:

name: load-test
on:
  pull_request:
  push:
    branches: [main]

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install Clobbr CLI
        run: npm install -g @clobbr/cli

      - name: Run load test
        env:
          API_URL: ${{ secrets.STAGING_URL }}
          API_TOKEN: ${{ secrets.STAGING_TOKEN }}
        run: |
          clobbr \
            --url "$API_URL/v1/users" \
            --iterations 200 \
            --parallel \
            --headers "Authorization: Bearer $API_TOKEN" \
            --output json \
            --success-rate 99 \
            > load-test.json

      - name: Upload result
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: load-test-result
          path: load-test.json

The --success-rate flag fails the step if fewer than 99% of requests succeed. Add a follow-up step to parse load-test.json and enforce a p95 threshold too.
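A minimal sketch of that follow-up gate. The JSON field path ("percentiles.p95") and the inline sample file are assumptions standing in for a real run's output, so inspect your actual load-test.json and adjust the extraction (a portable sed one-liner here; jq works just as well):

```shell
# Fail the step when p95 exceeds a latency budget.
# ASSUMPTION: the output contains a numeric "p95" field -- check your
# real load-test.json and adapt the extraction to its actual shape.
THRESHOLD_MS=300

# Stand-in for a real Clobbr run's output.
cat > load-test.json <<'EOF'
{ "percentiles": { "p95": 212, "p99": 340 } }
EOF

P95=$(sed -n 's/.*"p95": *\([0-9][0-9]*\).*/\1/p' load-test.json)
echo "p95: ${P95}ms (budget: ${THRESHOLD_MS}ms)"

if [ "$P95" -gt "$THRESHOLD_MS" ]; then
  echo "p95 over budget, failing the build" >&2
  exit 1
fi
```

Because the script exits non-zero on breach, dropping it into the workflow as its own step is enough to fail the job.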

GitLab CI example

The equivalent .gitlab-ci.yml stanza:

load-test:
  image: node:20
  stage: test
  script:
    - npm install -g @clobbr/cli
    - |
      clobbr \
        --url "$STAGING_URL/v1/users" \
        --iterations 200 \
        --parallel \
        --headers "Authorization: Bearer $STAGING_TOKEN" \
        --output json \
        --success-rate 99 \
        > load-test.json
  artifacts:
    when: always
    paths:
      - load-test.json
    expire_in: 30 days

Failing the build on threshold breach

Two common strategies:

  • Built-in success rate flag: --success-rate 99 fails the run if fewer than 99% of requests return a 2xx. Simplest, good first line of defense.
  • Parse JSON output for latency: export with --output json, grab the p95 or p99 with jq, compare against a threshold, and exit 1 if over. This lets you enforce latency gates specifically.

For per-commit drift detection, store each run's JSON as a CI artifact and diff the percentile numbers over time. A small script in your ops repo can alert you when p95 moves by more than, say, 20% versus the last green main-branch run.
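That drift check can stay very small. In this sketch the file names, the JSON shape, and the inline stand-in artifacts are all assumptions; in a real pipeline the baseline would be downloaded from the last green main-branch run's stored artifact:

```shell
# Fail when p95 drifts more than 20% versus the last green baseline.
# ASSUMPTION: both files contain a numeric "p95" field.
MAX_DRIFT_PCT=20

# Stand-in artifacts; in CI these come from stored runs.
echo '{ "percentiles": { "p95": 200 } }' > baseline.json
echo '{ "percentiles": { "p95": 230 } }' > current.json

extract_p95() {
  sed -n 's/.*"p95": *\([0-9][0-9]*\).*/\1/p' "$1"
}

BASE=$(extract_p95 baseline.json)
CURR=$(extract_p95 current.json)

# Integer percent change, rounded toward zero.
DRIFT=$(( (CURR - BASE) * 100 / BASE ))
echo "p95 drift: ${DRIFT}% (baseline ${BASE}ms -> current ${CURR}ms)"

if [ "$DRIFT" -gt "$MAX_DRIFT_PCT" ]; then
  echo "p95 drifted more than ${MAX_DRIFT_PCT}%" >&2
  exit 1
fi
```

Comparing against the last green run rather than a fixed number keeps the gate meaningful as your baseline latency legitimately shifts over time.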

// FAQ

Frequently asked questions

Which CI providers does Clobbr CLI support?
Any CI that can run Node. We keep ready-to-copy workflow files for GitHub Actions, GitLab CI, and CircleCI at parsecph/clobbr-ci-examples. Travis, Drone, Jenkins, Buildkite, and the rest work the same way: a Node step running the CLI.
Should I run load tests on every pull request?
Run a short canary (50 to 100 iterations) on every PR to catch obvious regressions fast. Run the full load profile (500+ iterations, parallel mode) on merge to main, on a scheduled nightly job, or behind a label like 'needs-perf'. Full load tests on every PR slow feedback loops without much upside, since most code changes don't move p95 meaningfully.
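The canary/full split can live in a few lines of the CI script. GITHUB_REF is the GitHub Actions branch variable; other providers expose an equivalent, and the iteration counts below are just the example numbers from above:

```shell
# Short canary on pull requests, full profile on merge to main.
if [ "${GITHUB_REF:-}" = "refs/heads/main" ]; then
  ITERATIONS=500   # full load profile
else
  ITERATIONS=50    # quick canary
fi
echo "Running $ITERATIONS iterations"
```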
How do I fail the build on regression?
The CLI exits non-zero when success rate drops below your threshold. For latency gates, export results to JSON with --output json, parse p95 or p99 in the next step, and exit 1 if it's over your limit.
Where should I run the load test from?
Against a staging environment that mirrors production, ideally. If you test against production, use a dedicated test tenant or a feature-flagged endpoint so you don't poison real metrics. Running against localhost works for a quick sanity check, but gives you wildly optimistic numbers: no network, no real database.
Does this replace my existing monitoring or APM?
No. CI load tests catch regressions in a controlled, reproducible environment. APM catches regressions in production on real traffic. They complement each other: CI tells you a change is probably slower before it ships, and APM tells you it definitely is after it ships.
Can I run GraphQL or authenticated load tests in CI?
Yes. The CLI takes the same request config as the GUI: headers, payload, scripted values, everything. Auth tokens come from CI secrets. For GraphQL, just supply the payload and Clobbr handles it the same way it does in the app.

// Ship it

Put load testing in your pipeline

Install @clobbr/cli, copy a workflow from parsecph/clobbr-ci-examples, merge. That's the whole setup. Desktop app optional; use it to design the test, then ship it to CI.

Download the Clobbr API speed test app on the App Store · Download it on the Microsoft Store · Get the Clobbr CLI from npm · View Clobbr on GitHub

Lifetime license · macOS, Windows, CLI · no subscription · Included on Setapp