Testing Workflow

Automated tests give coding agents feedback before humans do. That fast feedback loop makes agents more autonomous, and therefore more useful for larger coding tasks.

The problem: agents are "lazy"

We've all seen coding agents:

  • forgetting to run some tests
  • not writing tests for new functionality
  • declaring the job done while tests are still failing

CodeSpeak runs agents under the hood and enforces test discipline on their behalf.

How the commands work together

CodeSpeak gives you four commands with increasing levels of test enforcement:

| Command | What it does | When to use |
| --- | --- | --- |
| `codespeak impl` | Implement the spec, no dedicated test phase | Quick-and-dirty iteration; you'll test manually |
| `codespeak build` | Implement and enforce that tests pass | Production-quality builds |
| `codespeak test` | Run tests, fix failures iteratively | After `impl`, or when you want to make sure existing tests pass |
| `codespeak coverage` | Run tests with coverage, add missing tests | When you want to invest in a better test suite |

A typical workflow:

  1. Write or edit your spec
  2. Run `codespeak impl` for fast iteration
  3. When the implementation looks right, run `codespeak test` to make sure all tests pass
  4. Run `codespeak coverage --target 100 --max-iterations 5` to fill in missing tests
  5. From now on, use `codespeak build` to enforce tests on every change
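Put together, the sequence above looks like this in a shell session. The commands and flags are the ones documented on this page; the spec file itself is whatever you are editing:

```shell
# Steps 1-2: edit your spec, then iterate quickly with no test phase
codespeak impl

# Step 3: once the implementation looks right, make existing tests pass
codespeak test

# Step 4: invest in the test suite -- aim for 100% coverage,
# with at most 5 fix iterations
codespeak coverage --target 100 --max-iterations 5

# Step 5: from here on, every change goes through build, which enforces tests
codespeak build
```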

Auto-configuring the test runner

CodeSpeak needs to know how to run tests in your project. On your first build, you'll see:

```
A placeholder test runner configuration was added to codespeak.json for spec 'spec.cs.md'.
Please fill it in with actual values, or run 'codespeak test --auto-configure --spec spec.cs.md' to auto-detect it.
Built successfully.
```

Let CodeSpeak discover the test runner automatically:

```
codespeak test --auto-configure --spec spec.cs.md
Auto-configured test runner for spec 'spec.cs.md'
```
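After auto-configuration, `codespeak.json` holds the detected test runner settings for the spec. The actual schema isn't documented in this section, so the snippet below is purely illustrative: the field names (`specs`, `testRunner`, `command`) and the pytest command are assumptions, not the real format.

```json
{
  "specs": {
    "spec.cs.md": {
      "testRunner": {
        "command": "pytest tests/"
      }
    }
  }
}
```

Whatever the real schema looks like, this is the file to edit if auto-detection picks the wrong runner for your project.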

From now on, `codespeak build` will automatically run tests after implementation. If tests fail, CodeSpeak will attempt to fix them before reporting success.

See also