
2 posts tagged with "testing"


Building Self-Validating Codebases: The Requirements System

· 5 min read
Ian Derrington
Founder & CEO, Supernal Intelligence

One of the most critical challenges in AI-assisted development is ensuring that code changes maintain compliance with requirements, especially in regulated environments. How do you know an AI agent hasn't inadvertently broken a critical business rule while implementing a feature? How do you maintain traceability between requirements, code, and tests?

The answer lies in making requirements machine-readable, version-controlled, and automatically validated.
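As a rough sketch of what that could look like, the snippet below models a requirement as structured, version-controlled data with an explicit link to the tests that verify it. The field names and IDs are hypothetical illustrations, not the actual format used by the requirements system:

```python
# Hypothetical sketch: a requirement captured as structured, version-controlled
# data rather than prose. Field names and IDs are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Requirement:
    req_id: str                     # stable identifier, e.g. "REQ-AUTH-001"
    statement: str                  # a single, testable rule
    verified_by: list[str] = field(default_factory=list)  # tests that prove it

    def is_covered(self) -> bool:
        """A requirement counts as validated only if at least one test traces to it."""
        return len(self.verified_by) > 0


req = Requirement(
    req_id="REQ-AUTH-001",
    statement="Sessions must expire after 15 minutes of inactivity.",
    verified_by=["test_session_expires_after_inactivity"],
)
assert req.is_covered()
```

Because the requirement lives in the repository as data, it is versioned alongside the code and can be checked mechanically rather than by reading documents.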

The Problem with Traditional Requirements

Traditional requirements management suffers from several fundamental problems:

Disconnection: Requirements live in separate documents (Word, Confluence, Jira) that become stale the moment code changes.

Manual Validation: Testing against requirements is a manual process prone to human error and interpretation differences.

Poor Traceability: When code changes, tracking which requirements are affected requires manual detective work.

Context Loss: AI agents can't reliably interpret requirements documents written in natural-language prose.

Testing AI-Generated Code: From Gherkin to Continuous Validation

· 10 min read
Ian Derrington
Founder & CEO, Supernal Intelligence

AI agents can write code faster than humans, but how do you ensure that code actually works? More importantly, how do you validate it meets business requirements and regulatory standards? Traditional testing approaches weren't designed for AI-generated code at machine speed.

The solution lies in making tests as intelligent as the code they validate: automated, requirement-driven, and continuously evolving.
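As one illustration of a requirement-driven test, the pytest sketch below tags a test with the requirement ID it verifies. The marker name, requirement ID, and the minimal Session stand-in are all hypothetical, assumed here only to show the idea:

```python
# Hypothetical sketch: a test explicitly tagged with the requirement it validates.
# The "requirement" marker name is illustrative; custom markers should be
# registered in pytest.ini to avoid warnings.
import datetime
import pytest


class Session:
    """Minimal stand-in for the code under test."""
    def __init__(self, timeout_minutes: int):
        self.timeout = datetime.timedelta(minutes=timeout_minutes)
        self.idle = datetime.timedelta()

    def advance_clock(self, minutes: int) -> None:
        self.idle += datetime.timedelta(minutes=minutes)

    def is_expired(self) -> bool:
        return self.idle >= self.timeout


@pytest.mark.requirement("REQ-AUTH-001")  # links this test to a requirement ID
def test_session_expires_after_inactivity():
    session = Session(timeout_minutes=15)
    session.advance_clock(minutes=16)
    assert session.is_expired()
```

Tagging tests this way gives the requirement ID a machine-checkable footprint in the test suite, which is what makes continuous, automated validation possible.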

The Challenge of AI-Generated Code

When AI agents write code, traditional testing approaches break down:

Speed Mismatch: AI generates code in seconds; humans write tests in hours

Implicit Knowledge: AI doesn't inherently know your business rules or edge cases

Coverage Gaps: Without guidance, AI may implement features but miss critical test scenarios

Regression Risk: Rapid changes can break existing functionality in subtle ways

Compliance Requirements: Regulated industries need traceability from requirements to tests to code
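Picking up that last point, a minimal sketch of a traceability gate might cross-check the requirement set against the test suite in CI. The ID scheme, directory layout, and tagging convention below are assumptions for illustration, not a prescribed format:

```python
# Hypothetical sketch of a traceability gate: fail CI if any requirement ID
# has no test referencing it. IDs and conventions are illustrative only.
import re
import sys
from pathlib import Path

REQUIREMENTS = {"REQ-AUTH-001", "REQ-AUDIT-002"}   # in practice, loaded from the repo
REQ_TAG = re.compile(r"REQ-[A-Z]+-\d{3}")          # requirement IDs cited in test files


def referenced_requirements(test_dir: str = "tests") -> set[str]:
    """Collect every requirement ID mentioned anywhere in the test suite."""
    found: set[str] = set()
    for path in Path(test_dir).rglob("test_*.py"):
        found |= set(REQ_TAG.findall(path.read_text()))
    return found


if __name__ == "__main__":
    missing = REQUIREMENTS - referenced_requirements()
    if missing:
        print(f"Requirements with no linked tests: {sorted(missing)}")
        sys.exit(1)                                # block the merge
    print("All requirements trace to at least one test.")
```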