
3 posts tagged with "Validation"

Computer System Validation (CSV) and automated testing approaches


Building Self-Validating Codebases: The Requirements System

5 min read
Ian Derrington
Founder & CEO, Supernal Intelligence

One of the most critical challenges in AI-assisted development is ensuring that code changes maintain compliance with requirements, especially in regulated environments. How do you know an AI agent hasn't inadvertently broken a critical business rule while implementing a feature? How do you maintain traceability between requirements, code, and tests?

The answer lies in making requirements machine-readable, version-controlled, and automatically validated.
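
As an illustration only (the `Requirement` dataclass, its field names, and the ID pattern are hypothetical, not a description of the actual Requirements System), a requirement can be stored as structured data in the repository and checked automatically:

```python
from dataclasses import dataclass, field
import re

# Hypothetical shape for a machine-readable requirement that lives in the
# repository next to the code it constrains.
@dataclass
class Requirement:
    req_id: str                                       # e.g. "REQ-AUTH-001"
    statement: str                                    # the rule, in one testable sentence
    verified_by: list = field(default_factory=list)   # test IDs that prove it

ID_PATTERN = re.compile(r"^REQ-[A-Z]+-\d{3}$")

def validate(req: Requirement) -> list:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    if not ID_PATTERN.match(req.req_id):
        problems.append(f"{req.req_id}: id does not match REQ-<AREA>-<NNN>")
    if not req.statement.strip():
        problems.append(f"{req.req_id}: empty statement")
    if not req.verified_by:
        problems.append(f"{req.req_id}: no linked tests, so the requirement is unverified")
    return problems

if __name__ == "__main__":
    req = Requirement(
        req_id="REQ-AUTH-001",
        statement="Sessions must expire after 15 minutes of inactivity.",
        verified_by=["tests/test_auth.py::test_session_timeout"],
    )
    assert validate(req) == []   # a CI job could run this over every requirement record
```

Because the record is plain text under version control, it changes in the same commits and code reviews as the code it constrains.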

The Problem with Traditional Requirements

Traditional requirements management suffers from several fundamental problems:

Disconnection: Requirements live in separate documents (Word, Confluence, Jira) that become stale the moment code changes.

Manual Validation: Testing against requirements is a manual process prone to human error and interpretation differences.

Poor Traceability: When code changes, tracking which requirements are affected requires manual detective work.

Context Loss: AI agents working in the codebase can't reliably consume requirements written as natural-language prose in external documents.

Testing AI-Generated Code: From Gherkin to Continuous Validation

10 min read
Ian Derrington
Founder & CEO, Supernal Intelligence

AI agents can write code faster than humans, but how do you ensure that code actually works? More importantly, how do you validate it meets business requirements and regulatory standards? Traditional testing approaches weren't designed for AI-generated code at machine speed.

The solution lies in making tests as intelligent as the code they validate - automated, requirement-driven, and continuously evolving.
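
As a rough sketch of that idea, a Gherkin scenario can drive an executable test directly. The binding below uses pytest-bdd; the feature text, step wording, and account model are invented for illustration:

```python
# The feature file (stored on disk next to this test module) would read:
#   Feature: Cash withdrawal
#     Scenario: Withdraw within balance
#       Given an account with balance 100
#       When I withdraw 40
#       Then the balance is 60

from pytest_bdd import scenario, given, when, then, parsers

@scenario("withdraw.feature", "Withdraw within balance")
def test_withdraw_within_balance():
    """Binds the Gherkin scenario above to the step functions below."""

@given(parsers.parse("an account with balance {balance:d}"), target_fixture="account")
def account(balance):
    return {"balance": balance}

@when(parsers.parse("I withdraw {amount:d}"))
def withdraw(account, amount):
    account["balance"] -= amount

@then(parsers.parse("the balance is {remaining:d}"))
def balance_is(account, remaining):
    assert account["balance"] == remaining
```

The scenario text doubles as the requirement: a reviewer can check intent without reading Python, and CI re-validates it on every change, whoever (or whatever) wrote the implementation.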

The Challenge of AI-Generated Code

When AI agents write code, traditional testing approaches break down:

Speed Mismatch: AI generates code in seconds; humans write tests in hours

Implicit Knowledge: AI doesn't inherently know your business rules or edge cases

Coverage Gaps: Without guidance, AI may implement features but miss critical test scenarios

Regression Risk: Rapid changes can break existing functionality in subtle ways

Compliance Requirements: Regulated industries need traceability from requirements to tests to code (see the sketch after this list)
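
To make the last two points concrete, here is a minimal pytest sketch; the `requirement` marker name, the requirement IDs, and the `DECLARED_REQUIREMENTS` set are all hypothetical:

```python
# conftest.py (sketch): fail the test run when a declared requirement
# has no test linked to it.
import pytest

# In a real setup this would be loaded from the requirement records in the repo.
DECLARED_REQUIREMENTS = {"REQ-AUTH-001", "REQ-AUTH-002", "REQ-PAY-010"}

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "requirement(req_id): link a test to a requirement ID"
    )

def pytest_collection_modifyitems(config, items):
    covered = set()
    for item in items:
        for mark in item.iter_markers(name="requirement"):
            covered.add(mark.args[0])
    missing = DECLARED_REQUIREMENTS - covered
    if missing:
        raise pytest.UsageError(f"Requirements with no linked test: {sorted(missing)}")

# A test then declares which requirement it verifies:
#   @pytest.mark.requirement("REQ-AUTH-001")
#   def test_session_timeout():
#       ...
```

Run in CI, this turns coverage gaps and traceability from review-time judgment calls into a check that executes on every change.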

Self-Validating Codebases: Automated Compliance for Regulated Industries

6 min read
Ian Derrington
Founder & CEO, Supernal Intelligence

I've spent years working with development teams in heavily regulated industries, and there's a constant tension I see everywhere: the need to move fast versus the need to prove that your software won't harm people or compromise critical systems.

It's a real tension, not an imaginary one. When you're developing software that controls medical devices, manages financial transactions, or operates in aerospace systems, the cost of failure isn't just a bad user experience - it can be life-threatening or financially catastrophic.

But the traditional approaches to software validation, developed decades ago when software was simpler and development cycles were measured in years rather than weeks, are becoming increasingly difficult to reconcile with modern development practices.

The Validation Bottleneck

I remember talking to a team at a medical device company who told me they spent more time documenting their software than writing it. They had detailed requirements traceability matrices that had to be updated by hand every time the code changed. They wrote test protocols separately from their automated tests, creating two competing sources of truth that constantly diverged.

Every small change required weeks of validation work. Not because the change was complex, but because the validation process itself was so manual and bureaucratic that it couldn't keep up with the pace of development.

The tragedy is that these teams often have excellent automated testing, comprehensive code review processes, and sophisticated CI/CD pipelines. But none of that matters from a regulatory perspective if you can't prove it in the specific format that auditors expect.
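
One way to close that gap, sketched here with invented requirement IDs and test names rather than any team's actual process, is to generate the auditor-facing artifact, such as a requirements traceability matrix, from the same machine-readable links the automated tests already carry:

```python
import csv

# Hypothetical output of a CI run: requirement ID -> (linked tests, latest result).
trace = {
    "REQ-AUTH-001": (["tests/test_auth.py::test_session_timeout"], "pass"),
    "REQ-PAY-010": (["tests/test_payments.py::test_idempotent_charge"], "pass"),
}

# Emit the matrix in the tabular form a traceability review expects.
with open("traceability_matrix.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Requirement", "Verifying tests", "Latest result"])
    for req_id, (tests, result) in sorted(trace.items()):
        writer.writerow([req_id, "; ".join(tests), result])
```

Regenerated on every build, the matrix can't drift from the code the way a hand-maintained spreadsheet does.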