
In the era of AI-generated software, developers still need to make sure their code is clean. That’s where TestSprite wants to help.
The Seattle startup announced $6.7 million in seed funding to grow its platform, which automatically tests and monitors code written by AI tools such as GitHub Copilot, Cursor, and Windsurf.
TestSprite’s autonomous agent integrates directly into development environments, running tests throughout the coding process rather than as a separate step after deployment.
“As AI writes more code, validation becomes the bottleneck,” said CEO Yunhao Jiao. “TestSprite solves that by making testing autonomous and continuous, matching AI speed.”
The platform can generate and run front-end and back-end tests during development to make sure AI-written code works as expected, help AI IDEs (integrated development environments) fix software based on TestSprite’s integration testing reports, and continuously update and rerun test cases to monitor deployed software for ongoing reliability.
Founded last year, TestSprite says its user base grew from 6,000 to 35,000 in three months, and revenue has doubled every month since it launched its 2.0 version and new Model Context Protocol (MCP) integration. The company employs about 25 people.
Jiao is a former Amazon engineer and natural language processing researcher. He co-founded TestSprite with Rui Li, a former Google engineer.
Jiao said TestSprite doesn’t compete with AI coding copilots, but complements them by focusing on continuous validation and test generation. Developers can trigger tests using simple natural-language commands, such as “Test my payment-related features,” directly inside their IDEs.
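The MCP integration is what lets an AI coding assistant hand those test requests off to TestSprite from inside the editor. As a rough, hypothetical sketch only (the package name, command, and environment variable below are assumptions, not taken from TestSprite’s documentation), an MCP server is typically registered with an AI IDE such as Cursor through a JSON entry along these lines:

```json
{
  "mcpServers": {
    "testsprite": {
      "command": "npx",
      "args": ["@testsprite/testsprite-mcp@latest"],
      "env": {
        "TESTSPRITE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Once a server like this is registered, the IDE’s agent can route a prompt such as “Test my payment-related features” to the testing tools the server exposes, rather than trying to validate the code on its own.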
The seed round was led by Bellevue, Wash.-based Trilogy Equity Partners, with participation from Techstars, Jinqiu Capital, MiraclePlus, Hat-trick Capital, Baidu Ventures, and EdgeCase Capital Partners. Total funding to date is about $8.1 million.