🤖 AI Summary
DocSpring announced a unified, end-to-end testing and generation pipeline that uses a single RSwag test suite as the source of truth to produce the OpenAPI schema, client SDKs, live end-to-end client tests, and the runnable code examples in its docs. After struggling with fragile OpenAPI Generator extensions and bespoke polling logic for async PDF jobs, the team built a synchronous API proxy (sync.api.docspring.com and sync.api-eu.docspring.com) that handles waiting and retries server-side and returns a download_url, letting lightweight generated clients make a single long-lived request (with an optional ?wait=false query parameter for async behavior). AI coding tools helped accelerate the final integration.
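The proxy's value is that the wait-and-retry loop moves out of every generated client and into one place. The sketch below illustrates that server-side loop in plain Ruby under stated assumptions: the class name, `fetch_job_status` callback, and response shape (`"state"`, `"download_url"`) are all hypothetical stand-ins, not DocSpring's actual implementation.

```ruby
# Illustrative sketch of the waiting/retry logic a synchronous proxy
# might perform so clients can issue a single long-lived request.
# All names here are assumptions for illustration only.
class SyncProxySketch
  MAX_ATTEMPTS = 30
  POLL_INTERVAL = 0 # seconds; a real proxy would sleep ~1s between polls

  # `fetch_job_status` stands in for an internal call to the async API.
  # It should return a hash like { "state" => "pending" } or
  # { "state" => "processed", "download_url" => "https://..." }.
  def initialize(&fetch_job_status)
    @fetch_job_status = fetch_job_status
  end

  # wait: true (default) blocks until the PDF job reaches a terminal
  # state and returns the response containing download_url.
  # wait: false models the ?wait=false case: return the first response
  # immediately, even if the job is still pending.
  def submit(wait: true)
    response = @fetch_job_status.call
    return response unless wait

    attempts = 1
    while response["state"] == "pending" && attempts < MAX_ATTEMPTS
      sleep POLL_INTERVAL
      response = @fetch_job_status.call
      attempts += 1
    end
    response
  end
end
```

Usage: a client calling through this proxy makes one request and receives the finished job's download_url, while `?wait=false` preserves the old async behavior for callers that want to poll themselves.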
Technically, each RSwag test now drives: (1) Rails request specs validating DB-side behavior, (2) generated SDKs in nine languages (C#, Elixir, Go, Java, JavaScript, PHP, Python, Ruby, TypeScript), (3) end-to-end tests that invoke real client methods against a Capybara test server and assert on JSON responses, and (4) the exact code snippets used in the docs. The project runs 300 RSwag examples covering 40 API operations in two modes across nine SDKs (≈2,700 end-to-end tests), catching real issues like numeric-type mismatches, file-upload bugs, and generator/template defects before releases. The result is a reproducible, maintainable developer workflow that reduces drift between spec, SDKs, tests, and docs — a pattern valuable to teams building robust ML/AI platform APIs and multi-language SDKs.
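The test-count arithmetic above can be sketched as a simple matrix expansion. This is a hedged illustration, not DocSpring's code: the language list comes from the article, while the assumption that the 300 RSwag examples comprise 150 distinct specs each run in two request modes (sync and `?wait=false`) is an inference from 300 × 9 ≈ 2,700.

```ruby
# Illustrative sketch: enumerate every (example, mode, SDK) combination
# to see where the ~2,700 end-to-end test runs come from.
SDK_LANGUAGES = %w[csharp elixir go java javascript php python ruby typescript].freeze
MODES = %w[wait_true wait_false].freeze # the two request modes (assumed)

# Assumption: 150 distinct examples x 2 modes = the 300 RSwag examples
# reported in the article.
def build_test_matrix(example_count: 150)
  (1..example_count).flat_map do |example_id|
    MODES.flat_map do |mode|
      SDK_LANGUAGES.map do |sdk|
        { example: example_id, mode: mode, sdk: sdk }
      end
    end
  end
end

matrix = build_test_matrix
puts matrix.size # 150 * 2 * 9 = 2700 end-to-end runs
```

Running the same examples across every cell of this matrix is what surfaces SDK-specific defects (numeric-type mismatches, file-upload bugs) that a single-language test suite would miss.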