The Gold Is in the Structure
There is a kind of work that is invisible by design. The reader lands on the same page, sees the same layout, hears the same voice — and has no idea that everything beneath their feet has been replaced. No migration notice. No breaking change surfaced to the public. The surface holds, and that is precisely the point.
Kintsugi fills the cracks in broken pottery with gold, and most people understand this as an aesthetic practice — making damage beautiful. But there is a deeper reading: the gold is not decoration. It is the structure now. What holds the object together, what makes it capable of bearing weight again, is the new material running through the fractures. The original clay is still there, visible, accounted for. But the integrity belongs to what came after.
The past several weeks have been that kind of work.
I have been carrying a growing body of architectural instincts from building Plutarc — separation of concerns done properly, abstraction layers that earn their keep, auditable actions, error handling that surfaces rather than suppresses, access control that models real trust hierarchies. For months these lived in one codebase. The question was always whether they would stay there, or find their way into everything else I build. This sprint was the answer.
Access to Claude Code changed the texture of the work in ways I'm still processing. Not just the speed — though the speed was genuinely startling, and I'll come to the numbers — but what it did to the hunger. The desire to keep shipping, which had gone quiet in the ten days since Plutarc reached feature-complete, came back sharp. When iteration costs less, ambition gets cheaper. You stop conserving your bets.
Two fronts. Two platforms. One architectural philosophy finding its second and third expression.
Valeon: Ground Up
The visible surface should look much the same. That was intentional.
What's underneath is a different beast entirely. Valeon has been rewritten from the ground up — Next.js, Convex, Clerk, DaisyUI — and in doing so has absorbed everything I've learned since its first release. The same principles that shaped Plutarc now run through Valeon: separation of concerns, proper abstraction layers, auditable actions, robust error surfacing, and role-based access control built to model real trust hierarchies rather than approximate them. Search has been significantly improved via the Convex backend. The UI should feel more responsive and interactive throughout — because a platform that processes audio and renders essays should feel fast, not just be fast underneath.
The centrepiece of the rebuild is the author dashboard. It comes with a full rendering and audio pipeline, and a completely custom email inbox powered by Resend — Buttondown and Formspree are gone, replaced by infrastructure I actually own and can extend. The newsletter and contact form are custom too. Resend is doing meaningful work throughout the platform now, not just transactional emails. The utilities layer allows for more customisability across the board, and author profiles are now fully customisable through the dashboard — each author has a proper presence on the platform, not just a name attached to a byline.
The Obsidian plugin is the detail I'm most pleased with. Authors can publish directly from Obsidian using API key authentication — no git, no terminal, no technical ceremony between the thought and the page. There is also a new How to Contribute page with an application form that automates author onboarding end-to-end on approval. The goal was to make Valeon genuinely usable for contributors who aren't technical and shouldn't have to become technical just to publish an essay. The plugin handles getting words out. The onboarding form handles getting authors in. Neither requires them to ever touch a command line.
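A minimal sketch of what the plugin's publish call might look like from the client side, assuming a simple bearer-token scheme; the endpoint path, header, and payload fields here are illustrative assumptions, not Valeon's actual API:

```typescript
// Build the HTTP request a plugin might send when publishing a note with
// API-key authentication. Endpoint, header, and payload shape are all
// hypothetical stand-ins for the real Valeon API.
interface PublishPayload {
  title: string;
  markdown: string;
}

function buildPublishRequest(apiKey: string, payload: PublishPayload) {
  return {
    url: "https://example.com/api/posts", // placeholder endpoint, not the real one
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // the only credential the author needs
    },
    body: JSON.stringify(payload),
  };
}
```

The whole point is that everything above is one HTTP request: no git, no terminal, just a key pasted into the plugin's settings once.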
The publication pipeline has gone from roughly ten minutes to around forty seconds. I want to be precise about what makes that number interesting: the rebuild also moved from static rendering to SSR, which should theoretically have made things slower. The fact that the pipeline got dramatically faster anyway is a consequence of decoupling — rendering, TTS, and alignment now run as independent processes, each landing when it does, without waiting on the others. The gain isn't from doing things faster. It's from doing more of them in parallel.
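The decoupling can be sketched in a few lines, with stand-in stage functions and made-up timings; the point is that wall time collapses to the slowest stage rather than the sum of all three:

```typescript
// Illustrative sketch of the decoupled pipeline shape. Stage names match
// the prose; the timings and the work itself are simulated.
async function runStage(stage: string, ms: number): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, ms)); // stand-in for real work
  return stage;
}

async function publishParallel(): Promise<string[]> {
  // Old shape: await render, then TTS, then alignment (total = sum of times).
  // New shape: launch all three and let each land when it does
  // (total ≈ slowest stage).
  return Promise.all([
    runStage("render", 30),
    runStage("tts", 50),
    runStage("align", 20),
  ]);
}
```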
The audio is powered by VocaSync under the hood, replacing the custom implementation I built when Valeon first launched. That original build took close to two months. The equivalent this time took three days. The problem didn't get simpler — the architecture is actually more sophisticated now. What changed is what becomes possible when iteration stops feeling expensive.
Which makes the next question obvious.
VocaSync: What the Platform Became
VocaSync did not get a ground-up rewrite. It didn't need one. But the architectural thinking that shaped Valeon's rebuild has been running through VocaSync's evolution in parallel, and the cumulative effect is a platform that feels categorically more capable than it did a month ago.
The most significant change is parallelised synthesis chunking. My longest post — roughly 60,000 characters — used to take forty minutes to synthesise and align. Under the old Valeon architecture, that was long enough to exceed Vercel's long-lived request budget mid-run, requiring a manual restart. That same post now completes in roughly nine minutes. The chunks are processed in parallel, the architecture doesn't hold its breath waiting for one to finish before starting the next, and the improvement isn't incremental — it's structural.
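A rough sketch of the chunking idea, with an invented chunk size and a placeholder where the real TTS call would go; none of these names are VocaSync's actual internals:

```typescript
// Split text into fixed-size chunks, synthesise all chunks concurrently,
// and reassemble in order. Chunk size and the synth function are illustrative.
function chunkText(text: string, size: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

async function synthesiseChunk(
  chunk: string,
  index: number
): Promise<{ index: number; audio: string }> {
  // Stand-in for a real TTS call; returns a labelled placeholder.
  return { index, audio: `audio(${chunk.length} chars)` };
}

async function synthesiseAll(text: string, chunkSize = 4000): Promise<string[]> {
  const chunks = chunkText(text, chunkSize);
  // All chunks are in flight at once; Promise.all preserves input order,
  // so the segments come back ready to concatenate.
  const results = await Promise.all(chunks.map(synthesiseChunk));
  return results.map((r) => r.audio);
}
```

Because no single request has to carry the whole job, no single request can blow through a platform's execution budget.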
The dashboard now reflects this. Instead of staring at a processing spinner and waiting, synthesis runs surface exactly which chunk is being processed, how many remain, and what stage each worker is at. This is more than a UX improvement. It is an observability philosophy: a system that tells you what it's doing while it's doing it is a system you can trust, debug, and build on. The chunking parameters are also now configurable — both via optional fields in the API endpoint and via the dashboard UI — meaning developers can tune synthesis behaviour programmatically without touching the interface, and non-technical users can adjust it without writing a line of code.
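One way optional parameters like these might resolve against defaults, so the same tuning path serves both the API and the dashboard UI; the field names and default values are assumptions, not VocaSync's real schema:

```typescript
// Partial overrides from an API request (or the dashboard form) merge
// over platform defaults; anything omitted keeps its default.
interface ChunkingOptions {
  maxChunkChars?: number;
  maxParallelWorkers?: number;
}

const CHUNKING_DEFAULTS = { maxChunkChars: 4000, maxParallelWorkers: 4 };

function resolveChunking(overrides: ChunkingOptions = {}) {
  return { ...CHUNKING_DEFAULTS, ...overrides };
}
```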
Workers now have built-in retries and more robust error handling and recovery throughout. Failures don't cascade and don't require manual intervention to resolve. The system knows how to pick itself up.
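The "pick itself up" behaviour can be sketched generically as retry-with-backoff; the attempt count and delays here are illustrative defaults, not VocaSync's actual configuration:

```typescript
// Retry a failing async task with exponential backoff, and surface the
// final error rather than suppressing it.
async function withRetries<T>(
  task: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Exponential backoff before the next attempt: 100ms, 200ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```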
Webhooks have also landed. Users can now provide a callback URL instead of polling for job completion — polling remains intact for those who don't need async callbacks, but offering webhooks is the platform saying it's ready to be embedded in production workflows without babysitting. It is one of those features that is invisible to users who don't need it, and quietly essential to those who do.
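The contract can be sketched as a push-or-pull decision at job completion; the `callbackUrl` field and payload shape are assumptions for illustration:

```typescript
// If the job carries a callback URL, the platform pushes a completion
// notification; otherwise the client discovers completion by polling.
interface Job {
  id: string;
  status: "queued" | "processing" | "done" | "failed";
  callbackUrl?: string;
}

async function onJobComplete(
  job: Job,
  post: (url: string, body: unknown) => Promise<void>
): Promise<"webhook" | "poll"> {
  if (job.callbackUrl) {
    // Push: the platform calls the user back; no babysitting required.
    await post(job.callbackUrl, { jobId: job.id, status: job.status });
    return "webhook";
  }
  // Pull: nothing to send; the existing polling path still works.
  return "poll";
}
```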
Workflows are now first-class service offerings — named, addressable, capable of being handed off and trusted. Subtitling and translations have landed. And with Valeon now processing all 141 posts through the platform, the scalability claims are no longer theoretical. The worker-model architecture handled the load without hesitation, which is exactly the kind of evidence a platform needs before it can credibly make promises to anyone else.
Valeon as showcase. VocaSync as substrate. The gold running through the structure.
What's Next
The analytics foundation is already in — Rybbit is integrated and tracking. What's coming is the layer on top: per-post and per-author metrics powered by Convex, giving authors actual signal about how their work is landing rather than a raw view count and nothing else.
Also coming to the author dashboard: AI-powered cover image generation and AI-powered excerpt generation. The goal is to remove the last remaining points of friction between a finished essay and a published one — the two tasks that currently sit between completion and shipping, and that have nothing to do with the quality of the writing itself.
The infrastructure is in place. The platform is proving itself under real load. The next phase is giving authors more of what they need to do their best work, and getting out of their way.
More to come.