Writing Agent
A multi-step writing system for controlled rewriting and tone management
Writing Agent is a production-minded writing system that replaces one-shot prompting with a structured rewrite pipeline. Instead of generating a single answer and hoping it is good enough, the system plans, drafts, evaluates, corrects, and polishes the output so text changes can be reviewed more systematically.
The Problem
Most AI writing tools behave like black boxes. They rewrite once, give little visibility into quality, and offer weak guarantees around faithfulness, clarity, or tone.
The Outcome
Writing Agent turns rewriting into a controlled workflow with evaluation, correction, tone controls, and traceable output, making the system more dependable for real communication work.
Example: Rewrite with evaluation and correction
A user submits a message that needs to be shorter and more direct while still preserving key terms. The system plans the rewrite, generates a draft, scores it against a rubric, and, if the draft misses the required quality bar, runs a correction pass before returning the final version.
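The flow above can be sketched as a small loop. This is a minimal illustration with stubbed stage functions and an assumed quality threshold, not the project's actual API; in practice each stage would call a language model.

```python
QUALITY_BAR = 0.8  # assumed rubric pass threshold, for illustration


def plan(source: str) -> dict:
    """Stub planner: decide what the rewrite must achieve and preserve."""
    return {"goal": "shorter, more direct", "must_keep": ["deadline"]}


def draft(source: str, rewrite_plan: dict) -> str:
    """Stub editor: produce a candidate rewrite (an LLM call in practice)."""
    return source.strip()


def critique(text: str, rewrite_plan: dict) -> float:
    """Stub critic: score the draft against the rubric (0.0 to 1.0)."""
    kept_all = all(term in text for term in rewrite_plan["must_keep"])
    return 1.0 if kept_all else 0.5


def fix(text: str, rewrite_plan: dict) -> str:
    """Stub fixer: repair the specific rubric failures instead of redrafting."""
    missing = [t for t in rewrite_plan["must_keep"] if t not in text]
    if not missing:
        return text
    return text + " " + " ".join(missing)


def rewrite(source: str) -> str:
    # Plan, draft, evaluate; only run the correction pass when needed.
    rewrite_plan = plan(source)
    candidate = draft(source, rewrite_plan)
    if critique(candidate, rewrite_plan) < QUALITY_BAR:
        candidate = fix(candidate, rewrite_plan)
    return candidate
```

The key design point is that correction is conditional: drafts that already clear the bar skip the extra model call.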
The Problem: One-shot rewriting is hard to trust
For higher-stakes communication, a rewrite is not enough on its own. The real challenge is knowing whether the output stayed faithful to the source, respected tone requirements, preserved mandatory terms, and avoided harmful or low-quality changes.
The Outcome: Auditable rewriting with built-in quality checks
Writing Agent is designed to make text transformation more controlled and more inspectable in practice.
- Structured generation: Output moves through defined stages instead of a single prompt call.
- Integrated evaluation: Drafts are checked against a rubric before they are accepted.
- Targeted correction: Failed drafts can be repaired through a focused fix pass rather than discarded blindly.
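One way to make the rubric check concrete is a per-criterion result object. The criteria names come from the pipeline description (clarity, faithfulness, tone, length); the threshold and acceptance rule are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class RubricResult:
    """Per-criterion scores for one draft, each in the range 0.0 to 1.0."""
    clarity: float
    faithfulness: float
    tone: float
    length: float

    def accepted(self, threshold: float = 0.7) -> bool:
        # A draft passes only if every criterion clears the bar, so one
        # weak dimension cannot hide behind a strong average.
        scores = (self.clarity, self.faithfulness, self.tone, self.length)
        return min(scores) >= threshold
```

Requiring the minimum score to pass, rather than the mean, is a deliberate choice: a rewrite that is fluent but unfaithful should still be rejected.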
Technical Architecture and Production Features
Pipeline
Multi-step writing workflow
A rewrite pipeline built for reviewable output.
- Domain-driven stages: The flow separates planning, drafting, critique, fixing, and polish into distinct services.
- Automated self-correction: The critic checks the draft against criteria such as clarity, faithfulness, tone, and length, then triggers correction when needed.
- Pluggable model backends: The system can work with different LLM providers or local models through an abstraction layer.
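The backend abstraction might look like a structural protocol that every provider adapter implements. The interface below is an assumed shape, shown with a trivial local stand-in; the project's actual adapter API may differ.

```python
from typing import Protocol


class LLMBackend(Protocol):
    """Assumed minimal interface for a model provider adapter."""

    def complete(self, prompt: str) -> str: ...


class EchoBackend:
    """Trivial local stand-in, useful for tests or offline development."""

    def complete(self, prompt: str) -> str:
        return prompt.upper()


def run_stage(backend: LLMBackend, instruction: str, text: str) -> str:
    # Each pipeline stage talks to the model only through the protocol,
    # so providers can be swapped without touching the domain logic.
    return backend.complete(f"{instruction}\n{text}")
```

Because `Protocol` uses structural typing, any object with a matching `complete` method satisfies the interface without inheriting from it.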
Guardrails
FastAPI with controlled output rules
A backend designed to protect output quality, validate input, and keep runtime behavior safe.
- Resource protection: Request limits, payload caps, and timeout controls reduce abuse and runtime instability.
- Unified errors: Application failures are returned in a predictable structure rather than leaking infrastructure details.
- Content checks: Must-keep terms, tone requirements, and toxicity rules are enforced as explicit controls.
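Explicit content controls can be expressed as a pure check that returns violations rather than a boolean, which keeps failures explainable. The matching below is deliberately naive (case-insensitive substring and word checks) and is a sketch, not the project's actual rule engine.

```python
def check_content(text: str, must_keep: list[str], banned: set[str]) -> list[str]:
    """Return a list of human-readable violations; an empty list means pass."""
    violations = []
    lowered = text.lower()
    # Must-keep terms: the rewrite may not drop mandatory vocabulary.
    for term in must_keep:
        if term.lower() not in lowered:
            violations.append(f"missing required term: {term}")
    # Toxicity rules: reject drafts containing banned words.
    for word in lowered.split():
        if word.strip(".,!?;:") in banned:
            violations.append(f"banned term: {word}")
    return violations
```

Returning the full violation list lets the fixer target each problem individually instead of rejecting the draft wholesale.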
Observability
Tracing and runtime visibility
Designed to show how output quality changes across the pipeline.
- Tone controls: Users can adjust warmth, directness, hedging, and related style controls through structured settings.
- Step-level traces: Each request produces local traces tied to a request id so the pipeline can be inspected after execution.
- Operational metrics: Health, readiness, and metrics endpoints expose runtime behavior for monitoring.
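Structured tone settings and per-stage trace records might be combined as follows. The field names echo the controls listed above (warmth, directness, hedging); the record shape and helper names are assumptions for illustration.

```python
import time
import uuid
from dataclasses import dataclass, asdict


@dataclass
class ToneSettings:
    """Structured style controls, each in the range 0.0 to 1.0."""
    warmth: float = 0.5
    directness: float = 0.5
    hedging: float = 0.5


def trace_step(request_id: str, stage: str, detail: dict) -> dict:
    # One trace record per pipeline stage, keyed by the request id so a
    # full run can be reconstructed after execution.
    return {"request_id": request_id, "stage": stage, "ts": time.time(), **detail}


request_id = str(uuid.uuid4())
records = [
    trace_step(request_id, "draft", {"tone": asdict(ToneSettings(directness=0.9))}),
    trace_step(request_id, "critic", {"score": 0.82}),
]
```

Sharing one request id across all stage records is what makes step-level inspection possible after the fact.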
Repository Layout: Designed for modularity
The codebase separates transport, orchestration, domain logic, and infrastructure so the workflow stays testable and extensible.
src/writing_agent/
├── api/ # FastAPI transport, schemas, middleware, and rate-limiting
├── app/ # Application orchestration (Pipeline flow, Tone Merger)
├── domain/ # Core business logic (Planner, Editor, Critic, Fixer, Polish)
├── infra/ # External state, LLM adapters, metrics, tracing, and config
├── assets/ # Tone presets, audience mappings, and toxicity rules
└── cli/ # Local developer tools and utilities
web/ # Vanilla JS/HTML/CSS static frontend and visualizations
This clear organization is fundamental to building scalable and auditable AI systems.