AI already writes working code. The pain starts when you ship it:
abstractions feel wrong, complexity gets hidden, and reviewers rewrite half of it.
AI Lint externalizes senior engineering judgment—so agents
build code that fits the language, the framework, and the long-term shape of your system.
Not a linter. Not a style guide. If a machine can enforce it mechanically, it doesn't belong here.
AI models produce code that compiles. Then you review it and feel the familiar dread: it's not wrong enough to reject quickly—just wrong enough to quietly poison the codebase.
Code that "works" but fights the language, invents needless abstractions, hides state, and turns maintainability into a slow leak.
Correct ≠ idiomatic
The AI isn't failing at syntax. It's failing at judgment: what patterns belong, what tradeoffs are acceptable, what risks must be surfaced.
Semantics • taste • architecture
Rule of thumb: If a rule could be handled by ESLint, ShellCheck, a type checker, or static analysis, it does not belong in AI Lint. AI Lint is for the hard part: "does this belong?"
Drop in the doctrine. Wire your agent once. From then on, the agent consults AI Lint for architectural and taste decisions—and pauses when a human should choose.
AI Lint ships as additive packs. Overlay App and Systems packs as needed.
Copy the included prompts into AGENTS.md / CLAUDE.md / COPILOT instructions.
When making design decisions, the agent treats AI Lint as the authority.
If a choice violates doctrine, the agent surfaces it and asks for an override.
"I can implement that pattern, but it violates AI Lint Doctrine #4 (No Invisible State).
It would make debugging UserSession nearly impossible later.
Shall we use a dependency injection pattern instead?"
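In practice, the wiring is a short instruction block in your agent file. The exact prompt text ships with the product; the sketch below is purely illustrative wording, not the shipped prompt:

```markdown
<!-- AGENTS.md / CLAUDE.md — illustrative wiring only -->
Before making architectural or API-design decisions, consult the AI Lint
doctrine in this repository. Treat it as the authority on semantics,
architecture, and taste. If a requested change would violate a doctrine
rule, stop, cite the rule by name, explain the consequence, and ask the
human for an explicit override before proceeding.
```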
The free edition explains the frame and wiring. Paid packs provide deep doctrine, rejects, and exhibits for real languages and frameworks.
Non-commercial; no redistribution; no model training.
If your org chart is complicated, this is the clean path.
We're building doctrine for the domains where AI judgment failures hurt the most. New packs ship as overlay ZIPs. Buy only what you need. Bundles available.
SQL, PostgreSQL, indexes, N+1, migrations, transaction isolation
Kubernetes, Docker, Helm, resource limits, health checks, secrets
REST, versioning, pagination, error formats, idempotency keys
JWT, sessions, OAuth, RBAC, token rotation, confused deputy
Sagas, idempotency, exactly-once, circuit breakers, retry policies
Logs, metrics, traces, cardinality, structured events, correlation
Swift, Kotlin, React Native, lifecycle, memory, offline-first
React, state management, component boundaries, accessibility
Want a specific pack prioritized? Let us know.
AI Lint was built after years of using AI to ship paid products—inside legacy codebases and production systems. It's the doctrine I wanted my agents to follow—and couldn't find.
Clarity over cleverness. Visible complexity over hidden magic. Causality you can debug. Explicit risk when you decide to break a rule.
Style policing. Mechanical "best practices." Boilerplate. Anything that a linter, formatter, or static analyzer can enforce.
No. It's doctrine for judgment—semantics, architecture, and taste. If a machine can enforce it mechanically, it doesn't belong.
Unzip ai-lint-core into your repo. Then unzip packs (App, Systems) over it. Files merge by directory.
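The merge mechanic can be simulated with plain directories so it runs anywhere. The shipped packs are ZIPs and the directory names below are hypothetical, but the behaviour is the same: trees overlay in place, and a later file at the same path wins.

```shell
# Simulated overlay merge (directory names are illustrative, not the real packs).
mkdir -p base/ai-lint overlay/ai-lint repo
echo "core doctrine" > base/ai-lint/core.md
echo "app pack rule" > overlay/ai-lint/app.md
cp -R base/. repo/      # stands in for: unzip ai-lint-core into the repo
cp -R overlay/. repo/   # stands in for: unzip the App pack over it
ls repo/ai-lint         # core.md and app.md now sit side by side
```

With real ZIPs, `unzip -o <pack>.zip` gives the same result: directories merge, and overlay files overwrite same-path files from the base.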
AI Lint expects conflicts. The override protocol makes tradeoffs explicit. A good agent surfaces the conflict and asks for a decision.
Yes (internal use). The whole point is to encode your standards. The paid license permits internal modification and use in private repos.
No. New packs are sold separately as overlay ZIPs. The bundle is App + Systems.
Yes. If you want to encode hard-won knowledge into doctrine packs (languages, frameworks, security), email us. This becomes a virtuous cycle.
We're looking for engineers who've shipped in production and have opinions about what belongs. If you've debugged the same AI mistakes repeatedly—in React, Kubernetes, PostgreSQL, or anywhere else—that's doctrine waiting to be written.
Real scars. Patterns you've seen break in production. Framework fights you've lost. The stuff that's obvious to you but invisible to AI.
Revenue share on packs you author. Your name on the doctrine. A way to encode your judgment into something that scales.