Add additional docs
Signed-off-by: James Ketrenos <james_git@ketrenos.com>
# CLEAR: The 30-3-30

## 30 Seconds: The Hook

You've spent years mastering DDD, Clean Architecture, SOLID—building systems humans can maintain. **CLEAR doesn't replace those principles; it amplifies them for teams augmented by LLMs.** Your existing methodology stays—CLEAR adds five critical shifts that let you safely delegate to AI agents while maintaining architectural integrity. The result: 3-5x velocity on the tasks LLMs excel at, without sacrificing the design discipline you've fought to establish.

---
## 3 Minutes: The Core

### Why Now?

Your team is already using Copilot, Claude, or ChatGPT. Maybe experimentally, maybe pervasively. You've seen the pattern: **LLMs generate code faster than humans can review it.** Without guardrails, you get velocity at the cost of architectural decay.

CLEAR adapts your existing practices to this reality.

### The Five Shifts
**1. Constrained (Evolves: Explicit Dependencies)**
- **Before:** Dependencies were explicit in code structure
- **Now:** Constraints and invariants must be explicit in machine-readable form
- **Action:** Convert your architectural decisions into linting rules, type constraints, and contract tests
- **Example:** If DDD says "domain entities can't depend on infrastructure," write an architecture test that fails when violated. LLMs respect mechanical enforcement, not comments.
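The mechanical enforcement described above can be sketched with nothing but the standard library. This is a minimal sketch, not a full tool: the package names (`myapp.domain`, `myapp.infrastructure`) and the forbidden-prefix list are illustrative assumptions.

```python
# Sketch of a mechanical "domain purity" check (pytest style).
# Package names below are hypothetical; adapt them to your codebase.
import ast
import pathlib

FORBIDDEN_PREFIXES = ("myapp.infrastructure", "sqlalchemy", "requests")

def forbidden_imports(source: str) -> list[str]:
    """Return every import in `source` that violates the domain-purity rule."""
    tree = ast.parse(source)
    names = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.append(node.module)
    return [n for n in names if n.startswith(FORBIDDEN_PREFIXES)]

def test_domain_layer_is_pure():
    # Scans every domain module; fails the build on the first violation.
    for path in pathlib.Path("src/myapp/domain").rglob("*.py"):
        assert not forbidden_imports(path.read_text()), f"{path} imports infrastructure"
```

A check like this runs in CI alongside the normal test suite, so an LLM-generated change that sneaks an infrastructure import into the domain layer fails the build rather than a code review.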
**2. Limited (Amplifies: Bounded Contexts)**
- **Before:** Bounded contexts organized teams and prevented coupling
- **Now:** They define safe zones for agentic automation
- **Action:** Make each bounded context an autonomous workspace with comprehensive tests
- **Example:** Your "Billing" context can be entirely regenerated by an LLM from specs because it has clear boundaries and 95% test coverage. Your team focuses on "Payments" (high risk) while LLMs maintain "Invoicing" (well-defined).
**3. Ephemeral (Extends: Dependency Inversion)**
- **Before:** Interfaces let you swap implementations
- **Now:** Some implementations should be treated as disposable artifacts
- **Action:** Mark which code is canonical (your domain models) vs derived (mappers, DTOs, boilerplate)
- **Example:** Your Clean Architecture domain layer is precious. The API controllers? Let the LLM regenerate them from OpenAPI specs whenever requirements change.
**4. Assertive (Supercharges: Testing)**
- **Before:** Tests caught regressions
- **Now:** Tests ARE the requirements that enable autonomous operation
- **Action:** Shift from "good coverage" to "comprehensive contracts"—property tests, mutation testing, runtime assertions
- **Example:** An LLM refactors your repository layer. Instead of manual code review, your property tests verify that all CRUD operations maintain invariants. Review becomes: "Are the tests still comprehensive?" not "Is every line correct?"
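A hand-rolled property check in that spirit, using only the standard library (a dedicated library such as Hypothesis does this better; `InMemoryRepository` is a stand-in for whatever layer the LLM rewrites):

```python
import random

class InMemoryRepository:
    """Stand-in for an LLM-maintained persistence layer."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def get(self, key):
        return self._rows.get(key)

def check_round_trip(repo_cls, trials=200):
    """Property: save followed by get returns the same value, for any input.

    This holds for ANY implementation of the repository interface, which is
    exactly what lets an LLM swap the implementation out from under you.
    """
    for _ in range(trials):
        key = str(random.randint(0, 10**6))
        value = random.randint(-10**9, 10**9)
        repo = repo_cls()
        repo.save(key, value)
        assert repo.get(key) == value
    return True
```

The point is the shape of the review: you audit the property, not the 200 generated cases or the implementation lines.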
**5. Reality-Aligned (Doubles Down On: Domain Modeling)**
- **Before:** Good domain models made code maintainable
- **Now:** They make code LLM-generatable
- **Action:** Invest even more heavily in ubiquitous language and explicit domain concepts
- **Example:** When your domain model precisely captures "Order State Machine with compensation logic," an LLM can generate correct implementations across multiple bounded contexts. When it's fuzzy, you get plausible-but-wrong code.
### The Velocity Multiplier

CLEAR lets you safely delegate:
- **Boilerplate generation:** 90%+ LLM-driven (API layers, DTOs, database migrations)
- **Test expansion:** LLMs generate edge cases from property specs
- **Refactoring:** LLMs execute, humans verify via test suites
- **Documentation:** LLMs maintain API docs, ADRs from code

You focus on:
- **Constraint definition:** What must never break?
- **Domain modeling:** What is the true reality?
- **Boundary design:** Where are the verification checkpoints?
- **Curation:** What stays, what gets regenerated?

---
## 30 Minutes: The Deep Dive

### Part 1: Applying CLEAR to Your Existing Methodology (10 min)

#### If You Practice DDD

**Your Current State:**
You have bounded contexts, aggregates, domain events, repositories, and a ubiquitous language. Your architecture protects the domain from infrastructure concerns.

**CLEAR Enhancements:**
**Constrained:**
- Create architecture tests that enforce: "Domain layer has zero dependencies on infrastructure"
- Codify domain invariants as runtime assertions (Design by Contract)
- Example:

```python
# Before: Comment in code review
# "Aggregate roots must enforce invariants"

# After: Automated architecture test
def test_aggregate_invariants_enforced():
    for aggregate in discover_aggregates():
        assert has_validation_in_constructor(aggregate)
        assert has_no_public_setters(aggregate)
```
**Limited:**
Each bounded context becomes an LLM workspace:
- Context Map → LLM task boundaries
- Shared Kernel → Human-maintained (too risky)
- Anti-Corruption Layer → LLM-generated from specs
- Your "Payment Processing" context (complex, regulatory) stays human-driven. Your "Notification Service" context (simple, well-defined) becomes LLM-maintained.
**Ephemeral:**
Mark derived artifacts:
- Domain models: Canonical (humans design)
- Application services: Semi-canonical (human-reviewed LLM generation)
- Infrastructure repositories: Derived (LLM-generated from interfaces)
- API controllers: Derived (regenerate from OpenAPI + domain model)
**Assertive:**
Your repository interfaces become contracts with property tests:

```python
@property_test
def test_repository_respects_aggregate_boundaries(aggregate_id):
    # Given any valid aggregate
    original = repo.get(aggregate_id)

    # When saving and retrieving
    repo.save(original)
    retrieved = repo.get(aggregate_id)

    # Then all invariants hold
    assert retrieved.maintains_invariants()
    assert retrieved == original
```

LLMs can now safely modify repository implementations.
**Reality-Aligned:**
Double down on ubiquitous language:
- Maintain a glossary as structured data (JSON/YAML)
- Generate type definitions from glossary
- LLMs use glossary as authoritative source
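The glossary-to-types step above can be sketched in a few lines. This is a minimal sketch using a JSON glossary; the term names and values are hypothetical examples, not part of any real project.

```python
# Sketch: generate Enum types from a structured glossary.
# Glossary content is a made-up example of ubiquitous-language terms.
import json
from enum import Enum

glossary_json = """
{
  "OrderStatus": ["DRAFT", "PLACED", "SHIPPED", "CANCELLED"],
  "PaymentMethod": ["CARD", "INVOICE"]
}
"""

def build_types(glossary: dict[str, list[str]]) -> dict[str, type]:
    """One Enum per glossary term; members come straight from the glossary."""
    return {name: Enum(name, {v: v for v in values})
            for name, values in glossary.items()}

types = build_types(json.loads(glossary_json))
OrderStatus = types["OrderStatus"]
```

Because the types are derived from the glossary, the glossary stays the single authoritative source: humans and LLMs both read the same file, and a renamed term propagates mechanically.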
**Velocity Gain:** LLMs handle 60-70% of infrastructure and application layer code. You focus on domain modeling and strategic design.

---
#### If You Practice Clean Architecture / Hexagonal

**Your Current State:**
You have concentric layers: Domain → Application → Infrastructure. Dependencies point inward. You use ports and adapters.

**CLEAR Enhancements:**
**Constrained:**
Your dependency rule becomes mechanically enforced:

```typescript
// .eslintrc - Custom rule (illustrative)
"no-inward-dependencies": {
  "domain": [],  // Zero dependencies
  "application": ["domain"],
  "infrastructure": ["domain", "application"]
}
```

LLMs can't violate layer boundaries even accidentally.
**Limited:**
Each port/adapter pair becomes an LLM task boundary:
- Port definition: Human-designed (canonical)
- Adapter implementation: LLM-generated (ephemeral)
- Tests: Comprehensive contracts (assertive)
**Ephemeral Strategy:**

```
CANONICAL:
  /domain/*            - Hand-crafted entities, value objects, business rules
  /application/ports/* - Interface definitions

DERIVED (LLM-regeneratable):
  /infrastructure/adapters/* - Database, API clients, message queues
  /infrastructure/web/*      - Controllers, DTOs, serialization
```
**Assertive:**
Port contracts verified through adapter-agnostic tests:

```java
@ParameterizedTest
@MethodSource("allUserRepositoryImplementations")
void testUserRepositoryContract(UserRepository repo) {
    // Same tests run against:
    // - InMemoryUserRepository (test double)
    // - PostgresUserRepository (LLM-generated)
    // - MongoUserRepository (LLM-generated)

    User user = createValidUser();
    repo.save(user);
    assertEquals(user, repo.findById(user.id()));
}
```
**Reality-Aligned:**
Your use cases map directly to domain reality. Clear use case boundaries help LLMs generate correct orchestration code.

**Velocity Gain:** LLMs generate 80%+ of infrastructure layer code. You architect ports, they implement adapters.

---
### Part 2: Migration Strategy (10 min)

#### Week 1: Assessment & Constraint Codification

**Day 1-2: Identify Your Implicit Rules**
Gather your team. Ask: "What are the unwritten rules we enforce in code review?"

Common examples:
- "Controllers should be thin"
- "No business logic in infrastructure"
- "All aggregates must validate on construction"
- "Database queries must use the repository pattern"
**Day 3-5: Codify Top 5 as Mechanical Tests**
Pick your most frequently violated rules. Make them fail builds.

Tools:
- **Architecture:** ArchUnit (Java), ts-arch (TypeScript), pytest-archon (Python)
- **Constraints:** Type systems, Pydantic, Zod
- **Custom linters:** ESLint custom rules, Pylint plugins

Example output (ArchUnit-style, illustrative):

```python
# architecture_test.py
class ArchitectureRules:
    def test_domain_has_no_infrastructure_deps(self):
        rule = (
            classes()
            .that().reside_in_a_package("..domain..")
            .should().only_depend_on_classes_that()
            .reside_in_packages("..domain..", "java.lang..")
        )
        rule.check(import_all_classes())
```
#### Week 2-3: Boundary Identification & Test Hardening

**Identify LLM-Friendly Zones:**
Rate your modules on two axes:
1. **Complexity:** How hard is it to understand? (1-10)
2. **Risk:** What's the blast radius of a bug? (1-10)

```
High Risk + High Complexity → Human-maintained
High Risk + Low Complexity  → LLM with human review
Low Risk + High Complexity  → Candidate for redesign
Low Risk + Low Complexity   → Full LLM autonomy
```
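The quadrant chart above is trivially mechanizable, which makes the classification auditable rather than ad hoc. A minimal sketch; the threshold of 5 is an assumption, not from the text:

```python
# Sketch: map the two 1-10 ratings to a delegation zone.
# The midpoint threshold (5) is an assumed cutoff; tune it per team.
def delegation_zone(complexity: int, risk: int, threshold: int = 5) -> str:
    high_c, high_r = complexity > threshold, risk > threshold
    if high_r and high_c:
        return "human-maintained"
    if high_r:
        return "llm-with-human-review"
    if high_c:
        return "candidate-for-redesign"
    return "full-llm-autonomy"
```

Run it over your module inventory once a quarter and the delegation map updates as complexity and risk scores drift.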
**Harden Boundaries:**
For modules in the "LLM autonomy" quadrant:
- Achieve 90%+ test coverage
- Add property-based tests
- Add contract tests for all interfaces
- Document invariants explicitly

#### Week 4: First LLM Delegation

**Pick a Low-Risk Module:**
Ideal candidate:
- Well-tested boundary
- Clear spec
- Low complexity
- Changes frequently

Examples: API client wrappers, data mappers, serialization logic
**The Experiment:**
1. Write comprehensive property tests first
2. Delete the implementation
3. Give the LLM the tests + specs
4. Let the LLM regenerate
5. Run the tests
6. Measure: time saved vs. confidence level

**Success Metrics:**
- Tests pass on first generation: good boundary design
- Tests pass after 2-3 iterations: acceptable
- Can't get tests to pass: the boundary needs redesign or more human involvement
#### Week 5-8: Expand & Refine

Based on Week 4 results:
- **If successful:** Identify 3-5 more modules for LLM delegation
- **If mixed:** Refine your constraints and tests
- **If failed:** Reassess boundaries or keep the module human-maintained

---
### Part 3: Concrete Patterns & Anti-Patterns (10 min)

#### Pattern: The Regeneration Script

Make ephemeral code obviously ephemeral:

```typescript
// generated/api-client.ts
/**
 * AUTO-GENERATED - DO NOT EDIT
 *
 * Generated from: openapi-spec.yaml
 * Last generated: 2024-02-25T10:30:00Z
 *
 * To regenerate:
 *   npm run generate:api-client
 *
 * This file is ephemeral. Changes will be overwritten.
 * Edit the spec or the generator, not this file.
 */
```

Include generation in CI/CD. If manual edits occur, the build fails.
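That CI guard can be sketched as a comparison between the committed artifact and a fresh regeneration. This is a minimal sketch: `generate` stands in for your real generator (invoked however your toolchain does it), and all names are hypothetical.

```python
# Sketch of a CI freshness check: fail the build when a generated file
# no longer matches what the spec regenerates (i.e. it was hand-edited).
# `generate` is a placeholder for the project's real generator.
from typing import Callable

def verify_ephemeral(committed_text: str, spec_text: str,
                     generate: Callable[[str], str]) -> None:
    """Raise when the committed artifact drifts from its spec's output."""
    expected = generate(spec_text)
    if committed_text != expected:
        raise RuntimeError(
            "Generated file was edited by hand: "
            "edit the spec or the generator, then regenerate."
        )
```

Wire this into the pipeline after the generation step; a clean tree passes silently, a hand-edit fails loudly.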
#### Pattern: The Constraint Manifest

Create a machine-readable architecture document:

```yaml
# .architecture/constraints.yaml
boundaries:
  - name: "domain-purity"
    rule: "Domain layer must have zero infrastructure dependencies"
    enforcement: "architecture-test"

  - name: "aggregate-validation"
    rule: "All aggregates must validate invariants in constructor"
    enforcement: "runtime-assertion"

  - name: "repository-pattern"
    rule: "Database access only through repository interfaces"
    enforcement: "linter + architecture-test"

delegation_zones:
  - path: "src/infrastructure/adapters/*"
    autonomy: "full"
    reason: "Well-tested ports, low risk implementations"

  - path: "src/domain/*"
    autonomy: "none"
    reason: "Core business logic, requires human insight"
```

Point LLMs to this file in prompts.
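Tooling can consume the same manifest. A minimal sketch of resolving the delegation zone for a file (the manifest is shown already parsed, e.g. via `yaml.safe_load`; the `"human-review"` fallback is an assumed default, not from the manifest above):

```python
# Sketch: resolve a file's delegation zone from the constraint manifest.
from fnmatch import fnmatch

# Parsed form of the delegation_zones section above.
manifest = {
    "delegation_zones": [
        {"path": "src/infrastructure/adapters/*", "autonomy": "full"},
        {"path": "src/domain/*", "autonomy": "none"},
    ]
}

def autonomy_for(path: str, default: str = "human-review") -> str:
    """First matching zone wins; unlisted paths fall back to `default`."""
    for zone in manifest["delegation_zones"]:
        if fnmatch(path, zone["path"]):
            return zone["autonomy"]
    return default
```

An agent harness can call this before touching a file and refuse edits in `"none"` zones, turning the manifest from documentation into an enforced gate.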
#### Pattern: The Contract Test Suite

For any module you want LLMs to maintain:

```python
# tests/contracts/test_user_repository_contract.py
import pytest

class UserRepositoryContract:
    """
    Contract that ANY UserRepository implementation must satisfy.
    This enables LLM-generated implementations.
    """

    @pytest.fixture
    def repo(self) -> UserRepository:
        """Override in concrete test classes"""
        raise NotImplementedError

    def test_save_and_retrieve_maintains_identity(self, repo):
        user = User(id="123", email="test@example.com")
        repo.save(user)
        retrieved = repo.find_by_id("123")
        assert retrieved.id == user.id

    # 20+ more contract tests...

# Implementation test simply inherits the contract
class TestPostgresUserRepository(UserRepositoryContract):
    @pytest.fixture
    def repo(self):
        return PostgresUserRepository(test_db_connection)
```

LLMs generate implementations. Contracts ensure correctness.
#### Anti-Pattern: LLM-Generated Domain Models

**Don't:**
```
Prompt: "Generate domain models for e-commerce order management"
```

LLMs will give you plausible-but-wrong models based on generic patterns.

**Do:**
```
1. Human architects design the domain model through event storming
2. Humans write the domain model code with rich invariants
3. LLMs generate the surrounding infrastructure (repos, DTOs, migrations)
```

Domain modeling is strategic. Infrastructure is tactical.
#### Anti-Pattern: Undifferentiated Codebase

**Don't:** Treat all code equally. LLMs can't tell what's precious vs disposable.

**Do:** Visual/structural differentiation:
```
src/
  domain/          # CANONICAL - human-maintained
  application/     # SEMI-CANONICAL - human-reviewed
  infrastructure/
    generated/     # EPHEMERAL - LLM-maintained
    custom/        # CANONICAL - human-maintained
```
#### Anti-Pattern: Test-After Development

**Don't:** Generate code, then write tests to verify.

**Do:** Contract-first development:
1. Write comprehensive property tests (the contract)
2. LLM generates an implementation to satisfy the contract
3. Tests verify correctness mechanically

This inverts the verification burden from "review every line" to "are the tests complete?"
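In miniature, the contract-first flow looks like this. A minimal sketch: `slugify` is a hypothetical utility, the kind of small function an LLM might be asked to (re)generate against a pre-written contract.

```python
# Step 1: the contract exists before any implementation.
def slugify_contract(slugify) -> None:
    """Properties any slugify implementation must satisfy."""
    assert slugify("Hello World") == "hello-world"
    assert slugify("  trimmed  ") == "trimmed"
    assert all(c.isalnum() or c == "-" for c in slugify("a_b c!"))

# Step 2: an implementation (human- or LLM-written) is accepted only
# when the contract passes mechanically.
def slugify(text: str) -> str:
    # Replace every non-alphanumeric character with a space, then join
    # the lowercase words with hyphens.
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in text).split()
    return "-".join(w.lower() for w in words)
```

Reviewing this change means reviewing `slugify_contract`, not the implementation: if the contract captures the requirement, any passing implementation is acceptable, including a regenerated one.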
---

### Conclusion: Your Action Plan

**This Week:**
1. List your top 5 implicit architectural rules
2. Pick one to codify as a failing test
3. Identify one low-risk, high-churn module

**This Month:**
1. Codify all 5 rules
2. Harden tests on one module to contract-test level
3. Run one LLM regeneration experiment

**This Quarter:**
1. Establish CLEAR patterns in 3-5 modules
2. Measure velocity gains vs baseline
3. Refine the constraint manifest based on what you learn

**The Shift:**
You're not abandoning DDD, Clean Architecture, or SOLID. You're **making them LLM-compatible**. Your existing discipline becomes the foundation that enables safe automation.

CLEAR is how you keep architectural integrity while gaining 3-5x velocity on the 60-80% of code that doesn't require human insight.

**Your methodology + CLEAR = Sustainable velocity in the LLM era.**