Add additional docs

Signed-off-by: James Ketrenos <james_git@ketrenos.com>

---

**30-3-30.md** (new file, 440 lines)
# CLEAR: The 30-3-30

## 30 Seconds: The Hook

You've spent years mastering DDD, Clean Architecture, SOLID—building systems humans can maintain. **CLEAR doesn't replace those principles; it amplifies them for teams augmented by LLMs.** Your existing methodology stays—CLEAR adds five critical shifts that let you safely delegate to AI agents while maintaining architectural integrity. The result: 3-5x velocity on the tasks LLMs excel at, without sacrificing the design discipline you've fought to establish.

---

## 3 Minutes: The Core

### Why Now?

Your team is already using Copilot, Claude, or ChatGPT. Maybe experimentally, maybe pervasively. You've seen the pattern: **LLMs generate code faster than humans can review it.** Without guardrails, you get velocity at the cost of architectural decay.

CLEAR adapts your existing practices to this reality.

### The Five Shifts
**1. Constrained (Evolves: Explicit Dependencies)**
- **Before:** Dependencies were explicit in code structure
- **Now:** Constraints and invariants must be explicit in machine-readable form
- **Action:** Convert your architectural decisions into linting rules, type constraints, and contract tests
- **Example:** If DDD says "domain entities can't depend on infrastructure," write an architecture test that fails when violated. LLMs respect mechanical enforcement, not comments.

**2. Limited (Amplifies: Bounded Contexts)**
- **Before:** Bounded contexts organized teams and prevented coupling
- **Now:** They define safe zones for agentic automation
- **Action:** Make each bounded context an autonomous workspace with comprehensive tests
- **Example:** Your "Billing" context can be entirely regenerated by an LLM from specs because it has clear boundaries and 95% test coverage. Your team focuses on "Payments" (high risk) while LLMs maintain "Invoicing" (well-defined).

**3. Ephemeral (Extends: Dependency Inversion)**
- **Before:** Interfaces let you swap implementations
- **Now:** Some implementations should be treated as disposable artifacts
- **Action:** Mark which code is canonical (your domain models) vs derived (mappers, DTOs, boilerplate)
- **Example:** Your Clean Architecture domain layer is precious. The API controllers? Let the LLM regenerate them from OpenAPI specs whenever requirements change.

**4. Assertive (Supercharges: Testing)**
- **Before:** Tests caught regressions
- **Now:** Tests ARE the requirements that enable autonomous operation
- **Action:** Shift from "good coverage" to "comprehensive contracts"—property tests, mutation testing, runtime assertions
- **Example:** An LLM refactors your repository layer. Instead of manual code review, your property tests verify that all CRUD operations maintain invariants. Review becomes: "Are the tests still comprehensive?" not "Is every line correct?"

**5. Reality-Aligned (Doubles Down On: Domain Modeling)**
- **Before:** Good domain models made code maintainable
- **Now:** They make code LLM-generatable
- **Action:** Invest even more heavily in ubiquitous language and explicit domain concepts
- **Example:** When your domain model precisely captures "Order State Machine with compensation logic," an LLM can generate correct implementations across multiple bounded contexts. When it's fuzzy, you get plausible-but-wrong code.
### The Velocity Multiplier

CLEAR lets you safely delegate:
- **Boilerplate generation:** 90%+ LLM-driven (API layers, DTOs, database migrations)
- **Test expansion:** LLMs generate edge cases from property specs
- **Refactoring:** LLMs execute, humans verify via test suites
- **Documentation:** LLMs maintain API docs, ADRs from code

You focus on:
- **Constraint definition:** What must never break?
- **Domain modeling:** What is the true reality?
- **Boundary design:** Where are the verification checkpoints?
- **Curation:** What stays, what gets regenerated?
---

## 30 Minutes: The Deep Dive

### Part 1: Applying CLEAR to Your Existing Methodology (10 min)

#### If You Practice DDD

**Your Current State:**
You have bounded contexts, aggregates, domain events, repositories, and a ubiquitous language. Your architecture protects the domain from infrastructure concerns.

**CLEAR Enhancements:**

**Constrained:**
- Create architecture tests that enforce: "Domain layer has zero dependencies on infrastructure"
- Codify domain invariants as runtime assertions (Design by Contract)
- Example:

```python
# Before: Comment in code review
# "Aggregate roots must enforce invariants"

# After: Automated architecture test
def test_aggregate_invariants_enforced():
    for aggregate in discover_aggregates():
        assert has_validation_in_constructor(aggregate)
        assert has_no_public_setters(aggregate)
```
**Limited:**
Each bounded context becomes an LLM workspace:
- Context Map → LLM task boundaries
- Shared Kernel → Human-maintained (too risky)
- Anti-Corruption Layer → LLM-generated from specs
- Your "Payment Processing" context (complex, regulatory) stays human-driven. Your "Notification Service" context (simple, well-defined) becomes LLM-maintained.

**Ephemeral:**
Mark derived artifacts:
- Domain models: Canonical (humans design)
- Application services: Semi-canonical (human-reviewed LLM generation)
- Infrastructure repositories: Derived (LLM-generated from interfaces)
- API controllers: Derived (regenerate from OpenAPI + domain model)
**Assertive:**
Your repository interfaces become contracts with property tests:
```python
from hypothesis import given

# `aggregate_ids` is a project-defined Hypothesis strategy that produces valid
# aggregate IDs; `repo` is the repository implementation under test.
@given(aggregate_ids())
def test_repository_respects_aggregate_boundaries(aggregate_id):
    # Given any valid aggregate
    original = repo.get(aggregate_id)

    # When saving and retrieving
    repo.save(original)
    retrieved = repo.get(aggregate_id)

    # Then all invariants hold
    assert retrieved.maintains_invariants()
    assert retrieved == original
```

LLMs can now safely modify repository implementations.

**Reality-Aligned:**
Double down on ubiquitous language:
- Maintain a glossary as structured data (JSON/YAML)
- Generate type definitions from glossary
- LLMs use glossary as authoritative source
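One way to make the glossary mechanically consumable is to generate type definitions from it. A minimal sketch (the glossary terms, `OrderId`, and `Money` are hypothetical; a real glossary would be loaded from JSON or YAML rather than inlined):

```python
# Inline stand-in for a glossary.json file (normally loaded with json.load).
GLOSSARY = {
    "terms": [
        {"name": "OrderId", "base": "str", "definition": "Unique identifier for an Order"},
        {"name": "Money", "base": "int", "definition": "Amount in minor currency units"},
    ]
}

def generate_types(glossary: dict) -> str:
    """Emit a typed Python module from the glossary, keeping code and language in sync."""
    lines = ["from typing import NewType", ""]
    for term in glossary["terms"]:
        lines.append(f'# {term["definition"]}')
        lines.append(f'{term["name"]} = NewType("{term["name"]}", {term["base"]})')
    return "\n".join(lines)

print(generate_types(GLOSSARY))
```

Regenerating this module in CI keeps the ubiquitous language authoritative: rename a term in the glossary and every stale usage fails type checking.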
**Velocity Gain:** LLMs handle 60-70% of infrastructure and application layer code. You focus on domain modeling and strategic design.

---

#### If You Practice Clean Architecture / Hexagonal

**Your Current State:**
You have concentric layers: Domain → Application → Infrastructure. Dependencies point inward. You use ports and adapters.

**CLEAR Enhancements:**

**Constrained:**
Your dependency rule becomes mechanically enforced:
```typescript
// Conceptual lint configuration (enforce with a tool such as
// eslint-plugin-boundaries or dependency-cruiser; the rule name is illustrative)
"no-inward-dependencies": {
  "domain": [],                               // Zero dependencies
  "application": ["domain"],
  "infrastructure": ["domain", "application"]
}
```

LLMs can't violate layer boundaries even accidentally.
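The rule itself is language-agnostic: given a map of module-to-layer and the observed import edges, flag any import that points outward. A minimal sketch (module names and the layer map are illustrative):

```python
# Allowed dependencies per layer: each layer may depend only on the layers listed.
ALLOWED = {
    "domain": set(),
    "application": {"domain"},
    "infrastructure": {"domain", "application"},
}

def violations(imports: dict, layer_of: dict) -> list:
    """Return a message for every import edge that points the wrong way."""
    bad = []
    for module, deps in imports.items():
        src = layer_of[module]
        for dep in deps:
            dst = layer_of[dep]
            if dst != src and dst not in ALLOWED[src]:
                bad.append(f"{module} ({src}) must not depend on {dep} ({dst})")
    return bad

# Example: a domain module importing an adapter is flagged; the reverse is fine.
imports = {"domain.order": {"infrastructure.db"},
           "application.checkout": {"domain.order"}}
layers = {"domain.order": "domain",
          "infrastructure.db": "infrastructure",
          "application.checkout": "application"}
print(violations(imports, layers))
```

Wire this into CI and the dependency rule fails the build instead of depending on reviewer vigilance.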
**Limited:**
Each port/adapter pair becomes an LLM task boundary:
- Port definition: Human-designed (canonical)
- Adapter implementation: LLM-generated (ephemeral)
- Tests: Comprehensive contracts (assertive)

**Ephemeral Strategy:**
```
CANONICAL:
/domain/*            - Hand-crafted entities, value objects, business rules
/application/ports/* - Interface definitions

DERIVED (LLM-regeneratable):
/infrastructure/adapters/* - Database, API clients, message queues
/infrastructure/web/*      - Controllers, DTOs, serialization
```
**Assertive:**
Port contracts verified through adapter-agnostic tests:
```java
@ParameterizedTest
@MethodSource("allUserRepositoryImplementations")
void testUserRepositoryContract(UserRepository repo) {
    // Same tests run against:
    // - InMemoryUserRepository (test double)
    // - PostgresUserRepository (LLM-generated)
    // - MongoUserRepository (LLM-generated)

    User user = createValidUser();
    repo.save(user);
    assertEquals(user, repo.findById(user.id()));
}
```
**Reality-Aligned:**
Your use cases map directly to domain reality. Clear use case boundaries help LLMs generate correct orchestration code.

**Velocity Gain:** LLMs generate 80%+ of the infrastructure layer. You architect ports, they implement adapters.

---
### Part 2: Migration Strategy (10 min)

#### Week 1: Assessment & Constraint Codification

**Day 1-2: Identify Your Implicit Rules**
Gather your team. Ask: "What are the unwritten rules we enforce in code review?"

Common examples:
- "Controllers should be thin"
- "No business logic in infrastructure"
- "All aggregates must validate on construction"
- "Database queries must use the repository pattern"

**Day 3-5: Codify Top 5 as Mechanical Tests**
Pick your most frequently violated rules. Make them fail builds.

Tools:
- **Architecture:** ArchUnit (Java), ts-arch (TypeScript), pytest-archon (Python)
- **Constraints:** Type systems, Pydantic, Zod
- **Custom linters:** ESLint custom rules, Pylint plugins

Example output:
```java
// ArchitectureRules.java — the fluent rule API shown here is ArchUnit's;
// pytest-archon and ts-arch offer equivalents for Python and TypeScript.
class ArchitectureRules {
    @Test
    void testDomainHasNoInfrastructureDeps() {
        JavaClasses imported = new ClassFileImporter().importPackages("com.example");
        ArchRule rule = classes()
            .that().resideInAPackage("..domain..")
            .should().onlyDependOnClassesThat()
            .resideInAnyPackage("..domain..", "java.lang..");
        rule.check(imported);
    }
}
```
#### Week 2-3: Boundary Identification & Test Hardening

**Identify LLM-Friendly Zones:**
Rate your modules on two axes:
1. **Complexity:** How hard is it to understand? (1-10)
2. **Risk:** What's the blast radius of a bug? (1-10)

```
High Risk + High Complexity → Human-maintained
High Risk + Low Complexity  → LLM with human review
Low Risk + High Complexity  → Candidate for redesign
Low Risk + Low Complexity   → Full LLM autonomy
```
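Once modules are scored, the quadrant assignment is mechanical. A sketch (the module names, scores, and the threshold of 5 are illustrative):

```python
def delegation_policy(complexity: int, risk: int, threshold: int = 5) -> str:
    """Map a module's 1-10 complexity/risk scores onto a delegation quadrant."""
    high_risk = risk > threshold
    high_complexity = complexity > threshold
    if high_risk and high_complexity:
        return "human-maintained"
    if high_risk:
        return "llm-with-human-review"
    if high_complexity:
        return "candidate-for-redesign"
    return "full-llm-autonomy"

# Hypothetical module inventory: name -> (complexity, risk)
modules = {"payments": (9, 10), "invoicing": (3, 2), "report-engine": (8, 2)}
for name, (complexity, risk) in modules.items():
    print(name, delegation_policy(complexity, risk))
```

Keeping the scores in a checked-in file makes the delegation map reviewable and diffable like any other architectural decision.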
**Harden Boundaries:**
For modules in the "LLM autonomy" quadrant:
- Achieve 90%+ test coverage
- Add property-based tests
- Add contract tests for all interfaces
- Document invariants explicitly
#### Week 4: First LLM Delegation

**Pick a Low-Risk Module:**
Ideal candidate:
- Well-tested boundary
- Clear spec
- Low complexity
- Changes frequently

Examples: API client wrappers, data mappers, serialization logic

**The Experiment:**
1. Write comprehensive property tests first
2. Delete the implementation
3. Give the LLM the tests + specs
4. Let the LLM regenerate
5. Run tests
6. Measure: time saved vs confidence level

**Success Metrics:**
- Tests pass on first generation: Good boundary design
- Tests pass after 2-3 iterations: Acceptable
- Can't get tests to pass: Boundary needs redesign or more human involvement
#### Week 5-8: Expand & Refine

Based on Week 4 results:
- **If successful:** Identify 3-5 more modules for LLM delegation
- **If mixed:** Refine your constraints and tests
- **If failed:** Reassess boundaries or keep human-maintained

---

### Part 3: Concrete Patterns & Anti-Patterns (10 min)
#### Pattern: The Regeneration Script

Make ephemeral code obviously ephemeral:

```typescript
// generated/api-client.ts
/**
 * AUTO-GENERATED - DO NOT EDIT
 *
 * Generated from: openapi-spec.yaml
 * Last generated: 2024-02-25T10:30:00Z
 *
 * To regenerate:
 *   npm run generate:api-client
 *
 * This file is ephemeral. Changes will be overwritten.
 * Edit the spec or the generator, not this file.
 */
```

Include generation in CI/CD. If manual edits occur, the build fails.
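One way to make "manual edits fail the build" concrete: in CI, run the generator into a scratch directory and diff it against the committed output. A sketch (the directory layout and `.ts` filenames are illustrative):

```python
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> dict:
    """Map each file's path (relative to root) to a hash of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def drifted(committed: Path, regenerated: Path) -> list:
    """Files whose committed copy no longer matches what the generator emits."""
    a, b = tree_digest(committed), tree_digest(regenerated)
    return sorted(path for path in a.keys() | b.keys() if a.get(path) != b.get(path))
```

A CI step would regenerate into a temp directory, call `drifted(Path("src/generated"), Path(tmp))`, and fail when the list is non-empty.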
#### Pattern: The Constraint Manifest

Create a machine-readable architecture document:

```yaml
# .architecture/constraints.yaml
boundaries:
  - name: "domain-purity"
    rule: "Domain layer must have zero infrastructure dependencies"
    enforcement: "architecture-test"

  - name: "aggregate-validation"
    rule: "All aggregates must validate invariants in constructor"
    enforcement: "runtime-assertion"

  - name: "repository-pattern"
    rule: "Database access only through repository interfaces"
    enforcement: "linter + architecture-test"

delegation_zones:
  - path: "src/infrastructure/adapters/*"
    autonomy: "full"
    reason: "Well-tested ports, low risk implementations"

  - path: "src/domain/*"
    autonomy: "none"
    reason: "Core business logic, requires human insight"
```

Point LLMs to this file in prompts.
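Tooling can consume the same manifest. A sketch of a lookup that agent harnesses or pre-commit hooks could call (the manifest is inlined as a dict to avoid a YAML dependency; zone paths mirror the example above, and the `review-required` default is an assumption):

```python
from fnmatch import fnmatch

# Inline stand-in for .architecture/constraints.yaml.
MANIFEST = {
    "delegation_zones": [
        {"path": "src/infrastructure/adapters/*", "autonomy": "full"},
        {"path": "src/domain/*", "autonomy": "none"},
    ]
}

def autonomy_for(path: str, manifest: dict = MANIFEST) -> str:
    """Look up how much freedom an agent has for a given file path."""
    for zone in manifest["delegation_zones"]:
        if fnmatch(path, zone["path"]):
            return zone["autonomy"]
    return "review-required"  # conservative default for unlisted paths
```

With this in place, a harness can refuse to apply agent edits to `src/domain/*` while letting adapter changes through automatically.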
#### Pattern: The Contract Test Suite

For any module you want LLMs to maintain:

```python
# tests/contracts/test_user_repository_contract.py
import pytest

class UserRepositoryContract:
    """
    Contract that ANY UserRepository implementation must satisfy.
    This enables LLM-generated implementations.
    """

    @pytest.fixture
    def repo(self) -> UserRepository:
        """Override in concrete test classes"""
        raise NotImplementedError

    def test_save_and_retrieve_maintains_identity(self, repo):
        user = User(id="123", email="test@example.com")
        repo.save(user)
        retrieved = repo.find_by_id("123")
        assert retrieved.id == user.id

    # 20+ more contract tests...

# An implementation test simply inherits the contract
class TestPostgresUserRepository(UserRepositoryContract):
    @pytest.fixture
    def repo(self):
        return PostgresUserRepository(test_db_connection)
```

LLMs generate implementations. Contracts ensure correctness.
#### Anti-Pattern: LLM-Generated Domain Models

**Don't:**
```
Prompt: "Generate domain models for e-commerce order management"
```

LLMs will give you plausible-but-wrong models based on generic patterns.

**Do:**
```
1. Human architects design the domain model through event storming
2. A human writes domain model code with rich invariants
3. The LLM generates surrounding infrastructure (repos, DTOs, migrations)
```

Domain modeling is strategic. Infrastructure is tactical.
#### Anti-Pattern: Undifferentiated Codebase

**Don't:** Treat all code equally. LLMs can't tell what's precious vs disposable.

**Do:** Visual/structural differentiation:
```
src/
  domain/          # CANONICAL - human-maintained
  application/     # SEMI-CANONICAL - human-reviewed
  infrastructure/
    generated/     # EPHEMERAL - LLM-maintained
    custom/        # CANONICAL - human-maintained
```
#### Anti-Pattern: Test-After Development

**Don't:** Generate code, then write tests to verify.

**Do:** Contract-first development:
1. Write comprehensive property tests (the contract)
2. The LLM generates an implementation to satisfy the contract
3. Tests verify correctness mechanically

This inverts the verification burden from "review every line" to "are the tests complete?"
---

### Conclusion: Your Action Plan

**This Week:**
1. List your top 5 implicit architectural rules
2. Pick one to codify as a failing test
3. Identify one low-risk, high-churn module

**This Month:**
1. Codify all 5 rules
2. Harden tests on one module to contract-test level
3. Run one LLM regeneration experiment

**This Quarter:**
1. Establish CLEAR patterns in 3-5 modules
2. Measure velocity gains vs baseline
3. Refine the constraint manifest based on learnings

**The Shift:**
You're not abandoning DDD, Clean Architecture, or SOLID. You're **making them LLM-compatible**. Your existing discipline becomes the foundation that enables safe automation.

CLEAR is how you keep architectural integrity while gaining 3-5x velocity on the 60-80% of code that doesn't require human insight.

**Your methodology + CLEAR = Sustainable velocity in the LLM era.**
---

**CLEAR.md** (new file, 20 lines)
# CLEAR: Design Principles for the LLM Era

**C - Constrained**
Make invariants, rules, and constraints explicit. Tribal knowledge, unwritten conventions, and "we just don't do that" must be codified in types, contracts, tests, and documentation. LLMs can't infer what you never wrote down.

**L - Limited**
Design bounded autonomous zones with contained blast radius. Each module is a workspace where an LLM can operate freely without risking the broader system. Clear boundaries enable safe delegation.

**E - Ephemeral**
Optimize for regeneration over preservation. Distinguish canonical code (precious, hand-crafted) from derived code (disposable, LLM-generated). Make it obvious what can be rewritten from specs versus what must be maintained.

**A - Assertive**
Build observable correctness through runtime verification, property tests, and comprehensive type systems. When an LLM makes changes, you need immediate, mechanical feedback—not manual inspection.

**R - Reality-Aligned**
Align code structure tightly with domain reality and intent. Misalignment between your model and reality gets amplified catastrophically. Invest in ubiquitous language and domain modeling—the clearer your concepts, the better the LLM performs.

---

**CLEAR code is safe to generate, easy to verify, and hard to break.**
---

**INDUSTRY.md** (new file, 18 lines)
Looking across major software design methodologies (DDD, MVC, Clean Architecture, Hexagonal Architecture, Layered Architecture, Microservices, SOLID principles, etc.), here are 5 core patterns that emerge consistently:

## 1. **Separation of Concerns**
Every methodology divides systems into distinct parts with specific responsibilities. MVC splits presentation/logic/data, DDD separates domain/infrastructure/application, Clean Architecture uses concentric layers. The fundamental insight: different aspects of a problem should be isolated from each other.

## 2. **Dependency Management & Direction**
Methodologies establish rules for what can depend on what. Clean Architecture and Hexagonal Architecture enforce dependencies pointing inward toward business logic. DDD protects the domain from infrastructure concerns. The Dependency Inversion Principle (from SOLID) makes this explicit. The pattern: control the flow of dependencies to protect core logic from volatile details.

## 3. **Abstraction at Boundaries**
Where different concerns meet, methodologies introduce abstractions (interfaces, ports, contracts, facades). MVC uses controller abstractions, Hexagonal Architecture uses ports, DDD uses repositories and services. This creates flexibility and testability by decoupling implementations from contracts.

## 4. **Single Responsibility**
Each unit (class, module, service, layer) should have one reason to change. This appears in SOLID's Single Responsibility Principle, in MVC's role separation, in microservices' bounded contexts, and in DDD's aggregate boundaries. The insight: focused components are easier to understand, test, and modify.

## 5. **Cohesion & Modularity**
Related functionality clusters together while unrelated functionality stays separate. DDD groups by business capabilities (bounded contexts), microservices by business domains, Component-Based Architecture by features. High cohesion within modules, loose coupling between them—this pattern enables independent development, testing, and deployment.

**The meta-pattern**: All methodologies are strategies for managing complexity through organized thinking about boundaries, responsibilities, and relationships between parts of a system.
---

**LLM-SHIFT.md** (new file, 63 lines)
# The LLM Shift: From Comprehension Limits to Intention Clarity

## What Fundamentally Changes

**1. Cognitive Load → Verification Load**
LLMs can hold 100k+ tokens in context and traverse codebases instantly. The bottleneck shifts from "Can I understand this?" to "Can I verify this is correct?" Design now optimizes for **inspectability** and **testability** over raw simplicity.

*New pattern:* Boundaries become verification checkpoints. Each module needs clear invariants and contracts that can be mechanically verified, not just conceptually understood.

**2. Minimize Coupling → Maximize Autonomy**
When an LLM agent can modify code, loose coupling becomes existential. A poorly bounded module lets the LLM cascade changes across the system. Design for **blast radius containment**.

*New pattern:* Modules become "agentic workspaces" - zones where an LLM can operate freely without risking the broader system. Tests act as guard rails, not just safety nets.

**3. Explicit Dependencies → Explicit Invariants**
LLMs can trace dependencies easily. What they can't infer are the "why" and the constraints. The critical information shifts from "what depends on what" to "what must never break."

*New pattern:* Property-based tests, design-by-contract, and formal invariants become first-class citizens. Document **constraints**, not structure.
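Design-by-contract can be as small as a decorator that turns a precondition into a runtime check. A hand-rolled sketch (libraries such as `icontract` offer a richer version of the same idea; `withdraw` is a stand-in domain operation):

```python
import functools

def require(predicate, message):
    """Precondition decorator: fail fast when a caller violates an invariant."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError(f"precondition violated: {message}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@require(lambda amount: amount > 0, "amount must be positive")
def withdraw(amount: int) -> int:
    # The contract is now machine-checked at every call site,
    # not a comment an LLM is free to ignore.
    return amount
```

An LLM that regenerates `withdraw` inherits the check; violating the invariant produces an immediate, attributable failure rather than silent drift.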
## What Intensifies

**4. Optimize for Deletion → Optimize for Regeneration**
If an LLM can rewrite a module from specs, deletion is cheap. But so is generation. The new risk: **code obesity** - systems that balloon because generation is easier than curation.

*New pattern:* Design for ephemerality. Distinguish "canonical" (hand-crafted, precious) from "derived" (LLM-generated, disposable). Make derived code obvious and replaceable.

**5. Align with Reality → Align with Intent**
LLMs amplify misalignment catastrophically. If your domain model doesn't match reality, the LLM will generate mountains of wrong-but-plausible code.

*New pattern:* Invest heavily in **ubiquitous language** and **domain modeling**. The clearer your domain concepts, the better the LLM performs. DDD becomes more important, not less.

## The New Forces

**6. Make the Implicit Explicit**
Humans rely on context and intuition. LLMs need it spelled out. Tribal knowledge, unwritten rules, "we just don't do that here" - these must be codified.

*New tooling:* Architectural Decision Records (ADRs), explicit design constraints, linting rules that encode team wisdom, comprehensive example-driven documentation.

**7. Design for Observable Correctness**
When an LLM makes a change, you need to know instantly if it broke something. The faster the feedback, the more autonomy you can grant.

*New investment:* Type systems, contract tests, property tests, mutation testing, runtime assertions. Make correctness *visible*, not just possible.

---

## The Principal's New Stance

**You become a curator of constraints, not complexity.**

Your role shifts from "help humans navigate complexity" to "define safe operating boundaries for human-LLM teams."

- **Less time:** Explaining how the system works (LLM does this)
- **More time:** Defining what the system must never do (LLM needs this)

- **Less time:** Refactoring for readability (LLM reads anything)
- **More time:** Architecting for verifiability (LLM changes must be checkable)

- **Less time:** Documentation of implementation (LLM can traverse)
- **More time:** Documentation of intent and constraints (LLM can't infer)

**The paradox:** LLMs make bad design more *tolerable* in the short term (they can navigate spaghetti) but more *catastrophic* in the long term (they can generate more spaghetti faster than humans can audit).

The answer isn't to abandon design principles - it's to **double down on the ones LLMs can't compensate for**: clear boundaries, explicit invariants, comprehensive verification, and ruthless curation.
---

**OPPORTUNITY.md** (new file, 116 lines)
Several emerging patterns and
|
||||
methodologies are gaining significant
|
||||
momentum in 2024-2025 that strongly
|
||||
align with CLEAR's principles. Here's
|
||||
what's happening:
|
||||
|
||||
## 1. **Specification-Driven Development (SDD)** ✨ CLOSEST MATCH
|
||||
|
||||
**What it is:** Development methodology where detailed specifications serve as the foundation for automated code generation, with AI agents expanding high-level requirements into structured specs that drive implementation [Augment Code](https://www.augmentcode.com/guides/mastering-spec-driven-development-with-prompted-ai-workflows-a-step-by-step-implementation-guide) [Medium](https://noailabs.medium.com/specification-driven-development-sdd-66a14368f9d6) .
|
||||
|
||||
**CLEAR alignment:**
|
||||
- **Constrained:** Specs encode requirements as machine-readable contracts
|
||||
- **Ephemeral:** Code treated as derived artifact from specifications, stored in version control as source of truth [SoftwareSeni](https://www.softwareseni.com/spec-driven-development-in-2025-the-complete-guide-to-using-ai-to-write-production-code/)
|
||||
- **Reality-Aligned:** Forces explicit domain modeling before generation
|
||||
|
||||
**Adoption:** GitHub's Spec Kit is the open-source reference implementation; GitHub Copilot now supports AGENTS.md files to guide AI behavior [SoftwareSeni](https://www.softwareseni.com/spec-driven-development-in-2025-the-complete-guide-to-using-ai-to-write-production-code/) . Major platforms (Cursor, Windsurf, Claude Code) are building SDD workflows.
|
||||
|
||||
**Key Quote:** "Modern SDD relies on living, version-controlled markdown files that act as a 'single source of truth' for both human developers and their AI partners" [Medium](https://noailabs.medium.com/specification-driven-development-sdd-66a14368f9d6)
|
||||
|
||||
---
|
||||
|
||||
## 2. **Architecture Decision Records (ADRs) with LLM Integration**
|
||||
|
||||
**What it is:** Formal documentation of architectural decisions that can be consumed by LLMs as constraints, with fitness functions that validate code against documented decisions [GitHub](https://github.com/joelparkerhenderson/architecture-decision-record) [Equal Experts](https://www.equalexperts.com/blog/our-thinking/accelerating-architectural-decision-records-adrs-with-generative-ai/) .
|
||||
|
||||
**CLEAR alignment:**
|
||||
- **Constrained:** Teams embed rules directly into prompts and use guardrails like "References MUST exist" [Equal Experts](https://www.equalexperts.com/blog/our-thinking/accelerating-architectural-decision-records-adrs-with-generative-ai/)
|
||||
- **Reality-Aligned:** ADRs capture the "why" behind decisions
|
||||
|
||||
**Momentum:** Featured in Azure Well-Architected Framework (October 2024), with growing LLM tooling for automated ADR generation and validation [Architectural Decision Records](https://adr.github.io/) .
|
||||
|
||||
---
|
||||
|
||||
## 3. **Contract Testing & Property-Based Testing Renaissance**
|
||||
|
||||
**What it is:** Testing approach that verifies agreements between services, with 42% of IT professionals at large organizations actively deploying AI requiring automated testing to keep pace with AI-assisted code generation [HyperTest](https://www.hypertest.co/contract-testing/best-api-contract-testing-tools) .
|
||||
|
||||
**CLEAR alignment:**
|
||||
- **Assertive:** Contract tests become the verification mechanism
|
||||
- **Limited:** Contracts define safe module boundaries
|
||||
|
||||
**Growth:** Integration bugs discovered in production cost organizations an average of $8.2 million annually; contract testing reduces debugging time by up to 70% [HyperTest](https://www.hypertest.co/contract-testing/best-api-contract-testing-tools) . Tools like Pact, Spring Cloud Contract gaining AI-aware features.
|
||||
|
||||
---
|
||||
|
||||
## 4. **Model Context Protocol (MCP)** 🚀 EXPLOSIVE GROWTH
|
||||
|
||||
**What it is:** Open standard introduced by Anthropic in November 2024 for connecting AI systems to external data sources and tools, adopted by OpenAI, Google DeepMind, and thousands of developers [Model Context Protocol](https://modelcontextprotocol.io/specification/2025-11-25) [Wikipedia](https://en.wikipedia.org/wiki/Model_Context_Protocol) .
|
||||
|
||||
**CLEAR alignment:**
|
||||
- **Limited:** Standardizes bounded contexts for AI agents to operate within [Model Context Protocol](https://modelcontextprotocol.io/specification/2025-11-25)
|
||||
- **Constrained:** Protocol requires explicit user consent before tool invocation, with security implications documented [Model Context Protocol](https://modelcontextprotocol.io/specification/2025-11-25)
|
||||
|
||||
**Adoption metrics:** Over 97 million monthly SDK downloads, 10,000+ active servers, donated to Linux Foundation's Agentic AI Foundation in December 2025 [Modelcontextprotocol](http://blog.modelcontextprotocol.io/) [Gupta Deepak](https://guptadeepak.com/the-complete-guide-to-model-context-protocol-mcp-enterprise-adoption-market-trends-and-implementation-strategies/) .
|
||||
|
||||
**Why it matters for CLEAR:** MCP provides the infrastructure layer for bounded autonomous zones. Each MCP server is effectively a "workspace" where agents can operate safely.

---

## 5. **Architectural Testing Tools** (ArchUnit, TS-Arch)

**What it is:** Libraries that check architecture rules as automated tests—validating dependencies, layer boundaries, and design patterns in plain unit test frameworks [GitHub](https://github.com/joelparkerhenderson/architecture-decision-record).

**CLEAR alignment:**

- **Constrained:** Makes implicit rules explicit and mechanical
- **Assertive:** Architecture becomes testable

**Trend:** Growing adoption alongside AI coding tools as teams need automated enforcement of design principles.
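In the spirit of ArchUnit, a layering rule can be written as a plain test. The hedged sketch below uses only the standard `ast` module and scans source text directly; a real suite would walk the project tree, and the module names are invented:

```python
import ast

# A tiny ArchUnit-style rule as a plain test: "domain modules must not
# import infrastructure."

def forbidden_imports(source: str, forbidden_prefix: str) -> list[str]:
    """Return imported module names that violate the layering rule."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names if a.name.startswith(forbidden_prefix)]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.startswith(forbidden_prefix):
                hits.append(node.module)
    return hits

clean_entity = "from dataclasses import dataclass\n"
dirty_entity = "from infrastructure.db import Session\n"

assert forbidden_imports(clean_entity, "infrastructure") == []
assert forbidden_imports(dirty_entity, "infrastructure") == ["infrastructure.db"]
```

The point is mechanical enforcement: an LLM-generated change that reaches across the layer boundary fails the build rather than relying on a reviewer to notice.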

---

## 6. **Agentic AI with Guardrails**

**What it is:** Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI (up from less than 1% in 2024), with emphasis on human oversight and guardrails [QualiZeal](https://qualizeal.com/the-rise-of-agentic-ai-transforming-software-testing-in-2025-and-beyond/) [Tricentis](https://www.tricentis.com/blog/5-ai-trends-shaping-software-testing-in-2025).

**CLEAR alignment:**

- **Limited:** Blast radius containment
- **Constrained:** Explicit permission boundaries
- **Assertive:** Quality gates and validation

**Key insight:** Regardless of how autonomous AI becomes, a certain level of human oversight will always be required [Tricentis](https://www.tricentis.com/blog/5-ai-trends-shaping-software-testing-in-2025).
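A minimal sketch of the guardrail idea—gate names and the change format are invented, not any real tool's API. Every agent-proposed change must pass all gates before it is applied; failures escalate to a human instead of being auto-applied:

```python
# Guardrail pipeline: gates encode Limited (blast radius) and Assertive
# (validation) checks; escalation preserves human oversight.

def gate_max_diff_size(change):   # Limited: contain the blast radius
    return len(change["diff_lines"]) <= 200

def gate_tests_pass(change):      # Assertive: validation before merge
    return change["tests_passed"]

GATES = [gate_max_diff_size, gate_tests_pass]

def apply_with_guardrails(change, apply_fn, escalate_fn):
    failed = [g.__name__ for g in GATES if not g(change)]
    if failed:
        escalate_fn(change, failed)  # human oversight: review, don't auto-apply
        return False
    apply_fn(change)
    return True

log = []
ok = apply_with_guardrails(
    {"diff_lines": ["+x = 1"] * 500, "tests_passed": True},
    apply_fn=lambda c: log.append("applied"),
    escalate_fn=lambda c, failed: log.append(("escalated", failed)),
)
assert ok is False and log == [("escalated", ["gate_max_diff_size"])]
```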

---

## 7. **Requirements-First AI Development**

**What it is:** Growing recognition that 70% of software projects fail due to requirements issues, with increased investment in capturing and refining requirements before AI code generation [The New Stack](https://thenewstack.io/in-2025-llms-will-be-the-secret-sauce-in-software-development/).

**CLEAR alignment:**

- **Reality-Aligned:** Domain models that precisely capture reality enable LLMs to generate correct implementations; fuzzy models produce plausible-but-wrong code [The New Stack](https://thenewstack.io/in-2025-llms-will-be-the-secret-sauce-in-software-development/)
- **Constrained:** Detailed requirements become constraints
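One way to see "requirements become constraints": a requirement stated precisely enough turns into executable checks that any generated implementation must pass. The requirement text and refund rule below are invented for illustration:

```python
# Requirement R-17 (invented): "A refund may not exceed the original
# charge, and refunds on charges older than 90 days require manual review."

def check_refund(charge_cents, refund_cents, age_days):
    if refund_cents > charge_cents:
        return "reject"
    if age_days > 90:
        return "manual_review"
    return "approve"

# The requirement restated as property checks an LLM implementation must pass:
assert check_refund(1000, 1500, 10) == "reject"         # never exceed the charge
assert check_refund(1000, 500, 120) == "manual_review"  # stale charges escalate
assert check_refund(1000, 500, 10) == "approve"
```

A fuzzy requirement ("handle refunds sensibly") gives the model nothing to be constrained by; the executable form makes plausible-but-wrong output fail loudly.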

---

## What's Missing (Opportunity for CLEAR)

While these patterns are emerging, **there's no unified framework that synthesizes them**. Teams are:

- Using ADRs for some decisions
- Experimenting with contract testing
- Trying spec-driven approaches
- Setting up MCP servers

But they lack **a coherent design philosophy** that explains how these pieces fit together in the LLM era.

**CLEAR's advantage:** It provides the conceptual model that unifies these disparate practices into a cohesive methodology. It's not competing with these patterns—it's the **meta-framework** that explains why they all matter and how to use them together.

---

## The Gap CLEAR Fills

- **Current state:** Tactical adoption of individual tools
- **What's needed:** A strategic framework for LLM-era architecture

CLEAR could become the "Agile Manifesto" moment for AI-augmented development—a clear set of principles that practitioners can rally around, with existing tools and patterns as the implementation layer.

**Next step:** Position CLEAR as the unifying philosophy behind these emerging patterns, similar to how DDD unified tactical patterns (repositories, aggregates) under strategic design principles.
24
SYNERGY.md
Normal file
@@ -0,0 +1,24 @@

# Core Tenets of Software Design

**Software design is the art of managing complexity through deliberate structure.**

## The Five Forces

**1. Minimize Cognitive Load**

Code is read 10x more than it is written. Design for human comprehension first, machine execution second. If a system requires holding more than 7±2 concepts in working memory to understand, it's too complex.

**2. Isolate What Changes from What Stays Stable**

Protect the stable core from the volatile periphery. Dependencies flow toward stability. Change is inevitable—design for it by identifying and encapsulating variation points.
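A small sketch of an encapsulated variation point, assuming an invented pricing example: the stable checkout core depends only on the shape of a policy, so volatile policies swap without touching it:

```python
from typing import Callable

# Stable core: the checkout flow. It depends only on the *shape* of a
# pricing policy (a callable from price to price), never on a concrete one.
def checkout(items: list[int], pricing_policy: Callable[[int], int]) -> int:
    return sum(pricing_policy(price) for price in items)

# Volatile periphery: policies change with the business; the core does not.
full_price = lambda p: p
holiday_sale = lambda p: p * 80 // 100   # 20% off, in integer cents

assert checkout([1000, 500], full_price) == 1500
assert checkout([1000, 500], holiday_sale) == 1200
```

The dependency points toward the stable abstraction: adding a third policy requires no change to `checkout`.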

**3. Make Dependencies Explicit and Intentional**

Hidden coupling kills systems slowly. Name dependencies, make them visible, control their direction. If you can't draw the dependency graph, you can't reason about the system.
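The claim is mechanically checkable: if dependencies are declared explicitly, the graph can be drawn—and cycles, the worst form of hidden coupling, can be detected automatically. Module names below are invented:

```python
# Explicit dependency declarations: module -> set of modules it depends on.
DEPS = {
    "billing": {"invoicing"},
    "invoicing": {"shared"},
    "shared": set(),
}

def find_cycle(deps):
    """Return one dependency cycle as a list of modules, or None."""
    visiting, done = set(), set()

    def visit(node, path):
        if node in visiting:
            return path[path.index(node):]  # cycle found
        if node in done:
            return None
        visiting.add(node)
        for dep in deps.get(node, ()):
            cycle = visit(dep, path + [dep])
            if cycle:
                return cycle
        visiting.discard(node)
        done.add(node)
        return None

    for module in deps:
        cycle = visit(module, [module])
        if cycle:
            return cycle
    return None

assert find_cycle(DEPS) is None
DEPS["shared"].add("billing")        # introduce hidden coupling
assert find_cycle(DEPS) is not None  # the explicit graph makes it visible
```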

**4. Optimize for Deletion**

The best code is code you don't have to write or maintain. Boundaries enable removal. Modularity enables replacement. Design so that any piece can be deleted without cascading changes.

**5. Align Structure with Reality**

Code structure should mirror the problem domain and team structure (Conway's Law). Misalignment creates friction. When structure matches reality, the right changes become easy and the wrong ones become hard.

---

**The Principal's Stance:** You don't enforce methodology—you cultivate understanding of forces. Show the pain of ignoring them. Make the invisible visible. Teach teams to see coupling, complexity, and change. Empower them to choose their own constraints wisely.