mirror of
https://github.com/anthropics/claude-plugins-official.git
synced 2026-05-13 06:55:53 -03:00
Fixes found by running the discovery workflow against the AWS CardDemo mainframe sample (~50 KLOC of COBOL/CICS/JCL/BMS/VSAM):

- modernize-assess: add scc -> cloc -> find/wc fallback chain with the COCOMO-II formula so Step 1 works when scc isn't installed; same for portfolio-mode cloc/lizard. Drop the reference to a specific agent-spawning tool name (just "in parallel"). Sharpen the structural-map subagent prompt: 5-12 domains, subgraph clustering, ~40-edge cap, repo-relative paths, dangling-reference check.
- modernize-map: expand the parse-target list with the things a literal-minded reader would miss on a real mainframe codebase — CICS CSD DEFINE TRANSACTION/FILE for entry points and online file I/O, EXEC CICS file ops, SELECT...ASSIGN TO joined with JCL DD, EXEC SQL table refs (not JCL DD), SEND/RECEIVE MAP, dynamic data-name XCTL resolution, COBOL fixed-format column slicing. Without these the dead-code list is wrong (most CICS programs look unreachable). Also write a machine-readable topology.json alongside the summary.
- modernize-extract-rules: add a Priority (P0/P1/P2) field with a heuristic, and an optional Suspected-defect field. modernize-brief reads P0 rules to build the behavior contract, but the Rule Card had no priority slot — the chain was broken.
- modernize-brief: read the new P0 tags; flag low-confidence P0 rules as SME blockers.
- modernize-reimagine: drop "for the demo" wording.
- security-auditor agent: add mainframe/COBOL coverage items (RACF, JCL/PROC creds, BMS field validation, DB2 dynamic SQL, copybook PII) and mark web-only items as such so it adapts to the target stack.
- README: add Optional Tooling section and a symlink example for the expected layout.
84 lines
3.7 KiB
Markdown
---
description: Multi-agent greenfield rebuild — extract specs from legacy, design AI-native, scaffold & validate with HITL
argument-hint: <system-dir> <target-vision>
---

**Reimagine** `legacy/$1` as: $2

This is not a port — it's a rebuild from extracted intent. The legacy system
becomes the *specification source*, not the structural template. This command
orchestrates a multi-agent team with explicit human checkpoints.

## Phase A — Specification mining (parallel agents)

Spawn concurrently and show the user that all three are running:

1. **business-rules-extractor** — "Extract every business rule from legacy/$1
   into Given/When/Then form. Output to a structured list I can parse."

2. **legacy-analyst** — "Catalog every external interface of legacy/$1:
   inbound (screens, APIs, batch triggers, queues) and outbound (reports,
   files, downstream calls, DB writes). For each: name, direction, payload
   shape, frequency/SLA if discernible."

3. **legacy-analyst** — "Identify the core domain entities in legacy/$1 and
   their relationships. Return as an entity list + Mermaid erDiagram."

Collect results. Write `analysis/$1/AI_NATIVE_SPEC.md` containing:

- **Capabilities** (what the system must do — derived from rules + interfaces)
- **Domain Model** (entities + erDiagram)
- **Interface Contracts** (each external interface as an OpenAPI fragment or
  AsyncAPI fragment)
- **Non-functional requirements** inferred from legacy (batch windows, volumes)
- **Behavior Contract** (the Given/When/Then rules — these are the acceptance tests)

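To make the Behavior Contract concrete, here is a minimal sketch of how one extracted Given/When/Then rule could become an executable acceptance test. Everything here — the rule ID BR-017, the `Account` model, the field names — is a hypothetical illustration, not output from any real extraction run:

```python
# Hypothetical rule BR-017: "Given an account with a past-due balance,
# When a payment covering the balance posts, Then the account status
# becomes CURRENT." All names below are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Account:
    balance_due: float
    status: str = "PAST_DUE"

    def post_payment(self, amount: float) -> None:
        # Reduce the outstanding balance; clear past-due status once paid off.
        self.balance_due -= amount
        if self.balance_due <= 0:
            self.status = "CURRENT"


def test_br_017_payment_clears_past_due_status():
    # Given: an account with a past-due balance
    account = Account(balance_due=125.00)
    # When: a payment covering the balance posts
    account.post_payment(125.00)
    # Then: the account status becomes CURRENT
    assert account.status == "CURRENT"
```

Because each rule carries an ID, the test name traces straight back to the spec entry it validates.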
## Phase B — HITL checkpoint #1

Present the spec summary. Ask the user **one focused question**: "Which of
these capabilities are P0 for the reimagined system, and are there any we
should deliberately drop?" Wait for the answer. Record it in the spec.

## Phase C — Architecture (single agent, then critique)

Design the target architecture for "$2":

- Mermaid C4 Container diagram
- Service boundaries with rationale (which rules/entities live where)
- Technology choices with one-line justification each
- Data migration approach from legacy stores

Then spawn **architecture-critic**: "Review this proposed architecture for
$2 against the spec in analysis/$1/AI_NATIVE_SPEC.md. Identify over-engineering,
missed requirements, scaling risks, and simpler alternatives." Incorporate
the critique. Write the result to `analysis/$1/REIMAGINED_ARCHITECTURE.md`.

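For orientation, a Mermaid C4 Container diagram might look like the sketch below. All container names, technologies, and relationships are placeholders, not recommendations for any particular target:

```mermaid
C4Container
    title Container diagram (illustrative sketch only)
    Person(user, "End user")
    Container(web, "Web app", "TypeScript", "Customer-facing UI")
    Container(api, "Domain API", "Go", "Owns the core business rules")
    ContainerDb(db, "Operational store", "PostgreSQL", "Replaces legacy VSAM files")
    Rel(user, web, "Uses")
    Rel(web, api, "Calls", "JSON/HTTPS")
    Rel(api, db, "Reads/writes")
```
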
## Phase D — HITL checkpoint #2

Enter plan mode. Present the architecture. Wait for approval.

## Phase E — Parallel scaffolding

For each service in the approved architecture (cap at 3 to keep the run
tractable; tell the user which you deferred), spawn a **general-purpose agent
in parallel**:

"Scaffold the <service-name> service per analysis/$1/REIMAGINED_ARCHITECTURE.md
and AI_NATIVE_SPEC.md. Create: project skeleton, domain model, API stubs
matching the interface contracts, and **executable acceptance tests** for every
behavior-contract rule assigned to this service (mark unimplemented ones as
expected-failure/skip with the rule ID). Write to modernized/$1-reimagined/<service-name>/."

Show the agents' progress. When all complete, run the acceptance test suites
and report: total tests, passing (scaffolded behavior), pending (rule IDs
awaiting implementation).

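The expected-failure/skip convention above can be sketched with the standard library's `unittest` (a minimal illustration; the document doesn't prescribe a test framework, and the rule IDs BR-041/042/043 are invented):

```python
# Hypothetical sketch: scaffolded acceptance tests tag unimplemented
# behavior-contract rules so the suite distinguishes "passing" from
# "pending" without failing the run. Rule IDs are illustrative.
import unittest


class BehaviorContractTests(unittest.TestCase):
    def test_br_041_order_total_includes_tax(self):
        # BR-041 is scaffolded and passing.
        self.assertEqual(round(100.00 * 1.07, 2), 107.00)

    @unittest.expectedFailure
    def test_br_042_refund_restores_inventory(self):
        # BR-042: awaiting implementation; counted as "pending", not a failure.
        raise NotImplementedError("BR-042")

    @unittest.skip("BR-043: depends on downstream billing service")
    def test_br_043_invoice_emitted_on_ship(self):
        pass
```

Running the suite then yields exactly the tallies Phase E reports: tests run, passes, and pending rule IDs (expected failures plus skips).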
## Phase F — Knowledge graph handoff

Write `modernized/$1-reimagined/CLAUDE.md` — the persistent context file for
the new system, containing: architecture summary, service responsibilities,
where the spec lives, how to run tests, and the legacy→modern traceability
map. This file IS the knowledge graph that future agents and engineers will
load.

Report: services scaffolded, acceptance tests defined, % behaviors with a
home, location of all artifacts.
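
As one possible shape (section names and layout are illustrative, not a mandated format), the CLAUDE.md skeleton might look like:

```markdown
# <system> — reimagined from legacy/$1

## Architecture summary
One paragraph, linking to analysis/$1/REIMAGINED_ARCHITECTURE.md.

## Service responsibilities
- <service-name>: owns <entities>; implements behavior-contract rules <rule IDs>

## Where the spec lives
analysis/$1/AI_NATIVE_SPEC.md

## How to run tests
<per-service test command(s)>

## Legacy → modern traceability
| Legacy artifact | Modern home | Rule IDs |
|---|---|---|
| <program / job / screen> | <service / module> | <rule IDs> |
```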