A canonical specification for how ShurIQ workflows are defined, triggered, executed, and audited — synthesized from MindStudio's skill-chaining framework and the orchestration infrastructure already running underneath ShurIQ and Totem Protocol.
ShurIQ has more orchestration infrastructure than MindStudio's article describes — 44 skills, 11 agent specs, 21 capabilities, five state stores, ten MCP servers, twelve wired hook events, two parallel scheduling planes. And yet the system still runs on hand-rolled bash and one-off prompt scripts.
MindStudio's framework names the missing piece in three words: shared state, router, contracts. ShurIQ has shared state (fragmented across five stores), partial output contracts (rigorous in some layers, missing in others), and the router only as a spec — orchestrator-agent.md describes the routing semantics; nothing actually executes them.
This blueprint specifies the router. It introduces a canonical pipeline schema with audience taxonomy as a first-class primitive, a three-component runtime architecture, and a four-phase migration path that uses the urgent fix (a currently-failing scheduled job) as the natural first test case. Roughly two weeks of work.
The article is thin on actual content, but the load-bearing abstraction is clean: it reduces multi-step Claude Code work to a state machine with five primitives.
The stage field serves as the routing signal. As the article puts it: "The output of one skill becomes the structured input to the next — automatically, without human intervention. That's not automation — that's just delegating your manual work to a different layer."
Where MindStudio sits in the middle: they are not the orchestrator. They sell pre-authenticated method calls — 120+ external services with auth, retries, and rate limiting handled. Their wedge collapses the integration layer. The developer still owns the chain logic.
| Primitive | ShurIQ instantiation |
|---|---|
| Capability | 44 skill folders + 11 agent specs + 21 capability specs |
| Composition | totem-spec frontmatter; intelligence-brief 8-phase pipeline; SBPI 9-phase nightly pipeline |
| Router | Spec only. No runtime executor exists. |
| State | D1 + Oxigraph (76k+ triples) + InfraNodus + vault frontmatter + Letta/OpenMemory |
| Contract | grammar.ts (19-section TS schema) + sbpi.ttl OWL + SHACL shapes + frontmatter schemas |
| Method calls | InfraNodus, DEVONthink, OpenMemory, Slack, GitHub, Cloudflare, Fathom, NotebookLM, gws-cli, Rube/Composio |
| Trigger | 5 launchd plists + scheduler plugin + 12 hook events + manual /skill |
The infrastructure exists. The pieces work in isolation. Chains exist (intelligence-brief, SBPI nightly), but each is hand-rolled rather than produced from a shared abstraction. What's missing is the single coherent execution layer that binds these primitives into auditable, repeatable chains.
Five state stores exist. Conflict-resolution order is documented (vault > letta > infranodus > claude_memory > git) as a convention in CLAUDE.md. No runtime enforces it. A pipeline that wants to know "what's the current rubric version for this client" has to read four places.
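A minimal sketch of what runtime enforcement of that precedence could look like. The dict-backed store interface and the `resolve` helper are illustrative assumptions, not the real store APIs; only the precedence order itself comes from CLAUDE.md.

```python
# Documented conflict-resolution order from CLAUDE.md; everything else
# here (dict-backed stores, the resolve() helper) is a hypothetical sketch.
PRECEDENCE = ["vault", "letta", "infranodus", "claude_memory", "git"]

def resolve(key, stores):
    """Return (store_name, value) from the highest-precedence store holding key."""
    for name in PRECEDENCE:
        if key in stores.get(name, {}):
            return name, stores[name][key]
    raise KeyError(f"{key!r} not found in any state store")

stores = {
    "git": {"rubric_version": "sas-microco@0.1"},
    "vault": {"rubric_version": "sas-microco@0.2"},
}
print(resolve("rubric_version", stores))  # vault wins over git
```

With a resolver like this, "what's the current rubric version for this client" becomes one call instead of four reads.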
The Orchestrator Agent exists as a markdown spec describing routing semantics. There is no headless runtime that loads a pipeline definition, maintains a stage field, and drives state transitions. The scheduler.py plugin runs single claude -p invocations; it does not chain them. The intelligence-brief and SBPI pipelines are imperative scripts, not state-machine driven.
Rigorous contracts in some layers (OWL/SHACL for RDF, grammar.ts for reports, frontmatter schemas for artifacts). Absent in others — skill-to-skill handoff, schedule entry → output, agent-to-agent calls. Skills don't declare their output schema; the next step has to re-parse.
A new artifact type extending the existing frontmatter conventions. Declares: trigger, audience, preconditions, state bindings, steps, output contracts. Below is the working draft for the Stack Rank Weekly pipeline — chosen because it's also the urgent fix.
```yaml
type: shuriq_pipeline
pipeline_id: stack-rank-weekly-microco
version: 0.1.0

# AUDIENCE — replaces ad-hoc internal/external split
audience:
  orientation: external        # external | internal | hybrid
  stakeholders:
    - engagement_cycle: "MICRO-2026-Q2"
    - subscription_tier: "shur-iq-vertical-microco"
  visibility: subscriber       # public | subscriber | client | internal

# TRIGGER — what starts this pipeline
trigger:
  kind: cron                   # cron | hook | agent-call | manual | webhook
  schedule: "0 10 * * 0"       # Sunday 10 AM
  fallback: manual

# PRECONDITIONS — gates before pipeline runs
preconditions:
  - rubric_version: "sas-microco@>=0.2"
  - ontology_graph: "urn:shur:vertical:microco:v0.2"
  - consensus_floor: 0.40
  - mcp_servers: [infranodus, openmemory]
  - auth_status: [wrangler, gws, slack]

# STATE BINDING — explicit reads and writes
state:
  read:
    - oxigraph: "urn:shur:vertical:microco:*"
    - infranodus: "microco-mi-*"
  write:
    - oxigraph: "urn:shur:report:R-microco-{date}-stack-rank"
    - d1: "shuriq.reports"
    - cloudflare_pages: "shuriq-microco-stack-rank"
    - slack: "C09KJN28399"

# STEPS — capabilities composed into a chain
steps:
  - id: harvest
    capability: skill:scout
    inputs: { vertical: microco, since_days: 7 }
    output_contract: { gaps: list, signals: list }
  - id: score
    capability: agent:Intelligence
    inputs: { gaps: $harvest.gaps, signals: $harvest.signals }
    output_contract: { score_records: list[sbpi:ScoreRecord] }
  - id: validate
    capability: shacl_check
    inputs: { triples: $score.score_records }
    on_fail: halt
  - id: render
    capability: skill:intelligence-brief
    inputs: { score_records: $score.score_records, archetype: editorial-brief }
  - id: deploy
    capability: skill:publish-site
  - id: notify
    capability: skill:publish-to-slack
    inputs: { url: $deploy.url, channel: C09KJN28399 }
```
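The `$harvest.gaps`-style references in the step inputs above could be resolved against prior step outputs roughly like this. The `resolve_inputs` helper and its signature are a sketch, not the compiler's actual API; the step and field names mirror the draft pipeline.

```python
import re

# Hypothetical sketch: resolve "$step.field" input references against the
# recorded outputs of earlier steps in the chain.
REF = re.compile(r"^\$(\w+)\.(\w+)$")

def resolve_inputs(inputs, outputs):
    """Replace $step.field strings with the matching earlier step's output."""
    resolved = {}
    for key, value in inputs.items():
        m = REF.match(value) if isinstance(value, str) else None
        if m:
            step_id, field = m.groups()
            resolved[key] = outputs[step_id][field]
        else:
            resolved[key] = value  # literal values pass through unchanged
    return resolved

outputs = {"harvest": {"gaps": ["g1"], "signals": ["s1"]}}
print(resolve_inputs({"gaps": "$harvest.gaps", "signals": "$harvest.signals"}, outputs))
```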
Capability references — skill:, agent:, MCP tool, headless claude -p — are all addressable through one interface. The compiler is ~200 LOC of Python: it reads pipeline YAML or markdown frontmatter, resolves capability references, validates preconditions and contracts against the schema, and emits a runnable plan as JSON. It lives at system/agents/claude/pipeline-runtime/compiler.py.
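A minimal sketch of the compiler pass, assuming the pipeline definition has already been parsed (in practice from YAML or frontmatter): check that each step's inputs reference only earlier steps, then emit an ordered JSON plan. `compile_pipeline` and the plan shape are illustrative, not the real compiler.

```python
import json

# Hypothetical compiler sketch: validate step ordering and emit a JSON plan.
def compile_pipeline(defn):
    seen, plan = set(), []
    for step in defn["steps"]:
        # A $step.field input may only reference a step already compiled.
        for value in step.get("inputs", {}).values():
            if isinstance(value, str) and value.startswith("$"):
                ref = value[1:].split(".")[0]
                if ref not in seen:
                    raise ValueError(f"step {step['id']} references unknown step {ref}")
        seen.add(step["id"])
        plan.append({"id": step["id"], "capability": step["capability"]})
    return json.dumps({"pipeline_id": defn["pipeline_id"], "plan": plan}, indent=2)

defn = {
    "pipeline_id": "stack-rank-weekly-microco",
    "steps": [
        {"id": "harvest", "capability": "skill:scout"},
        {"id": "score", "capability": "agent:Intelligence",
         "inputs": {"gaps": "$harvest.gaps"}},
    ],
}
print(compile_pipeline(defn))
```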
A headless Claude Code instance built as an Agent SDK app. It loads the compiled plan, drives stage transitions, maintains a per-run state document (a D1 row plus an Oxigraph named graph), and handles step timeouts, retries, and partial-failure recovery. It replaces the bare claude -p calls in the scheduler.py wrappers with structured invocations. Scaffolded via the agent-sdk-dev:new-sdk-app skill.
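The executor's core loop could be sketched as below, assuming it's handed the compiled plan. `run_step` and `save_state` are placeholders for the real capability invocation and the D1/Oxigraph state writes; retry caps and the `failed:<step>` stage convention are assumptions.

```python
# Hypothetical executor sketch: drive the stage field through the plan,
# persisting state after every transition and retrying failed steps.
def execute(plan, run_step, save_state, max_retries=2):
    state = {"stage": None, "outputs": {}}
    for step in plan:
        state["stage"] = step["id"]
        save_state(state)  # stand-in for the D1 row / Oxigraph graph write
        for attempt in range(max_retries + 1):
            try:
                state["outputs"][step["id"]] = run_step(step, state["outputs"])
                break
            except Exception:
                if attempt == max_retries:
                    state["stage"] = f"failed:{step['id']}"
                    save_state(state)
                    raise
    state["stage"] = "done"
    save_state(state)
    return state
```

Because the stage is persisted before each step runs, a crashed run can be inspected (and in principle resumed) from its last recorded stage.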
A single YAML file at system/agents/claude/triggers.yaml lists every cron, hook, and webhook together with the pipeline it fires. The launchd plists, scheduler/registry.json, and .claude/settings.json hooks are all generated from this file, eliminating the current four-place trigger sprawl.
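One piece of that generation step can be sketched concretely: translating a cron field from the registry into a launchd StartCalendarInterval dict. This handles only the simple numeric cron forms the current schedules use; the triggers.yaml layout itself is an assumption.

```python
# Hypothetical sketch: map a simple numeric cron expression to the
# StartCalendarInterval keys launchd expects in a plist.
def cron_to_launchd(cron):
    minute, hour, dom, month, dow = cron.split()
    interval = {}
    for key, field in [("Minute", minute), ("Hour", hour), ("Day", dom),
                       ("Month", month), ("Weekday", dow)]:
        if field != "*":
            interval[key] = int(field)  # ranges/steps not handled in this sketch
    return interval

print(cron_to_launchd("0 10 * * 0"))  # {'Minute': 0, 'Hour': 10, 'Weekday': 0}
```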
claude-office-hook stays — it becomes the trigger transport for hook-kind pipelines. Memory layers stay; pipelines bind to them via the state block.
| Pipeline | Audience | Trigger | Existing scaffolding |
|---|---|---|---|
| Stack Rank Weekly Report | external — subscribers | cron Sun 10:00 | scheduler/wrappers/sbpi-weekly-report.sh ⚠ in error |
| Stack Rank Nightly AutoResearch | internal — KG enrichment | cron 06:13 | scheduler/wrappers/sbpi-nightly-insights.sh ⚠ in error |
| Daily Insight Report | external — subscribers | cron + hook on new evidence | daily-synthesis skill |
| Client Engagement Studio Run | external — single client | manual intake → SDK loop | app/functions/api/reports/[id]/generate.ts |
| Internal KG Sweep | internal — ontology evolution | cron weekly | mapupdate + ontology-management |
Each has most of the parts. Each is missing the router that ties trigger → preconditions → state → steps → contracts → outputs into one supervised execution. Stack Rank Weekly is both the urgent fix and the natural first conversion target — its wrapper is in error state and needs work anyway.
MindStudio's middle is integration plumbing. It sells a thicker JSON-RPC across 120+ services. The wedge is auth, retry, and rate-limit logic the developer no longer has to write. Useful — and a problem ShurIQ already solved through MCP, Claude plugins, and gws-cli.
The ShurIQ middle is the Discourse Grammar layer. The OWL/SHACL/SPARQL stack at projects/microco/competitive-intel/semantic-layer/ is the unique IP. Every signal becomes a CLM/EVD/SRC chain. Every score is provenance-tracked. Every claim survives stakeholder challenge. The grammar overlays per audience: legal-canonical for compliance contexts, folksonomy-permissive for community signals.
The pipeline runtime exists to keep that grammar honest at scale. Every artifact written is SHACL-validated. Every score links back to source. Every rubric version is diffable. MindStudio's framework cannot do this because it has no ontology layer.
The runtime is what turns ShurIQ from a collection of well-formed local artifacts into a defensible, queryable, regulatorily-grounded reasoning system that scales without losing provenance.
| Phase | Deliverable | Effort |
|---|---|---|
| 1 · Schema | Pipeline schema added to vault CLAUDE.md as a frontmatter type. This blueprint published as canonical reference. | 0.5 day |
| 2 · First conversion | Stack Rank Weekly migrated to a pipeline definition. Existing claude -p execution retained, but state binding and contracts are introduced. | 1–2 days |
| 3 · Executor | Pipeline Executor built as an Agent SDK app at system/agents/claude/pipeline-runtime/. Scaffolded via agent-sdk-dev:new-sdk-app. Replaces bare claude -p calls. | 3–5 days |
| 4 · Migration | Other four pipelines migrated. Triggers consolidated into triggers.yaml. launchd plists generated from the registry. | ~1 day per pipeline |
Phase 2 ships value before Phase 3 lands — the Stack Rank Weekly fix is real even on the existing claude -p execution path. Phase 3 then turns the rest of the system into a real router.
A daemon means one long-lived process per machine: queue-driven, more responsive to webhook triggers, but with added operational surface. Per-invocation means launchd spawns the executor, it runs the pipeline, and it exits — simpler, and it matches the existing launchd → scheduler.py → claude -p pattern.
Recommendation: per-invocation for v1. Daemon for v2 once webhook triggers exist and warrant it.
The documented order (vault > letta > infranodus > claude_memory > git) is convention. Should the runtime refuse writes that violate hierarchy, or record violations and alert?
Recommendation: enforce on declared state.write paths. Log on ambient reads.
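That recommendation could be enforced with a small gate constructed from the pipeline's declared state.write block: undeclared writes are refused, ambient reads merely logged. The `make_state_gate` helper and its interface are a sketch; the path patterns mirror the draft pipeline's bindings.

```python
import fnmatch
import logging

# Hypothetical sketch of write enforcement against declared state.write paths.
def make_state_gate(declared_writes):
    log = logging.getLogger("state-gate")
    def check_write(store, path):
        patterns = declared_writes.get(store, [])
        if not any(fnmatch.fnmatch(path, p) for p in patterns):
            raise PermissionError(f"undeclared write: {store}:{path}")
    def note_read(store, path):
        log.info("ambient read %s:%s", store, path)  # record, don't refuse
    return check_write, note_read

check_write, note_read = make_state_gate(
    {"oxigraph": ["urn:shur:report:*"], "d1": ["shuriq.reports"]}
)
check_write("oxigraph", "urn:shur:report:R-microco-2026-03-01-stack-rank")  # ok
```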
A pipeline runs against sbpi:ScoreRecord v0.2; the upper ontology bumps to v0.3; old runs need to remain valid.
Recommendation: every step pins contract version. Runtime warns on minor drift, halts on major.
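The drift rule itself is small enough to sketch, assuming contract versions compare semver-style (major mismatch halts, minor mismatch warns). The three-part version format and the `check_drift` name are assumptions.

```python
# Hypothetical sketch of the contract-drift rule: halt on major drift,
# warn on minor drift, pass when pinned and current versions agree.
def check_drift(pinned, current):
    p_major, p_minor, _ = (int(x) for x in pinned.split("."))
    c_major, c_minor, _ = (int(x) for x in current.split("."))
    if c_major != p_major:
        return "halt"
    if c_minor != p_minor:
        return "warn"
    return "ok"

print(check_drift("0.2.0", "0.3.0"))  # warn — the v0.2 → v0.3 bump above
```

Under strict 0.x semver conventions a minor bump can itself be breaking, so a stricter variant might halt on minor drift while major is still 0.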
A pipeline produces both an internal KG enrichment write and a subscriber-visible deliverable.
Recommendation: model as two pipelines linked by triggers: [pipeline:other-id]. Avoid hybrid as a special case; let composition do the work.