Shur Creative Partners — Internal Architecture Document v0.1 · 2026-04-28 · Draft
Totem Framework — System Blueprint

The ShurIQ Pipeline Runtime,
or how we get a Router.

A canonical specification for how ShurIQ workflows are defined, triggered, executed, and audited — synthesized from MindStudio's skill-chaining framework and the orchestration infrastructure already running underneath ShurIQ and Totem Protocol.

Spec ID
shuriq-pipeline-runtime / v0.1.0
Status
Draft (createdByClaude · awaiting review)
Audience
Internal (Shur Creative Partners + Totem)
Reading time
~14 min
Source spec
projects/Totem-framework/SHURIQ-PIPELINE-RUNTIME-BLUEPRINT.md
§1 — Opening

The piece that's missing isn't another integration. It's a router.

ShurIQ has more orchestration infrastructure than MindStudio's article describes — 44 skills, 11 agent specs, 21 capabilities, five state stores, ten MCP servers, twelve wired hook events, two parallel scheduling planes. And yet the system still runs on hand-rolled bash and one-off prompt scripts.

MindStudio's framework names the missing piece in three words: shared state, router, contracts. ShurIQ has shared state (fragmented across five stores), partial output contracts (rigorous in some layers, missing in others), and the router only as a spec — orchestrator-agent.md describes the routing semantics; nothing actually executes them.

This blueprint specifies the router. It introduces a canonical pipeline schema with audience taxonomy as a first-class primitive, a three-component runtime architecture, and a four-phase migration path that uses the urgent fix (a currently-failing scheduled job) as the natural first test case. Roughly two weeks of work.

The thesis in one line: MindStudio sells a thicker JSON-RPC. ShurIQ sells defensible, queryable, regulatorily-grounded reasoning — and the runtime is what keeps that grammar honest at scale.
§2 — Mental Model

MindStudio's framework, stripped to primitives.

The article is light on actual content, but the load-bearing abstraction is clean. It reduces multi-step Claude Code work to a state machine built from six primitives.

| Primitive | MindStudio term | Description |
|---|---|---|
| Capability | Skill | A discrete, reusable unit performing one transformation. Slash command, MCP tool, or script. |
| Composition | Workflow / Chain | A sequence where one capability's output is the next one's input. No human between steps. |
| Router | Orchestrator | Reads state, picks the next capability to run. The state-machine driver. |
| State | Shared JSON | Canonical record both router and capabilities read and write. Carries a stage field as routing signal. |
| Contract | Output schema | Each capability declares the fields its output must include. May add more; cannot omit required. |
| Trigger | Invocation | In MindStudio's article, only manual. The Totem instantiation extends this to cron, hook, agent-call, webhook. |
"The output of one skill becomes the structured input to the next — automatically, without human intervention. That's not automation — that's just delegating your manual work to a different layer."

Where MindStudio sits: in the middle. They are not the orchestrator. They sell pre-authenticated method calls — 120+ external services with auth, retries, and rate limiting handled. Their wedge collapses the integration layer. The developer still owns the chain logic.
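The loop behind these primitives is small enough to sketch. A minimal illustration of the router primitive — a state machine driven off the `stage` field, where each capability's output merges back into shared state. The capability names here are hypothetical, not ShurIQ identifiers:

```python
# Minimal router-as-state-machine sketch. Capability names and the
# stage graph are hypothetical illustrations, not ShurIQ identifiers.

# Each capability reads fields from state and returns fields to merge back.
def harvest(state):
    return {"signals": ["s1", "s2"], "stage": "score"}

def score(state):
    return {"scores": [len(s) for s in state["signals"]], "stage": "done"}

CAPABILITIES = {"harvest": harvest, "score": score}

def route(state):
    """Drive the state machine: the `stage` field is the routing signal."""
    while state["stage"] != "done":
        step = CAPABILITIES[state["stage"]]
        state.update(step(state))   # one step's output is the next's input
    return state

final = route({"stage": "harvest"})
```

The point of the sketch: the router owns no business logic. It only reads `stage`, dispatches, and merges — everything else lives in the capabilities.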

§3 — Mapping

ShurIQ is not analogous to a MindStudio user. It's analogous to MindStudio itself, plus a layer beneath it.

| Primitive | ShurIQ instantiation |
|---|---|
| Capability | 44 skill folders + 11 agent specs + 21 capability specs |
| Composition | totem-spec frontmatter; intelligence-brief 8-phase pipeline; SBPI 9-phase nightly pipeline |
| Router | Spec only. No runtime executor exists. |
| State | D1 + Oxigraph (76k+ triples) + InfraNodus + vault frontmatter + Letta/OpenMemory |
| Contract | grammar.ts (19-section TS schema) + sbpi.ttl OWL + SHACL shapes + frontmatter schemas |
| Method calls | InfraNodus, DEVONthink, OpenMemory, Slack, GitHub, Cloudflare, Fathom, NotebookLM, gws-cli, Rube/Composio |
| Trigger | 5 launchd plists + scheduler plugin + 12 hook events + manual /skill |

The infrastructure exists. The pieces work in isolation. Chains exist (intelligence-brief, SBPI nightly), but each is hand-rolled rather than produced from a shared abstraction. What's missing is the single coherent execution layer that binds these primitives into auditable, repeatable chains.

§4 — Diagnosis

Three required elements; one is missing entirely.

Shared state — partially solved

Five state stores exist. The conflict-resolution order is documented (vault > letta > infranodus > claude_memory > git) as a convention in CLAUDE.md, but no runtime enforces it. A pipeline that wants to know "what's the current rubric version for this client" has to consult every store in order.
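A sketch of what runtime enforcement of that precedence could look like — store lookups are stubbed as dicts here; real bindings would go through D1, Letta, and the rest:

```python
# Sketch: resolve a key across the five state stores in the documented
# precedence order (vault > letta > infranodus > claude_memory > git).
# Stores are stub dicts; real lookups would hit each backend.
PRECEDENCE = ["vault", "letta", "infranodus", "claude_memory", "git"]

def resolve(key, stores):
    for name in PRECEDENCE:
        value = stores.get(name, {}).get(key)
        if value is not None:
            return value, name          # first hit in precedence order wins
    raise KeyError(key)

stores = {
    "git":   {"rubric_version": "sas-microco@0.1"},
    "vault": {"rubric_version": "sas-microco@0.2"},
}
value, source = resolve("rubric_version", stores)   # vault outranks git
```

With the resolver in one place, the answer to "current rubric version" becomes a single call with an auditable source attribution.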

Router — missing

The Orchestrator Agent exists as a markdown spec describing routing semantics. There is no headless runtime that loads a pipeline definition, maintains a stage field, and drives state transitions. The scheduler.py plugin runs single claude -p invocations; it does not chain them. The intelligence-brief and SBPI pipelines are imperative scripts, not state-machine driven.

Output contracts — fragmented

Rigorous contracts in some layers (OWL/SHACL for RDF, grammar.ts for reports, frontmatter schemas for artifacts). Absent in others — skill-to-skill handoff, schedule entry → output, agent-to-agent calls. Skills don't declare their output schema; the next step has to re-parse.
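The missing skill-to-skill handoff contract is a small check to state. A sketch of the rule from §2 — outputs may add fields but cannot omit required ones — using illustrative field names from the draft schema:

```python
# Sketch of the step-boundary contract rule: an output may add extra
# fields but cannot omit required ones. Field names are illustrative.
def check_contract(output: dict, required: dict) -> list:
    """Return a list of contract violations (empty list = pass)."""
    errors = []
    for field, typ in required.items():
        if field not in output:
            errors.append(f"missing required field: {field}")
        elif not isinstance(output[field], typ):
            errors.append(f"{field}: expected {typ.__name__}")
    return errors

contract = {"gaps": list, "signals": list}
ok = check_contract({"gaps": [], "signals": [], "extra": 1}, contract)
bad = check_contract({"gaps": []}, contract)
```

The `extra` field passes untouched — additive evolution is free; only omission and type mismatch are violations.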

The user's ask, in MindStudio's vocabulary: "Hooks, scheduled cron, custom Claude agents using the Agent SDK, headless programs that spin up servers and instantiate headless Claude Code sessions, triggered by other agents." That is the ShurIQ Pipeline Runtime — and it doesn't exist as a unified thing.
§5 — Schema

A canonical pipeline definition.

A new artifact type extending the existing frontmatter conventions. Declares: trigger, audience, preconditions, state bindings, steps, output contracts. Below is the working draft for the Stack Rank Weekly pipeline — chosen because it's also the urgent fix.

type: shuriq_pipeline
pipeline_id: stack-rank-weekly-microco
version: 0.1.0

# AUDIENCE — replaces ad-hoc internal/external split
audience:
  orientation: external          # external | internal | hybrid
  stakeholders:
    - engagement_cycle: "MICRO-2026-Q2"
    - subscription_tier: "shur-iq-vertical-microco"
  visibility: subscriber         # public | subscriber | client | internal

# TRIGGER — what starts this pipeline
trigger:
  kind: cron                     # cron | hook | agent-call | manual | webhook
  schedule: "0 10 * * 0"         # Sunday 10 AM
  fallback: manual

# PRECONDITIONS — gates before pipeline runs
preconditions:
  - rubric_version: "sas-microco@>=0.2"
  - ontology_graph: "urn:shur:vertical:microco:v0.2"
  - consensus_floor: 0.40
  - mcp_servers: [infranodus, openmemory]
  - auth_status: [wrangler, gws, slack]

# STATE BINDING — explicit reads and writes
state:
  read:
    - oxigraph: "urn:shur:vertical:microco:*"
    - infranodus: "microco-mi-*"
  write:
    - oxigraph: "urn:shur:report:R-microco-{date}-stack-rank"
    - d1: "shuriq.reports"
    - cloudflare_pages: "shuriq-microco-stack-rank"
    - slack: "C09KJN28399"

# STEPS — capabilities composed into a chain
steps:
  - id: harvest
    capability: skill:scout
    inputs: { vertical: microco, since_days: 7 }
    output_contract: { gaps: list, signals: list }

  - id: score
    capability: agent:Intelligence
    inputs: { gaps: $harvest.gaps, signals: $harvest.signals }
    output_contract: { score_records: list[sbpi:ScoreRecord] }

  - id: validate
    capability: shacl_check
    inputs: { triples: $score.score_records }
    on_fail: halt

  - id: render
    capability: skill:intelligence-brief
    inputs: { score_records: $score.score_records, archetype: editorial-brief }

  - id: deploy
    capability: skill:publish-site

  - id: notify
    capability: skill:publish-to-slack
    inputs: { url: $deploy.url, channel: C09KJN28399 }

What this gives that doesn't exist today

  1. Audience taxonomy is first-class. External client / subscriber / internal KG enrichment is no longer a comment in a bash script — it's executable metadata that gates visibility, deployment target, and notification.
  2. State is bound, not implicit. A step declares which graph/table/file it touches; the runtime can resolve conflicts, log writes, audit lineage.
  3. Capability is polymorphic. skill:, agent:, MCP tool, headless claude -p — all addressable through one interface.
  4. Contracts are typed. OWL/SHACL applies at step boundaries, not just at the report-generation layer.
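The `$harvest.gaps`-style references in the steps block imply a binding pass before each step runs. A sketch of one plausible resolution semantics — the `$step.field` syntax is taken from the draft above, but the mechanics are an assumption about the design:

```python
import re

# Sketch: resolve `$step.field` references in a step's inputs against
# the outputs of completed steps. Step and field names come from the
# draft pipeline above; the resolution semantics are an assumption.
REF = re.compile(r"^\$(\w+)\.(\w+)$")

def bind_inputs(inputs, completed_outputs):
    bound = {}
    for key, value in inputs.items():
        m = REF.match(value) if isinstance(value, str) else None
        if m:
            step_id, field = m.groups()
            bound[key] = completed_outputs[step_id][field]
        else:
            bound[key] = value    # literals pass through unchanged
    return bound

outputs = {"harvest": {"gaps": ["g1"], "signals": ["s1"]}}
inputs = bind_inputs({"gaps": "$harvest.gaps", "since_days": 7}, outputs)
```

A reference to a step that hasn't completed would raise a `KeyError` here, which is the right default: the binding pass doubles as a dependency-order check.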
§6 — Architecture

Three new components. Each maps to existing infra.

Trigger Layer — launchd · scheduler plugin · hooks · manual · webhook; triggers.yaml as single source of truth
Pipeline Compiler — resolves capabilities, validates contracts
Pipeline Executor — Claude Agent SDK app; drives state transitions
Capability Layer — ~/.dotfiles/ai/skills · agents · MCP servers · claude -p
State Layer — D1 · Oxigraph · InfraNodus · vault frontmatter · Letta
Figure 1 — Pipeline runtime stack. Compiler and Executor are new; everything else is already running.

A. Pipeline Compiler

~200 LOC Python. Reads pipeline YAML or markdown frontmatter, resolves capability references, validates preconditions and contracts against schema, emits a runnable plan as JSON. Lives at system/agents/claude/pipeline-runtime/compiler.py.
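A sketch of the compiler's shape — YAML parsing is elided, and the required keys and plan layout are assumed from the draft schema in §5, not settled design:

```python
import json

# Compiler sketch: validate a parsed pipeline definition and emit a
# runnable plan. YAML loading is elided; the structure follows the
# draft schema in §5, everything else is an assumption about the design.
REQUIRED_KEYS = {"pipeline_id", "trigger", "audience", "steps"}

def compile_pipeline(defn: dict) -> str:
    missing = REQUIRED_KEYS - defn.keys()
    if missing:
        raise ValueError(f"pipeline missing keys: {sorted(missing)}")
    plan = {
        "pipeline_id": defn["pipeline_id"],
        "steps": [
            {"id": s["id"], "capability": s["capability"],
             "inputs": s.get("inputs", {}),
             "contract": s.get("output_contract", {})}
            for s in defn["steps"]
        ],
    }
    return json.dumps(plan)     # runnable plan, consumed by the executor

defn = {"pipeline_id": "stack-rank-weekly-microco",
        "trigger": {"kind": "cron"}, "audience": {"visibility": "subscriber"},
        "steps": [{"id": "harvest", "capability": "skill:scout"}]}
plan = json.loads(compile_pipeline(defn))
```

Emitting plain JSON keeps the compiler and executor decoupled: the plan can be inspected, diffed, and replayed without re-reading the source definition.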

B. Pipeline Executor — the missing Router

A headless Claude Code instance built as an Agent SDK app. Loads compiled plan, drives stage transitions, maintains a per-run state document (D1 row + Oxigraph named graph), handles step timeouts, retries, partial-failure recovery. Replaces the bare claude -p calls in scheduler.py wrappers with structured invocations. Scaffolded via the agent-sdk-dev:new-sdk-app skill.
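The executor's supervision loop, sketched with the Agent SDK invocation stubbed out; the retry and halt-on-failure semantics here are assumptions, not settled design:

```python
# Executor sketch: drive compiled steps in order, retrying each up to
# `retries` times and halting the run on persistent failure. The Agent
# SDK invocation is stubbed; the supervision semantics are assumptions.
def run_plan(steps, invoke, retries=2):
    run_state = {"stage": None, "outputs": {}}     # per-run state document
    for step in steps:
        run_state["stage"] = step["id"]
        for attempt in range(retries + 1):
            try:
                run_state["outputs"][step["id"]] = invoke(step)
                break
            except RuntimeError:
                if attempt == retries:
                    run_state["stage"] = f"failed:{step['id']}"
                    return run_state               # partial-failure record
    run_state["stage"] = "done"
    return run_state

calls = []
def flaky(step):
    calls.append(step["id"])
    if len(calls) == 1:
        raise RuntimeError("transient")            # first call fails once
    return {"ok": True}

result = run_plan([{"id": "harvest"}, {"id": "score"}], flaky)
```

Because the run state survives a halt, a failed run leaves behind exactly which stage broke and every output produced before it — the audit trail the bare claude -p wrappers can't provide.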

C. Trigger Registry — single source of truth

One YAML at system/agents/claude/triggers.yaml listing every cron, hook, and webhook with the pipeline it fires. launchd plists, scheduler/registry.json, and .claude/settings.json hooks all generated from this file. Eliminates the current four-place trigger sprawl.
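A sketch of the fan-out — one registry generating the downstream artifacts. The entry and output shapes are assumptions; only the file names come from this blueprint:

```python
# Sketch: one trigger registry fans out to the generated artifacts
# named above (launchd plists, scheduler/registry.json, hook settings).
# Entry and output shapes are assumptions; file names are from the doc.
TRIGGERS = [
    {"pipeline": "stack-rank-weekly-microco", "kind": "cron",
     "schedule": "0 10 * * 0"},
    {"pipeline": "daily-insight-report", "kind": "hook",
     "event": "new-evidence"},
]

def generate(triggers):
    crons = [t for t in triggers if t["kind"] == "cron"]
    hooks = [t for t in triggers if t["kind"] == "hook"]
    return {
        "scheduler_registry": {t["pipeline"]: t["schedule"] for t in crons},
        "hook_settings": {t["event"]: t["pipeline"] for t in hooks},
    }

artifacts = generate(TRIGGERS)
```

The generation step is the enforcement mechanism: if a trigger isn't in triggers.yaml, it doesn't exist anywhere downstream.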

What gets reused intact: MCP servers stay. Skills stay in ~/.dotfiles/ai/skills/. claude-office-hook stays — it becomes the trigger transport for hook-kind pipelines. Memory layers stay; pipelines bind to them via the state block.
§7 — Migration Targets

Five concrete pipelines, each missing the same router.

| Pipeline | Audience | Trigger | Existing scaffolding |
|---|---|---|---|
| Stack Rank Weekly Report | external — subscribers | cron Sun 10:00 | scheduler/wrappers/sbpi-weekly-report.sh ⚠ in error |
| Stack Rank Nightly AutoResearch | internal — KG enrichment | cron 06:13 | scheduler/wrappers/sbpi-nightly-insights.sh ⚠ in error |
| Daily Insight Report | external — subscribers | cron + hook on new evidence | daily-synthesis skill |
| Client Engagement Studio Run | external — single client | manual intake → SDK loop | app/functions/api/reports/[id]/generate.ts |
| Internal KG Sweep | internal — ontology evolution | cron weekly | mapupdate + ontology-management |

Each has most of the parts. Each is missing the router that ties trigger → preconditions → state → steps → contracts → outputs into one supervised execution. Stack Rank Weekly is both the urgent fix and the natural first conversion target — its wrapper is in error state and needs work anyway.

§8 — Differentiator

Why building this is more valuable than adopting MindStudio.

MindStudio's middle is integration plumbing. It sells a thicker JSON-RPC across 120+ services. The wedge is auth, retry, and rate-limit logic the developer no longer has to write. Useful — and a problem ShurIQ already solved through MCP, Claude plugins, and gws-cli.

The ShurIQ middle is the Discourse Grammar layer. The OWL/SHACL/SPARQL stack at projects/microco/competitive-intel/semantic-layer/ is the unique IP. Every signal becomes a CLM/EVD/SRC chain. Every score is provenance-tracked. Every claim survives stakeholder challenge. The grammar overlays per audience: legal-canonical for compliance contexts, folksonomy-permissive for community signals.

The pipeline runtime exists to keep that grammar honest at scale. Every artifact written is SHACL-validated. Every score links back to source. Every rubric version is diffable. MindStudio's framework cannot do this because it has no ontology layer.

The runtime is what turns ShurIQ from a collection of well-formed local artifacts into a defensible, queryable, regulatorily-grounded reasoning system that scales without losing provenance.
§9 — Phasing

Four phases. Two weeks of focused work.

| Phase | Deliverable | Effort |
|---|---|---|
| 1 · Schema | Pipeline schema added to vault CLAUDE.md as a frontmatter type. This blueprint published as canonical reference. | 0.5 day |
| 2 · First conversion | Stack Rank Weekly migrated to pipeline definition. Existing claude -p execution retained, but state binding and contracts are introduced. | 1–2 days |
| 3 · Executor | Pipeline Executor built as Agent SDK app at system/agents/claude/pipeline-runtime/. Scaffolded via agent-sdk-dev:new-sdk-app. Replaces bare claude -p calls. | 3–5 days |
| 4 · Migration | Other four pipelines migrated. Triggers consolidated into triggers.yaml. launchd plists generated from the registry. | ~1 day per pipeline |

Phase 2 ships value before Phase 3 lands — the Stack Rank Weekly fix is real even with the existing claude -p execution. Phase 3 gives the system a real router.

§10 — Open Questions

Decisions that should be settled before Phase 2.

Q1 · Executor — daemon or per-invocation?

A daemon means one process per machine, queue-driven — more responsive to webhook triggers, but it adds operational surface. Per-invocation means launchd spawns it, runs the pipeline, and exits — simpler, and it matches the existing launchd → scheduler.py → claude -p pattern.

Recommendation: per-invocation for v1. Daemon for v2 once webhook triggers exist and warrant it.

Q2 · State conflict resolution — enforce or log?

The documented order (vault > letta > infranodus > claude_memory > git) is convention. Should the runtime refuse writes that violate hierarchy, or record violations and alert?

Recommendation: enforce on declared state.write paths. Log on ambient reads.
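The recommendation can be sketched as a guard at the state layer. Store names mirror the schema's state block; the enforcement logic itself is an assumption:

```python
# Sketch of the recommendation: writes outside the declared state.write
# block are refused; reads outside state.read are only logged. Store
# names mirror the pipeline schema; enforcement logic is an assumption.
def guard(op, store, declared, audit_log):
    if op == "write" and store not in declared.get("write", []):
        raise PermissionError(f"undeclared write target: {store}")
    if op == "read" and store not in declared.get("read", []):
        audit_log.append(f"ambient read: {store}")   # log, don't block

declared = {"read": ["oxigraph"], "write": ["d1"]}
log = []
guard("read", "letta", declared, log)       # ambient read: logged, allowed
refused = False
try:
    guard("write", "slack", declared, log)  # undeclared write: refused
except PermissionError:
    refused = True
```

The asymmetry is deliberate: writes are where provenance is created, so they get the hard gate; reads only need visibility.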

Q3 · Contract evolution — versioning policy?

A pipeline runs against sbpi:ScoreRecord v0.2; the upper ontology bumps to v0.3; old runs need to remain valid.

Recommendation: every step pins contract version. Runtime warns on minor drift, halts on major.
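The pinning policy can be sketched under a major.minor reading of the version strings. Exact pins only — range specs like >=0.2 are out of scope for this sketch:

```python
# Sketch of the pinning policy: warn on minor drift, halt on major.
# Assumes exact `name@major.minor` pins; range specs are not handled.
def check_drift(pinned: str, current: str) -> str:
    p_major, p_minor = map(int, pinned.split("@")[1].split("."))
    c_major, c_minor = map(int, current.split("@")[1].split("."))
    if c_major != p_major:
        return "halt"          # breaking ontology change
    if c_minor != p_minor:
        return "warn"          # minor drift, run continues
    return "ok"

verdict = check_drift("sbpi:ScoreRecord@0.2", "sbpi:ScoreRecord@0.3")
```

Under this reading, the v0.2 → v0.3 bump in the question above is minor drift — old runs stay valid, with a warning in the run record.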

Q4 · Audience — does hybrid need its own semantics?

A pipeline produces both an internal KG enrichment write and a subscriber-visible deliverable.

Recommendation: model as two pipelines linked by triggers: [pipeline:other-id]. Avoid hybrid as a special case; let composition do the work.