Strategic Intervention
I fix problems. I build systems. I create value.
Diagnostic
$15K
Identifying where the operating model has decohered. Problem reframing, constraint mapping, risk identification.
Stakeholder interviews · Field map · Intervention sequence · Decision memo
Strategic Architecture
$35K
Structuring re-coherence. Intervention architecture, dependency analysis, parallel workstream design.
Intervention architecture · Resource reallocation · Preservation map · Decision support
Deep Advisory
$50K+
Ongoing field maintenance across strata. Frontier, cross-domain, or politically sensitive problems. Direct advisory to founder or principal.
Bespoke · Longer horizon · Ongoing rebalancing
Cogenesis
Everything that exists is a process. No bounded infinities. No false limits. One framework, derived top-down, applied at every layer of resolution.
Ontology
Reality is process, not substance. There are no static objects — only processes that cohere, decohere, stabilize, and leave residuals. Every "thing" is a local coherence maximum: a temporarily stable pattern within a larger field of interaction. Every property is a relation between processes. A process is never identical to its projection. The relational primitive is ♥ — distinct processes entering a shared coherence regime. The ontology is not assembled from parts; it is derived from a single axiom: all that exists is process, and all process co-generates.
FOUNDATIONAL
Physics — Causal Cone Unification
If reality is process, then physics describes causal structure — not objects in spacetime. Special Relativity, General Relativity, and Quantum Mechanics are projections of a single causal cone. SR describes the cone's invariant structure. GR describes its curvature. QM describes its indeterminate boundary. They were never separate theories — they were separate viewing angles on the same process.
PUBLISHED
Mathematics — Process Calculus
Process Calculus does not reject classical mathematics — it reclassifies it. Limits become projection-level statements, not ontological arrivals. Zero becomes frame-relative absence, not primitive nothingness. Equality is narrowed to strict invariance across all refinement. What classical mathematics treats as final objects, Process Calculus treats as stabilized projections of deeper processes. The failure points of classical calculus are not edge cases — they are symptoms of frozen ontology.
ACTIVE
Applied — Project Cogenesis
Reality is stratified by layers of resolution. A pattern that stabilizes at one layer becomes a new interacting unit at the next — fluctuations become particles, particles become atoms, signals become cells, cells become organisms, concepts become institutions. The same processual law applies at every stratum. Governance, social organisation, systems engineering, AI alignment — all are process systems, all subject to the same ontological constraints. Project Cogenesis is the application layer: translating universal process axioms into frameworks for coordination, decision-making, and institutional design.
IN DEVELOPMENT
Manifesto
Every system that persists does so because its operating model captures enough of the conditions it faces to remain functional. Not because the model is complete. Not because it is correct. Because it works well enough that the cost of examining it exceeds the cost of the dysfunction it produces — until it doesn't.
This is true of organisations. The operating model that produced growth was designed for conditions that have since changed. It persists not because it still fits, but because it is embedded in how the organisation measures, decides, and structures itself. Effort increases while results flatten. Problems change shape but not cause. Every intervention inherits the model it was supposed to replace — and fails for the same reason the last one did.
It is true of governance. Decision architectures are derived once, under specific conditions, then defended as principles rather than maintained as instruments. The correction cost accumulates because the governance model has no mechanism for measuring its own degradation. What was a useful constraint at one scale becomes a structural liability at the next — and questioning it reads as a challenge to institutional identity rather than a structural observation.
A model that persists because it is functional is eventually treated as though it persists because it is true. The transition is invisible because the model defines the instruments used to evaluate it. What registers as execution failure, environmental headwind, or bad luck is usually none of those things — it is the model functioning as designed against conditions it was not designed for.
It is equally true of physics and mathematics. The two most successful theories in science have contradicted each other for a century, yet each is both fundamental and incomplete, and the incompleteness goes unexamined. Mathematics treats its foundational axioms the same way — not because they have been proven complete, but because the entire discipline's identity rests on them. The axioms are not re-examined because everything built on top of them would have to be re-examined too. The model is preserved; the anomaly is managed — and a design choice is reclassified as ground truth, right up until something closer to reality is discovered.
It is true of intelligence itself. Every cognitive model — human or artificial — operates within a scope that determines what can be perceived, what can be questioned, and what remains invisible. Expanding the scope does not happen by working harder within the current model. It happens by examining where the model's boundaries have been treated as the boundaries of reality.
I work across these domains because the operation is the same. In organisations: identifying the structural element generating the most dysfunction under the least scrutiny, then correcting it precisely enough that the volatile interventions thought necessary to balance the model are correctly identified as both symptoms of poor systemic modelling and causes of continued instability. In governance: surfacing where decision processes have been reclassified from instruments to values. In intelligence: determining where the model's scope has been mistaken for the boundaries of what is real.
The framework underneath this practice is Cogenesis — a process ontology that emerges from re-examining first principles against a functional reality. Every stable structure is a temporarily coherent pattern, and its persistence depends on models that work until they stop being refined. Without exception, at every resolution. The scale changes. The structural operation does not.
A process is never identical to its projection.
I examine the process.
Writings
A dash of perspective, not to be taken as a prescription for action.
Projects
Militant.AI Consultancy
ACTIVE
Strategic diagnosis and intervention design for organisations that can't afford to act on the wrong model. Architecture — not implementation.
Agentic Pattern Engine
CLOSED-SOURCE
Autonomous AI operations platform centred on persistent context rather than stateless prompting. Memory, recall, governed execution, and operational state — inspectable at every layer.
Cogenesis
ACTIVE
Universal process ontology. SR, GR, and QM as projections of a single causal cone. Coherence, decoherence, and stabilization as the operating verbs of reality. Everything else derives from this.
Agentic Pattern Engine
Autonomous AI operations platform. Persistent context, not stateless prompting. History that persists, context that reconstructs, actions that route through governance, state you can inspect.
Governance Kernel
Every external effect passes through this layer. It validates intent, enforces policy, manages budgets, and maintains the causal event record. Identity and trust are resolved here. Nothing acts without governance.
IN DEVELOPMENT
Endless Context
The layer that makes the agent persistent. It compiles working context from memory, conversation history, and prior actions into a reconstructable, inspectable snapshot. The agent doesn't assume context — it assembles it on every turn.
ACTIVE
Agentic Swarm Orchestrator
The execution loop. Receives events, assembles context through the memory layer, generates responses, and dispatches actions. The turn runs here — governance decides what it's allowed to do.
IN DEVELOPMENT
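The turn described above (context assembled through the memory layer, a response generated, the action routed through governance before it can take effect) can be sketched as a toy loop. Every name here — `GovernanceKernel`, `ContextCompiler`, `run_turn` — is an illustrative assumption, not the engine's actual API; it shows the pattern, not the implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    payload: dict


@dataclass
class GovernanceKernel:
    """Every external effect passes through here: policy check plus event record."""
    allowed: set = field(default_factory=set)
    event_log: list = field(default_factory=list)

    def authorise(self, action: Action) -> bool:
        ok = action.name in self.allowed
        # The causal event record: every attempt is logged, allowed or not.
        self.event_log.append((action.name, "allowed" if ok else "denied"))
        return ok


class ContextCompiler:
    """Reassembles working context from history on every turn; nothing is assumed."""
    def __init__(self):
        self.history: list = []

    def compile(self, event: str) -> dict:
        self.history.append(event)
        return {"event": event, "history": list(self.history)}


def run_turn(event, compiler, kernel, generate):
    context = compiler.compile(event)   # context is assembled, not carried implicitly
    action = generate(context)          # the model proposes an action
    if kernel.authorise(action):        # nothing acts without governance
        return ("dispatched", action.name)
    return ("blocked", action.name)


kernel = GovernanceKernel(allowed={"reply"})
compiler = ContextCompiler()
gen = lambda ctx: Action("reply", {"text": ctx["event"]})
run_turn("hello", compiler, kernel, gen)  # ("dispatched", "reply")
```

The design point is the ordering: generation happens freely, but dispatch is gated, so an unauthorised action still leaves a trace in the event record without producing an external effect.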
Intelligence & Tooling
The layer that interfaces with models and executes tools. Model selection, prompt construction, response generation, and tool dispatch. The orchestrator calls it — the kernel constrains it.
IN DEVELOPMENT
Interface & Service
The operator-facing surface. A UI for observing and interacting with the runtime, a transport layer for event handling, and a service layer exposing operational endpoints.
TBA
Arsenal
Tools, models, and configurations in active daily use. Not endorsements — field reports.
Models
- Claude — humanised writing, orchestrative agentic management, UX. The one you let talk to people.
- GPT / Codex — undeniable functional code generation. Don't let it anywhere near writing or UX.
- Gemini — large document context recall and domain expertise. Useless for most operational functions.
- Grok — overbearing structured outputs, odd personality enforcement, terrible contextual adherence, but unmatched for current events and social adaptivity.
- Llama, Mistral, Gemma, Phi — small model specialists for edge deployment, low-VRAM local inference, and LoRA fine-tunes where you need domain control without cloud dependency.
- 10,000+ hours structured evaluation across all major families
Generative Imaging
- Stable Diffusion / Flux — end-to-end pipelines: dataset curation, training, LoRA, deployment
- ComfyUI with custom nodes for preprocessing, postprocessing, and temporal consistency
- Custom schedulers/samplers — ancestral, DPM variants, noise prediction tweaks
- Model merges and adapter stacking with documented trade-off analysis
Infrastructure
- Python, PyTorch, FastAPI, Node/TypeScript, Docker
- Cursor IDE with agentic workflows for rapid prototyping
- n8n for workflow automation and RAG pipeline orchestration
- Convex for agentic database operations; Qdrant for custom RAG implementations
- Local inference stacks (NVIDIA consumer GPUs) — quantisation, gradient checkpointing, attention tuning
- Plain HTML/CSS/JS for static sites (this one)
Prompt Methodology
Language is the key to capabilities. A stronger lexical grounding amplifies intent and steers the model towards higher-quality recall.
- Conceptual density. Say more, mean more, fewer tokens. Every word should carry maximum semantic load.
- Domain-explicit ontology. Specificity triggers closer proximity to expert knowledge in the model's weight space. Use the precise terminology of the field — not approximations.
- Contextual nuance. Discern and leverage fine differentials in meaning. The gap between "analyse" and "decompose" is the gap between a generic response and a useful one.
- Bootstrap the domain. When working outside your native specialisation, ask the LLM for a dictionary of hyper-specific domain-relevant ontology. Then use that vocabulary in your actual prompts. The model gives you its own keys.
- Memetic anchoring. Frame prompts around concepts with high cultural or intellectual salience — terms that have accumulated dense associative weight in the training corpus. A well-chosen anchor pulls in entire networks of related knowledge without needing to specify them.
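The bootstrap step above is a two-pass pattern: one prompt extracts the field's vocabulary, a second folds that vocabulary back into the working prompt. A minimal sketch — the template wording and function names are illustrative assumptions, not a fixed method:

```python
# First pass: ask the model for its own domain ontology.
BOOTSTRAP_TEMPLATE = (
    "You are a domain expert in {domain}. List the {n} most precise, "
    "high-signal terms specific to this field, one per line."
)

# Second pass: reuse the returned vocabulary in the actual task prompt.
TASK_TEMPLATE = (
    "Using the following expert vocabulary where applicable -- {vocabulary} -- "
    "{task}"
)


def bootstrap_prompt(domain: str, n: int = 25) -> str:
    """Build the vocabulary-extraction prompt for the first pass."""
    return BOOTSTRAP_TEMPLATE.format(domain=domain, n=n)


def task_prompt(task: str, vocabulary: list[str]) -> str:
    """Fold the extracted vocabulary into the working prompt for the second pass."""
    return TASK_TEMPLATE.format(vocabulary=", ".join(vocabulary), task=task)


vocab = ["decoherence", "coarse-graining"]  # hypothetical first-pass output
task_prompt("Summarise the measurement problem.", vocab)
```

Sending the bootstrap output through whichever model interface you use, then feeding the terms into `task_prompt`, is the whole loop: the model supplies its own keys, and the second prompt turns them.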
Prompt Library
Ontological Stress Test
You are a rigorous analytical philosopher. I will present a claim. Your task: (1) steelman the strongest version of the claim, (2) identify the three most devastating objections, (3) determine whether the claim survives. No hedging. No "it depends." Binary verdict with confidence score.
Contradiction Scanner
Analyse the following text for internal contradictions, unstated assumptions, and logical gaps. For each finding: quote the relevant passage, state the issue in formal logic notation where possible, and rate severity (trivial / substantive / fatal). Output as numbered list, sorted by severity descending.
Axiomatic Reducer
Take the following argument and reduce it to its minimal axiomatic basis. List each axiom explicitly. Identify which axioms are empirically grounded, which are definitional, and which are unsupported presuppositions. If any axiom is removable without invalidating the conclusion, flag it as redundant.
Domain Ontology Bootstrap
I'm working in [DOMAIN]. Give me a structured dictionary of the 20–30 most precise, high-signal terms specific to this field — the vocabulary that distinguishes an expert from a generalist. For each term: (1) definition in one sentence, (2) what it disambiguates from (the common/imprecise alternative), (3) example usage in a technical prompt. Output as a reference table I can use immediately.
Contact
Accepting enquiries for:
- Strategic problem diagnosis and intervention planning
- AI integration consulting — implementation strategy and governance frameworks
- Systems engineering — process ontology applied to operational systems and governance
- Project Cogenesis — co-operational alignment opportunities
Contact details available via whois in the terminal.
Low tolerance for low-resolution queries. Be precise or be ignored.