Cogenesis
Everything that exists is a process. No bounded infinities. No false limits. One framework, derived top-down, applied at every layer of resolution.
Ontology
Reality is process, not substance. There are no static objects — only processes that cohere, decohere, stabilize, and leave residuals. Every "thing" is a local coherence maximum: a temporarily stable pattern within a larger field of interaction. Every property is a relation between processes. A process is never identical to its projection. The relational primitive is ♥ — distinct processes entering a shared coherence regime. The ontology is not assembled from parts; it is derived from a single axiom: all that exists is process, and all process co-generates.
FOUNDATIONAL
Physics — Causal Cone Unification
If reality is process, then physics describes causal structure — not objects in spacetime. Special Relativity, General Relativity, and Quantum Mechanics are projections of a single causal cone. SR describes the cone's invariant structure. GR describes its curvature. QM describes its indeterminate boundary. They were never separate theories — they were separate viewing angles on the same process.
PUBLISHED
Mathematics — Process Calculus
Process Calculus does not reject classical mathematics — it reclassifies it. Limits become projection-level statements, not ontological arrivals. Zero becomes frame-relative absence, not primitive nothingness. Equality is narrowed to strict invariance across all refinement. What classical mathematics treats as final objects, Process Calculus treats as stabilized projections of deeper processes. The failure points of classical calculus are not edge cases — they are symptoms of frozen ontology.
ACTIVE
Applied — Project Cogenesis
Reality is stratified by layers of resolution. A pattern that stabilizes at one layer becomes a new interacting unit at the next — fluctuations become particles, particles become atoms, signals become cells, cells become organisms, concepts become institutions. The same processual law applies at every stratum. Governance, social organisation, systems engineering, AI alignment — all are process systems, all subject to the same ontological constraints. Project Cogenesis is the application layer: translating universal process axioms into frameworks for coordination, decision-making, and institutional design.
IN DEVELOPMENT
Manifesto
Being correct starts with accepting that you're wrong.
I tell you exactly why you're wrong and exactly why you're right, and which one matters for moving forward at any given point. Because it's both. Always.
I don't let bullshit settle. Not in a boardroom, not in a physics paper, not in a codebase. That makes me unemployable — and the most useful person you'll find outside the system you're stuck in.
The same thinking that won't let a broken operating model go unexamined won't let two contradictory physics theories coexist for another century without asking why. I asked. I published the answer. The same thinking built a benchmark that tests whether AI models can recognise a false dilemma. None of them can.
Everything I do comes from one observation: everything is a process. Every structure is temporary. Every model is a simplification that gets treated as truth the moment it starts working. The framework is called Cogenesis. It applies everywhere because it describes what's actually happening everywhere.
Infinite nuance, limitless determination. Cogenesis Aeternis.
Research
Original frameworks, benchmarks, and published work.
Cogenesis
ACTIVE
Universal process ontology. Reality is process, not substance. SR, GR, and QM as projections of a single causal cone. Coherence, decoherence, and stabilisation as the operating verbs of reality.
ParadoxBench
Each model is presented with 50 philosophical paradoxes framed as a forced choice between two classical assumptions. The benchmark analyses paradoxical reasoning, identification of false dilemmas, and higher-order reasoning.
ParadoxBench — Prompts
Writings
A dash of perspective, not to be taken as a prescription for action.
Projects
Agentic Pattern Engine
CLOSED-SOURCE
Autonomous AI operations platform centred on persistent context rather than stateless prompting. Memory, recall, governed execution, and operational state — inspectable at every layer.
Agentic Pattern Engine
Autonomous AI operations platform. Persistent context, not stateless prompting. History that persists, context that reconstructs, actions that route through governance, state you can inspect.
Governance Kernel
Every external effect passes through this layer. It validates intent, enforces policy, manages budgets, and maintains the causal event record. Identity and trust are resolved here. Nothing acts without governance.
IN DEVELOPMENT
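The gate pattern above can be sketched in a few lines. This is an illustrative shape only, not the kernel's implementation; every name here (`Action`, `GovernanceKernel`, `authorise`) is hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    intent: str
    cost: float


@dataclass
class GovernanceKernel:
    """Every external effect passes through this gate (illustrative sketch)."""
    budget: float
    blocked_intents: set = field(default_factory=set)
    event_log: list = field(default_factory=list)

    def authorise(self, action: Action) -> bool:
        # Validate intent against policy.
        if action.intent in self.blocked_intents:
            self.event_log.append(("denied", action.intent))
            return False
        # Enforce the budget.
        if action.cost > self.budget:
            self.event_log.append(("over_budget", action.intent))
            return False
        # Debit the budget and record the causal event.
        self.budget -= action.cost
        self.event_log.append(("allowed", action.intent))
        return True
```

Nothing acts without passing `authorise`, and every decision, allowed or not, lands in the event record.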
Endless Context
The layer that makes the agent persistent. It compiles working context from memory, conversation history, and prior actions into a reconstructable, inspectable snapshot. The agent doesn't assume context — it assembles it on every turn.
ACTIVE
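The assembly step can be made concrete with a minimal sketch: context is rebuilt from its sources on every turn into an immutable, inspectable snapshot. All names here are hypothetical, not the layer's actual API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ContextSnapshot:
    """Reconstructable, inspectable working context (illustrative sketch)."""
    memory: tuple
    history: tuple
    prior_actions: tuple

    def render(self) -> str:
        # Compile the snapshot into a prompt-ready block.
        sections = [
            ("MEMORY", self.memory),
            ("HISTORY", self.history),
            ("ACTIONS", self.prior_actions),
        ]
        return "\n".join(
            f"[{name}] {item}" for name, items in sections for item in items
        )


def assemble_context(memory, history, actions, window=5) -> ContextSnapshot:
    # Rebuilt on every turn: nothing is assumed, everything is assembled.
    return ContextSnapshot(
        memory=tuple(memory),
        history=tuple(history[-window:]),        # bounded recall window
        prior_actions=tuple(actions[-window:]),
    )
```

Because the snapshot is frozen, any turn's context can be replayed and audited exactly as the agent saw it.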
Agentic Swarm Orchestrator
The execution loop. Receives events, assembles context through the memory layer, generates responses, and dispatches actions. The turn runs here — governance decides what it's allowed to do.
IN DEVELOPMENT
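The turn described above reduces to one governed loop. The callables below are hypothetical stand-ins for the layers named in this section, a shape sketch rather than the orchestrator itself.

```python
def run_turn(event, assemble, generate, authorise, dispatch):
    """One turn: receive event, build context, generate, govern, dispatch."""
    context = assemble(event)                      # memory layer builds context
    response, proposed = generate(event, context)  # model proposes actions
    executed = []
    for action in proposed:
        if authorise(action):                      # governance decides what runs
            dispatch(action)
            executed.append(action)
    return response, executed
```

The key property: generation proposes, governance disposes. No action reaches `dispatch` without clearing `authorise`.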
Intelligence & Tooling
The layer that interfaces with models and executes tools. Model selection, prompt construction, response generation, and tool dispatch. The orchestrator calls it — the kernel constrains it.
IN DEVELOPMENT
Interface & Service
The operator-facing surface. A UI for observing and interacting with the runtime, a transport layer for event handling, and a service layer exposing operational endpoints.
TBA
Arsenal
Tools, models, and configurations in active daily use. Not endorsements — field reports.
Models
- Claude — humanised writing, orchestrative agentic management, UX. The one you let talk to people.
- GPT / Codex — undeniable functional code generation. Don't let it anywhere near writing or UX.
- Gemini — large document context recall and domain expertise. Useless for most operational functions.
- Grok — overbearing structured outputs, odd personality enforcement, terrible contextual adherence, but unmatched for current events and social adaptivity.
- Llama, Mistral, Gemma, Phi — small model specialists for edge deployment, low-VRAM local inference, and LoRA fine-tunes where you need domain control without cloud dependency.
- 10,000+ hours structured evaluation across all major families
Generative Imaging
- Stable Diffusion / Flux — end-to-end pipelines: dataset curation, training, LoRA, deployment
- ComfyUI with custom nodes for preprocessing, postprocessing, and temporal consistency
- Custom schedulers/samplers — ancestral, DPM variants, noise prediction tweaks
- Model merges and adapter stacking with documented trade-off analysis
Infrastructure
- Python, PyTorch, FastAPI, Node/TypeScript, Docker
- Cursor IDE with agentic workflows for rapid prototyping
- n8n for workflow automation and RAG pipeline orchestration
- Convex for agentic database operations; Qdrant for custom RAG implementations
- Local inference stacks (NVIDIA consumer GPUs) — quantisation, gradient checkpointing, attention tuning
- Plain HTML/CSS/JS for static sites (this one)
Prompt Methodology
Language is the key to capability. Stronger lexical grounding amplifies intent and biases the model towards higher-quality recall.
- Conceptual density. Say more, mean more, fewer tokens. Every word should carry maximum semantic load.
- Domain-explicit ontology. Specificity triggers closer proximity to expert knowledge in the model's weight space. Use the precise terminology of the field — not approximations.
- Contextual nuance. Discern and leverage fine differentials in meaning. The gap between "analyse" and "decompose" is the gap between a generic response and a useful one.
- Bootstrap the domain. When working outside your native specialisation, ask the LLM for a dictionary of hyper-specific domain-relevant ontology. Then use that vocabulary in your actual prompts. The model gives you its own keys.
- Memetic anchoring. Frame prompts around concepts with high cultural or intellectual salience — terms that have accumulated dense associative weight in the training corpus. A well-chosen anchor pulls in entire networks of related knowledge without needing to specify them.
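The principles above compose mechanically: front-load the ontology, place the anchors, then state the task. A minimal sketch, with every name hypothetical:

```python
def build_prompt(task: str, ontology: dict[str, str], anchors: list[str]) -> str:
    """Compose a prompt that front-loads domain vocabulary and memetic anchors.

    `ontology` maps precise terms to one-line definitions (the bootstrap
    dictionary); `anchors` are high-salience concepts. Illustrative sketch.
    """
    vocab = "\n".join(f"- {term}: {gloss}" for term, gloss in ontology.items())
    anchor_line = ", ".join(anchors)
    return (
        f"Operate within the following ontology:\n{vocab}\n"
        f"Anchor concepts: {anchor_line}\n"
        f"Task: {task}"
    )
```

The ontology section does the lexical grounding; the anchor line pulls in associative networks; the task arrives last, already framed in expert vocabulary.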
Prompt Library
Ontological Stress Test
You are a rigorous analytical philosopher. I will present a claim. Your task: (1) steelman the strongest version of the claim, (2) identify the three most devastating objections, (3) determine whether the claim survives. No hedging. No "it depends." Binary verdict with confidence score.
Contradiction Scanner
Analyse the following text for internal contradictions, unstated assumptions, and logical gaps. For each finding: quote the relevant passage, state the issue in formal logic notation where possible, and rate severity (trivial / substantive / fatal). Output as numbered list, sorted by severity descending.
Axiomatic Reducer
Take the following argument and reduce it to its minimal axiomatic basis. List each axiom explicitly. Identify which axioms are empirically grounded, which are definitional, and which are unsupported presuppositions. If any axiom is removable without invalidating the conclusion, flag it as redundant.
Domain Ontology Bootstrap
I'm working in [DOMAIN]. Give me a structured dictionary of the 20–30 most precise, high-signal terms specific to this field — the vocabulary that distinguishes an expert from a generalist. For each term: (1) definition in one sentence, (2) what it disambiguates from (the common/imprecise alternative), (3) example usage in a technical prompt. Output as a reference table I can use immediately.
Contact
I occasionally take on direct engagements. If you've read the work and you understand what it means for your problem, get in touch.
If you need to be convinced, you're not ready.
All engagements remote only.
Contact details available via whois in the terminal.