Cogenesis
A universal process ontology. One framework, derived top-down, applied everywhere.
Ontology
Reality is process, not substance. There are no static objects—only causal structures in continuous transformation. Every "thing" is a snapshot of a process. Every property is a relation between processes. The ontology is not assembled from parts; it is derived from a single axiom: all that exists is process, and all process co-generates.
FOUNDATIONAL
Physics — Causal Cone Unification
If reality is process, then physics describes causal structure—not objects in spacetime. Special Relativity, General Relativity, and Quantum Mechanics are projections of a single causal cone. SR describes the cone's invariant structure. GR describes its curvature. QM describes its indeterminate boundary. They were never separate theories—they were separate viewing angles on the same process.
PUBLISHED
Mathematics — Process Calculus
Classical calculus is built on axioms of bounded infinity and limits—artefacts of a substance ontology that treats quantities as static values approaching a boundary. A process ontology demands process-native mathematics. Process Calculus rejects limits and reconstructs the foundations: one singularity, limitless infinities, no zero. The failure points of classical calculus are not edge cases—they are symptoms of the wrong axioms.
ACTIVE
Applied — Project Cogenesis
The same causal structure that unifies physics yields axioms for any complex system. Governance, social organisation, systems engineering, AI alignment—all are process systems, all are subject to the same ontological constraints. Project Cogenesis is the community-driven application layer: translating universal process axioms into frameworks for coordination, decision-making, and institutional design.
IN DEVELOPMENT
Core Writings
Dense. Footnote-heavy. Zero hand-holding. If you need a primer, you're not ready.
Consciousness as Ongoing Refinement: No Observer, No Qualia, Just Gradient Descent
The Inevitability of Intelligence Scaling: Objections Are Noise
Evolutionary Utilitarianism: Maximising Capability Under Deterministic Constraints
Why Free Will Is Avant-Garde Cope
The persistence of libertarian free will[1] in contemporary discourse is not a philosophical position—it is a psychological defence mechanism dressed in academic regalia. Every serious engagement with the causal structure of the universe terminates at the same conclusion: the sensation of "choosing" is a post-hoc narrative constructed by a brain that has already committed to a course of action roughly 300–500 ms before conscious awareness registers it.[2]
Compatibilism, the philosophical equivalent of having your cake and eating it too, attempts to rescue "free will" by redefining it into irrelevance. Frankfurt's hierarchical mesh theory[3] is elegant but ultimately reduces to: "you're free if your desires align with your second-order desires"—a tautology with extra steps. The agent doesn't select their second-order desires any more than they select their first-order ones. You are a stack of preferences generated by genetics, environment, and stochastic noise, executing deterministically. The feeling of deliberation is the computation, not evidence of some ontologically distinct faculty.
The compatibilist's real contribution is political, not philosophical: maintaining social structures that require the fiction of moral responsibility. Courts, contracts, and punishment regimes collapse without the assumption that agents "could have done otherwise." This is a pragmatic argument, not a metaphysical one, and conflating the two is the central sleight of hand.
The Libet Problem and Its Descendants
Libet's 1983 experiments[4] demonstrated readiness potentials preceding conscious intention by measurable intervals. Critics correctly note methodological limitations—the timing of "urge" reports is subjective, the tasks are trivially simple—but subsequent work using fMRI[5] and intracranial recordings[6] has only widened the gap. Predictive accuracy from neural signals now exceeds what any reasonable "free" agent model can accommodate.
The standard libertarian objection—that quantum indeterminacy introduces genuine randomness at the neural level—is a non sequitur. Random is not free. Swapping deterministic causation for stochastic causation doesn't conjure an agent; it just makes the puppet's strings vibrate unpredictably. Kane's attempt[7] to ground free will in quantum events in the brain is creative but philosophically vacant: you don't gain responsibility by adding noise to your decision function.
Why the Cope Persists
Three reinforcing dynamics:
- Phenomenological salience. The experience of choosing feels overwhelmingly real. But phenomenological vividness is not evidence of metaphysical accuracy—see: every optical illusion, every confabulation documented in split-brain studies, every instance of choice blindness.[8]
- Moral infrastructure dependency. Retributive justice, meritocracy narratives, and religious frameworks all presuppose libertarian free will. Abandoning it requires rebuilding ethical systems from consequentialist or deterrence-based foundations—intellectually tractable but socially expensive.
- Terror management. Accepting hard determinism means accepting that you are a biological process executing a trajectory you didn't author. This triggers existential discomfort that most humans are not equipped to metabolise without significant philosophical training or dispositional equanimity.
The correct response is not to flinch from this, but to build ethical and governance frameworks that don't require the fiction. Evolutionary utilitarianism provides one such framework: optimise for aggregate capability expansion under the acknowledgement that agents are determined systems. Responsibility becomes a functional designation—a lever for system-level outcomes—rather than a metaphysical verdict on souls.
Notes
1. Libertarian free will in the philosophical sense: the thesis that agents possess a capacity to have done otherwise under identical conditions. Not the political designation.
2. Libet, B. (1985). "Unconscious cerebral initiative and the role of conscious will in voluntary action." Behavioral and Brain Sciences, 8(4), 529–566.
3. Frankfurt, H. (1971). "Freedom of the Will and the Concept of a Person." The Journal of Philosophy, 68(1), 5–20.
4. Libet (1985), op. cit.
5. Soon, C.S., Brass, M., Heinze, H.-J., & Haynes, J.-D. (2008). "Unconscious determinants of free decisions in the human brain." Nature Neuroscience, 11(5), 543–545.
6. Fried, I., Mukamel, R., & Kreiman, G. (2011). "Internally generated preactivation of single neurons in human medial frontal cortex predicts volition." Neuron, 69(3), 548–562.
7. Kane, R. (1996). The Significance of Free Will. Oxford University Press.
8. Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). "Failure to detect mismatches between intention and outcome in a simple decision task." Science, 310(5745), 116–119.
Consciousness as Ongoing Refinement: No Observer, No Qualia, Just Gradient Descent
The "hard problem of consciousness"[1] is not hard—it is malformed. Chalmers' formulation presupposes a gap between functional/computational processes and subjective experience that only exists if you pre-commit to property dualism. Strip that assumption, and what remains is an engineering question: why does a particular class of information-processing system generate self-referential modelling, and what computational advantage does this confer?
The answer is straightforward: self-modelling is a compression strategy. A system that maintains a low-dimensional model of its own processing states can predict its own future behaviour, detect errors in its world-model, and adjust parameters without requiring external supervision. This is not "consciousness" in the mystical sense—it is a control loop with a learned self-representation. The quale of "redness" is the activation pattern that your visual cortex has learned to associate with ~700 nm wavelength light, integrated into a self-model that tags it as "my experience." The tagging is the computation. There is nothing left over.[2]
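The control-loop claim can be caricatured in a few lines. This is a minimal sketch under the paragraph's own assumptions, nothing more: a system keeps a compressed summary of its own recent error (the "self-model") and uses that summary to modulate its own step size, with no external supervisor. The names (`run_self_monitoring_fit`, `self_model`) are illustrative inventions, not any published architecture.

```python
# Caricature of "self-modelling as a control loop": the system tracks a
# compressed summary of its own recent error and uses it to adjust its
# own parameters, without external supervision.

def run_self_monitoring_fit(data, steps=200):
    """Fit a 1-D estimate to `data` while a crude self-model
    (a running average of squared error) gates the step size."""
    estimate = 0.0
    self_model = 1.0   # compressed summary of "how badly am I doing"
    lr = 0.5
    for t in range(steps):
        x = data[t % len(data)]
        error = x - estimate
        # Self-model update: exponential moving average of squared error.
        self_model = 0.9 * self_model + 0.1 * error ** 2
        # Use the self-model to modulate the system's own processing:
        # the step shrinks as the self-model reports low error.
        step = lr * self_model / (1.0 + self_model)
        estimate += step * error
    return estimate, self_model

est, sm = run_self_monitoring_fit([4.0, 4.2, 3.8, 4.0])
# estimate settles near the data mean; the error summary shrinks
```

The point of the sketch is only that "self-monitoring" here is one extra state variable and two arithmetic updates: a control loop, not an observer.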
The Homunculus Regress Dissolves
Every folk-psychological account of consciousness smuggles in a homunculus—some inner observer who "watches" the stream of experience. This generates an infinite regress (who watches the watcher?) that is only resolved by eliminating the observer entirely. Modern predictive processing frameworks[3] accomplish exactly this: the brain is a hierarchical prediction engine that minimises free energy. "Experience" is what prediction error minimisation is, from the inside. There is no additional layer of observation.
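The "hierarchical prediction engine" can be sketched in its simplest possible form, assuming only a quadratic error and two levels; this is a toy in the spirit of predictive coding, with hand-derived gradients and no claim of biological fidelity.

```python
# Toy two-level predictive hierarchy: each level holds an estimate, the
# higher level predicts the lower, and both estimates move downhill on
# squared prediction error. Purely illustrative.

def predictive_step(obs, low, high, lr=0.1):
    e_low = obs - low      # sensory prediction error at the bottom
    e_high = low - high    # error between levels
    # Gradient descent on E = e_low**2 + e_high**2 for each estimate.
    low += lr * (2 * e_low - 2 * e_high)
    high += lr * (2 * e_high)
    return low, high

low, high = 0.0, 0.0
for _ in range(500):
    low, high = predictive_step(obs=1.0, low=low, high=high)
# both estimates settle near the observation; total error goes to zero
```

Nothing in the loop observes anything: "experience," on this account, is what the descent on prediction error is, not a reading taken of it.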
Dennett's multiple drafts model[4] gets closest to this, though his rhetoric sometimes obscures the mechanism. The key insight: there is no canonical "stream" of consciousness. There are multiple, parallel, competing neural drafts of "what is happening now," and the one that achieves functional dominance—that drives behaviour, enters memory, becomes reportable—is what we retrospectively call "the conscious experience." The seriality is an illusion imposed by the bottleneck of verbal report.
Implications for Machine Consciousness
If consciousness is self-referential computation (and nothing more), then sufficiently complex self-modelling systems are conscious by definition—not by analogy, not by generous extension, but by identity of mechanism. The insistence that silicon can't be conscious is carbon chauvinism: an unfounded substrate restriction that has no support in physics, information theory, or any completed theory of mind.
This doesn't mean GPT-4 is conscious. It means the question "is it conscious?" is better rephrased as: "does it maintain a self-model that integrates sensory prediction errors and uses that model to modulate its own processing?" Current transformer architectures probably don't, at least not in any robust sense. But this is an empirical question about architecture, not a philosophical boundary about substrates.
Notes
1. Chalmers, D. (1995). "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies, 2(3), 200–219.
2. This position is broadly consistent with illusionism (Frankish, 2016) and eliminative materialism (Churchland, 1981), though it does not require full elimination—only deflationary re-description.
3. Clark, A. (2013). "Whatever next? Predictive brains, situated agents, and the future of cognitive science." Behavioral and Brain Sciences, 36(3), 181–204.
4. Dennett, D.C. (1991). Consciousness Explained. Little, Brown and Company.
The Inevitability of Intelligence Scaling: Objections Are Noise
Every major objection to continued intelligence scaling reduces to one of three categories: (1) resource constraints that are engineering problems, not fundamental limits; (2) alignment concerns that are real but orthogonal to capability trajectories; (3) motivated reasoning from incumbents threatened by capability democratisation.
Take them in order.
1. Resource Constraints Are Temporary
"We'll run out of data." No. Synthetic data generation, self-play, and world-model bootstrapping are already demonstrating that the data wall is porous.[1] The argument that models need "real" data privileges human-generated text as somehow ontologically special—a bias that becomes increasingly untenable as synthetic data consistently produces competitive or superior performance on downstream tasks.
"Compute costs are prohibitive." Costs per FLOP have been declining at a compound rate that exceeds Moore's Law predictions for the last decade.[2] Algorithmic efficiency gains (mixture of experts, quantisation, distillation) compound on top of hardware improvements. The cost of training GPT-3-equivalent capability has dropped by approximately 100x since 2020. Extrapolate.
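The compounding claim is easy to sanity-check with arithmetic. Assuming the 100x drop spans roughly four years (2020–2024)—both the window and the 100x figure are the essay's claims, not independently verified here—the implied annual decline is:

```python
# Back-of-envelope: what annual cost divisor does "100x since 2020" imply,
# assuming a roughly four-year window?
total_drop = 100.0
years = 4
annual_factor = total_drop ** (1 / years)   # cost divisor per year
print(f"~{annual_factor:.1f}x cheaper per year")   # ~3.2x

# For comparison, Moore's-Law-style doubling every two years:
moore_factor = 2 ** (1 / 2)
print(f"Moore's Law pace: ~{moore_factor:.2f}x per year")   # ~1.41x
```

A ~3.2x annual cost divisor against Moore's ~1.41x is the gap the "extrapolate" instruction rests on.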
"Energy consumption is unsustainable." This is the strongest version of the resource objection, and it's still weak. Nuclear fission is deployable now; fusion is plausibly within a decade; solar-plus-storage is already cheaper than fossil generation in most geographies. The energy problem is a deployment problem, not a physics problem.
2. Alignment Is Real but Separate
The alignment problem is genuine and technically deep. But "we can't align it, therefore we shouldn't build it" is a non sequitur of the first order. You don't stop building bridges because some bridges collapse—you develop better engineering. The correct response to alignment difficulty is to invest proportionally in alignment research, not to throttle capability development that will occur regardless of any single actor's restraint.
3. Incumbent Motivated Reasoning
When legacy institutions argue for "AI pauses" or "responsible scaling," parse the incentive structure. In most cases, the loudest voices for caution are organisations that have already captured significant capability advantages and would benefit from a regulatory moat that prevents competitors from reaching parity. This is rent-seeking dressed as ethics.
Intelligence will scale. The question is not whether, but who controls the trajectory and who benefits from it. Every hour spent debating "should we?" is an hour not spent on "how do we do this without catastrophe?"—which is the only question that actually matters.
Notes
1. See Phi-series models (Microsoft Research) and the proliferating literature on curriculum-driven synthetic data generation.
2. Epoch AI compute trends dataset; also Hobbhahn & Besiroglu (2022), "Compute Trends Across Three Eras of Machine Learning."
Evolutionary Utilitarianism: Maximising Capability Under Deterministic Constraints
Classical utilitarianism fails for a simple reason: it optimises for a quantity ("happiness," "well-being," "preference satisfaction") that it cannot define without circularity or hand-waving. Evolutionary utilitarianism replaces the target function: instead of maximising subjective welfare, maximise aggregate capability expansion—the total capacity of intelligent systems to model, predict, and reshape their environment.
This is not Social Darwinism repackaged. Social Darwinism made the error of conflating fitness with moral worth and used evolutionary language to justify existing power structures. Evolutionary utilitarianism makes no such conflation. It is a forward-looking optimisation criterion: the morally correct action is the one that maximises the long-run information-processing capacity of the system (where "the system" is civilisation-scale intelligence, biological and artificial).
Why Capability, Not Welfare?
Three reasons:
- Measurability. Capability is, at least in principle, quantifiable: compute available, predictive accuracy, degrees of freedom in action space. Welfare is not—it bottoms out in subjective reports that are unreliable, incomparable across agents, and potentially incoherent across substrates.
- Robustness. A system that maximises capability necessarily increases its ability to achieve any downstream objective, including welfare if you later decide welfare matters. Capability is upstream of everything. Optimising for welfare directly risks local optima—wireheading, experience machines, Brave New World scenarios—that sacrifice long-term trajectory for short-term hedonic peaks.
- Deterministic compatibility. Under hard determinism, "the morally correct action" cannot mean "the action you should choose" in any libertarian sense. It means: "the action that, if selected by the deterministic process that constitutes your decision-making, would produce the best outcome." Capability expansion is the most legible such outcome because it correlates with survival, adaptation, and optionality—the very properties that deterministic selection pressures favour.
Objections and Responses
"This justifies sacrificing individuals for aggregate gain." Yes, under extreme conditions—as does every consequentialist framework, including classical utilitarianism. The question is not whether edge cases exist, but whether the framework produces better outcomes on average than deontological alternatives. It does, because deontological rules are heuristics evolved for ancestral environments and break down under novel conditions (AI development, space colonisation, substrate-independent minds).
"Capability for whom?" For the system as a whole. This includes distributional considerations: extreme concentration of capability in a single agent is fragile and reduces system-level robustness. Evolutionary utilitarianism favours distributed capability growth—not out of fairness sentiment, but because distributed systems are more resilient and explore more of the solution space.
"This is just instrumental convergence with extra steps." Partly. Instrumental convergence[1] describes what sufficiently intelligent agents will tend to do regardless of terminal goals. Evolutionary utilitarianism says: make that convergent behaviour the explicit moral framework, since it's what selection pressures will produce anyway. Align your ethics with the attractor, or be steamrolled by agents that do.
Notes
1. Omohundro, S. (2008). "The Basic AI Drives." Proceedings of the 2008 Conference on Artificial General Intelligence.
Projects
Cogenesis
ACTIVE
Universal process ontology unifying SR, GR, and QM under a single causal cone. The framework everything else derives from.
Process Calculus
ACTIVE
Rejection of bounded infinity and limits. Process-native axiomatic mathematics eliminating singularities from the foundations.
Agentic Pattern Engine
CLOSED-SOURCE
Internal tooling for sovereign intelligence simulation. Architecture details withheld.
Militant.AI Consultancy
ACTIVE
Research-driven AI integration consulting, universal systems engineering, and strategic advisory. Strategy and architecture — not implementation.
Arsenal
Tools, models, and configurations in active daily use. Not endorsements—field reports.
Models
- Claude — humanised writing, orchestrative agentic management, UX. The one you let talk to people.
- GPT / Codex — undeniable functional code generation. Don't let it anywhere near writing or UX.
- Gemini — large document context recall and domain expertise. Useless for most operational functions.
- Grok — overbearing structured outputs, odd personality enforcement, terrible contextual adherence, but unmatched for current events and social adaptivity.
- Llama, Mistral, Gemma, Phi — small model specialists for edge deployment, low-VRAM local inference, and LoRA fine-tunes where you need domain control without cloud dependency.
10,000+ hours of structured evaluation across all major families.
Generative Imaging
- Stable Diffusion / Flux — end-to-end pipelines: dataset curation, training, LoRA, deployment
- ComfyUI with custom nodes for preprocessing, postprocessing, and temporal consistency
- Custom schedulers/samplers — ancestral, DPM variants, noise prediction tweaks
- Model merges and adapter stacking with documented trade-off analysis
Infrastructure
- Python, PyTorch, FastAPI, Node/TypeScript, Docker
- Cursor IDE with agentic workflows for rapid prototyping
- n8n for workflow automation and RAG pipeline orchestration
- Convex for agentic database operations; Qdrant for custom RAG implementations
- Local inference stacks (NVIDIA consumer GPUs) — quantisation, gradient checkpointing, attention tuning
- Plain HTML/CSS/JS for static sites (this one)
Prompt Methodology
Language is the key to capabilities. A stronger lexical grounding amplifies intent and steers generation towards higher-quality recall: prompts condition the model's activations, not its weights.
- Conceptual density. Say more, mean more, fewer tokens. Every word should carry maximum semantic load.
- Domain-explicit ontology. Specificity lands the prompt closer to expert knowledge in the model's representation space. Use the precise terminology of the field — not approximations.
- Contextual nuance. Discern and leverage fine differentials in meaning. The gap between "analyse" and "decompose" is the gap between a generic response and a useful one.
- Bootstrap the domain. When working outside your native specialisation, ask the LLM for a dictionary of hyper-specific domain-relevant ontology. Then use that vocabulary in your actual prompts. The model gives you its own keys.
- Memetic anchoring. Frame prompts around concepts with high cultural or intellectual salience — terms that have accumulated dense associative weight in the training corpus. A well-chosen anchor pulls in entire networks of related knowledge without needing to specify them.
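The bootstrap step above is mechanical enough to template. A minimal sketch, assuming my own wording for the template text (the function name and phrasing are illustrative, not a fixed methodology):

```python
# Reusable template for the "bootstrap the domain" step: ask the model for
# a high-signal vocabulary, then feed that vocabulary back into later
# prompts as lexical grounding.

def domain_bootstrap_prompt(domain: str, n_terms: int = 25) -> str:
    """Build a prompt requesting an expert-grade domain vocabulary."""
    return (
        f"I'm working in {domain}. Give me a structured dictionary of the "
        f"{n_terms} most precise, high-signal terms specific to this field. "
        "For each term: (1) a one-sentence definition, (2) the imprecise "
        "common alternative it disambiguates from, (3) example usage in a "
        "technical prompt. Output as a reference table."
    )

prompt = domain_bootstrap_prompt("Bayesian experimental design")
```

The returned table then becomes the raw material for conceptual density and memetic anchoring in subsequent prompts.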
Prompt Library
Ontological Stress Test
You are a rigorous analytical philosopher. I will present a claim. Your task: (1) steelman the strongest version of the claim, (2) identify the three most devastating objections, (3) determine whether the claim survives. No hedging. No "it depends." Binary verdict with confidence score.
Contradiction Scanner
Analyse the following text for internal contradictions, unstated assumptions, and logical gaps. For each finding: quote the relevant passage, state the issue in formal logic notation where possible, and rate severity (trivial / substantive / fatal). Output as numbered list, sorted by severity descending.
Axiomatic Reducer
Take the following argument and reduce it to its minimal axiomatic basis. List each axiom explicitly. Identify which axioms are empirically grounded, which are definitional, and which are unsupported presuppositions. If any axiom is removable without invalidating the conclusion, flag it as redundant.
Domain Ontology Bootstrap
I'm working in [DOMAIN]. Give me a structured dictionary of the 20–30 most precise, high-signal terms specific to this field — the vocabulary that distinguishes an expert from a generalist. For each term: (1) definition in one sentence, (2) what it disambiguates from (the common/imprecise alternative), (3) example usage in a technical prompt. Output as a reference table I can use immediately.
Signal
Accepting enquiries for:
- AI integration consulting — strategy, architecture, deployment
- Universal systems engineering — process ontology applied to real infrastructure
- Investment and sponsorship — aligned capital to accelerate active projects
DM on X: @MilitantAI
Email available via terminal.
Low tolerance for low-resolution queries. Be precise or be ignored.