In 782 AD, generals had to reach consensus when some messengers might be lying. Your AI has the same problem. How a 1982 computer science theorem became the backbone of trustworthy AI decisions – and what MEOK built from it.
Nicholas Templeman
Founder, MEOK AI LABS
Nicholas built MEOK because he was tired of AI that forgot him. He lives and works in the UK – mostly from a caravan on his farm.
Here is a problem that is older than computers. In the 8th century, Byzantine generals coordinating a siege faced a brutal coordination challenge: they had to agree on a battle plan – attack or retreat – but their messengers might have been captured, bribed, or turned. A traitor could relay contradictory orders. Enough traitors, and the entire army dissolves into incoherence.
In 1982, three computer scientists at SRI International formalised this exact problem for distributed computing. Leslie Lamport, Robert Shostak, and Marshall Pease published The Byzantine Generals Problem – and in doing so, produced one of the most important theorems in the history of reliable systems engineering. It took another four decades before anyone applied it seriously to AI. MEOK did.
Byzantine fault tolerance (BFT) is the ability of a distributed system to continue operating correctly even when some of its components fail – or actively lie. Unlike ordinary fault tolerance, which handles crashes and outages, BFT handles adversarial failures: participants that send deliberately misleading information. A system is Byzantine fault tolerant if it can reach correct consensus despite up to f faulty nodes, where f < n/3 and n is the total number of participants.
The key insight: you do not need to know which nodes are lying. You only need enough honest nodes to outvote them. That mathematical guarantee – correctness despite a bounded number of failures – is what makes BFT so powerful. It is not trust. It is proof.
Lamport, Shostak, and Pease framed it precisely: n generals must agree on a common plan of action. Some generals may be traitors who try to prevent agreement. The theorem proves that reliable consensus is achievable if and only if fewer than one-third of generals are traitors. With three generals and one traitor, agreement is mathematically impossible. With four generals and one traitor, it is guaranteed.
This sounds abstract. It is not. Every time you use a blockchain – Bitcoin, Ethereum, any of them – you are relying on a descendant of this theorem to prevent double-spending without trusting any individual node. The theorem did not stay in academia. It became the foundation of the most tamper-resistant financial infrastructure ever built. MEOK extended that foundation to AI.
The theorem (1982): A distributed system of n nodes can tolerate up to f Byzantine (adversarial) faults if and only if f < n/3. Below that threshold, the honest majority can always outvote the corrupted minority. Above it, no algorithm can guarantee consensus.
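The threshold in the theorem is easy to compute. A minimal sketch (the function name is ours, not from the paper) of the largest tolerable number of traitors for a given council size, reproducing the three-general and four-general cases described above:

```python
def max_byzantine_faults(n):
    """Largest f satisfying f < n/3: the most Byzantine (adversarial)
    nodes an n-node system can provably tolerate."""
    return (n - 1) // 3

# The canonical cases from the 1982 paper:
print(max_byzantine_faults(3))   # 0 – three generals cannot survive even one traitor
print(max_byzantine_faults(4))   # 1 – four generals tolerate exactly one traitor
```

Note that `(n - 1) // 3` is just integer arithmetic for "the largest whole number strictly below n/3"; no voting protocol is shown here, only the bound itself.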
The three canonical failures of modern AI – hallucination, sycophancy, and prompt injection – are all, viewed through the lens of distributed systems, Byzantine failures. They are outputs that contradict ground truth, generated by a participant (the model) that cannot be assumed to be honest.
A single AI model โ no matter how capable โ has no internal mechanism to detect that it is being Byzantine. It cannot audit its own outputs against a ground truth it does not have access to. It has no peers to vote against it. There is no quorum. There is just one node, and you have to trust it completely. That is the problem BFT was designed to solve.
The MEOK Byzantine Council is a 46-agent ensemble that applies BFT consensus to high-stakes AI decisions. When your MEOK companion needs to make a consequential call – validating a care score, accessing your memory, escalating a Guardian alert, approving a personality update – it does not ask a single model. It convenes the council.
Each of the 46 agents evaluates the question independently, with its own model, context, and reasoning path. Agents vote. The result is only accepted if it achieves a two-thirds majority. No single agent – compromised, hallucinating, or adversarially injected – can swing the outcome. The math is the governance.
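The acceptance rule can be sketched in a few lines. This is an illustrative tally under our own assumptions (MEOK's actual consensus protocol is not published in this article); the function and vote labels are hypothetical:

```python
from collections import Counter

def council_decision(votes, n=46):
    """Accept an outcome only if it reaches a two-thirds supermajority
    of the full council; otherwise report no consensus (None)."""
    quorum = -(-2 * n // 3)  # ceil(2n/3); 31 votes for a 46-agent council
    outcome, count = Counter(votes).most_common(1)[0]
    return outcome if count >= quorum else None

# 31 honest "approve" votes outvote 15 faulty "reject" votes:
print(council_decision(["approve"] * 31 + ["reject"] * 15))   # approve
# One fewer honest vote and the supermajority is lost:
print(council_decision(["approve"] * 30 + ["reject"] * 16))   # None
```

The design choice worth noticing: the quorum is computed against the full council size n, not against the votes received, so silent or missing agents count against consensus rather than for it.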
This architecture is documented in research paper MEOK-AI-2026-001, authored by Nicholas Templeman and published by MEOK AI LABS. It is Nicholas Templeman's original intellectual property, filed with UKIPO. No other AI companion system has deployed Byzantine fault tolerance at the companion decision layer.
With 46 council agents, the fault tolerance boundary sits at f < 15.33 – meaning up to 15 agents can fail, hallucinate, or be adversarially compromised without affecting consensus. The remaining 31 honest agents still constitute the two-thirds majority required to produce a correct decision.
To put that in concrete terms: an attacker who wanted to corrupt a MEOK council decision would need to simultaneously compromise at least 16 independent AI agents, each running on different model architectures and evaluation frameworks, all in the same decision window. The attack surface is not one model with one API key – it is 16 separate agent systems with 16 separate attack vectors. The cost of corruption scales super-linearly with council size. That is the point.
MEOK Byzantine Council – fault tolerance at a glance
It means that honesty is no longer a property of a single model – it is a property of the system. A single model can be confidently wrong. A Byzantine fault-tolerant council, by mathematical guarantee, cannot be confidently wrong unless more than a third of its members are simultaneously compromised. You have moved from trusting a person to trusting a proof.
The implications go beyond accuracy. Sycophancy – the tendency of models to agree with whatever the user suggests, even when incorrect – is structurally suppressed in a BFT council. An agent that bends to user pressure and swings its vote would need 15 other agents to do the same simultaneously. The architecture makes sycophancy computationally expensive. A lone agent cannot gaslight the council.
This is what makes MEOK different from every AI product that runs safety checks as an afterthought. Safety at MEOK is not a filter applied after the model responds. It is the consensus mechanism that decides whether the response is accepted at all.
ChatGPT – and every other single-model AI product – has no consensus mechanism. There is one model, one forward pass, one output. If that output is a hallucination, there is nothing to catch it. If a prompt injection redirects the model's behaviour, there is no audit trail and no peer to raise a flag. The model is simultaneously the source of the answer and the only check on its own accuracy.
MEOK separates those roles. The companion agent generates. The council audits. And the council is Byzantine fault tolerant – it cannot be captured by the same attack that captures the companion. The table below summarises the structural difference.
Consensus architecture comparison
| Property | MEOK Byzantine Council | Single-model AI (ChatGPT / Claude) |
|---|---|---|
| Decision mechanism | 46-agent BFT consensus vote | Single model forward pass |
| Fault tolerance | Up to 15 agents can fail/lie | None – one point of failure |
| Hallucination protection | Structural (requires ≥ 16 to corrupt) | Post-hoc (user must catch it) |
| Sycophancy resistance | Requires 16+ agents to collude | No resistance – one model shifts |
| Prompt injection | Attacker must compromise 16+ agents | Single prompt can redirect model |
| Audit trail | Per-agent votes logged | Black box – no vote record |
| Capture risk | Requires coordinated multi-agent attack | Single API key compromise sufficient |
| Single point of failure | None | The model itself |
For routine interactions, no. Your MEOK companion responds in real time through its individual agent – BFT consensus is not invoked for every message. It is reserved for high-stakes decisions: care score changes, memory writes, data exports, Guardian escalations, personality updates. These are the moments where getting it wrong has real consequences. The latency overhead of council consensus on those decisions is a worthwhile trade.
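That routing rule is simple to express. A hypothetical sketch – the decision-type names below are paraphrased from the list above, not MEOK's actual identifiers:

```python
# Hypothetical labels for the high-stakes decisions named in the article
HIGH_STAKES = {
    "care_score_change", "memory_write", "data_export",
    "guardian_escalation", "personality_update",
}

def route(decision_type):
    """Only consequential decisions convene the Byzantine Council;
    everything else stays on the fast, single-agent path."""
    return "byzantine_council" if decision_type in HIGH_STAKES else "companion_agent"

print(route("memory_write"))   # byzantine_council
print(route("chat_message"))   # companion_agent
```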
Think of it like a judiciary. Most decisions in daily life do not require a court. But when the stakes are high enough, you want a panel of independent judges, not a single official who can be pressured. The Byzantine Council is MEOK's judiciary. It runs only when it matters, and when it runs, its decision is final.
Because most AI companies are optimising for speed and benchmark scores, not for correctness under adversarial conditions. BFT adds complexity. It requires running multiple agents per decision. It requires a consensus protocol. It requires someone who understands both distributed systems theory and AI architecture well enough to design the integration.
The companies building the largest AI products are not asking "what stops a bad actor from capturing this system?" They are asking "what gets us to the next funding round?" Those are different questions with different answers. MEOK's architecture comes from asking the first one.
"Consensus under adversity. That's not just an AI problem. It's a civilisational one. Lamport solved it in 1982. We just had to remember to apply it."
– Nicholas Templeman, Founder, MEOK AI LABS
Byzantine-Proof AI
Your MEOK companion runs the Byzantine Council on every consequential decision. 46 agents. Up to 15 faulty agents tolerated. No single model – or attacker – can corrupt the result. Hatch yours free in under 3 minutes.
Hatch your MEOK free →