How Power, Code, and Silence Rewired the Modern World
From secret committees to space networks, the system isn't broken—it's working exactly as designed.
The architecture of modern control has shed its pretense of invisibility. What once required congressional testimony and press conferences now moves through policy briefings and vendor consultations, sanctified by the language of innovation and modernization. Behind this machinery sits a system of definitions that transforms ambiguous language into binding authority. In secure rooms under the Palace of Westminster, government witnesses have wrestled with the semantics of national risk during classified committee sessions. One witness, when confronted about the legal framework underpinning national security decisions, conceded that “I understood that we were using an archaic piece of legislation.” Another admission cut closer: in testimony before parliamentary committees examining China policy, officials acknowledged that the government “did not go so far as to label China a threat.” These phrasings, carefully parsed, legally vetted, and politically negotiated, are not abstract exercises. They are the foundation upon which prosecutions collapse, surveillance programmes justify themselves, and entire nations are either protected or exposed.
THE SEMANTIC MACHINERY: HOW DEFINITIONS BECAME DOCTRINE
The distinction between describing a threat and formally designating one matters because it reveals how the machinery functions. The collapsed Chinese espionage prosecution of 2024-2025 exposes this mechanism in forensic detail. Christopher Cash, a former parliamentary researcher, and Christopher Berry, an academic and teacher, were accused of passing information to Chinese intelligence between December 2021 and February 2023. Initially charged in April 2024 under the Official Secrets Act 1911, both entered pleas of not guilty at the Old Bailey in October 2024. The trial was scheduled to begin in October 2025. In September 2025, the Crown Prosecution Service abruptly withdrew charges, offering no evidence against either man.
The reason was not absence of evidence of wrongdoing, but rather the government’s refusal to provide a critical piece of testimony: that China represented a threat to the national security of the United Kingdom. Deputy National Security Adviser Matthew Collins submitted three witness statements between December 2023 and August 2025. In his first statement, delivered to prosecutors in December 2023, he described why Cash and Berry’s alleged conduct was “prejudicial to the safety or interests of the UK” and why transmitted material would be “directly or indirectly useful to the Chinese state.” In his second statement, submitted in February 2025 after Labour took office, he articulated that China’s espionage operations “threaten the UK’s economic prosperity and resilience, and the integrity of our democratic institutions.” In his third statement, submitted in August 2025, he deepened the characterization, describing China as “an epoch-defining and systemic challenge with implications for almost every area of government policy and the everyday lives of the British people” and confirming that China conducts “large scale espionage operations against the UK.”
Yet the CPS determined this was insufficient. Why? Because prosecutors required evidence that China represented an “enemy” under the Official Secrets Act 1911, a statute drafted for an era that assumed binary clarity about which states were friends and which were foes. Collins had been explicitly constrained by the government’s reluctance to formally designate China as a threat. In parliamentary testimony before the Joint Committee on the National Security Strategy in October 2025, Collins explained: “What I could articulate is that China presents a variety of threats to our national security,” encompassing espionage, cyber risks, threats to democratic institutions, and economic security. He continued: “I believe the Crown Prosecution Service was urging me to use the broad term that China is a threat, or that it constitutes an active threat, which I felt did not align with the government’s stance at that time.”
The government’s stance, as articulated in multiple contexts, positioned China as a strategic competitor requiring managed engagement, not a designated enemy justifying prosecution. Lucy Powell, Commons Leader, stated in December 2024 that while the government took state threats seriously, “we also have international relationships. We have to have trade relationships, and we have to work internationally with countries like China in our national interests.” This was not accidental positioning. Foreign Secretary David Lammy, Chancellor Rachel Reeves, and Lucy Powell had all visited China during 2024-2025 to strengthen trade relations as part of the government’s growth agenda.
Collins later revealed that police had explicitly instructed him that he “could not refer to China as an ‘enemy’ since this did not represent government policy.” Initial drafts of his witness statement, prepared by Counter Terrorism Police in consultation with his office, had included the term “enemy.” Collins removed it from the final version after consulting with government leadership. He forwarded the sanitized draft to then-Prime Minister Rishi Sunak and his special advisors for approval before submitting it to prosecutors. The police were informed of this position in December 2023, prior to charges being filed. The prosecution proceeded anyway, perhaps betting that Collins would ultimately provide the necessary designation at trial. He did not.
Director of Public Prosecutions Stephen Parkinson appeared before the Joint Committee in October 2025 visibly frustrated. He stated that an inability to establish that China was a “threat” to national security was a “fatal” blow to their case. Parkinson acknowledged he had been “disappointed and frustrated” at the collapse of the trial, but insisted his team had “tried every avenue and concluded that a successful prosecution would not be possible.” Yet members of the committee appeared bemused. The evidence of threat was overwhelming—documented espionage operations, cyber attacks, attempts to compromise British democratic institutions. What was lacking was not security assessment but political will to formalize that assessment.
Attorney General Lord Hermer later acknowledged the structural problem. The Official Secrets Act 1911 “wasn’t suitable for its intended purpose” and created a “major obstacle” for prosecution. He noted that the newly enacted National Security Act 2023 resolved the definitional problem by requiring only evidence that information was transmitted to a foreign power, without requiring proof that the foreign power qualified as an “enemy.” Hermer stated: “I find it perplexing that Parliament took so long to enact that law. If that act had been effective at the relevant time in this case, I am certain the prosecution would have continued to trial.” The implication was clear: the law itself was an obstacle to enforcement. The National Security Act 2023 repealed the Official Secrets Acts of 1911, 1920, and 1939 and introduced three new espionage offences designed for the modern world. Rather than requiring that information be “useful to an enemy,” the new law requires only that it be “prejudicial to the safety or interests of the UK” and that it be transmitted to a foreign power or intended to benefit a foreign intelligence service.
THE PENTAGON’S PRIVATE INTELLIGENCE INFRASTRUCTURE: OUTSOURCING MILITARY DOCTRINE
Behind those committees lies a transnational apparatus where definitions transform into deployments and private innovation becomes public doctrine. When Washington’s Chief Digital and Artificial Intelligence Office authorized simultaneous contracts worth up to $200 million each with Anthropic, Google, OpenAI, and xAI on July 14, 2025, it formalized the subordination of civilian innovation to military planning. Dr Doug Matty, the Pentagon’s Chief Digital and AI Officer (who assumed the role in April 2025, having headed the Army AI Integration Center from 2020 to 2022), stated at the time that “The adoption of AI is transforming the Department’s ability to support our warfighters and maintain strategic advantage over our adversaries. Leveraging commercially available solutions into an integrated capabilities approach will accelerate the use of advanced AI as part of our Joint mission essential tasks in our warfighting domain as well as intelligence, business, and enterprise information systems.”
Matty’s biography and trajectory illuminate the circulation of power between public and private sectors. A 30-year career military officer in Air Defense Artillery and Operations Research, Matty served in command and staff positions from battery-level to Headquarters United States Forces-Iraq. He was the founder of the Army Artificial Intelligence Integration Center, responsible for coordinating AI development across the Army and supporting the Department of Defense Joint AI Center. After his Army tenure, he transitioned to private sector roles as Director of Research for AI and Autonomy at the University of Alabama in Huntsville and as an Executive Engineer (Adjunct) for the RAND Corporation. His return to the Pentagon as CDAO in April 2025 represented the completion of the circle: a former military AI leader returning to the highest civilian AI authority in the Department of Defense, bringing with him connections to both private industry and research institutions.
The contracts announced three months later formalized what had begun as an experimental arrangement. OpenAI had already received its own $200 million contract in June 2025 for developing “prototype frontier AI capabilities to address crucial national security challenges in both military and enterprise sectors.” The July announcement expanded the model to include OpenAI’s three primary competitors: Anthropic, Google, and xAI (Elon Musk’s venture). Each contract was valued at up to $200 million, giving the Pentagon access to the “latest AI offerings, agentic AI workflows, large language models and technologies developed by these firms.”
The structure of this arrangement is instructive. Rather than developing military AI systems in-house, the Pentagon outsourced the development to private laboratories. The justification was speed: frontier research in commercial laboratories moves faster than government development timelines. The implication was more profound: military doctrine would henceforth be shaped by the capabilities available from those private companies. A company’s research agenda, published papers, technical direction, and strategic priorities would determine the Pentagon’s operational options. Matty stated in Congressional testimony: “We recognize that we are in an era where information advantage is as decisive as kinetic power. The DoD’s Data, Analytics, and AI Adoption Strategy provides the roadmap for how we develop and deploy these technologies to gain superiority.” He emphasized that AI integration was “not a tech exercise for its own sake, but directly underpins a more lethal, agile, and resilient joint force.” The language emphasized speed, lethality, and agility: precisely the outcomes that commercial AI companies could deliver faster than government research programs.
This arrangement finds its analogue in Britain through the sustained push for “innovation partnerships” between government and technology firms. The Strategic Defence Review 2025 outlined a comprehensive vision of UK defence transformation through AI integration. The review recognized “AI not as a peripheral tool but as a fundamental component of modern warfare, driving innovation and operational superiority.” It identified development of a “digital targeting web” by 2027 as the central objective—a unified system connecting sensors, deciders, and effectors across domains, allowing rapid coordination of military operations. Yet the review did not specify who would develop this technology. Instead, it called for closer integration with “commercial partners” and emphasized the need to “leverage commercially available solutions.”
MONEY AS CODE: THE PROGRAMMABLE POUND AND BEHAVIORAL FINANCE
Inside that system, money becomes code and code becomes law. The Bank of England’s digital pound initiative represents the most explicit statement yet of the intent to make money programmable. The official position, repeated across multiple progress updates, is cautious: the Bank emphasizes that “no decision has been made” and that the “design phase” continues. When the Bank published its progress update on October 23, 2025, it outlined the completion of the first phase and the continuation into deeper design work. The blueprint, expected to be completed in 2026, will “document the proposed model and design of a potential digital pound and serve as the basis for assessing its benefits and costs.”
Within that blueprint sits the architecture for making money conditional. The Bank has been exploring “limits and accounts”—maximum holdings and account restrictions that could be programmed into a digital pound at the protocol layer. It has considered “how digital pound intermediaries will be governed” and whether rules “should be driven by risk, with risks to public policy outcomes managed by regulation and legislation and operational risks addressed by scheme rules.” Most significantly, the design work has examined “how a digital pound will interact with other forms of money,” specifically how “users should be able to deposit funds into and withdraw funds from their digital pound accounts, using cash or commercial bank money.”
Bank of England director Tom Mutton stated publicly at a conference that programming could become a key feature of any future central bank digital currency, “in which the money would be programmed to be released only when something happened.” He noted: “You could introduce programmability—what happens if one of the participants in a transaction puts a restriction on [future use of the money]? There could be some socially beneficial outcomes from that, preventing activity which is seen to be socially harmful in some way. But at the same time it could be a restriction on people’s freedoms.” Deputy Governor Jon Cunliffe reinforced this vision: “You could think of smart contracts in which the money would be programmed to be released only if something happened. You could think of giving your children pocket money, but programming the money so that it couldn’t be used for sweets. There is a whole range of things that money could do, programmable money, which we cannot do with the current technology.”
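Neither Mutton nor Cunliffe described an implementation, but the logic they gesture at is easy to make concrete. The following is a minimal sketch, assuming a hypothetical wallet type with a protocol-level holding cap and category restrictions on spending; none of the names or figures come from the Bank of England’s design notes, and the point is only to show how Cunliffe’s “pocket money” example looks once it is expressed as code rather than as a rule between a parent and a child.

```python
from dataclasses import dataclass, field

@dataclass
class ProgrammableWallet:
    """Illustrative sketch of a conditional digital-currency wallet (hypothetical, not a BoE design)."""
    balance: int = 0                                        # value held, in pence
    holding_limit: int = 2_000_000                          # protocol-level cap on holdings (£20,000 here)
    blocked_categories: set = field(default_factory=set)    # merchant categories this money cannot buy

    def deposit(self, amount: int) -> None:
        # The "limits and accounts" idea: the ledger itself refuses deposits beyond the cap.
        if self.balance + amount > self.holding_limit:
            raise ValueError("deposit would exceed protocol holding limit")
        self.balance += amount

    def spend(self, amount: int, merchant_category: str) -> None:
        # Conditional release: the restriction travels with the money, not with the bank.
        if merchant_category in self.blocked_categories:
            raise PermissionError(f"spending blocked for category: {merchant_category}")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Cunliffe's pocket-money example, expressed as data rather than as a household rule.
pocket_money = ProgrammableWallet(holding_limit=5_000, blocked_categories={"confectionery"})
pocket_money.deposit(1_000)                     # £10 in
pocket_money.spend(300, "stationery")           # allowed
# pocket_money.spend(200, "confectionery")      # raises PermissionError: the restriction is enforced in code
```

What the sketch makes visible is not the ten lines of Python but where they would live: once such rules sit at the protocol layer rather than in a contract between customer and bank, changing them is a software update, not a negotiation.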
Once the technical infrastructure for programmable money is in place, the political case for deployment often follows. The Bank has already published design notes exploring intermediary roles, interoperability models, offline payments, and alias services. The Digital Pound Lab, launched in August 2025, provides an experimental platform for industry to test use cases and explore potential business models.
THE FINAL FRONTIER OF SURVEILLANCE CAPITALISM: COLONISING THE HUMAN NERVOUS SYSTEM
While central bank architects draft conditional money, the regulatory framework for human neurotechnology has shifted from prohibition to permissioned innovation. UNESCO adopted the world’s first global ethical framework for neurotechnology on November 12, 2025, enshrining the principle that mental privacy must remain “inviolable.” The Recommendation, adopted by member states, “urges governments to develop national regulations that protect mental privacy, ensure equitable access to therapeutic technologies and prevent misuse in commercial or employment settings” and explicitly warns against uses that could “undermine autonomy or expose people to intrusive monitoring, particularly in workplaces or schools.”
Yet even as this global consensus was being formalized, the FDA’s regulatory pathway for implantable brain-computer interface (BCI) devices remained oriented toward clinical utility rather than privacy protection. In March 2025, Precision Neuroscience received FDA 510(k) clearance for the Layer 7-T Cortical Interface, a high-resolution electrode array with 1,024 microelectrodes implanted through a sub-millimeter cranial incision. The device has already been deployed in 37 clinical trial participants at Beth Israel Deaconess Medical Center, West Virginia University’s Rockefeller Neuroscience Institute, and Perelman School of Medicine.
In April 2025, Democratic senators Chuck Schumer, Maria Cantwell, and Edward Markey sent a joint letter to the Federal Trade Commission expressing alarm about the state of neural data collection practices. The senators noted that research from the NeuroRights Foundation found that “the vast majority of brain implant companies collect data with few limits, vague policies, and reserve sweeping rights to share it.” The NeuroRights Foundation had conducted a comprehensive audit of privacy policies from thirty direct-to-consumer neurotechnology companies and found systematic gaps: all companies take possession of all the user’s neural data; twenty-nine of the thirty companies retain unfettered rights to access consumers’ neural data; most companies explicitly permit the sharing of neural data with third parties, often under broad and vaguely defined terms; many companies fail to provide clear information about the neural data being collected; no company adequately explains the sensitivity of neural data or the potential information that can currently be decoded from it; provisions enabling users to withdraw consent, access their data, or request deletion of neural recordings are inconsistently applied or missing entirely; and many companies demonstrate insufficient data security practices, lacking specific commitments to encryption, breach notification, or dedicated safeguarding of neural data.
One Caltech trial participant, J. Galen Buckwalter, a 69-year-old quadriplegic who received 384 electrodes implanted in his brain in 2024, discovered that his informed consent form provided no explicit protection of his neural data. When he learned that researchers had demonstrated the ability to decode attempted speech from neural signals, he questioned whether his autonomy was truly respected. Federal oversight, meanwhile, remains fragmented. Colorado and California became the first US states in 2024 to explicitly classify “neural data” as sensitive personal information under their data privacy laws. Connecticut, Illinois, Massachusetts, Minnesota, Montana, and Vermont have proposed neural data protection legislation. Yet at the federal level, the FDA continues clearing devices for clinical use while privacy frameworks remain incomplete.
THE ARCHITECTURE OF CHILDHOOD SURVEILLANCE: CHILD PROTECTION AS UNIVERSAL SCREENING
Across the Atlantic, laws designed as child protection complete the circuit. On July 25, 2025, Ofcom commenced active enforcement of age verification requirements under the Online Safety Act 2023, requiring all user-generated content platforms and search services likely to be accessed by children to deploy “highly effective age assurance” (HEAA) to prevent minors from encountering primary priority content. The acceptable verification methods included AI-powered facial age estimation, government-issued ID verification, mobile carrier-based age verification, and bank-verified age confirmation. Ofcom announced: “We will be actively checking compliance from 25 July,” with fines reaching up to 10 percent of global revenue for non-compliant platforms.
What was legislated as child protection quietly became universal pre-screening. Every user seeking access to content behind an HEAA gate must establish their age, adult and child alike, and every user identified as a minor is continuously profiled. The algorithmic controls designed to restrict access to adult content simultaneously enable behavioral flagging: when a minor attempts to reach age-restricted material, that attempt is logged, flagged, and analyzed, building a comprehensive record of attempted access. The behavioral signature becomes data.
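A minimal sketch illustrates the dual function, assuming a hypothetical platform-side gate; none of the function or field names below come from Ofcom guidance or any real age-assurance vendor. Whichever method supplies the age signal, facial estimation, an ID check, a carrier or bank confirmation, the platform ends up with the same two artefacts: a pass/fail decision and a record of who asked for what, and when.

```python
import datetime
import hashlib

# Hypothetical illustration of a "highly effective age assurance" gate.
# The same code path that enforces the age threshold also produces the behavioural record.

access_log: list[dict] = []   # in practice a database or analytics pipeline, not an in-memory list

def request_age_restricted_content(user_id: str, verified_age: int, content_id: str) -> bool:
    """Allow or deny access, and record the attempt either way."""
    allowed = verified_age >= 18
    access_log.append({
        "user": hashlib.sha256(user_id.encode()).hexdigest(),   # pseudonymised, but still linkable over time
        "content": content_id,
        "allowed": allowed,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return allowed

# A denied request is not discarded; it becomes a data point about the user.
request_age_restricted_content("user-123", verified_age=15, content_id="restricted-video-42")
denied_attempts = [entry for entry in access_log if not entry["allowed"]]
```

The denial is recorded with the same fidelity as the grant. That asymmetry between the stated purpose (blocking) and the by-product (profiling) is the mechanism the preceding paragraph describes.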
The European Union’s divergent approach illuminates what is at stake. The AI Act’s prohibitions, which became applicable on February 2, 2025, ban emotion recognition systems in workplaces and educational institutions except for medical or safety reasons. The European Commission’s guidance, published on February 4, 2025, clarified that the prohibition applies broadly across physical and virtual workplaces and throughout the entire employment relationship. The prohibition reflects a determination that continuous behavioral monitoring in contexts of power imbalance constitutes an unacceptable risk to autonomy and dignity.
THE COMMAND LAYER: ORBITAL INFRASTRUCTURE AND EARTH’S SURVEILLANCE COMMONS
This architecture reaches into orbit. SpaceX’s Starshield network, developed under a $1.8 billion contract with the National Reconnaissance Office signed in 2021, extends surveillance infrastructure into low Earth orbit. Reuters reported in March 2024 that the network comprises hundreds of surveillance satellites equipped with Earth-imaging capabilities that can “function collectively in low orbits” to enable rapid global target identification. Officials have stated that “the NRO is developing the most resilient space-based intelligence system the world has ever seen.” Roughly a dozen prototypes have been launched under contracts signed in 2020 and 2021.
What that means operationally is that radio signatures, movement patterns, and infrastructure configurations are now continuously mapped from above, accessible to military and intelligence clients. The constellation that provides connectivity to rural schools and disaster zones simultaneously maps the electromagnetic and kinetic signatures of populations below. The technical architecture enables the system to maintain continuous coverage while remaining resilient to individual satellite loss through inter-satellite laser communications—”the only communications laser operating at scale in orbit today.” Machine learning algorithms can identify targets of interest and anomalies far faster than human operators, enabling autonomous collection, tasking, and retasking capabilities that reduce time between target identification and action to seconds. The Pentagon has also integrated Starshield into tactical data networks, testing integration with Link 16, the NATO-standard data link used by military forces to share targeting and operational information. Military commanders receive AI-processed targeting data derived from continuous global surveillance in near real-time.
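None of the constellation’s tasking software is public, so the sketch below is a deliberately generic illustration of what “autonomous collection, tasking, and retasking” means as a control loop; every class, field, and threshold is invented for the example and implies nothing about Starshield’s actual design.

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    lat: float
    lon: float
    confidence: float    # score from an onboard classifier, 0..1

@dataclass
class Satellite:
    sat_id: str
    lat: float
    lon: float
    tasked: bool = False

def retask(detections: list[Detection], constellation: list[Satellite],
           threshold: float = 0.9) -> list[tuple[str, Detection]]:
    """Assign the nearest free satellite to each high-confidence detection.

    Purely illustrative: real tasking involves orbital mechanics, sensor footprints,
    and human review. Reducing it to a nearest-neighbour pass shows why machine
    scoring collapses the interval between identification and action to seconds.
    """
    assignments = []
    for detection in sorted(detections, key=lambda d: d.confidence, reverse=True):
        if detection.confidence < threshold:
            continue   # below the autonomy threshold, left for human analysts
        free = [sat for sat in constellation if not sat.tasked]
        if not free:
            break
        nearest = min(free, key=lambda sat: math.hypot(sat.lat - detection.lat, sat.lon - detection.lon))
        nearest.tasked = True
        assignments.append((nearest.sat_id, detection))
    return assignments
```

The loop has no step at which a person is required; that absence, rather than any single algorithm, is what reducing the identify-to-act interval to seconds actually describes.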
The U.S. Marine Corps has already begun deploying Starshield across its units. Space has ceased to be a neutral commons. It is now the command layer of Earth’s information economy. Those who control orbital infrastructure control every signal that passes through it.
THE COUNTER-ARCHITECTURE: DECENTRALISATION AS TECHNICAL POSSIBILITY AND POLITICAL OBSTACLE
Against this consolidation of control, three categories of autonomous technological infrastructure demonstrate that decentralization remains technically possible. Community microgrids are small-scale local power systems that can offer greater efficiency, reliability, and sustainability than traditional grid infrastructure. Hackney’s solar microgrid pilot, deployed in 2024-2025, installed 2,000 solar panels across 28 residential estates, with residents accessing cheaper power while the infrastructure operator recouped costs through energy revenue. The model, if scaled to the UK’s 5.4 million apartments, could generate approximately 6.75 gigawatts of solar capacity worth £13.5 billion in clean energy investment.
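Those headline figures are easier to judge when unpacked. The check below simply divides the numbers already quoted above; the per-flat capacity and cost it derives are implied by those figures, not independently sourced.

```python
# Back-of-envelope check of the scaling claim, using only the figures quoted above.
flats = 5_400_000                    # UK apartments cited in the text
capacity_gw = 6.75                   # projected solar capacity
investment_gbp = 13_500_000_000      # projected clean energy investment

kw_per_flat = capacity_gw * 1e6 / flats                 # 1.25 kW of panels per flat
gbp_per_flat = investment_gbp / flats                   # £2,500 per flat
gbp_per_watt = investment_gbp / (capacity_gw * 1e9)     # £2.00 per installed watt

print(f"{kw_per_flat:.2f} kW per flat, £{gbp_per_flat:,.0f} per flat, £{gbp_per_watt:.2f} per watt")
```

At roughly 1.25 kW and £2,500 per flat, about £2 per installed watt, the projection sits in the ordinary range for UK rooftop solar rather than relying on heroic assumptions.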
Solid, a web decentralization project led by Tim Berners-Lee and now under stewardship of the Open Data Institute (which assumed the role in October 2024), allows users to store personal data in “pods” (Personal Online Data Stores) where they maintain complete ownership and control, with applications requesting permission to access specific data rather than harvesting it automatically. Wireless community networks, including Guifi.net in rural Catalonia (which had grown to 23,000 nodes by 2017), have demonstrated that volunteer-operated mesh networks can provide connectivity in underserved regions without reliance on commercial ISPs.
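The inversion Solid proposes, in which the application asks and the pod answers, can be shown in a few lines. This is a conceptual sketch only: the class and method names are invented for illustration and are not the Solid protocol’s actual vocabulary or any real client library’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Pod:
    """Toy model of a personal online data store: the user holds the data, applications hold grants."""
    owner: str
    data: dict = field(default_factory=dict)      # resource path -> content
    grants: dict = field(default_factory=dict)    # application id -> set of resource paths it may read

    def grant(self, app_id: str, resource: str) -> None:
        self.grants.setdefault(app_id, set()).add(resource)

    def read(self, app_id: str, resource: str) -> str:
        # The application never harvests; it asks, and the pod checks the grant it was given.
        if resource not in self.grants.get(app_id, set()):
            raise PermissionError(f"{app_id} has no grant for {resource}")
        return self.data[resource]

pod = Pod(owner="alice", data={"/profile/name": "Alice", "/health/sleep": "7h average"})
pod.grant("fitness-app", "/health/sleep")
pod.read("fitness-app", "/health/sleep")     # permitted: access scoped to a single resource
# pod.read("fitness-app", "/profile/name")   # raises PermissionError: no grant, no data
```

In the default web model the equivalent of read() runs on the provider’s servers against data the provider already holds; the permission check, where one exists, is a policy document rather than a line of code under the user’s control.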
Yet each encounters regulatory gravity calibrated to favor incumbents. Energy laws restrict who can operate grid infrastructure. Spectrum licensing limits who can deploy wireless mesh networks. Data protection regimes, for their part, extend state authority over what qualifies as special category data and who may lawfully process it. Innovation survives only in the cracks of regulation designed to prevent precisely such innovation.
WHAT EMERGES: OBEDIENCE BY DESIGN
What emerges is not dystopia by decree but obedience by design. Each new efficiency—each app, each ID check, each satellite handshake—tightens the circuitry a little further. The architecture doesn’t announce itself because it doesn’t need to. It already runs beneath every screen, every bank balance, and soon enough, every pulse. The system isn’t broken. It’s working exactly as designed.
