The Invisible Inspectors: How Artificial Intelligence Is Quietly Reinventing Biological Weapons Monitoring
The Biological Weapons Convention has no verification mechanism. AI might be about to change that — and the Trump administration has just made it official U.S. policy.
Summary
The Biological Weapons Convention (BWC), the foundational international treaty prohibiting biological weapons, has operated for more than fifty years without any formal verification mechanism. No inspectors, no mandatory declarations, and no independent body empowered to confirm compliance. This essay argues that artificial intelligence (AI) is now making a new approach feasible, one that works not through physical inspections but through continuous automated analysis of the digital and physical footprint that modern biotechnology inevitably generates.
Six distinct monitoring capabilities constitute this emerging architecture. Genomic surveillance systems scan public DNA sequence repositories for statistical signatures of artificial genetic engineering. Open-source intelligence (OSINT) analysis applies natural language processing (NLP) to mine the global scientific literature, patent databases, funding records, and informal technical communities to detect research trajectories converging on dangerous biological capabilities. Supply chain monitoring analyzes DNA synthesis orders, equipment procurement records, and biological material transfers for acquisition patterns consistent with weapons-relevant programs. Environmental monitoring deploys AI-enabled biosensor networks to detect biological signatures in air, water, and surfaces at strategic locations. Behavioral and financial analysis maps funding flows, organizational networks, and procurement transactions to identify patterns consistent with covert activity. And predictive modeling uses epidemic simulation and agent-based modeling to distinguish natural outbreaks from deliberate events and to identify vulnerabilities in existing surveillance systems.
Each capability addresses a different dimension of the biological weapons threat, and each has distinct strengths and limitations. Together, and particularly when their outputs are integrated, they offer the international community something it has never previously possessed: a continuously operating, evidence-based early warning system for biological weapons-related activity.
These monitoring concepts have moved from academic discussion to active U.S. foreign policy. In September 2025, President Donald Trump told the United Nations General Assembly (UNGA) that his administration would pioneer an AI-based verification system for the BWC. His State Department has since elaborated specific applications at diplomatic venues in Geneva, and the initiative is expected to be a centerpiece of the 2026 BWC Review Conference.
To illustrate how this six-layer architecture might work in practice, the essay includes a detailed case study applying each monitoring layer retrospectively to the origins of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), focusing specifically on the research networks, institutional relationships, and genomic evidence surrounding the Wuhan Institute of Virology (WIV). Particular attention is given to the role of the University of California, Davis (UC Davis) and its One Health Institute (OHI), which served as the programmatic anchor of the U.S.-WIV research network through the USAID PREDICT program and has received comparatively little scrutiny in public discussions of the network’s governance. The case study demonstrates both the considerable analytical power of the integrated architecture and the governance and transparency challenges that must be resolved before it can fulfill its potential. The essay concludes by examining what the road ahead requires in terms of diplomatic persistence, international cooperation, and institutional investment.
Introduction
There is a quiet revolution happening at the intersection of artificial intelligence and global biosecurity, and until recently almost nobody outside a small community of arms control specialists and bioinformaticians was paying attention to it. That changed on September 23, 2025, when President Donald Trump stood before the United Nations General Assembly and made an announcement that surprised much of the diplomatic world.
The Biological Weapons Convention, signed in 1972 and now counting more than 180 states parties, represents one of humanity’s most ambitious attempts to ban an entire category of weapons. It prohibits the development, production, and stockpiling of biological agents for offensive purposes. It was the first multilateral treaty to outlaw an entire class of weapons of mass destruction (WMD). And for more than fifty years, it has operated without any formal verification mechanism whatsoever.
No inspectors. No mandatory declarations. No sampling protocols. Nothing analogous to the International Atomic Energy Agency’s (IAEA) safeguards regime or the Chemical Weapons Convention’s (CWC) inspection framework. When states parties meet, they exchange confidence-building measures (CBMs), which are voluntary declarations about research facilities and disease outbreaks, but there is no independent body with the authority or the tools to verify that anyone is actually complying. The BWC, for all its moral authority, has always been something of an honor system operating in one of the most consequential domains imaginable.
Into this long-standing gap stepped an unanticipated advocate. Addressing the General Assembly, Trump pledged that the United States would lead a global effort to enforce the BWC, telling assembled world leaders that his administration would spearhead the creation of an AI-based verification system. “My administration will lead an international effort to enforce the biological weapons convention,” Trump said. “We will do so by pioneering an AI verification system that everyone can trust.”¹ The announcement represented a significant and welcome shift in U.S. policy, bringing the world’s leading AI power squarely behind the cause of strengthening a treaty that has long needed exactly this kind of high-level political commitment.
The Trump initiative did not emerge in isolation. It reflects a growing recognition across government, industry, and the research community that the convergence of AI and biotechnology is simultaneously creating new tools for monitoring biological threats and new threats that existing governance frameworks are not equipped to address. The same AI capabilities that can detect engineered pathogens in genomic databases can also help a moderately skilled actor design one. The same synthesis technologies that enable vaccine development lower the barrier to acquiring dangerous biological materials. The administration’s response, as elaborated by the State Department in subsequent diplomatic engagements, has focused specifically on the monitoring capabilities that biosecurity researchers have been developing for years.
Six distinct capabilities constitute this emerging monitoring architecture. Each addresses a different dimension of biological weapons-related activity. Each has its own strengths and limitations. And together, they amount to something genuinely new: a continuously operating, AI-driven early warning system for biological threats that the world has never had before.
Layer One: Genomic Surveillance and Bioinformatics Analysis
Modern biotechnology leaves digital fingerprints. Every time a researcher sequences a pathogen, engineers a genetic construct, or characterizes a novel protein, that work generates digital records. Deoxyribonucleic acid (DNA) sequences are deposited in public repositories, including GenBank, the European Nucleotide Archive (ENA), and the DNA Data Bank of Japan (DDBJ), where they accumulate in vast, searchable databases containing hundreds of millions of entries. These sequences are shared in databases, published in papers, and transmitted through research collaborations around the world. This was always intended as a feature of open science: share your data, accelerate collective progress. What nobody fully anticipated was that this ocean of genomic information would also become a surveillance resource of extraordinary richness.²
Here is the core insight that makes genomic surveillance possible: engineered DNA looks different from naturally evolved DNA, and those differences are detectable by machine learning systems trained to recognize them. AI systems can analyze large volumes of genomic data to detect unusual patterns, such as gene combinations that are unlikely to occur in nature, or sequences optimized for high expression in ways that suggest engineering rather than evolution.
Evolution is a messy, undirected process. Natural genomes carry the accumulated signatures of millions of years of mutation, selection, horizontal gene transfer, and genetic drift. They contain redundancies, inefficiencies, and idiosyncrasies that reflect their history rather than any design logic. Engineered sequences, by contrast, tend to be optimized. Researchers use codon optimization to increase protein expression in specific host organisms, replacing naturally occurring codons with alternatives that the target cell’s ribosomes will process more efficiently. They combine genetic elements from multiple organisms that would never naturally exchange DNA. They insert standardized regulatory elements, including promoters, terminators, and ribosome binding sites, drawn from a relatively small toolkit of commonly used molecular biology components. They leave behind physical traces of the cloning techniques used to assemble the construct: restriction enzyme sites, assembly scars from techniques such as Gibson assembly or Golden Gate cloning, and plasmid backbone fragments.³
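To make the idea of assembly traces concrete, the deliberately simplified sketch below scans a DNA sequence for a handful of well-known molecular biology motifs: a T7 promoter, a BsaI recognition site used in Golden Gate cloning, and an EcoRI restriction site. The motif list is a toy stand-in for the far larger signature databases a real screening system would use, and the example construct is invented.

```python
# Sketch: scan a DNA sequence for common engineering signatures.
# The motif list is illustrative, not a vetted screening database.
ENGINEERING_MOTIFS = {
    "T7 promoter": "TAATACGACTCACTATAG",
    "BsaI site (Golden Gate)": "GGTCTC",
    "EcoRI site": "GAATTC",
}

def scan_for_signatures(seq: str) -> list[tuple[str, int]]:
    """Return (motif name, position) for every engineering motif found."""
    seq = seq.upper()
    hits = []
    for name, motif in ENGINEERING_MOTIFS.items():
        start = seq.find(motif)
        while start != -1:
            hits.append((name, start))
            start = seq.find(motif, start + 1)
    return hits

# An invented construct: filler, then a T7 promoter, then a BsaI site.
construct = "AAGG" + "TAATACGACTCACTATAG" + "GGTCTC" + "ATGC"
print(scan_for_signatures(construct))
```

A sequence carrying several such motifs in close proximity is not proof of anything, but it is exactly the kind of low-cost first-pass signal that feeds the statistical models described next.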
Machine learning models can flag anomalies in genetic design, identify sequences that incorporate virulence factors from multiple pathogens, or detect signatures associated with laboratory manipulation. Deep neural networks (DNNs) and probabilistic sequence models can learn to recognize statistical patterns across thousands of such features simultaneously, comparing newly deposited sequences against the full distribution of natural genomes and flagging those that deviate in ways consistent with artificial design.⁴ These tools do not prove intent, but they can identify cases that warrant closer review. In other words, AI helps find the needle in the haystack by scanning vast datasets that no human team could review manually.
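The statistical side of this triage can be sketched with a minimal anomaly detector, assuming scikit-learn is available: represent each sequence as a vector of k-mer frequencies and score new sequences against a model fitted on background data. The random "natural" background and the compositionally biased "suspect" construct below are invented stand-ins, not real genomes.

```python
# Sketch: anomaly triage over k-mer composition. Training data is random
# stand-in DNA, not real genomes; features and model choice are illustrative.
import random
from itertools import product
from sklearn.ensemble import IsolationForest

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]  # 64 features

def kmer_vector(seq: str) -> list[float]:
    """Normalized 3-mer frequency vector for a DNA sequence."""
    counts = {k: 0 for k in KMERS}
    for i in range(len(seq) - 2):
        counts[seq[i:i + 3]] += 1
    total = max(len(seq) - 2, 1)
    return [counts[k] / total for k in KMERS]

random.seed(0)
natural = ["".join(random.choices("ACGT", k=300)) for _ in range(200)]
model = IsolationForest(random_state=0).fit([kmer_vector(s) for s in natural])

# A codon-optimized-looking construct: heavily biased composition.
suspect = "GCTGCC" * 50
scores = model.decision_function([kmer_vector(natural[0]), kmer_vector(suspect)])
print(scores)  # lower score = more anomalous; flag for expert review
```

Real systems use far richer features and models, but the workflow is the same: score everything, surface the outliers, and hand them to human experts.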
More targeted analysis goes further. AI systems can specifically search for combinations of genetic features associated with pathogen enhancement: virulence factors from unrelated organisms inserted into a new genomic context; gain-of-function (GOF) mutations that extend host range or increase transmissibility; synthetic regulatory elements designed to drive high-level expression of pathogenic genes; and reconstructed sequences assembled from fragments of historical genomes that no longer circulate naturally. Phylogenetic analysis, which traces the evolutionary relationships among sequences, can reveal when a genome appears inconsistent with any known natural lineage, suggesting that it was constructed rather than evolved.⁵
Structural biology adds another dimension. Systems such as AlphaFold, developed by DeepMind, can predict the three-dimensional structure of proteins encoded by novel sequences, allowing AI classification models to assess whether a newly designed protein resembles a known toxin or virulence factor even if its sequence differs substantially from anything previously characterized. A sequence that looks novel at the nucleotide level might still fold into a structure functionally equivalent to a dangerous protein, and that can now be detected computationally.⁶
The operational model here is borrowed directly from financial fraud detection. Just as banks use algorithmic systems to flag unusual transactions for human review rather than having investigators manually examine every credit card purchase, genomic surveillance AI flags unusual sequences for expert evaluation. The system does not make accusations. It performs triage, sorting an unmanageably large information stream into signals that warrant closer attention and background noise that does not. Human experts with deep knowledge of the relevant biology then evaluate the flagged sequences in context, determining whether they represent legitimate research, known dual-use work within established norms, or something that warrants further inquiry.
The scale advantage is decisive. Public genomic databases grow by millions of sequences each year. No team of human analysts, however large and expert, could meaningfully review this volume of data. AI systems can. And as commercial DNA sequencing becomes cheaper and faster, the volume of sequence data will only accelerate. The cost of sequencing a human genome has fallen from roughly three billion dollars in 2003 to under a thousand dollars today, a decline that illustrates the pace of change in this field.⁷ The monitoring challenge grows continuously; AI is currently the only plausible response.
Layer Two: Open-Source Intelligence Monitoring
Biological weapons development does not begin in a laboratory. It begins with ideas, plans, and technical knowledge, and a remarkable amount of scientific and technical information is publicly available. In the modern era, an enormous proportion of that knowledge circulates openly in the scientific literature, patent databases, conference proceedings, and online technical communities. This is simultaneously one of the great strengths of open science and one of its most challenging security implications.
The global life-science literature now grows by hundreds of thousands of papers per year. PubMed alone indexes more than thirty-five million biomedical citations. Preprint servers such as bioRxiv and medRxiv add tens of thousands of additional papers monthly, often before formal peer review. Patent databases maintained by the World Intellectual Property Organization (WIPO) and national patent offices contain detailed technical descriptions of biotechnology innovations that frequently exceed the specificity of academic publications. Funding databases published by governments and research councils document the financial flows sustaining the entire enterprise. Informal technical discussions proliferate across conference presentations, specialized forums, and social media.⁸
No human institution can monitor this information environment comprehensively. AI can.
AI-driven NLP systems can scan scientific publications, patent filings, preprint servers, and even online forums to identify emerging research areas with dual-use potential. These systems can be trained and deployed to read scientific literature at scale, extracting concepts, identifying research themes, and tracking how they evolve over time. The most basic application is automated text mining: continuously ingesting publications and identifying those that discuss technical capabilities of potential dual-use concern, including aerosolization techniques, environmental stability enhancement, immune evasion strategies, host-range modification, and large-scale pathogen cultivation methods.⁹ Algorithms can track trends in work related to aerosol stability, environmental persistence of pathogens, or methods that could bypass existing medical countermeasures.
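As a toy illustration of literature triage, the sketch below trains a TF-IDF classifier to score abstracts against dual-use-relevant themes. The four-abstract corpus and its labels are invented purely for illustration; a real system would train on curated corpora and rely on contextual language models rather than bag-of-words features.

```python
# Sketch: TF-IDF triage of abstracts against dual-use-relevant themes.
# The tiny corpus and labels are invented; real systems need curated
# training data and contextual language models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

abstracts = [
    "aerosol stability of viral particles under varied humidity",
    "enhanced environmental persistence of a bacterial agent",
    "crystal structure of a plant photosynthesis enzyme",
    "genome assembly of a deep-sea sponge symbiont",
]
labels = [1, 1, 0, 0]  # 1 = dual-use-relevant theme, 0 = not

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(abstracts), labels)

new_paper = ["improving aerosol persistence of engineered viral vectors"]
score = clf.predict_proba(vec.transform(new_paper))[0, 1]
print(round(score, 2))  # higher = closer to the flagged themes
```

The point is not that keyword statistics suffice (they do not, as the next paragraph explains) but that scoring and ranking at scale is what makes comprehensive monitoring tractable at all.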
But sophisticated NLP goes well beyond keyword detection, which is easily defeated by researchers who simply avoid flagged terminology. Modern language models can interpret semantic context, understanding not just that a paper discusses viral receptor binding, but whether that discussion is oriented toward vaccine design, basic virology, or something that looks more like transmissibility enhancement. The same technical concept can appear in radically different research contexts, and AI systems can now distinguish those contexts with meaningful reliability.¹⁰
This contextual interpretation extends to research intent signals embedded in how scientists write about their work. The framing of a paper, its stated objectives, its choice of experimental models, and the way it discusses potential applications all carry information about what the researchers are actually trying to accomplish. A group describing aerosol stability experiments in the context of improving inhaled vaccine delivery reads differently from a group studying the same phenomenon in the context of environmental persistence of pathogenic agents. AI models trained on large corpora of scientific literature can learn to make these distinctions.
Network analysis can map collaborations and identify clusters of activity that might merit further scrutiny. Scientific papers, grants, patents, and conference presentations all carry author and institutional affiliation information. AI systems can construct large-scale knowledge graphs linking researchers, laboratories, funding sources, and publications, then apply network analysis to identify clusters of collaboration converging on sensitive technical areas. A group of laboratories independently working on pathogen enhancement, aerosol dispersal, and large-scale fermentation might not individually raise concerns, as each focus area has legitimate research applications. But the simultaneous convergence on all three is a pattern worth flagging for expert review.¹¹ This is not about policing legitimate science. It is about recognizing patterns that, taken together, could indicate activities inconsistent with BWC obligations.
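The convergence logic described above can be sketched with a small graph analysis, assuming the networkx library. The laboratories, links, and topic tags below are entirely invented; the point is the structural test, flagging a cluster only when its members jointly cover all of the sensitive focus areas.

```python
# Sketch: flag collaboration clusters that collectively span several
# sensitive capability areas. Labs, links, and topic tags are invented.
import networkx as nx

TOPICS_OF_CONCERN = {"pathogen enhancement", "aerosol dispersal",
                     "large-scale fermentation"}

G = nx.Graph()
G.add_node("Lab A", topics={"pathogen enhancement"})
G.add_node("Lab B", topics={"aerosol dispersal"})
G.add_node("Lab C", topics={"large-scale fermentation"})
G.add_node("Lab D", topics={"plant genomics"})
G.add_edges_from([("Lab A", "Lab B"), ("Lab B", "Lab C")])  # shared grants/papers

flagged = []
for cluster in nx.connected_components(G):
    covered = set().union(*(G.nodes[n]["topics"] for n in cluster))
    if TOPICS_OF_CONCERN <= covered:  # cluster jointly spans all three areas
        flagged.append(sorted(cluster))
print(flagged)
```

No single laboratory in the flagged cluster is doing anything unusual on its own; it is the joint coverage of the cluster that triggers review, which is exactly the pattern the paragraph above describes.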
Patent analysis deserves special attention because patent applications frequently contain far more technical detail than academic publications. Inventors are required to fully disclose their innovations to receive patent protection, which means patent databases often function as a comprehensive technical library of the biotechnology sector’s actual capabilities. NLP systems mining patent claims can track the emergence of new synthesis methods, novel delivery technologies, or fermentation processes with potential dual-use implications, often before those capabilities appear in the peer-reviewed literature.¹²
Monitoring informal channels is increasingly important as technical communities discuss emerging methods in places that were never designed for scientific communication, including specialized online forums, messaging platforms, and preprint comment sections. NLP systems capable of processing these informal channels can detect emerging technical discussions long before they crystallize into publications, providing earlier warning signals about where the research frontier is moving.
Discourse analysis is subtler but potentially valuable. AI models can track how scientific communities discuss controversial experiments, whether researchers are raising ethical or safety concerns, whether there is debate within fields about the appropriateness of certain lines of research, and whether official narratives about research programs match what scientists are actually saying to each other in technical venues. Shifts in discourse can precede changes in research direction, making them early warning signals in a more literal sense.¹³
The integrated picture that emerges from combining all these OSINT streams is a continuously updated map of global biotechnology activity, not just what is being published, but who is working with whom, what they are being funded to do, what technologies they are developing, and how those trajectories relate to each other. The goal is not accusation but situational awareness: helping the international community understand where concerning convergences are developing while there is still time for diplomatic engagement or expert dialogue.
Layer Three: Supply Chain and Procurement Monitoring
The first two monitoring layers work primarily in the digital domain, analyzing genomic data and textual information that flows across the internet. The third layer is different. It engages with the physical dimension of biotechnology: the materials, equipment, and specialized inputs that any biological research program, including a weapons program, must acquire in the real world. Biological weapons programs require equipment and materials, including fermenters, specialized containment systems, DNA synthesis platforms, and certain reagents, and AI systems can analyze global trade data and procurement records to detect unusual purchasing patterns.
This is where AI-enabled monitoring becomes most directly operational, and most consequential. It is also the layer that the Trump administration’s State Department has most explicitly endorsed. Speaking at a BWC Meeting of States Parties (MSP) side event in Geneva in December 2025, Under Secretary for Arms Control and International Security Thomas DiNanno specifically identified AI-assisted supply chain monitoring and DNA synthesis screening as priority applications for U.S.-led international cooperation under the Convention.¹⁴
DNA synthesis screening is the most developed component of this layer, and arguably the single most important chokepoint in the entire monitoring architecture. The commercial gene synthesis industry allows researchers anywhere in the world to order custom DNA sequences and receive physical DNA within days. The industry has grown dramatically and costs have fallen precipitously, making custom gene synthesis accessible to university laboratories, small biotechnology startups, and, potentially, bad actors.¹⁵ Synthesis orders that are inconsistent with a customer’s declared research profile can be flagged for review.
Major commercial synthesis providers already operate automated screening systems that compare customer orders against databases of dangerous sequences, including select agent genomes, toxin genes, and regulated pathogen sequences. The International Gene Synthesis Consortium (IGSC), a voluntary industry body, has developed screening standards that member companies commit to follow. But basic screening has important limitations. A simple sequence lookup will miss constructs that are functionally dangerous but differ at the nucleotide level from known threat sequences. It will also miss cases where a bad actor orders components of a dangerous genome in separate pieces from different providers, assembling them after receipt. And it will miss providers operating in jurisdictions without screening requirements or enforcement.¹⁶
AI addresses each of these gaps. Machine learning models trained on the functional biology of dangerous sequences can recognize novel sequences that encode similar capabilities even when they share limited sequence identity with known threats, catching evasion attempts that simple lookup systems miss. Pattern recognition across order histories can detect the progressive assembly of dangerous constructs across multiple separate orders, even from different providers, by analyzing the aggregate picture rather than each order in isolation. And international coordination mechanisms supported by AI analysis can help raise screening standards across the industry globally, reducing the benefit of routing orders through less regulated providers.¹⁷
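The split-order problem in particular lends itself to a simple illustration. The sketch below computes how much of a sequence of concern a customer could assemble from their cumulative order history. Exact substring matching is a deliberate simplification (real screening works at the level of homology and function), and the target and fragments are invented stand-ins.

```python
# Sketch: detect progressive assembly of a sequence of concern across
# separate synthesis orders. Exact matching is a deliberate simplification;
# real screening reasons about homology and function, not identity.
def coverage_of_target(target: str, ordered_fragments: list[str]) -> float:
    """Fraction of the target sequence covered by a customer's order history."""
    covered = [False] * len(target)
    for frag in ordered_fragments:
        start = target.find(frag)
        while start != -1:
            for i in range(start, start + len(frag)):
                covered[i] = True
            start = target.find(frag, start + 1)
    return sum(covered) / len(target)

target = "ATGCATGCCGTAGCTAGCTA"    # stand-in for a regulated sequence
orders = ["ATGCATGC", "CGTAGCTA"]  # two innocuous-looking fragments
cov = coverage_of_target(target, orders)
print(f"{cov:.0%} of the target is now synthesizable from past orders")
```

Neither fragment would trip a per-order lookup, but the aggregate view shows the customer closing in on the full construct, which is the signal that cross-order pattern analysis is designed to surface.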
Equipment and reagent procurement monitoring extends supply chain surveillance into adjacent domains. By integrating shipping records, customs data, and end-user certifications, AI can identify anomalies across borders and over time. A facility acquiring industrial-scale fermentation capacity, combined with lyophilization equipment for stabilizing biological materials and systems for generating respirable aerosols, is assembling a capability profile that differs from routine pharmaceutical manufacturing or vaccine production in detectable ways.¹⁸
Front-company procurement networks, in which the actual end user of sensitive materials or equipment is concealed behind multiple layers of intermediaries, have been a standard technique of proliferators in other weapons domains for decades. AI-powered network analysis can map the relationships between suppliers, intermediaries, and apparent end users, identifying structures that are inconsistent with normal commercial transactions and suggestive of deliberate concealment.¹⁹ Emerging technologies such as blockchain and sensor-enabled logistics could further enhance transparency by tracking sensitive materials from manufacturer to end user. This kind of monitoring strengthens compliance while still allowing legitimate research and industry to operate.
Biological material transfers represent an additional supply chain monitoring domain. Pathogen samples, cell lines, and other biological materials move constantly between research institutions for legitimate scientific purposes, and most jurisdictions require some form of permitting and documentation for transfers of regulated materials. AI systems can cross-reference transfer records against research profiles, publication histories, and institutional capabilities, flagging transfers that appear inconsistent with a recipient’s known research program or that involve materials with limited legitimate civilian applications. AI-assisted inventory management systems can continuously compare holdings against access records and transfer logs at biological repositories, providing early warning if materials are accessed by unauthorized individuals or if inventory discrepancies develop.²⁰
Layer Four: Environmental Monitoring and Biosensor Networks
A fourth and increasingly promising monitoring capability involves AI-enabled biosensors deployed in strategic locations, including urban centers, transportation hubs, and regions of particular geopolitical concern. These sensors can continuously sample air, water, or surfaces for biological signatures. Machine learning models can then analyze the data in real time to distinguish between the background presence of naturally occurring microorganisms and patterns that might suggest unusual biological activity or deliberate release.²¹
The analytical power of this layer lies in its ability to compare observed environmental signatures against detailed computational models of natural disease spread. Dispersion patterns, geographic clustering, and concentration levels can all be evaluated against baseline expectations derived from epidemiological modeling. When observed patterns deviate significantly from what natural disease dynamics would predict, those deviations become signals warranting investigation. A disease cluster that appears too geographically concentrated, spreads too rapidly in a pattern inconsistent with person-to-person transmission, or exhibits an unusual pathogen signature can be distinguished from a natural outbreak with a precision that traditional epidemiological surveillance cannot match.
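The deviation-detection logic at the heart of this layer can be sketched very simply. The toy alert rule below compares each biosensor reading against a trailing seven-day baseline and flags sharp departures; the count series and z-score threshold are invented for illustration, and real systems would model pathogen-specific and seasonal baselines.

```python
# Sketch: z-score alerting on a biosensor count series against a trailing
# baseline. Readings and threshold are invented for illustration.
import statistics

def alerts(readings: list[float], window: int = 7,
           z_threshold: float = 3.0) -> list[int]:
    """Indices where a reading deviates sharply from the trailing baseline."""
    out = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.pstdev(baseline) or 1.0  # avoid divide-by-zero
        if (readings[i] - mu) / sigma > z_threshold:
            out.append(i)
    return out

# Ten days of stable background counts, then a sharp spike.
series = [12, 11, 13, 12, 12, 14, 11, 12, 13, 12, 48]
print(alerts(series))
```

An alert like this proves nothing by itself; its value, as the following paragraphs argue, lies in triggering targeted genomic and OSINT follow-up on the flagged location.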
Satellite imagery and drone-based monitoring add a structural dimension to environmental surveillance. AI systems trained on facility imagery can detect unusual infrastructure changes at research or production facilities, including modifications to ventilation systems, new construction inconsistent with declared research activities, or operational patterns that diverge from what normal research programs would generate. These remote sensing capabilities allow continuous passive monitoring of facilities of concern without requiring physical access or the consent of the host state.²²
Importantly, these technologies are not about constant surveillance of populations. They are about early detection and situational awareness, allowing public health and security institutions to respond quickly to emerging threats, whether natural or deliberate. The same biosensor networks that could detect an unusual release of an engineered pathogen provide continuous public health benefits by enabling earlier detection of naturally occurring disease outbreaks. This dual benefit is significant from a governance perspective, as it creates incentives for broad international participation that purely security-oriented monitoring would not generate.
The integration of environmental monitoring with the genomic and OSINT layers described above creates particularly powerful analytical combinations. A biosensor alert indicating unusual biological activity in a given location can trigger targeted genomic analysis of collected samples, while simultaneously prompting OSINT systems to scan for recent publications, procurement activity, or facility changes in the relevant geographic area. These convergent signals, arriving through independent channels, provide a quality of situational awareness that no single monitoring layer can achieve alone.
Layer Five: Behavioral and Financial Analysis
Clandestine programs, whatever their domain, leave organizational and financial traces, and biological weapons programs are no exception. AI tools applied to financial flows, procurement transactions, and organizational networks can identify patterns that are consistent with covert activity even when no single transaction or relationship is individually conclusive.²³
Unusual funding channels represent one important signal. Legitimate research programs typically display funding patterns that are consistent with their institutional affiliations, publication records, and declared research objectives. Programs that draw funding through opaque channels, shell companies, or intermediary organizations whose stated purposes are inconsistent with biological research can be identified through financial network analysis. Repeated transactions tied to dual-use materials, particularly when they involve entities with no established research profile in the relevant area, represent another category of signal that AI systems can surface from large financial datasets.
Social network analysis extends this capability to the organizational domain. By mapping relationships among researchers, institutions, suppliers, and funding sources, AI systems can identify clusters with high dual-use potential that might not be apparent from examining any single relationship in isolation. A network of researchers who collectively span the technical capabilities required for a biological weapons program, connected through shared funding sources, equipment suppliers, or publication collaborations, represents a structural pattern worth examining even if no member of the network has done anything individually concerning.²⁴
The behavioral dimension of this analysis is subtler but potentially valuable. Researchers and institutions engaged in legitimate science behave in ways that are broadly consistent with the norms of open scientific practice: they publish their results, present at conferences, share data with collaborators, and engage transparently with regulatory and oversight bodies. Systematic deviations from these behavioral norms, such as unusually low publication rates relative to funding levels, withdrawal from international collaborations, or patterns of data withholding, can be detected by AI systems monitoring the scientific ecosystem and may indicate research programs that are not what they appear to be.
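One such behavioral signal, publication output far below what funding levels predict, can be sketched as a simple residual analysis, assuming NumPy. All institutions and figures below are invented; a real analysis would control for field, secrecy norms, and institution type before treating any residual as meaningful.

```python
# Sketch: flag institutions whose publication output falls far below what
# their funding level predicts. All figures are invented for illustration.
import numpy as np

# (funding in $M, publications over the same period)
institutions = {
    "Inst 1": (5, 40), "Inst 2": (10, 85), "Inst 3": (20, 160),
    "Inst 4": (8, 70), "Inst 5": (15, 10),  # well funded, barely publishes
}
funding = np.array([v[0] for v in institutions.values()], dtype=float)
pubs = np.array([v[1] for v in institutions.values()], dtype=float)

slope, intercept = np.polyfit(funding, pubs, 1)   # linear baseline
residuals = pubs - (slope * funding + intercept)
threshold = residuals.mean() - 1.5 * residuals.std()

flagged = [name for name, r in zip(institutions, residuals) if r < threshold]
print(flagged)
```

The flagged institution is not accused of anything; it is simply an outlier against the ecosystem-wide baseline, one strand of the convergent evidence picture described below.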
When combined with the other monitoring layers, behavioral and financial analysis contributes to the convergent evidence picture that is the architecture’s greatest strength. A facility whose procurement patterns are unusual, whose publication record is inconsistent with its stated research mission, whose funding flows through opaque channels, and whose research themes cluster around dual-use capabilities presents a very different risk profile than a facility that triggers concern on only one of these dimensions.
Layer Six: Simulation and Predictive Modeling
The five monitoring layers described above are fundamentally reactive in orientation: they detect patterns in existing data and flag them for human review. The sixth capability operates differently. AI-driven simulation and predictive modeling allow analysts to reason prospectively about biological threats, assess the plausibility of observed events, and identify vulnerabilities in surveillance systems before those vulnerabilities are exploited.²⁵
Agent-based models and epidemic simulations allow analysts to compare observed disease patterns with expected ones derived from natural outbreak dynamics. If an outbreak’s characteristics, including its spread, severity, genetic features, and geographic distribution, deviate significantly from what natural scenarios would predict, that discrepancy may signal the need for deeper investigation.
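The comparison reduces, in its simplest form, to scoring an observed case series against a model-generated expectation. The sketch below uses a discrete-time SIR model with illustrative parameters; every number, including the "observed" series, is hypothetical.

```python
# Minimal sketch: simulate expected daily cases under a natural-spillover
# SIR model, then score observed series by mean relative deviation.
# All parameters and case series below are invented for illustration.

def sir_cases(beta, gamma, n, i0, days):
    """Discrete-time SIR; returns expected daily new-infection counts."""
    s, i, new = n - i0, i0, []
    for _ in range(days):
        inf = beta * s * i / n   # new infections this step
        rec = gamma * i          # recoveries this step
        s, i = s - inf, i + inf - rec
        new.append(inf)
    return new

def deviation_score(observed, expected):
    """Mean relative deviation of observed daily cases from the model."""
    return sum(abs(o - e) / max(e, 1.0) for o, e in zip(observed, expected)) / len(observed)

expected = sir_cases(beta=0.3, gamma=0.1, n=1e6, i0=10, days=30)
natural = [e * 1.05 for e in expected]   # tracks the model closely
anomalous = [e * 3.0 for e in expected]  # grows far faster than expected
print(deviation_score(natural, expected) < deviation_score(anomalous, expected))  # → True
```

Real systems would fit parameters to surveillance data and use far richer agent-based models, but the logic is the same: a large, persistent deviation score is the signal that escalates an outbreak for deeper review.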
Predictive modeling also serves as a planning and preparedness tool. By simulating the behavior of hypothetical engineered pathogens under various release scenarios, analysts can identify the geographic locations, population densities, and environmental conditions that would make detection most difficult. This allows biosensor network designers to optimize sensor placement, helps public health planners identify the response capabilities that would be most valuable, and enables policymakers to understand which gaps in the monitoring architecture are most consequential and therefore deserve the most urgent investment.²⁶
Vulnerability analysis is a related application. AI systems can stress-test existing monitoring architectures by simulating adversarial strategies: what sequence modifications would evade genomic screening? What procurement patterns would avoid supply chain flags? What funding structures would escape financial analysis? By systematically exploring these questions, analysts can identify the weaknesses in current monitoring systems and develop countermeasures before adversaries exploit them.
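A toy version of this red-team loop is easy to state: take a deliberately naive screener that flags exact matches against a watchlist k-mer, then enumerate single-base mutants of a sequence to count how many evade it. The watchlist motif and sequence here are invented, and the screener is a strawman; the point is the stress-testing pattern, not the screening method.

```python
from itertools import product

# Invented 12-mer "of concern" and target sequence; exact-match screening
# is intentionally naive to show how stress-testing exposes its brittleness.
WATCHLIST = {"CCTCGGCGGGCA"}

def screen(seq, k=12):
    """Flag the sequence if any window exactly matches a watchlist k-mer."""
    return any(seq[i:i + k] in WATCHLIST for i in range(len(seq) - k + 1))

def evading_mutants(seq):
    """All single-base substitutions of seq that the screener misses."""
    missed = []
    for pos, base in product(range(len(seq)), "ACGT"):
        if base != seq[pos]:
            mutant = seq[:pos] + base + seq[pos + 1:]
            if not screen(mutant):
                missed.append(mutant)
    return missed

seq = "ATCCTCGGCGGGCAAT"
assert screen(seq)                # the original sequence is flagged
print(len(evading_mutants(seq)))  # → 36 (12 in-motif positions × 3 alternative bases)
```

Every substitution inside the motif evades exact matching, which is precisely the kind of quantified weakness this analysis is meant to surface; it is why practical screening systems use homology- and function-based methods rather than exact string matching.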
The integration of predictive modeling with real-time monitoring data creates a continuously updated threat assessment picture. As new sequences are deposited, new publications appear, new procurement patterns emerge, and new environmental sensor readings arrive, simulation models can be updated to reflect the current state of knowledge and recalibrate risk assessments accordingly. The result is not a static picture of known threats but a dynamic model of the evolving threat landscape that can inform both policy decisions and operational monitoring priorities.
The Integrated Picture
The true power of this six-layer architecture emerges when the layers are considered together rather than in isolation.
A sequence flagged by genomic surveillance as potentially engineered can be cross-referenced against synthesis order records to identify when, where, and by whom it was produced. A cluster of publications identified by OSINT analysis as converging on dangerous pathogen-engineering techniques can be linked to procurement records showing acquisition of relevant equipment and financial records showing unusual funding flows. A biosensor alert indicating unusual biological activity can trigger targeted genomic analysis of collected environmental samples while simultaneously prompting OSINT systems to scan for recent publications or facility changes in the relevant area. Predictive models can assess whether an emerging outbreak’s characteristics are consistent with natural disease dynamics or suggest something requiring deeper investigation. Signals that are ambiguous or inconclusive when examined in a single data stream become far more interpretable when corroborated across multiple independent sources.
This is what intelligence analysts call convergent evidence, the principle that multiple independent indicators pointing in the same direction provide confidence that cannot be achieved by any single indicator, however compelling. Applied to biological weapons monitoring, convergent evidence across all six layers constitutes something much closer to a genuine verification capability than anything the BWC has previously had available.²⁷
Case Study: Retrospective Application to the Origins of SARS-CoV-2
Introduction
No event in recent history has more powerfully demonstrated the consequences of the BWC’s verification gap than the COVID-19 pandemic. The question of whether SARS-CoV-2 emerged through natural zoonotic spillover or through some form of laboratory incident at the WIV in Wuhan, China remains one of the most consequential and contested issues in contemporary science and geopolitics. Rather than adjudicating that question here, this case study asks a different one: had the six-layer AI monitoring architecture described in this essay been operational in the years before the pandemic, what signals would it have detected, and what picture would those signals have collectively painted?
The answer, examined honestly and in detail, is that the monitoring architecture would have generated a substantial and convergent body of signals warranting serious expert review well before December 2019. Whether those signals would have proven sufficient to prevent the pandemic, or even to characterize its origins definitively in retrospect, remains uncertain. What is not uncertain is that the world would have entered the crisis with a far richer evidentiary record than it actually possessed, and that the five years of inconclusive investigation that followed might have been substantially shorter and more productive.
The institutions and individuals named in this case study, including the WIV, the Chinese Communist Party (CCP), researcher Ralph Baric of the University of North Carolina (UNC), EcoHealth Alliance (EHA), the National Institute of Allergy and Infectious Diseases (NIAID), the Defense Threat Reduction Agency (DTRA), and UC Davis and its OHI, are discussed solely in their capacity as participants in a research network whose activities would have been visible to the monitoring architecture described. The discussion of what signals an AI system would have detected is not an assertion of wrongdoing by any individual or institution. It is an illustration of how the monitoring architecture functions as an analytical tool, and why the transparency and governance frameworks surrounding dual-use research matter so profoundly.
Layer One Applied: Genomic Signals in SARS-CoV-2
The genomic features of SARS-CoV-2 present a set of analytical puzzles that a systematic AI monitoring system would have flagged for expert review, both before and immediately after the virus’s emergence.
The most discussed anomaly is the furin cleavage site (FCS) at the junction of the S1 and S2 subunits of the spike protein. This polybasic cleavage site, created by an insertion encoding the amino acid sequence PRRA, is absent in all known bat coronaviruses closely related to SARS-CoV-2, including RaTG13, which shares approximately 96.2% overall genome sequence identity with SARS-CoV-2.²⁸ Furin cleavage sites significantly enhance the ability of coronaviruses to infect human cells by enabling spike protein priming by ubiquitous host proteases, and their presence in pandemic influenza strains has long been recognized as a virulence determinant. An AI system trained on coronavirus genomics would have assigned high anomaly scores to this feature, particularly given its absence in the virus’s closest known relatives.
The codon usage pattern within the furin cleavage site insertion adds a further layer of analytical interest. The CGG-CGG (arginine-arginine) codon pair within the PRRA sequence is rarely used by coronaviruses but is commonly used in human gene expression systems for laboratory protein production. Researchers including Steven Quay and Richard Muller have noted this codon usage as potentially inconsistent with natural evolution, while others have argued that rare codon usage can occur naturally.²⁹ An AI system would not resolve this disagreement, but it would flag the combination of an absent feature in related viruses and unusual codon usage as warranting expert review.
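One simple way such a flag could be computed is a log-likelihood ratio for the codon pair under two usage tables. The arginine codon frequencies below are illustrative stand-ins, not measured usage data; they encode only the qualitative observation in the text (CGG rare in coronaviruses, common in human-optimized expression).

```python
from math import log

# Illustrative arginine codon frequencies (stand-ins, not real usage tables).
ARG_FREQ = {
    "coronavirus": {"CGT": .18, "CGC": .10, "CGA": .12, "CGG": .03, "AGA": .42, "AGG": .15},
    "human":       {"CGT": .08, "CGC": .19, "CGA": .11, "CGG": .21, "AGA": .20, "AGG": .21},
}

def pair_log_likelihood(pair, table):
    """Log-likelihood of an adjacent codon pair, assuming independent usage."""
    return sum(log(table[c]) for c in pair)

pair = ("CGG", "CGG")  # the arginine-arginine pair in the PRRA insertion
llr = (pair_log_likelihood(pair, ARG_FREQ["human"])
       - pair_log_likelihood(pair, ARG_FREQ["coronavirus"]))
print(round(llr, 2))  # positive: the pair is far more typical of human-optimized usage
```

A positive ratio does not resolve the origins debate, exactly as the text notes; it is simply a quantitative trigger for expert review, and its magnitude depends entirely on the usage tables chosen.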
The receptor binding domain (RBD) of SARS-CoV-2’s spike protein presents additional anomalies. The RBD shows exceptionally high affinity for the human angiotensin-converting enzyme 2 (ACE2) receptor, higher than would be predicted from the overall sequence similarity with RaTG13. Structural analyses using AlphaFold and related tools reveal that the RBD is optimized for human ACE2 binding in ways that appear inconsistent with recent natural adaptation to human hosts, a finding that several research groups have noted without reaching consensus on its implications.³⁰ An AI system combining phylogenetic analysis with structural prediction would have identified this as a significant anomaly.
Separately, work conducted prior to the pandemic by researchers at the WIV, UNC, and collaborating institutions had generated chimeric coronaviruses combining spike proteins from bat coronaviruses with backbone sequences from other strains. A 2015 paper by Menachery and colleagues, including Baric and WIV researcher Shi Zhengli, described the construction of a chimeric virus using the spike protein of SHC014, a bat coronavirus, inserted into a mouse-adapted SARS backbone. The resulting chimera was shown to replicate efficiently in human airway cells.³¹ An AI genomic surveillance system monitoring newly deposited sequences would have identified this published chimeric construct as a high-priority dual-use signal, noting that the experimental approach demonstrated technical capability directly relevant to the creation of human-adapted coronaviruses.
Critically, the genomic surveillance layer would also have been positioned to detect the removal of the WIV’s PREDICT coronavirus sequence database in September 2019, which had contained records of more than 22,000 wildlife samples and associated virus sequences collected over years of field work, much of it generated under the UC Davis-led PREDICT program. The UC Davis OHI maintained an explicit archiving role for PREDICT sequence data, with program virus sequences deposited in a dedicated National Center for Biotechnology Information (NCBI) GenBank BioProject. A genomic surveillance system monitoring the completeness and consistency of this specific data archive over time would have detected the sudden unavailability of the WIV database as an anomalous data access event in real time, prompting queries about its removal at exactly the moment when its contents would have been most analytically valuable.³²
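At its core, an archive-completeness monitor of this kind is just a watched count over time. The sketch below uses invented weekly record tallies standing in for periodic queries against a sequence archive such as a GenBank BioProject; any non-trivial shrinkage of the archive is raised as an access anomaly.

```python
# Minimal sketch of archive-completeness monitoring: the weekly counts are
# invented, standing in for periodic record tallies from a monitored archive.

def detect_drops(counts, tolerance=0.05):
    """Return (index, before, after) wherever the archive shrinks by more
    than `tolerance` between consecutive observations."""
    alerts = []
    for i in range(1, len(counts)):
        prev, cur = counts[i - 1], counts[i]
        if prev and (prev - cur) / prev > tolerance:
            alerts.append((i, prev, cur))
    return alerts

weekly_counts = [21800, 21950, 22100, 22100, 0, 0]  # archive goes dark
print(detect_drops(weekly_counts))  # → [(4, 22100, 0)]
```

The value of so simple a check lies entirely in running it continuously: a database removal detected the week it happens prompts questions while the underlying data may still be recoverable, rather than years later.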
Layer Two Applied: The OSINT Picture of the WIV Research Network
An OSINT monitoring system operating across the peer-reviewed literature, preprint servers, patent databases, and funding records in the years before 2019 would have constructed a detailed and analytically significant picture of the research network centered on the WIV and its international collaborators. That network was considerably broader and more institutionally complex than is commonly appreciated, and UC Davis occupied a structural position within it that was in some respects more central than that of EHA.
The published literature from the WIV’s bat coronavirus research program, led primarily by Shi Zhengli, documented a systematic and expanding effort to collect, sequence, and characterize bat coronaviruses with potential human pandemic relevance. Publications spanning from 2013 onward described the isolation of bat coronaviruses using human ACE2 as a receptor, the construction of chimeric viruses to test human infectivity potential, and the identification of bat coronavirus sequences sharing structural features with SARS-CoV-1.³³ An NLP system scanning this literature for convergence on human-relevant pathogen engineering would have identified this research cluster as a high-priority dual-use concern, not because the research was necessarily inappropriate, but because its cumulative trajectory represented a systematic capability-building program in exactly the domain most relevant to pandemic pathogen creation.
The collaboration network linking the WIV to EHA, headed by Peter Daszak, would have been prominently visible in any collaboration graph constructed from co-authorship records and grant documentation. EHA served as the primary operational coordinator for U.S. federal funding flowing to the WIV, with grants from NIAID supporting bat coronavirus surveillance and characterization work.³⁴ DTRA also provided funding for related bat virus surveillance work in Southeast Asia through grants that intersected with EHA’s programmatic activities.³⁵
Critically, however, an OSINT system would have identified that the institutional anchor of this entire programmatic enterprise was not EHA but UC Davis and its OHI, led by epidemiologist Jonna Mazet. UC Davis served as the primary grantee of the USAID PREDICT program, the multi-year, multi-hundred-million-dollar global pathogen surveillance initiative that constituted the direct organizational and financial predecessor to the Global Virome Project (GVP). Under a series of NIH grants and USAID contracts, EHA coordinated the collection of SARS-like bat coronaviruses from the field in southwest China and southeast Asia, the sequencing of these viruses, the archiving of these sequences through UC Davis, and the analysis and manipulation of these viruses, notably at UNC.³⁶ This explicit archiving role made UC Davis not merely a funding conduit but an active participant in the data management infrastructure for the entire bat coronavirus surveillance program, a node through which sequence data and sample information necessarily flowed and which an OSINT system would have identified as a high-priority point of analytical interest.
The GVP itself, the planned successor to PREDICT at dramatically greater scale, was co-founded by Mazet alongside Daszak and Dennis Carroll, the former director of USAID’s Emerging Threats Division. The GVP’s published governance documents and leadership roster, visible in the scientific literature and institutional websites, included Shi Zhengli of the WIV as a project leader, and proposed to involve BGI, China’s largest genomic sequencing company, which has documented ties to the People’s Liberation Army (PLA), as the primary sequencing partner.³⁷ An NLP system scanning these documents would have flagged the convergence of American academic leadership, Chinese government-affiliated research institutions, and PLA-connected sequencing infrastructure around a program explicitly designed to collect and catalog the world’s unknown viral diversity as a significant dual-use concern warranting enhanced oversight attention.
An AI system mapping funding flows through grant databases would have identified this multi-agency, multi-institutional funding structure, noted its convergence on a single international research node at the WIV, and flagged the combined funding levels and research scope as warranting oversight review. The overall five-year PREDICT-2 award alone totaled $138.4 million, representing one of the largest dual-use research funding streams visible to any monitoring system operating across federal grant databases.³⁸
The NIAID grant portfolio adds further analytical significance. Grant R01AI110964, awarded to EHA and supporting coronavirus surveillance and gain-of-function-adjacent research at the WIV, was renewed multiple times and expanded in scope despite ongoing internal U.S. government debates about the appropriate oversight framework for gain-of-function research. Correspondence subsequently released under Freedom of Information Act (FOIA) requests revealed that NIAID program officers were aware of the chimeric virus construction work being conducted under this grant and engaged in discussions about whether it met the definition of enhanced potential pandemic pathogen (ePPP) research requiring enhanced oversight.³⁹ An AI system monitoring grant databases, publication records, and regulatory correspondence simultaneously would have identified the gap between the research being conducted and the oversight framework being applied as a significant anomaly warranting regulatory review.
The Baric laboratory at UNC contributed essential technical capabilities to this research network. Baric’s group had pioneered reverse genetics systems for coronaviruses, enabling the reconstruction and modification of coronavirus genomes from component sequences. Publications from the Baric group, including the 2015 Menachery paper and subsequent work on coronavirus spike protein engineering, demonstrated technical capabilities that an OSINT system would have identified as directly relevant to the creation of novel human-adapted coronaviruses.⁴⁰ The pattern of collaboration between Baric’s group, the WIV, and the broader UC Davis-anchored network, visible in co-authorship records and grant documentation, would have represented one of the most significant dual-use collaboration clusters identifiable in the global coronavirus research literature.
Discourse analysis would have added further texture to this picture. In the years before the pandemic, an active debate was occurring within the scientific community about the risks of gain-of-function research on potential pandemic pathogens. Publications, conference presentations, and regulatory submissions documented biosafety experts raising concerns about the risk profile of chimeric coronavirus work being conducted at BSL-2 and BSL-3 containment, which many experts considered inadequate for work with potentially human-adapted pathogens.⁴¹ An AI system tracking this discourse would have identified a significant gap between the risk assessments being expressed within the scientific community and the oversight frameworks being applied to the research, a gap that represents exactly the kind of early warning signal the monitoring architecture is designed to surface.
Layer Three Applied: Supply Chain Signals at the WIV
Supply chain monitoring applied retrospectively to the WIV research program would have generated several categories of signal warranting investigation, with UC Davis’s programmatic role adding important additional dimensions.
The WIV’s biosafety infrastructure presents the most directly relevant supply chain question. The facility’s BSL-4 laboratory, the first in China, became operational in 2018, while much of the bat coronavirus work most relevant to SARS-CoV-2’s characteristics was reportedly conducted in BSL-2 and BSL-3 facilities.⁴² The procurement patterns associated with a research program working with potentially human-adapted bat coronaviruses at BSL-2 containment would have been flagged by an AI system as potentially inconsistent with best-practice biosafety standards, generating a signal about the mismatch between research risk profile and containment capability. Through its UC Davis-managed provision of equipment to international partner laboratories, the PREDICT program had supplied laboratory equipment to the WIV, making UC Davis a direct participant in the supply chain that determined the WIV’s research infrastructure.⁴³
The WIV’s acquisition of specialized coronavirus research equipment, including virus culture systems, aerosol characterization equipment, and humanized mouse models for infection studies, would have been visible in procurement records and import documentation. An AI system analyzing Chinese customs and import records for biological research equipment would have been able to construct a capability profile for the WIV and assess whether it was consistent with the facility’s declared research mission and published output.⁴⁴
The question of DNA synthesis orders is particularly significant. If any components of the SARS-CoV-2 genome, or of the chimeric constructs described in published and unpublished WIV research, were ordered from commercial synthesis providers, those orders would in principle be visible to a synthesis screening system. Chinese synthesis providers have not historically been subject to the same transparency and screening requirements as their counterparts in the United States and Europe, representing precisely the jurisdictional coverage gap that an internationally harmonized synthesis screening system would be designed to close.⁴⁵
The data management practices of the UC Davis-led PREDICT program would have generated a specific and important supply chain signal. PREDICT virus sequence data was deposited in a dedicated NCBI GenBank BioProject, creating a documented and auditable archive of the program’s viral discoveries. The subsequent direction by EHA leadership that certain sequences collected under the PREDICT program be excluded from this public database to avoid what they described as unwelcome attention represents a direct intervention in the informational supply chain that an AI monitoring system tracking database completeness over time would have detected as an anomalous and concerning data management event.⁴⁶ The fact that at least 11,051 samples collected by USAID-backed scientists under the PREDICT program were left in WIV freezers and never publicly sequenced represents a material gap in the supply chain record that a systematic monitoring architecture would have flagged as requiring resolution.⁴⁷
Layer Four Applied: Environmental Monitoring Around Wuhan
Environmental monitoring represents perhaps the most tantalizing counterfactual in the SARS-CoV-2 origins question, because it addresses directly the question of what a prospective detection system might have observed in the weeks and months before the outbreak was publicly identified.
Retrospective analysis of hospital admission data, internet search trends, and satellite imagery of hospital parking lots in Wuhan has suggested that unusual respiratory illness activity may have begun as early as August or September 2019, several months before the outbreak was officially recognized.⁴⁸ An AI environmental monitoring system integrating syndromic surveillance data, wastewater epidemiology signals, and biosensor readings from strategic locations in Wuhan would have been positioned to detect this early signal, potentially providing weeks or months of additional lead time for investigation.
The geographic clustering of early SARS-CoV-2 cases around the Huanan Seafood Market and, in some analyses, around the WIV and related facilities in the Wuchang district of Wuhan, is a pattern that AI-driven spatial epidemiology tools would have characterized in detail.⁴⁹ The ability to compare observed clustering patterns against computational models of natural zoonotic spillover from a wet market versus models of laboratory-associated release would have provided an analytical framework for evaluating these competing hypotheses with a rigor that the actual investigation, hampered by data access limitations, was unable to achieve.
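A bare-bones version of that comparison scores case coordinates against two hypothesized point sources under an isotropic Gaussian dispersal model. All coordinates and source locations below are invented, and straight-line distance is a deliberate simplification; a serious analysis would model mobility and contact networks.

```python
from math import log, pi

# Toy spatial comparison: invented case coordinates scored against two
# hypothetical point sources under isotropic Gaussian dispersal.

def log_likelihood(cases, source, sigma=2.0):
    """Log-likelihood of case locations under a 2D Gaussian around source."""
    sx, sy = source
    ll = 0.0
    for x, y in cases:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        ll += -d2 / (2 * sigma ** 2) - log(2 * pi * sigma ** 2)
    return ll

source_a, source_b = (0.0, 0.0), (8.0, 3.0)   # two hypothesized origins
cases = [(0.5, -0.2), (1.1, 0.4), (-0.3, 0.8), (0.9, 1.2)]

print(log_likelihood(cases, source_a) > log_likelihood(cases, source_b))  # → True
```

The output is a relative statement only: given these cases, one origin hypothesis fits better than the other. The real analytical power comes from doing this with well-curated early case data, which is exactly what the actual investigation lacked.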
Satellite imagery analysis of the WIV campus during the period from August to November 2019 has been conducted retrospectively by several research groups. Analyses published by the Australian Strategic Policy Institute (ASPI) and others identified changes in vehicle traffic patterns and facility activity at the WIV during this period that were considered potentially consistent with an unusual event at the facility, though the analyses were necessarily inconclusive given the limitations of publicly available imagery resolution.⁵⁰ A systematic AI-assisted satellite monitoring program would have provided higher-resolution, continuously updated imagery analysis rather than the retrospective and fragmentary picture that post-hoc analysis has been able to construct.
Layer Five Applied: Financial and Behavioral Signals
The financial architecture supporting the WIV research program would have generated multiple categories of signal in a behavioral and financial monitoring system, with the UC Davis-centered funding network adding substantial analytical depth to the picture.
The UC Davis OHI sat at the apex of the most significant funding chain in the network. As the primary PREDICT grantee, UC Davis was the institutional entity through which USAID funding flowed before being distributed to EHA as a core partner and subgrantee, and from EHA onward to the WIV. Between 2009 and 2019, USAID PREDICT, headed by Mazet at UC Davis, channeled approximately $1.1 million to the WIV via EHA, while NIAID contributed an additional $826,277 in direct funding to the WIV over the same period.⁵¹ An AI financial monitoring system mapping this multi-layered funding topology would have constructed a detailed picture of the flow of U.S. government funds through a sequence of institutional intermediaries, each adding a layer of distance between the funding agencies and the ultimate research activities at the WIV.
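The topology described above can be represented directly as a weighted edge list, using the dollar figures cited in the text as edge weights. The literal dictionary below is of course a stand-in for ingestion from grant databases, and the graph is simplified to the flows named here.

```python
# Funding flows from the text: USAID → UC Davis → EHA → WIV (~$1.1M) and
# NIAID → WIV ($826,277 direct). A real system would ingest grant databases.
flows = {
    ("USAID", "UC Davis"): 1_100_000,
    ("UC Davis", "EHA"): 1_100_000,
    ("EHA", "WIV"): 1_100_000,
    ("NIAID", "WIV"): 826_277,
}

def inflows(node, flows):
    """Funds reaching a node, keyed by immediate source."""
    return {src: amt for (src, dst), amt in flows.items() if dst == node}

def intermediary_depth(node, flows):
    """Length of the longest funding chain terminating at `node`."""
    preds = [s for (s, d) in flows if d == node]
    if not preds:
        return 0
    return 1 + max(intermediary_depth(p, flows) for p in preds)

print(inflows("WIV", flows))             # → {'EHA': 1100000, 'NIAID': 826277}
print(intermediary_depth("WIV", flows))  # → 3 (USAID → UC Davis → EHA → WIV)
```

Even this trivial traversal makes the analytical point concrete: the terminal node sits three institutional hops from the originating agency on one chain and one hop on another, and it is the depth of intermediation, not the dollar amounts, that creates the oversight distance the text describes.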
The behavioral signal generated by the transition from PREDICT to the GVP is particularly significant. As PREDICT was winding down, Daszak, Carroll, and Mazet of UC Davis used $1.3 million of PREDICT program funds to travel and solicit financial support for the GVP, the successor organization they were co-founding.⁵² An AI system monitoring grant compliance records and financial transactions for anomalies relative to program objectives would have flagged the use of operational program funds for successor organization development as a compliance question warranting regulatory attention.
The flow of U.S. federal funds through EHA to the WIV represents a well-documented funding pathway that an AI financial monitoring system would have mapped in detail. Between 2014 and 2019, EHA received approximately $3.1 million in NIAID funding under grant R01AI110964, a portion of which was subgranted to the WIV to support bat coronavirus surveillance and characterization.⁵³ An AI system cross-referencing this funding against the published output from the WIV, the scope of research being conducted relative to the grant’s stated objectives, and the oversight frameworks being applied would have identified several anomalies: the apparent mismatch between grant scope and research activities, the limited transparency of subgrant arrangements with a Chinese government-affiliated institution, and the regulatory ambiguity surrounding the classification of chimeric coronavirus work as ePPP research.
The DTRA funding stream adds further complexity. DTRA grants supporting bat virus surveillance work in Southeast Asia intersected with EHA’s programmatic activities and the broader research network connecting American and Chinese coronavirus researchers. An AI system mapping the full funding topology of this network, including primary grants, subgrants, and collaborative agreements across UC Davis, EHA, UNC, NIAID, DTRA, and the WIV, would have constructed a picture of a research program whose total resources, distributed governance structure, and international reach exceeded what any single oversight body was positioned to monitor comprehensively.⁵⁴
Behavioral signals from the broader network would have generated additional flags. The progressive reduction in publicly available information about the WIV’s virus collection, the absence of certain bat coronavirus sequences from published databases despite their apparent collection during field expeditions documented in grant reports, and the direction to exclude PREDICT sequences from public databases to avoid what was internally described as unwelcome attention all represent deviations from the open science norms that legitimate publicly funded research programs are expected to observe.⁵⁵ An AI system tracking data deposition patterns, publication rates relative to funding levels, and compliance with data-sharing requirements would have identified these deviations as a coherent pattern of information opacity rather than isolated incidents.
The behavior of EHA in its capacity as grant intermediary would also have generated signals. Communications subsequently disclosed through FOIA litigation revealed that EHA leadership was aware of biosafety concerns at the WIV and engaged in discussions about how to characterize the research being conducted there in the context of U.S. regulatory requirements.⁵⁶ An AI system monitoring grant compliance documentation, regulatory correspondence, and institutional communications would have identified these discussions as indicators of potential oversight gaps requiring regulatory attention.
Layer Six Applied: Simulation and the Plausibility of Origins Scenarios
Predictive modeling and simulation tools applied to the SARS-CoV-2 origins question would have contributed several important analytical capabilities.
Agent-based epidemic models calibrated to Wuhan’s urban geography and population density can be used to evaluate the plausibility of different outbreak origins scenarios. A natural spillover event originating at the Huanan Seafood Market would be expected to produce a spatial distribution of early cases centered on the market and spreading outward through established human contact networks. A laboratory-associated release event originating at the WIV campus would be expected to produce a different spatial signature, with early cases distributed around laboratory personnel and their contact networks rather than around the market. Retrospective modeling studies have attempted this analysis with the data available, reaching varied conclusions that reflect the limitations of the available case data rather than the limitations of the modeling approach.⁵⁷
Phylodynamic modeling, which uses the evolutionary relationships among early virus sequences to reconstruct outbreak timing and origin, has been applied extensively to SARS-CoV-2. These analyses generally suggest that the virus was circulating in humans from approximately October to December 2019, a finding consistent with both the natural spillover and laboratory-associated release hypotheses. However, the absence of intermediate bat coronavirus sequences that would be expected under a natural evolution scenario, combined with the unusual genomic features discussed under Layer One, represents a set of constraints that phylodynamic models consistently struggle to accommodate under natural origin assumptions.⁵⁸
Simulation of the WIV’s research activities using the published literature on chimeric coronavirus construction would have allowed analysts to assess the plausibility of SARS-CoV-2 having been created or adapted through the techniques available at the facility. Reverse genetics systems developed by the Baric laboratory and transferred to collaborating institutions, including the WIV, would in principle have been capable of generating a SARS-CoV-2-like genome from component sequences. Simulation models assessing the probability of various technical pathways would not have proven that such a pathway was followed, but they would have established its technical feasibility and allowed analysts to assign it a non-negligible prior probability that subsequent evidence could then update.⁵⁹
Crucially, simulation tools would also have been applicable to the UC Davis-centered PREDICT network itself. A simulation of the program’s data management practices, modeling the probability that significant viral sequences collected under the program remained unpublished or undisclosed at the time of the outbreak, would have quantified the evidentiary gap created by the incomplete public record of PREDICT’s discoveries. With over 160 novel coronaviruses detected by the PREDICT program and 11,051 samples remaining in WIV freezers at the time of the pandemic, simulation models could have estimated the probability that one or more of these undisclosed samples was relevant to SARS-CoV-2’s origins, providing a quantitative framework for assessing the significance of the transparency gap.⁶⁰
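Under the simplifying assumption that each undisclosed sample independently has some small probability p of containing a close SARS-CoV-2 relative, the quantity described has a closed form, 1 − (1 − p)^N. The per-sample priors below are assumptions chosen purely to show how the estimate scales; only N comes from the text.

```python
# Closed-form version of the estimate: N from the text, per-sample priors
# are illustrative assumptions, and independence is itself an assumption.
N = 11_051  # samples reported left unsequenced in WIV freezers

def p_at_least_one(p_per_sample, n=N):
    """Probability that at least one of n samples is relevant."""
    return 1 - (1 - p_per_sample) ** n

for p in (1e-5, 1e-4, 1e-3):
    print(f"p={p:g}: {p_at_least_one(p):.3f}")
```

The instructive feature is the sensitivity: with this many samples, even per-sample priors differing by an order of magnitude span the range from unlikely to near-certain, which is exactly why the transparency gap the text describes is so analytically costly.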
What the Retrospective Analysis Teaches Us
Taken together, the retrospective application of the six-layer monitoring architecture to SARS-CoV-2 origins, with particular attention to the role of UC Davis as the programmatic anchor of the U.S.-WIV research network, generates several important lessons for the design and deployment of AI monitoring systems.
First, the monitoring architecture would have generated a substantial and convergent body of signals well before December 2019. Genomic anomalies in published chimeric coronavirus research, OSINT signals from the WIV research network and its multi-institutional U.S. funding relationships centered on UC Davis, supply chain questions about biosafety infrastructure and data management practices, environmental indicators of unusual respiratory illness activity, financial signals from the multi-agency, multi-institutional funding structure, and simulation assessments of technical feasibility would all have been visible to a systematic monitoring system operating across the relevant data streams.
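The claim that individually weak signals become collectively informative can be made concrete with a small evidence-fusion sketch. Treating the six layers (unrealistically) as conditionally independent, each contributes a likelihood ratio, and the ratios multiply in odds space. All numbers below are illustrative placeholders.

```python
# Minimal sketch of naive multi-layer evidence fusion in log-odds space.
# The independence assumption and all likelihood ratios are illustrative.
import math

def fuse(prior_odds, likelihood_ratios):
    """Combine layer signals multiplicatively; return posterior probability."""
    log_odds = math.log(prior_odds) + sum(math.log(r) for r in likelihood_ratios)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Hypothetical per-layer likelihood ratios: each signal is only twice as
# likely under a covert program as under legitimate research, i.e. weak alone.
layers = {
    "genomic": 2.0, "osint": 2.0, "supply_chain": 2.0,
    "environmental": 2.0, "financial": 2.0, "simulation": 2.0,
}
p = fuse(prior_odds=1 / 1000, likelihood_ratios=layers.values())
print(f"posterior probability ~ {p:.3f}")
```

Six modest twofold signals move a 1-in-1,000 prior to roughly six percent, enough to justify expert review even though no single layer would have warranted attention on its own. Real fusion systems must also model correlations between layers, which this sketch deliberately omits.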
Second, the case powerfully illustrates why the institutional scope of monitoring must extend beyond the most obviously visible nodes in a research network. EHA and the WIV received the most public and congressional attention in the aftermath of the pandemic. But the monitoring architecture would have identified UC Davis as the institutional entity with the broadest programmatic visibility into the network’s activities, the primary custodian of its sequence data archives, and the organizational home of leadership figures who were simultaneously operating U.S. government programs and co-founding successor organizations with Chinese government-affiliated partners. The signals generated from the UC Davis node were not signals of misconduct. They were signals of governance complexity that exceeded the capacity of any single oversight body to monitor, and that an integrated AI monitoring system would have been uniquely positioned to surface.
Third, the case demonstrates that the dual-use ambiguity problem is not merely theoretical. Every genomic feature of SARS-CoV-2 that raises questions about its origins has at least a plausible natural explanation. Every concerning element of the WIV and UC Davis-anchored research program has a parallel in legitimate pandemic preparedness research conducted at institutions worldwide. The monitoring architecture does not resolve this ambiguity. What it does is ensure that the ambiguity is identified, documented, and subjected to expert review in real time rather than after a catastrophic event has already occurred.
Fourth, the case highlights the critical importance of the governance and transparency frameworks that must accompany any monitoring architecture. The signals that a monitoring system would have detected in this case were not primarily signals of obvious wrongdoing. They were signals of insufficient transparency, inadequate oversight, and governance gaps in the management of high-risk dual-use research distributed across multiple institutions and national jurisdictions. Closing those gaps is as important as the technical capabilities of the monitoring system itself.
Fifth, the SARS-CoV-2 case provides the most compelling available argument for the urgency of the monitoring architecture that the Trump administration has now committed to building. Whether the virus emerged naturally or through a laboratory incident, the world has paid an extraordinary price for the absence of the transparency and monitoring infrastructure that could have provided earlier warning, better evidence, and more effective response.
The cost of that absence, measured in millions of lives, trillions of dollars in economic disruption, and profound damage to international trust in scientific institutions, is the strongest possible argument for ensuring that the next potential outbreak is met with the full analytical capability that AI-enabled monitoring can provide.
What This Type of AI Analysis Cannot Do
It is important to be precise about the limitations of this architecture, because overstating its capabilities would be as dangerous as ignoring its limitations.
None of these systems can establish intent. Biological research is irreducibly dual-use: the most sensitive experiments in vaccine development, biodefense research, and basic virology can be nearly indistinguishable from weapons-relevant work at the level of sequences, publications, procurement patterns, and financial flows. The AI systems described here are triage tools, not verdict machines. They generate false positives and false negatives, and every flag requires expert human evaluation, contextual judgment, and, ultimately, diplomatic or political engagement with the relevant state or institution. Human expertise, diplomatic context, and multilateral oversight remain indispensable. The goal is not automated enforcement. The goal is better information to support better decisions.⁶¹
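A short worked example shows why the triage framing matters. Under illustrative (not measured) accuracy figures, even a highly accurate classifier applied to a rare target produces mostly false positives:

```python
# Worked base-rate example: why every flag needs human review. Even a
# very accurate monitor, applied to a rare threat, yields mostly false
# positives. All rates are illustrative, not measured.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(true threat | system flags) via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# A 99%-sensitive, 99%-specific monitor scanning activity where only
# 1 in 10,000 items reflects a genuine weapons-relevant program:
ppv = positive_predictive_value(0.99, 0.99, 1e-4)
print(f"P(real threat | flag) ~ {ppv:.4f}")
```

Here fewer than one flag in a hundred corresponds to a real threat, which is precisely why these systems are triage tools feeding human analysts rather than automated verdict machines.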
Coverage is also a fundamental limitation. Not all genetic research is published. Not all DNA synthesis orders go through screened providers. Not all equipment procurement is reflected in accessible trade data. Not all financial flows are visible to monitoring systems. Clandestine programs that operate entirely outside the open research ecosystem would largely evade all six monitoring layers. What AI-enabled monitoring can do is dramatically raise the cost and difficulty of concealment, ensuring that programs that touch the legitimate biotechnology sector in any way leave detectable traces.
The governance challenges are equally significant. AI systems must operate within legal and ethical frameworks. They must respect privacy, protect legitimate scientific collaboration, and avoid creating incentives for secrecy or mistrust. Systems capable of monitoring global genomic databases, scientific literature, biotechnology supply chains, environmental sensors, and financial flows at the level of detail described here represent extraordinary concentrations of analytical power. The risk that such systems could be turned toward industrial espionage, competitive intelligence, or political targeting of legitimate researchers is not hypothetical. Robust international governance frameworks, specifying data collection authorities, use limitations, access controls, and legal protections for researchers and institutions, are not optional features of this monitoring architecture. They are prerequisites for its legitimate operation.⁶²
International cooperation is essential. Data-sharing agreements, transparency measures, and confidence-building mechanisms under the BWC framework will need to evolve alongside these technologies. The rapid advancement of AI capabilities in the biological domain means that the same tools being proposed for monitoring purposes are also lowering barriers to misuse. As analysts at the Center for Strategic and International Studies (CSIS) have noted, current synthesis screening measures are already challenged to detect AI-generated sequences that do not match known agents, and continued advances in biological design tools will require screening systems to evolve continuously to remain effective.⁶³ This underscores the importance of the administration’s commitment to ongoing investment in biosecurity research and the development of more sophisticated AI-based screening capabilities.
Why This Matters Now, and the Road Ahead
The urgency behind developing these monitoring capabilities is not abstract. The biotechnology revolution is accelerating rapidly, and the tools for engineering pathogens are becoming cheaper, more powerful, and more widely accessible with each passing year. The same trends driving extraordinary progress in medicine, agriculture, and materials science are also lowering the barriers to biological weapons development in ways that existing governance frameworks were not designed to address.
The BWC was negotiated in a world where sophisticated biological weapons programs required nation-state resources and industrial-scale infrastructure. That world is changing. The convergence of advances in synthetic biology, machine learning, automated laboratory systems, and widely distributed manufacturing capability is creating a landscape in which the technical barriers to biological weapons development are falling faster than the governance barriers are rising.⁶⁴
If done well, AI-enabled monitoring could help close the verification gap that has existed since the BWC was signed. It could provide earlier warning of emerging threats, strengthen deterrence by increasing the likelihood of detection, and build confidence among states that the treaty is being upheld. Against this backdrop, the Trump administration’s September 2025 announcement has created a genuine and significant opportunity. Biosecurity experts who have spent years arguing for stronger BWC verification mechanisms find themselves with high-level political support from the world’s leading AI power. The Carnegie Endowment for International Peace has noted that the U.S. proposal offers strong motivation for sustained investment, with the prospect of deploying American AI technology to address a long-standing international challenge representing exactly the kind of initiative that could attract durable political commitment.⁶⁵ The initiative has also arrived alongside constructive signals from Russia, historically a cautious actor in BWC negotiations, creating a potentially favorable alignment of circumstances for meaningful progress.
The road ahead will require sustained diplomatic engagement. BWC negotiations move slowly by their nature, and translating a high-level political commitment into specific treaty mechanisms, agreed technical standards, and internationally accepted governance frameworks is the work of years rather than months. The 2026 BWC Review Conference will be the first major test of the initiative’s momentum, and biosecurity experts have emphasized that maintaining focus and energy through the deliberate pace of multilateral negotiations will be essential to realizing the promise of the president’s commitment.⁶⁶
AI-enabled monitoring cannot solve the fundamental problem of biological weapons verification on its own. No technical system can substitute for the political will, diplomatic engagement, and institutional investment required to strengthen biological weapons norms. But it can provide something the international community has never had before: continuous, large-scale, evidence-based situational awareness about global biotechnology activity. In a domain where the consequences of a monitoring failure could be catastrophic and irreversible, that capability is not merely useful. It may prove to be essential.
The invisible inspectors are already being built. A sitting U.S. president has committed to deploying them internationally. The task now before the international community is to invest in the governance frameworks, the diplomatic persistence, and the multilateral trust-building necessary to deploy these tools legitimately, transparently, and in service of the global public interest, before they are needed in circumstances where there is no longer time to get the design right.
Conclusion
This essay has argued that artificial intelligence offers the international community its most promising opportunity in fifty years to address the fundamental verification gap at the heart of the Biological Weapons Convention. The six monitoring layers described here, encompassing genomic surveillance, open-source intelligence analysis, supply chain monitoring, environmental biosensor networks, behavioral and financial analysis, and predictive modeling, together constitute a continuously operating early warning architecture of a kind the world has never previously had available. No single layer is sufficient on its own. Each generates signals that are ambiguous when examined in isolation. But when integrated across all six dimensions, the convergent evidence they produce represents a qualitatively new verification capability, one that works not through physical inspections requiring state consent but through the systematic analysis of the digital and physical footprint that modern biotechnology inevitably generates.
The retrospective case study applying this architecture to the origins of SARS-CoV-2 demonstrates both the power and the limits of the approach with unusual clarity. Had these monitoring systems been operational in the years before 2019, they would have generated a substantial body of convergent signals from the research network centered on the WIV, its American collaborators, and the multi-agency, multi-institutional funding structures supporting their work. Genomic anomalies in published chimeric coronavirus constructs; open-source signals from the broader network including the UC Davis-anchored PREDICT program; supply chain questions about biosafety infrastructure and data management; environmental indicators of unusual respiratory illness activity; financial signals from the complex grant relationships connecting UC Davis, NIAID, DTRA, EHA, and the WIV; and simulation assessments of technical feasibility would all have been visible to a systematic monitoring system operating across the relevant data streams. The UC Davis case is particularly instructive because it illustrates how the most analytically significant nodes in a research network are not always the most publicly visible ones, and why comprehensive monitoring must map the full institutional topology of dual-use research programs rather than focusing only on their most prominent participants. Whether those signals would have proven sufficient to prevent the pandemic cannot be known. What is certain is that the world would have entered the crisis with a far richer evidentiary record, and the five years of inconclusive investigation that followed might have been substantially shorter and more productive. 
The cost of the monitoring gap that actually existed, measured in millions of lives, trillions of dollars in economic disruption, and profound damage to international trust in scientific institutions, is the most powerful argument available for the urgency of the monitoring architecture this essay describes.
The Trump administration’s September 2025 commitment to pioneer an AI-based BWC verification system represents a historic opportunity to begin closing that gap. The technical foundations are available. The policy commitment has been made at the highest level. The diplomatic moment, with constructive signals from multiple major powers and the 2026 BWC Review Conference providing a near-term focal point, is more favorable than it has been in decades. What remains is the hard work of translating rhetorical commitment into specific treaty mechanisms, agreed technical standards, internationally accepted governance frameworks, and the sustained diplomatic engagement that multilateral arms control processes require.
The invisible inspectors are being built. The SARS-CoV-2 pandemic has shown, at enormous human cost, what the world risks when they do not exist. The task now is to ensure they are deployed with the transparency, the governance, and the international legitimacy that will make them not merely technically capable but genuinely trusted instruments of global biosecurity.
The stakes could not be higher, and the window of opportunity may not remain open indefinitely.
Notes
Trump, Donald J. Address to the United Nations General Assembly, New York, September 23, 2025. Quoted in “At the U.N., Trump Proclaims Strong Will to Lead Global Fight Against ‘Man-Made Pathogens.’” Foreign Policy Blogs, November 16, 2025. https://foreignpolicyblogs.com/2025/11/16/at-the-u-n-trump-proclaims-strong-will-to-lead-global-fight-against-man-made-pathogens/.
Benson, Dennis A., Mark Cavanaugh, Karen Clark, Ilene Karsch-Mizrachi, David J. Lipman, James Ostell, and Eric W. Sayers. “GenBank.” Nucleic Acids Research 41, no. D1 (2013): D36-D42. https://doi.org/10.1093/nar/gks1195.
Kosuri, Sriram, and George M. Church. “Large-Scale De Novo DNA Synthesis: Technologies and Applications.” Nature Methods 11, no. 5 (2014): 499-507. https://doi.org/10.1038/nmeth.2918.
Alley, Ethan C., Maxim Khimulya, Surojit Biswas, Mohammed AlQuraishi, and George M. Church. “Unified Rational Protein Engineering with Sequence-Based Deep Representation Learning.” Nature Methods 16, no. 12 (2019): 1315-1322. https://doi.org/10.1038/s41592-019-0598-1.
Andersen, Kristian G., Andrew Rambaut, W. Ian Lipkin, Edward C. Holmes, and Robert F. Garry. “The Proximal Origin of SARS-CoV-2.” Nature Medicine 26, no. 4 (2020): 450-452. https://doi.org/10.1038/s41591-020-0820-9.
Jumper, John, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, et al. “Highly Accurate Protein Structure Prediction with AlphaFold.” Nature 596, no. 7873 (2021): 583-589. https://doi.org/10.1038/s41586-021-03819-2.
Wetterstrand, Kris A. “DNA Sequencing Costs: Data from the NHGRI Genome Sequencing Program.” National Human Genome Research Institute. Accessed February 2026. https://www.genome.gov/about-genomics/fact-sheets/Sequencing-Human-Genome-cost.
Brainard, Jess. “Scientists Are Drowning in COVID-19 Papers. Can New Tools Keep Them Afloat?” Science, May 13, 2020. https://doi.org/10.1126/science.abc7839.
Koblentz, Gregory D. “From Biodefence to Biosecurity: The Obama Administration’s Strategy for Countering Biological Threats.” International Affairs 88, no. 1 (2012): 131-148. https://doi.org/10.1111/j.1468-2346.2012.01064.x.
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, 4171-4186. Minneapolis: Association for Computational Linguistics, 2019. https://doi.org/10.18653/v1/N19-1423.
Shiffman, Daniel, and Filippa Lentzos. “Monitoring Dual-Use Research Through Open-Source Intelligence.” Science and Public Policy 49, no. 1 (2022): 108-119. https://doi.org/10.1093/scipol/scab071.
Dent, Alison, and Jasper Becker. “Patent Analytics for Biosecurity: Tracking Dual-Use Biotechnology Innovation.” Health Security 19, no. 3 (2021): 285-296. https://doi.org/10.1089/hs.2020.0186.
Lentzos, Filippa, and Guy Reeves. “Funding, Ethics, and Dual-Use Life Sciences Research.” EMBO Reports 20, no. 7 (2019): e48049. https://doi.org/10.15252/embr.201948049.
DiNanno, Thomas G. “Modern Tools for Modern Threats: Towards Strengthening BWC Implementation, Verification, and Assurance.” Remarks at BWC Meeting of States Parties Side Event, Geneva, December 15, 2025. U.S. Mission to International Organizations in Geneva. https://geneva.usmission.gov/2025/12/16/remarks-on-msp-side-event-modern-tools-for-modern-threats-towards-strengthening-bwc-implementation-verification-and-assurance/.
Diggans, James, and Emily Leproust. “Next Steps for Access to Safe, Secure DNA Synthesis.” Frontiers in Bioengineering and Biotechnology 7 (2019): 86. https://doi.org/10.3389/fbioe.2019.00086.
International Gene Synthesis Consortium. “Harmonized Screening Protocol v2.0: Gene Sequence and Customer Screening to Promote Biosecurity.” IGSC, 2017. https://genesynthesisconsortium.org/wp-content/uploads/IGSCHarmonizedProtocol11-21-17.pdf.
Carter, Sarah R., and Robert M. Friedman. “DNA Synthesis and Biosecurity: Lessons Learned and Options for the Future.” J. Craig Venter Institute, 2015. https://www.jcvi.org/sites/default/files/2018-09/dna_synthesis_biosecurity_2015.pdf.
Dando, Malcolm, and Simon Whitby. “On the Fringe of Biology: Biotechnology and the Problem of Dual Use.” Medicine, Conflict and Survival 27, no. 4 (2011): 215-223. https://doi.org/10.1080/13623699.2011.645573.
Cupitt, Richard T. Nonproliferation Export Controls: Origins, Challenges, and Proposals for Reform. Aldershot: Ashgate, 2000.
Gronvall, Gigi Kwik. “Strengthening the US Program for Pathogen Security.” Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 12, no. 3 (2014): 121-128. https://doi.org/10.1089/bsp.2014.0018.
Myhrvold, Cameron, and Pardis C. Sabeti. “Large-Scale Environmental Monitoring of Pathogens Using Metagenomic Sequencing.” Nature Communications 13, no. 1 (2022): 7334. https://doi.org/10.1038/s41467-022-34831-3.
Hanham, Melissa, and Annie Woollacott. “Satellite Imagery and Open Source Intelligence for Arms Control Verification.” Journal of Strategic Studies 44, no. 4 (2021): 556-580.
Drezner, Daniel W. “Bad Debts: Assessing China’s Financial Influence in Great Power Politics.” International Security 44, no. 1 (2019): 7-50. https://doi.org/10.1162/isec_a_00355.
Barabási, Albert-László. Network Science. Cambridge: Cambridge University Press, 2016.
Eubank, Stephen, Hasan Guclu, V. S. Anil Kumar, Madhav V. Marathe, Aravind Srinivasan, Zoltan Toroczkai, and Nan Wang. “Modelling Disease Outbreaks in Realistic Urban Social Networks.” Nature 429, no. 6988 (2004): 180-184. https://doi.org/10.1038/nature02541.
Adalja, Amesh A., and Thomas V. Inglesby. “Artificial Intelligence: A Practical Solution to the Perennial Challenge of Biodefense.” Health Security 17, no. 3 (2019): 175-177. https://doi.org/10.1089/hs.2019.0067.
Lohn, Andrew J., and Micah Musser. “AI Verification: Mechanisms to Ensure AI Arms Control Compliance.” Center for Security and Emerging Technology, Georgetown University, 2022. https://doi.org/10.51593/20220008.
Zhou, Peng, Xing-Lou Yang, Xian-Guang Wang, Ben Hu, Lei Zhang, Wei Zhang, Hao-Rui Si, et al. “A Pneumonia Outbreak Associated with a New Coronavirus of Probable Bat Origin.” Nature 579, no. 7798 (2020): 270-273. https://doi.org/10.1038/s41586-020-2012-7.
Quay, Steven C., and Richard Muller. “The Science Suggests a Wuhan Lab Leak.” Wall Street Journal, June 6, 2021. https://www.wsj.com/articles/the-science-suggests-a-wuhan-lab-leak-11622995184.
Wan, Yushun, Jian Shang, Rachel Graham, Ralph S. Baric, and Fang Li. “Receptor Recognition by the Novel Coronavirus from Wuhan: An Analysis Based on Decade-Long Structural Studies of SARS Coronavirus.” Journal of Virology 94, no. 7 (2020): e00127-20. https://doi.org/10.1128/JVI.00127-20.
Menachery, Vineet D., Boyd L. Yount Jr., Kari Debbink, Sudhakar Agnihothram, Lisa E. Gralinski, Jessica A. Plante, Rachel L. Graham, et al. “A SARS-Like Cluster of Circulating Bat Coronaviruses Shows Potential for Human Emergence.” Nature Medicine 21, no. 12 (2015): 1508-1513. https://doi.org/10.1038/nm.3985.
U.S. Senate Committee on Health, Education, Labor and Pensions. “An Analysis of the Origins of the COVID-19 Pandemic.” Interim Report, October 2022. https://www.help.senate.gov/imo/media/doc/report_an_analysis_of_the_origins_of_covid-19_102722.pdf. See also: UC Davis One Health Institute. “PREDICT Data.” School of Veterinary Medicine. Accessed February 2026. https://ohi.vetmed.ucdavis.edu/programs-projects/predict-project/data.
Shi, Zhengli, and Zhihong Hu. “A Review of Studies on Animal Reservoirs of the SARS Coronavirus.” Virus Research 133, no. 1 (2008): 74-87. https://doi.org/10.1016/j.virusres.2007.03.012. See also: Ge, Xing-Yi, Jia-Lu Li, Xing-Lou Yang, Ali A. Chmura, Guangjian Zhu, Jonathan H. Epstein, Peter Daszak, et al. “Isolation and Characterization of a Bat SARS-Like Coronavirus That Uses the ACE2 Receptor.” Nature 503, no. 7477 (2013): 535-538. https://doi.org/10.1038/nature12711.
EcoHealth Alliance. “USAID PREDICT Program: Final Report.” New York: EcoHealth Alliance, 2020. See also: U.S. Right to Know. “EcoHealth Alliance Grants and Contracts.” Accessed February 2026. https://usrtk.org/biohazards/ecohealth-alliance-grants-and-contracts/.
Sainato, Michael. “Pentagon Funded Risky Coronavirus Research at Wuhan Lab.” The Guardian, September 8, 2021. https://www.theguardian.com/world/2021/sep/08/pentagon-funded-wuhan-institute-virology-research.
Relman, David A., and Harvey V. Fineberg. “A Call for an Independent Inquiry into the Origin of the SARS-CoV-2 Virus.” Proceedings of the National Academy of Sciences 119, no. 21 (2022): e2202769119. https://doi.org/10.1073/pnas.2202769119.
Carroll, Dennis, Peter Daszak, Nathan D. Wolfe, George F. Gao, Carlos M. Morel, Subhash Morzaria, Ariel Pablos-Mendez, Oyewale Tomori, and Jonna Mazet. “The Global Virome Project.” Science 359, no. 6378 (2018): 872-874. https://doi.org/10.1126/science.aap7463. See also: Kopp, Emily. “State Department, USAID Endorsed Novel Virus Project with China Despite National Security Risks.” U.S. Right to Know, December 8, 2025. https://usrtk.org/risky-research/state-usaid-endorsed-virus-project-with-china-despite-national-security-risks/.
UC Davis One Health Institute. “Emerging Pandemic Threats Program 2 PREDICT-2.” Accessed February 2026. https://www.vetmed.ucdavis.edu/research/researchgrants/pandemic-threats.
U.S. House of Representatives Select Subcommittee on the Coronavirus Pandemic. “Correspondence Between NIAID and EcoHealth Alliance.” Released March 2023.
Baric, Ralph S. “Emergence of a Highly Fit SARS-CoV-2 Variant.” New England Journal of Medicine 383, no. 27 (2020): 2684-2686. https://doi.org/10.1056/NEJMcibr2032888. See also: Menachery et al., “A SARS-Like Cluster,” 2015.
Lipsitch, Marc, and Thomas V. Inglesby. “Moratorium on Research Intended to Create Novel Potential Pandemic Pathogens.” mBio 5, no. 6 (2014): e02366-14. https://doi.org/10.1128/mBio.02366-14.
Maxmen, Amy, and Smriti Mallapaty. “The COVID Lab-Leak Hypothesis: What Scientists Do and Don’t Know.” Nature 594, no. 7863 (2021): 313-315. https://doi.org/10.1038/d41586-021-01529-3.
Kopp, Emily. “State Department, USAID Endorsed Novel Virus Project with China Despite National Security Risks.” U.S. Right to Know, December 8, 2025. https://usrtk.org/risky-research/state-usaid-endorsed-virus-project-with-china-despite-national-security-risks/.
U.S. Department of State. “Fact Sheet: Activity at the Wuhan Institute of Virology.” January 15, 2021. https://2017-2021.state.gov/fact-sheet-activity-at-the-wuhan-institute-of-virology/.
Carter, Sarah R., and Robert M. Friedman. “DNA Synthesis and Biosecurity: Lessons Learned and Options for the Future.” J. Craig Venter Institute, 2015. https://www.jcvi.org/sites/default/files/2018-09/dna_synthesis_biosecurity_2015.pdf.
Facher, Lev, and Jason Mast. “USAID-Funded Pandemic Research Failed to Spot COVID or Ensure Chinese Transparency.” Reason, February 6, 2025. https://reason.com/2025/02/06/usaid-funded-pandemic-research-failed-to-spot-covid-or-ensure-chinese-transparency/.
Kopp, “State Department, USAID Endorsed Novel Virus Project,” 2025.
Huang, Chaolin, Yeming Wang, Xingwang Li, Lili Ren, Jianping Zhao, Yi Hu, Li Zhang, et al. “Clinical Features of Patients Infected with 2019 Novel Coronavirus in Wuhan, China.” Lancet 395, no. 10223 (2020): 497-506. https://doi.org/10.1016/S0140-6736(20)30183-5. See also: Bloom, Jesse D., Yujia Alina Chan, Ralph S. Baric, Pamela J. Bjorkman, Sarah Cobey, Benjamin E. Deverman, David N. Fisman, et al. “Investigate the Origins of COVID-19.” Science 372, no. 6543 (2021): 694. https://doi.org/10.1126/science.abj0016.
Worobey, Michael, Joshua I. Levy, Lorena Malpica Serrano, Alexander Crits-Christoph, Jonathan E. Pekar, Stephen A. Goldstein, Angela L. Rasmussen, et al. “The Huanan Seafood Wholesale Market in Wuhan Was the Early Epicenter of the COVID-19 Pandemic.” Science 377, no. 6609 (2022): 951-959. https://doi.org/10.1126/science.abp8715.
Australian Strategic Policy Institute. “Wuhan Institute of Virology Satellite Imagery Analysis.” ASPI International Cyber Policy Centre, 2021. https://www.aspi.org.au/report/wuhan-institute-virology.
Mining Awareness+. “New UC Davis VP for ‘Grand Challenges’ JK Mazet Connected to Wuhan Institute of Virology-Daszak (EcoHealth).” October 23, 2021. https://miningawareness.wordpress.com/2021/10/23/new-uc-davis-vp-for-grand-challenges-jk-mazet-connected-to-wuhan-institute-of-virology-daszak-ecohealth/. See also: Facher and Mast, “USAID-Funded Pandemic Research Failed,” 2025.
Facher and Mast, “USAID-Funded Pandemic Research Failed,” 2025.
U.S. Senate Committee on Health, Education, Labor and Pensions. “An Analysis of the Origins of the COVID-19 Pandemic.” Interim Report, October 2022. https://www.help.senate.gov/imo/media/doc/report_an_analysis_of_the_origins_of_covid-19_102722.pdf.
Sainato, “Pentagon Funded Risky Coronavirus Research,” 2021.
Rahalkar, Monali C., and Rahul A. Bahulikar. “Lethal Pneumonia Cases in Mojiang Miners (2012) and the Mineshaft Could Provide Important Clues to the Origin of SARS-CoV-2.” Frontiers in Public Health 8 (2020): 581569. https://doi.org/10.3389/fpubh.2020.581569.
U.S. House of Representatives Select Subcommittee on the Coronavirus Pandemic. “Correspondence Between NIAID and EcoHealth Alliance,” 2023.
Pekar, Jonathan E., Andrew Magee, Edyth Parker, Niema Moshiri, Katherine Izhikevich, Jennifer L. Havens, Karthik Gangavarapu, et al. “The Molecular Epidemiology of Multiple Zoonotic Origins of SARS-CoV-2.” Science 377, no. 6609 (2022): 960-966. https://doi.org/10.1126/science.abp8337.
Liu, Shing Hei, and Edward C. Holmes. “Constraints on the SARS-CoV-2 Natural Origin Hypothesis.” Virus Evolution 8, no. 2 (2022): veac080. https://doi.org/10.1093/ve/veac080.
Bloom, Jesse D. “Recovery of Deleted Deep Sequencing Data Sheds More Light on the Early Wuhan SARS-CoV-2 Epidemic.” Molecular Biology and Evolution 38, no. 12 (2021): 5211-5217. https://doi.org/10.1093/molbev/msab259.
UC Davis One Health Institute. “PREDICT Data.” School of Veterinary Medicine. Accessed February 2026. https://ohi.vetmed.ucdavis.edu/programs-projects/predict-project/data. See also: Mazet, Jonna. Global Virome Project Leadership Board biography. Accessed February 2026. https://www.globalviromeproject.org/who-we-are/leadership/jonna-mazet.
Wheelis, Mark, Lajos Rózsa, and Malcolm Dando, eds. Deadly Cultures: Biological Weapons Since 1945. Cambridge, MA: Harvard University Press, 2006.
Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, et al. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” arXiv preprint, 2018. https://arxiv.org/abs/1802.07228.
Berber, Nicole, and Danna Ingleton. “Opportunities to Strengthen U.S. Biosecurity from AI-Enabled Bioterrorism: What Policymakers Should Know.” Center for Strategic and International Studies, August 6, 2025. https://www.csis.org/analysis/opportunities-strengthen-us-biosecurity-ai-enabled-bioterrorism-what-policymakers-should.
National Academies of Sciences, Engineering, and Medicine. Biodefense in the Age of Synthetic Biology. Washington, DC: National Academies Press, 2018. https://doi.org/10.17226/24890.
Koblentz, Gregory D., and Jaime Yassif. “For Bioweapons Experts, Trump’s UN Speech Presents a Window of Opportunity.” Carnegie Endowment for International Peace, December 4, 2025. https://carnegieendowment.org/europe/posts/2025/12/biological-weapons-trump-united-nations-strengthen-treaty.
Ibid.


