Executive Summary
This report examines the structural transformation of trust caused by the maturation of generative artificial intelligence. The central thesis is that a set of technologies (generative image synthesis, voice cloning, video generation, and large language models) has collectively crossed a threshold at which the cost of fabricating convincing media, identities, and communication has fallen below the cost of verifying them. This inversion constitutes a systemic condition referred to throughout this document as the Authenticity Crisis.
The Authenticity Crisis is not a single incident, a temporary disruption, or a problem confined to a particular sector. It represents a permanent structural shift in the information environment. Where previous technological transitions expanded access to information, this transition undermines the reliability of information itself by making the production of convincing false material trivially accessible.
The report introduces the Authenticity Inversion Model, a five-component conceptual framework for analyzing the structural dynamics of this transformation: perceptual parity, identity decoupling, verification asymmetry, evidentiary destabilization, and institutional lag. This framework provides an analytical structure for understanding why the crisis emerged, why it is irreversible, and why conventional responses are insufficient.
Consequences are examined across five domains: journalism and media integrity, legal and evidentiary systems, financial infrastructure, electoral processes, and interpersonal trust. In each domain, the report identifies a common pattern: institutions designed around the assumption that certain forms of evidence are inherently reliable are encountering conditions in which that assumption no longer holds.
The report analyzes the detection–generation arms race, the structural limitations of detection methods, and emerging infrastructure responses including the C2PA content provenance standard [1], the NIST AI Risk Management Framework [2], the European Union’s AI Act [3], eIDAS 2.0 [4], and verifiable credential systems [5]. While these responses represent meaningful progress, none individually or collectively resolves the underlying asymmetry between fabrication and verification.
The structural outlook is that the Authenticity Crisis will deepen before stabilizing. The era of automatic trust is ending. What replaces it will be determined by the institutional, technical, and social choices made in the coming years.
1. Historical Trust Assumptions
For the greater part of modern history, societies operated under implicit trust assumptions regarding certain categories of evidence. These assumptions evolved as technologies of documentation became widespread and their outputs became integrated into institutional processes.
Photography, introduced in the nineteenth century, established a perceptual norm in which a photograph was understood to represent something that existed in physical reality at the moment of capture. While photographic manipulation existed from the earliest days of the medium, the effort required to produce convincing alterations limited their prevalence. The default assumption (that a photograph depicted reality) proved operationally reliable for most purposes.
The same logic extended to audio recording and video. A recorded voice was treated as evidence of a specific individual’s speech. A video recording served as documentation of events. These media forms became foundational to journalism, law enforcement, judicial proceedings, and diplomatic communication. Their evidentiary value rested not on formal certification but on the practical difficulty of producing convincing fabrications.
Identity verification systems evolved along similar lines. Government-issued documents incorporated physical security features, but everyday identity confirmation relied on biological signals: facial recognition by human observers, voice identification, behavioral consistency, and physical presence. These signals were trusted because they were difficult to replicate.
The common thread is that trust was automatic. It required no active verification step. The assumption of authenticity was the default, and the burden of proof rested on anyone claiming fabrication. This arrangement was not perfect, as forgeries and impersonations have always existed, but it was operationally sufficient for the functioning of institutional and social systems.
Automatic trust was not merely a convenience. It was a load-bearing element of social infrastructure. Courtrooms admitted video evidence because it was assumed to depict reality. Banks accepted identity documents because they were assumed to correspond to real individuals. Journalists published photographs because they were assumed to record actual events. Each practice depended on a background condition of authenticity so pervasive it was invisible.
The erosion of this background condition does not require that most media become synthetic. It requires only that the possibility of synthesis become credible. Once credible, the assumption of authenticity is compromised for all content, regardless of its actual provenance. This is the structural logic that distinguishes the Authenticity Crisis from all previous episodes of media manipulation.
2. Generative Inflection Point
The technologies that constitute the generative inflection point did not emerge simultaneously, but their convergence within a narrow temporal window (approximately 2019 to 2025) created compound effects that no single technology would have produced in isolation.
Generative adversarial networks, first described in 2014 [6], had by 2019 reached output fidelity sufficient to produce synthetic human faces indistinguishable from photographs of real individuals under casual inspection [7]. Subsequent architectures, including diffusion models, extended this capability to full-scene generation, consistent identity rendering, and real-time video synthesis.
Voice synthesis technologies progressed from requiring hours of source material to producing convincing replications from samples as short as three seconds [14]. These capabilities have already been used in documented real-world cases of fraud, impersonation, and deception. Commercial and open-source systems capable of real-time voice conversion became available, enabling a speaker to produce output in another individual’s voice during live communication.
Large language models achieved fluency and contextual adaptation that made their outputs difficult to distinguish from human-authored text across most genres and registers. Combined with persona-specific fine-tuning, these systems could produce communication matching the style, vocabulary, and reasoning patterns of specific individuals.
The critical characteristic of this inflection point is not the quality of synthetic output but the democratization of production capability. Technologies that would have required state-level resources a decade earlier became accessible through consumer hardware and freely available software. The marginal cost of producing synthetic media approached zero, while the technical expertise required decreased with each generation of tools.
This democratization altered the threat model fundamentally. The relevant question shifted from whether a sophisticated actor could produce convincing synthetic media to whether any individual with basic technical literacy could do so. By 2025, the answer was unambiguously affirmative.
3. The Authenticity Inversion Model
To provide analytical structure for the dynamics described in this report, the following conceptual framework is proposed. The Authenticity Inversion Model identifies five structural components that, operating in combination, produce the condition referred to as the Authenticity Crisis. Each component represents a distinct mechanism; their interaction produces systemic effects that exceed the sum of individual parts.
3.1 Perceptual Parity
The condition in which synthetically generated media (faces, voices, video, text) becomes indistinguishable from authentic material under normal human observation. Perceptual parity has been reached or closely approached across major media modalities in common viewing and listening conditions. Peer-reviewed research demonstrates that human accuracy in distinguishing synthetic faces from real ones has declined to near-chance levels, with some studies indicating that AI-generated faces are rated as more trustworthy than photographs of real individuals [8].
3.2 Identity Decoupling
The separation of identity artifacts from biological origin. A face, a voice, and a communication style can now be synthesized independently of any living individual, or replicated from an existing individual without their participation. Identity is no longer anchored to biology. It can be manufactured, duplicated, and deployed as a digital construct. This decoupling renders identity verification systems that depend on biological signals structurally vulnerable.
3.3 Verification Asymmetry
The condition in which the cost and effort of fabricating convincing content is structurally lower than the cost and effort of verifying it. A synthetic image can be generated in seconds at negligible cost. Verifying its provenance may require forensic analysis, metadata examination, and cross-referencing with independent sources. This asymmetry is not incidental; it is inherent to the technology and is widening as generation quality improves.
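A schematic way to state this asymmetry (an illustrative model introduced here for exposition, not a result from the cited literature): generation pays a fixed per-item cost, while credible verification must budget against an open-ended set of fabrication methods.

```latex
% Illustrative cost model (assumption for exposition only).
% c_f: cost of one generative forward pass.
% \mathcal{D}: the open-ended set of plausible fabrication methods.
% c_d: cost of the forensic check that rules out method d.
\[
  C_{\mathrm{gen}} = c_f ,
  \qquad
  C_{\mathrm{ver}} \;\gtrsim\; \sum_{d \in \mathcal{D}} c_d ,
\]
\[
  \text{and since } |\mathcal{D}| \text{ grows with each new generator,}
  \quad \frac{C_{\mathrm{ver}}}{C_{\mathrm{gen}}} \text{ widens over time.}
\]
```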
3.4 Evidentiary Destabilization
The erosion of trust in authentic media caused by the known existence of capable generation tools. When any recording could plausibly be synthetic, the evidentiary value of all recordings diminishes regardless of provenance. This phenomenon, described by Chesney and Citron as the liar’s dividend [9], provides a universal defense against documented reality. Authentic evidence becomes dismissible, and the burden of proof shifts from the accuser of fabrication to the presenter of evidence.
3.5 Institutional Lag
The gap between the speed of generative capability advancement and the speed of institutional adaptation. Legal frameworks, evidentiary standards, identity verification protocols, and regulatory structures were designed for an environment of implicit trust. Their adaptation to an environment requiring explicit verification is measured in years or decades, while generation capabilities advance in months. This lag creates a persistent window of vulnerability.
The Authenticity Inversion Model is not predictive. It is diagnostic. It identifies the structural conditions that produce the Authenticity Crisis and provides a framework for evaluating the adequacy of proposed responses. Any effective response must address all five components; interventions targeting only one or two will be insufficient.
4. What the Authenticity Crisis Is Not
Analytical precision requires distinguishing the Authenticity Crisis from superficially similar phenomena with which it may be confused.
The Authenticity Crisis is not a moral panic. Moral panics are characterized by disproportionate alarm relative to actual risk, typically driven by media amplification and resolved through normalization. The Authenticity Crisis is a structural transformation supported by documented technical capabilities, observed institutional impacts, and measurable economic losses. The concern it generates is proportionate to the documented risk.
It is not a temporary media cycle. Media cycles are driven by novelty and attention dynamics. The underlying capabilities that produce the Authenticity Crisis are permanent additions to the technological landscape. Open-source proliferation ensures that these capabilities cannot be retracted, restricted to specific actors, or reversed through policy intervention. The condition will persist and deepen regardless of media attention.
It is not equivalent to historical forgery. Forgery has existed throughout recorded history, but it required skill, time, and resources that naturally limited its scale. The Authenticity Crisis is distinguished by the democratization and automation of fabrication at a scale and fidelity that has no historical precedent. The difference is not one of degree but of kind.
It is not solved by fact-checking. Fact-checking addresses the accuracy of claims and the veracity of specific content items. The Authenticity Crisis operates at the infrastructure level: it undermines the categories of evidence on which fact-checking itself depends. When photographs, audio, and video can be fabricated at will, the evidentiary foundation of verification is compromised. Fact-checking remains valuable but cannot resolve a structural problem with procedural tools.
It is not merely about misinformation. Misinformation concerns the accuracy and intent of communicated content. The Authenticity Crisis concerns the reliability of the medium itself. The distinction is between a message that is false and a medium that can no longer be trusted to carry truth. The latter is a deeper structural problem that encompasses but is not limited to misinformation.
5. The Irreversibility Threshold
The threshold at which the Authenticity Crisis became structurally irreversible has already been crossed. This assessment rests on four conditions, each independently sufficient and collectively decisive.
First, generative capabilities have been democratized. The tools required to produce synthetic media at high fidelity are freely available as open-source software, executable on consumer hardware. No licensing regime, export control, or platform policy can retract capabilities that exist in millions of copies on personal devices worldwide.
Second, open-source proliferation has distributed foundational models beyond any single point of control. Model weights for capable image, voice, and text generation systems have been published, forked, modified, and redistributed across jurisdictions with no realistic mechanism for recall. The knowledge required to train new models is documented in peer-reviewed literature accessible to any graduate student in machine learning.
Third, the economic asymmetry between fabrication and verification is structural, not contingent. It derives from the fundamental mathematics of generative models: generation requires a single forward pass through a neural network, while verification requires comparison against an open-ended space of possible fabrication methods. This asymmetry will not reverse with incremental improvements to detection technology.
Fourth, global diffusion ensures that unilateral action by any single jurisdiction is insufficient. Generative tools are used on every continent, in every language, and across every sector. Regulatory responses in one jurisdiction do not constrain actors in others. International coordination on synthetic media governance is in its earliest stages and faces the same collective action problems that have limited progress on other global technology challenges.
The implication is not fatalistic. Irreversibility does not mean unmanageability. It means that responses must be designed for a permanent condition rather than a temporary disruption. The Authenticity Crisis will not resolve itself and cannot be reversed. It must be managed through sustained institutional adaptation.
6. The Collapse of Automatic Trust
The mechanisms described in the Authenticity Inversion Model converge on a single structural outcome: the end of automatic trust in media, communication, and identity. This collapse is observable in the present information environment.
Automatic trust, as used in this analysis, refers to the implicit assumption that certain categories of evidence correspond to reality without requiring independent verification. A photograph is assumed to depict something real. A voice is assumed to belong to the apparent speaker. A video is assumed to document an actual event.
Each of these assumptions is now unreliable in the general case. This does not mean that every photograph is fabricated. It means that the possibility of fabrication can no longer be excluded without active verification. The default has shifted from trust to uncertainty.
This shift has asymmetric consequences. A synthetic voice clone of a corporate executive can be produced in under an hour. Verifying whether a particular audio recording is authentic may require forensic analysis costing thousands of dollars and taking days to complete [10]. A synthetic identity package, including a generated face, fabricated credentials, and a plausible biography, can be assembled in minutes. Detecting it may require cross-referencing multiple databases and applying behavioral analytics over extended periods [11].
The psychological dimension should not be understated. Human cognition evolved in environments where sensory evidence was generally reliable. The requirement to continuously evaluate the authenticity of visual, auditory, and textual information imposes a cognitive burden for which there is no evolutionary precedent. This burden is not distributed equally: individuals with less technical literacy and fewer institutional resources bear disproportionate risk.
7. Sector-Level Impact
7.1 Journalism and Media Integrity
News organizations face a dual challenge. Synthetic media can be introduced into information flows as purported evidence of events that did not occur. Simultaneously, authentic reporting can be dismissed as fabricated. Both dynamics undermine the function of journalism as a mechanism for establishing shared factual reality.
Verification workflows that were adequate when fabrication required professional editing skills are insufficient against tools that produce output with no detectable artifacts. Organizations must invest in forensic verification capabilities that add cost and delay without providing certainty.
7.2 Legal and Evidentiary Systems
Legal systems have treated photographs, audio recordings, and video as evidence broadly presumed reliable. Generative AI at perceptual parity complicates this framework. Defense attorneys can challenge any digital evidence by raising the possibility of synthetic generation. Prosecutors must establish provenance through technical means that courts may not yet be equipped to evaluate [9].
The challenge extends to civil litigation, regulatory enforcement, insurance claims, and corporate investigations: in short, any domain in which media evidence was previously treated as self-authenticating.
7.3 Financial Infrastructure and Identity Systems
Financial institutions depend on identity verification at multiple points: account opening, transaction authorization, credit assessment, and regulatory compliance. A Federal Reserve-backed white paper cites a McKinsey estimate that synthetic identity fraud is the fastest-growing type of financial crime in the United States, with estimated losses measured in billions of dollars annually [11]. Unlike traditional identity theft, synthetic identity fraud creates identities corresponding to no real individual, complicating both detection and attribution.
Real-time voice cloning has been used in corporate fraud, including a documented case involving a deepfake video conference that defrauded a multinational firm of approximately twenty-five million dollars [10]. As voice-based authentication expands in consumer banking, the attack surface grows proportionally.
7.4 Electoral Processes
Synthetic media can fabricate statements, create false evidence of conduct, and generate misleading representations of political actors. The speed of social media distribution means synthetic content can reach millions of viewers before verification is possible.
Equally damaging is the preemptive effect: political actors can dismiss authentic evidence, including recorded statements and documented actions, by claiming synthetic fabrication. This application of the liar’s dividend directly undermines electoral accountability [9].
7.5 Interpersonal Trust
As awareness of synthetic media capabilities grows, individuals face uncertainty about the authenticity of images, messages, and calls from purported acquaintances. Romance fraud operations using synthetic identities and AI-generated communication have been documented at increasing scale [12]. The cognitive burden of continuous authenticity assessment represents a novel social cost with implications for mental health and community trust.
The cross-sector pattern is consistent: systems designed around implicit trust are encountering conditions that require explicit verification, and the infrastructure for explicit verification is either absent, immature, or inaccessible to the populations that need it most.
8. The Detection–Generation Arms Race
Detection methods for synthetic media include statistical analysis of pixel-level artifacts, frequency-domain analysis, biological signal detection, and neural network classifiers trained on real and synthetic datasets [15]. While these methods achieve high accuracy under controlled conditions, their real-world effectiveness is constrained by structural factors; a minimal classifier-and-evasion sketch follows the four factors below.
First, detection models are reactive. They must be trained on outputs of specific generation architectures. When generation methods change, accuracy degrades until models are retrained. This creates a permanent structural advantage for generation.
Second, generative models can be optimized against known detectors. Adversarial training reduces detection accuracy to near-chance levels in documented experiments [13]; the sketch below illustrates the basic gradient-based mechanism. The detector faces a moving target that actively adapts to defeat it.
Third, detection systems require computational resources and expertise not available as default infrastructure across platforms, legal systems, or individual devices. Their probabilistic outputs (confidence scores rather than binary determinations) are difficult to integrate into decision-making processes requiring clear evidentiary standards.
Fourth, the volume of content requiring assessment vastly exceeds detection capacity. Billions of media items are produced daily. Forensic analysis of more than a small fraction is not feasible.
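To make both the detection approach and the adversarial dynamic concrete, the sketch below pairs a toy classifier with a one-step gradient evasion. Everything here is hypothetical and illustrative: the architecture is not any published detector, and real attacks [13] and detectors [15] are far more elaborate, but the gradient-based logic is the same.

```python
import torch
import torch.nn as nn

# --- Minimal detector sketch (illustrative; not a production system) ------
# A tiny CNN emitting one logit: higher means "more likely synthetic".
class ArtifactClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# --- One-step gradient evasion (FGSM-style, illustrative) -----------------
def fgsm_evade(detector: nn.Module, image: torch.Tensor,
               eps: float = 0.01) -> torch.Tensor:
    """Perturb a synthetic image to lower its "synthetic" score."""
    image = image.clone().detach().requires_grad_(True)
    detector(image).sum().backward()          # scalar loss for backprop
    evaded = image - eps * image.grad.sign()  # step against the gradient
    return evaded.clamp(0.0, 1.0).detach()

detector = ArtifactClassifier()    # untrained; for shape-checking only
fake = torch.rand(1, 3, 64, 64)    # stand-in for a synthetic image
print(detector(fake).item(), detector(fgsm_evade(detector, fake)).item())
```

The detector would be trained by minimizing a binary cross-entropy loss over labeled real and synthetic images; the evasion step shows why any fixed, differentiable detector can be probed and pushed toward misclassification.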
An analogy from information security is instructive. The relationship between cyberattack and cyberdefense exhibits a similar asymmetry: the attacker needs to find one vulnerability while the defender must protect all of them. In the generative media domain, the generator needs to produce one convincing output while the detector must correctly classify every input. The mathematical advantage lies permanently with the generator.
Detection alone cannot resolve the Authenticity Crisis. It is a necessary tactical tool but insufficient as a primary strategy. Any approach that relies principally on detection will fail as generation quality improves and adversarial techniques mature.
9. Infrastructure Responses
9.1 C2PA and Content Provenance
The Coalition for Content Provenance and Authenticity has developed a technical standard for embedding cryptographically signed metadata into media files at the point of creation [1]. This metadata records the device, software, and modifications applied to content, creating a verifiable chain of provenance. Major hardware manufacturers have begun incorporating C2PA support into camera systems.
The approach addresses the problem from the supply side: rather than detecting synthetic content after the fact, it establishes the authenticity of verified content at origin. Limitations include adoption fragmentation, the ease of stripping metadata, and the inference gap created by content predating the standard, for which the absence of provenance metadata cannot be read as evidence of fabrication.
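To make the mechanism concrete, the sketch below illustrates the core idea of signed provenance: bind a signature to a hash of the media and its capture metadata so that any later alteration is detectable. This is a simplified, hypothetical illustration of the concept using Ed25519 signatures, not the actual C2PA manifest format [1], which defines its own assertion, claim, and trust structures.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_manifest(media: bytes, device: str, software: str):
    """Create a provenance record bound to the media's content hash."""
    manifest = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "device": device,        # e.g. camera model
        "software": software,    # e.g. editing tool and version
    }
    key = Ed25519PrivateKey.generate()  # in practice, an attested device key
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, key.sign(payload), key.public_key()

def verify_manifest(manifest, signature, public_key, media: bytes) -> bool:
    """Check that the media matches the signed record and is untampered."""
    if hashlib.sha256(media).hexdigest() != manifest["content_sha256"]:
        return False  # content altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False  # manifest altered or signed by a different key
```

Note what the sketch also reveals: stripping the manifest removes the evidence of provenance without triggering any tamper signal, which is why the absence of metadata cannot be treated as proof of fabrication.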
9.2 NIST AI Risk Management Framework
The National Institute of Standards and Technology has published guidance on managing AI-related risks, including those related to synthetic content [2]. The framework provides structured approaches to risk identification, assessment, and mitigation, but is advisory rather than mandatory.
9.3 European Regulatory Responses
The EU AI Act establishes transparency obligations under Article 50 for providers of systems that generate synthetic content [3]. Providers must ensure outputs are marked as artificially generated or manipulated. The eIDAS 2.0 regulation establishes a framework for European Digital Identity Wallets, providing standardized digital identity verification [4]. Pilot programs have begun cross-border testing.
These regulatory approaches represent meaningful structural responses, but their effectiveness depends on enforcement capacity and international coordination. Transparency obligations bind only compliant actors; the most harmful use cases, by definition, involve actors who will not comply.
9.4 Verifiable Credentials and Decentralized Identity
Verifiable credential systems, based on W3C standards [5], offer a technical architecture for identity claims that can be cryptographically verified without relying on a central authority. Proof-of-personhood systems attempt to establish that a digital identity corresponds to a unique living individual. These systems raise privacy concerns but represent an emerging approach that does not depend on the authenticity of media evidence.
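For orientation, the fragment below sketches the general shape of a verifiable credential under the W3C data model [5]. It is a simplified, hypothetical example (the identifiers and the personhood claim are invented for illustration); the full specification defines contexts, proof mechanisms, and validation rules.

```python
# Simplified shape of a W3C-style verifiable credential (illustrative).
# All identifiers below are hypothetical.
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:registrar",       # hypothetical issuer DID
    "validFrom": "2025-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder",          # hypothetical subject DID
        "uniquePerson": True,                # illustrative personhood claim
    },
    # "proof": a cryptographic signature added by the issuer, enabling
    # verification without contacting a central authority.
}
```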
9.5 Assessment
No single initiative addresses the full scope of the Authenticity Crisis. Content provenance addresses media authenticity but not identity. Regulatory frameworks establish obligations but depend on compliance. Detection provides probabilistic assessments but faces structural disadvantages. Identity systems address verification but require adoption at scale.
The most likely trajectory is a layered approach in which multiple systems operate in parallel. This approach is structurally sound but increases complexity and creates integration challenges that remain unresolved.
10. Structural Outlook
The Authenticity Crisis is not a temporary disruption. It represents a permanent alteration of the information environment requiring sustained institutional adaptation.
The quality and accessibility of generative tools will continue to improve. Real-time generation capabilities will extend from specialized applications to general-purpose communication tools. The barrier to producing synthetic media will approach zero across all modalities.
The transition from automatic trust to verified trust will accelerate. Institutions relying on implicit authenticity assumptions will be compelled to implement explicit verification infrastructure. This transition will be uneven, with high-value domains adopting verification systems before lower-value domains.
The asymmetry between fabrication and verification will persist. Verification infrastructure will improve but will not achieve parity with generation in speed, cost, or accessibility. The fundamental cost advantage of generation over detection is inherent to the technology.
New categories of institutional and legal frameworks will emerge. Standards for content provenance, identity verification, and synthetic content disclosure are in early development. Their maturation requires coordination across jurisdictions, industries, and platforms. This is a process measured in years.
The geopolitical dimension will intensify. Nations with advanced generative capabilities will possess asymmetric advantages in information operations and intelligence deception. The absence of international norms governing synthetic media in interstate relations creates a permissive environment for escalation.
Labor market implications will expand. Professions depending on authentic content production face competitive pressure from synthetic alternatives. New professional categories will emerge around verification, provenance management, and authenticity assurance.
The cumulative trajectory indicates that the Authenticity Crisis will be a defining structural challenge of the coming decade. It is not a technology problem amenable to a technology solution. It is a systems problem requiring coordinated response across technical, institutional, legal, and social domains.
Concluding Synthesis
The Authenticity Crisis is the structural condition in which artificial intelligence can generate human faces, voices, documents, video, and identities indistinguishable from reality, ending automatic trust in media, communication, and identity verification.
This condition emerged from the convergence of generative technologies that individually introduced risk and collectively transformed the information environment. The critical threshold was not a specific technical achievement but the democratization of production capability: the point at which convincing fabrication became accessible to any individual at negligible cost. That threshold has been crossed and cannot be reversed.
The Authenticity Inversion Model identifies five structural components that produce this condition: perceptual parity, identity decoupling, verification asymmetry, evidentiary destabilization, and institutional lag. Each is independently significant; their interaction is transformative.
The consequences are cross-sectoral and structural. Journalism, law, finance, governance, and interpersonal communication all depend on trust assumptions that are no longer reliable. The detection–generation arms race provides tactical tools but cannot resolve the underlying asymmetry. Infrastructure responses are necessary but individually insufficient.
The path forward requires a systemic approach: layered verification infrastructure, regulatory frameworks that establish accountability without assuming compliance, provenance standards that address media at origin, and identity systems that do not depend on the authenticity of media evidence. None of these components is adequate alone. Their integration is the central challenge.
The Authenticity Crisis is not a problem to be solved but a condition to be managed. The societies and institutions that recognize this earliest and respond most effectively will be best positioned to maintain functional trust in an environment where automatic trust is no longer available.
References
1. Coalition for Content Provenance and Authenticity (C2PA). C2PA Technical Specification, Version 2.3. spec.c2pa.org
2. Tabassi, E. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. National Institute of Standards and Technology, January 2023. doi.org/10.6028/NIST.AI.100-1
3. European Parliament and Council of the European Union. Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act). Official Journal of the European Union, L series, 2024. eur-lex.europa.eu
4. European Parliament and Council of the European Union. Regulation (EU) 2024/1183 Amending Regulation (EU) No 910/2014 as Regards Establishing the European Digital Identity Framework (eIDAS 2.0). Official Journal of the European Union, L series, 2024. eur-lex.europa.eu
5. Sporny, M., Noble, G., Longley, D., Burnett, D. C., Zundel, B., and Den Hartog, K. Verifiable Credentials Data Model v2.0. W3C Recommendation, 2024. w3.org/TR/vc-data-model-2.0
6. Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. "Generative Adversarial Networks." 2014. arxiv.org/abs/1406.2661
7. Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. "Analyzing and Improving the Image Quality of StyleGAN." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8110–8119. arxiv.org/abs/1912.04958
8. Nightingale, S. J. and Farid, H. "AI-Synthesized Faces Are Indistinguishable from Real Faces and More Trustworthy." Proceedings of the National Academy of Sciences, 119(8), e2120481119, 2022. doi.org/10.1073/pnas.2120481119
9. Chesney, R. and Citron, D. K. "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." California Law Review, 107(6), 1753–1819, 2019. doi.org/10.15779/Z38RV0D15J
10. Chen, H. "Finance Worker Pays Out $25 Million After Video Call with Deepfake 'Chief Financial Officer.'" CNN, February 4, 2024. cnn.com
11. Federal Reserve System. Synthetic Identity Fraud in the U.S. Payment System: A Review of Causes and Contributing Factors. Payments Fraud Insights, July 2019. fedpaymentsimprovement.org
12. NSA, FBI, and CISA. Contextualizing Deepfake Threats to Organizations. Cybersecurity Information Sheet, September 2023. media.defense.gov
13. Carlini, N. and Farid, H. "Evading Deepfake-Image Detectors with White- and Black-Box Attacks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 2804–2813. doi.org/10.1109/CVPRW50498.2020.00337
14. Microsoft Research. VALL-E: Neural Codec Language Models for Speech Synthesis. 2023. microsoft.com/research
15. Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., and Nießner, M. "FaceForensics++: Learning to Detect Manipulated Facial Images." Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1–11. doi.org/10.1109/ICCV.2019.00009
How to cite this report
Lukasz Czarniecki (2026). The Authenticity Crisis: Structural Breakdown of Trust. Authenticity Crisis. Version 1.0. https://authenticitycrisis.com/report
To cite a specific section, append the section anchor, e.g.:
Lukasz Czarniecki (2026). "The Authenticity Inversion Model." In The Authenticity Crisis: Structural Breakdown of Trust, Section 3.
https://authenticitycrisis.com/report#sec-3