Last updated: 2026-03-26
In March 2026, the National Institute of Standards and Technology (NIST) submitted its annual report to Congress summarizing FY 2025 progress made under the National Construction Safety Team (NCST) Act. The report — publicly available via NIST's news and events page — includes a detailed overview of ongoing investigation work, with notable focus on the Champlain Towers South collapse in Surfside, Florida.
At first glance, this may appear to be a niche construction safety update. But for organizations building AI governance frameworks, managing complex systems risk, or pursuing ISO 42001:2023 certification, this report carries layered implications that go well beyond building codes and structural engineering. NIST is the same body that publishes the AI Risk Management Framework (AI RMF 1.0) — and the investigative rigor, accountability mechanisms, and systemic risk philosophy embedded in the NCST report are deeply instructive for AI safety leaders.
Let me explain why this matters to your organization right now.
What Is the NCST Act, and Why Does NIST Lead These Investigations?
The National Construction Safety Team Act (Public Law 107-231) authorizes NIST to deploy investigative teams following significant building failures or disasters. These teams conduct technically rigorous, root-cause analyses with the explicit goal of improving building codes, standards, and practices — not to assign legal liability, but to generate actionable safety knowledge.
NIST has used this authority to investigate catastrophic events including:
- The World Trade Center collapses (September 11, 2001)
- The Station Nightclub fire (West Warwick, Rhode Island, 2003)
- Champlain Towers South collapse (Surfside, Florida, June 2021)
Each investigation follows a structured methodology: data collection, technical analysis, hypothesis testing, and final recommendations to policymakers and industry.
The FY 2025 annual report to Congress confirms that NIST continues to make progress on the Champlain Towers South investigation, one of the most technically complex structural failure analyses in U.S. history. The 12-story residential condominium partially collapsed on June 24, 2021, killing 98 people — and determining the precise sequence of failure mechanisms has required years of multi-disciplinary scientific work.
The Champlain Towers South Investigation: Where Things Stand
The Champlain Towers South collapse is not a simple case. NIST's investigation has required forensic analysis of concrete degradation, foundation behavior, structural load paths, and the interplay of long-term material deterioration with design and maintenance decisions. This is complexity at the highest level — the kind that challenges even the most sophisticated investigative methodologies.
According to NIST's FY 2025 report, the agency made measurable progress during the fiscal year, advancing technical analysis phases that are expected to culminate in final findings and recommendations. The investigation is notable for several reasons relevant to risk professionals:
- Long time horizons for complex system failures. The collapse occurred in June 2021. A full NIST investigation spanning multiple fiscal years underscores that understanding catastrophic failures in complex systems is not a quick process.
- Multi-causal failure models. Early speculation about a single root cause has given way to a nuanced, multi-factor analysis — consistent with how modern risk frameworks, including ISO 42001:2023, approach AI system failures.
- Recommendations designed to prevent recurrence at scale. NIST's NCST findings are intended to inform national building codes, not just the local jurisdiction where an incident occurred. This is a direct parallel to how AI governance standards like ISO 42001 aim to establish replicable, scalable safety mechanisms.
Why NIST's Safety Investigation Philosophy Is a Blueprint for AI Governance
Here is where I want to offer expert analysis that goes beyond the news cycle.
NIST does not simply document what went wrong. The agency asks: Why did the system fail? What conditions allowed the failure to occur? What structural changes — to standards, practices, and oversight — can prevent recurrence?
This is precisely the investigative mindset that ISO 42001:2023 demands of AI management systems. Under ISO 42001:2023 clause 6.1.2 (actions to address AI risks and opportunities), organizations are required to identify not just what could go wrong with an AI system, but the systemic conditions that would allow harm to materialize — and then implement controls proportionate to that risk.
NIST's approach to physical safety investigations and its approach to AI risk management share a common intellectual DNA. This is not a coincidence. The same institutional culture that produced the NIST AI RMF 1.0 also drives NCST investigations: empirical rigor, systems thinking, and a bias toward actionable recommendations over theoretical compliance.
Key takeaway: NIST's National Construction Safety Team investigations and its AI Risk Management Framework share a foundational methodology — multi-causal root-cause analysis, systemic risk identification, and scalable preventive recommendations — making NCST reports directly instructive for AI governance professionals.
Key Parallels: Physical Infrastructure Safety vs. AI System Safety
The table below maps NIST NCST investigation principles to their direct analogs in AI governance under ISO 42001:2023.
| NIST NCST Investigation Principle | ISO 42001:2023 / AI Governance Analog |
|---|---|
| Multi-causal failure analysis (not single root cause) | Clause 6.1.2: Systemic AI risk identification across lifecycle |
| Long-horizon investigation for complex systems | Clause 10.1: Continual improvement over the AI system lifecycle |
| Separation of fact-finding from legal liability | AI incident reporting: objective documentation vs. blame assignment |
| Recommendations for industry-wide code changes | ISO 42001 Annex A controls: scalable, cross-organizational safeguards |
| Forensic evidence preservation and chain of custody | AI system logging, audit trails, and data lineage (Annex A.6, AI system life cycle) |
| Stakeholder interviews and expert panels | ISO 42001 clause 4.2: Understanding needs of interested parties |
| Congressional accountability and transparency | ISO 42001 clause 5.1: Leadership accountability and governance |
| Publicly available findings to advance safety science | Responsible AI disclosure and transparency principles |
This table should be pinned to your organization's AI governance reference library. The structural parallels are not superficial — they reflect a shared philosophy of safety as a systemic property, not an individual checklist item.
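To make the "logging, audit trails, and data lineage" row concrete, here is a minimal Python sketch of a tamper-evident audit record for AI system events. All names here (`audit_record`, the model version string, the field names) are illustrative assumptions, not prescribed by ISO 42001 or NIST; the point is the pattern of forensic evidence preservation applied to model decisions.

```python
import hashlib
import json
import time


def audit_record(model_version: str, event_type: str, payload: dict) -> dict:
    """Build a tamper-evident audit record for an AI system event.

    Each record carries a timestamp, the model version, and a SHA-256
    digest of the event payload, so a later reviewer can verify that
    the logged inputs were not altered after the fact -- the software
    analog of chain-of-custody evidence preservation.
    """
    body = json.dumps(payload, sort_keys=True)  # canonical serialization
    return {
        "timestamp": time.time(),
        "model_version": model_version,
        "event_type": event_type,
        "payload": payload,
        "payload_sha256": hashlib.sha256(body.encode()).hexdigest(),
    }


# Example: record a single prediction event for the audit trail.
record = audit_record(
    "credit-model-v3.2",          # hypothetical model identifier
    "prediction",
    {"input_id": "req-1042", "score": 0.87},
)
```

In practice such records would be appended to write-once storage; the sketch only shows the shape of the evidence being preserved.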
What the FY 2025 Report Signals About NIST's Broader Regulatory Posture
NIST's annual report to Congress is a statutory requirement — but it is also a signal of institutional priorities and capacity. Several themes emerging from the FY 2025 report are worth noting for AI governance professionals:
1. NIST Is Operating Under Sustained Resource and Scope Pressure
Complex, multi-year investigations like Champlain Towers South require sustained funding, specialized expertise, and investigative continuity. The fact that NIST continues to advance this investigation through FY 2025 — while simultaneously managing AI RMF adoption, cybersecurity framework updates, and international standards participation — speaks to the breadth of NIST's mandate.
For organizations relying on NIST frameworks for AI governance, this is a relevant context: NIST's capacity to rapidly produce new AI-specific guidance may be constrained by competing priorities. Organizations should not wait for NIST to tell them what to do next. Build ISO 42001:2023-aligned governance now.
2. Transparency and Accountability to Congress Is a Feature, Not a Bug
NIST reports to Congress annually on NCST progress. This isn't bureaucratic overhead; it's a designed accountability mechanism that creates public trust in the safety investigation process. Analyses such as the National Academies of Sciences, Engineering, and Medicine's 2023 work on federal safety reporting have found that mandatory reporting requirements measurably increase stakeholder trust in regulatory outcomes compared with voluntary disclosure regimes.
AI organizations should internalize this principle. ISO 42001:2023 clause 5.1 requires top management to demonstrate leadership and commitment to the AI management system — which includes establishing accountability mechanisms visible to external stakeholders, regulators, and the public.
3. Building Code Reform Takes Time — AI Standards Reform May Take Less
One of the documented challenges of NCST investigations is the lag between final NIST findings and actual adoption of code changes by state and local jurisdictions. For Champlain Towers South specifically, the stakes are high: approximately 1.5 million condominium units in the United States are in buildings constructed with similar design methodologies to those used at Surfside, according to structural engineering industry estimates.
The AI governance landscape moves faster. ISO 42001:2023 was published in December 2023, and enterprise adoption is accelerating. The EU AI Act entered into force in August 2024, with compliance obligations phasing in through 2026 and 2027. Organizations that align their AI governance to ISO 42001:2023 now are positioning themselves ahead of a regulatory convergence that is already underway — not theoretical.
The Surfside Collapse as a Case Study in Compounding Risk
From a risk management perspective, the Champlain Towers South collapse is instructive in ways that transcend building construction. The available evidence points to a failure mode that AI governance professionals will recognize immediately: compounding, long-latency risks that were individually non-critical but collectively catastrophic.
Key factors reportedly under investigation include:
- Deferred maintenance decisions made over years, each individually justifiable
- Design-era assumptions that did not account for long-term material behavior
- Inspection regime gaps that failed to detect deterioration below visible thresholds
- Governance failures at the condominium association level that delayed remediation
Now substitute "AI system" for "building":
- Deferred model retraining or validation updates
- Design-era assumptions that did not account for real-world distribution shift
- Monitoring gaps that fail to detect performance degradation below alert thresholds
- Governance failures at the organizational level that delay remediation
Key takeaway: The Champlain Towers South collapse investigation demonstrates that catastrophic system failures typically result from compounding long-latency risks — a failure pattern directly applicable to AI systems that accumulate unaddressed model drift, data quality degradation, and governance gaps over time.
The parallel is uncomfortably precise. This is exactly why ISO 42001:2023 clause 9.1 (monitoring, measurement, analysis, and evaluation) requires organizations to establish ongoing AI system performance monitoring, not just point-in-time assessments at deployment.
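The clause 9.1 monitoring requirement can be made concrete with a simple statistical drift check. The sketch below uses the Population Stability Index (PSI), one common industry technique for comparing a production score distribution against its training-time reference; the thresholds shown are widely used rules of thumb, not ISO 42001 requirements.

```python
import math


def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are per-bin proportions that each sum to 1.
    A common rule of thumb: PSI below 0.1 is stable, 0.1 to 0.25
    warrants review, and above 0.25 signals significant shift.
    """
    eps = 1e-6  # guard against empty bins before taking the log
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi


# Reference (training-time) vs. current production score distribution,
# both binned into four quartile buckets (illustrative values).
reference = [0.25, 0.25, 0.25, 0.25]
current = [0.05, 0.15, 0.30, 0.50]
psi = population_stability_index(reference, current)
if psi > 0.25:
    print(f"ALERT: significant drift detected (PSI={psi:.3f})")
```

A check like this, run on a schedule rather than only at deployment, is exactly the kind of ongoing measurement clause 9.1 calls for: it surfaces degradation before it compounds into an incident.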
What Your Organization Should Do Right Now
The NIST FY 2025 annual report to Congress is a timely reminder that safety governance — whether for physical infrastructure or AI systems — is not a one-time compliance exercise. It is a sustained operational commitment. Here is my practical guidance for AI governance and risk leaders:
Immediate Actions (Next 30 Days)
- Review your AI incident response plan against ISO 42001:2023 clause 10.2 (nonconformity and corrective action). Does it provide for multi-causal root-cause analysis, or does it stop at identifying a proximate cause?
- Audit your AI system monitoring coverage under clause 9.1. Are you detecting performance degradation proactively, or only after incidents surface?
- Document your leadership accountability structure per clause 5.1, with clear lines of responsibility for AI risk escalation.
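For the incident-response review above, one structural way to force multi-causal analysis is to separate the proximate cause from systemic contributing factors in the incident record itself. The sketch below is a hypothetical data structure; the class and field names are illustrative, not prescribed by ISO 42001 or NIST.

```python
from dataclasses import dataclass, field


@dataclass
class ContributingFactor:
    """One factor in a multi-causal analysis: what happened, and in
    which layer of the system it originated (data, model, process,
    or governance)."""
    description: str
    layer: str


@dataclass
class AIIncidentRecord:
    """An incident record that separates the proximate cause from the
    systemic contributing factors, mirroring NCST-style investigation
    structure rather than stopping at a single root cause."""
    incident_id: str
    proximate_cause: str
    contributing_factors: list[ContributingFactor] = field(default_factory=list)

    def is_multi_causal(self) -> bool:
        # Review gate: an analysis recording fewer than two
        # contributing factors is flagged as likely incomplete.
        return len(self.contributing_factors) >= 2


incident = AIIncidentRecord(
    incident_id="INC-0042",
    proximate_cause="Scoring model returned degraded predictions",
    contributing_factors=[
        ContributingFactor("Retraining deferred past schedule", "process"),
        ContributingFactor("Upstream feature pipeline schema change", "data"),
        ContributingFactor("No escalation path for drift alerts", "governance"),
    ],
)
```

The `is_multi_causal` gate is the design point: like an NCST investigation, the record is not considered complete while it names only one cause.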
Medium-Term Actions (Next 90 Days)
- Conduct a gap assessment against ISO 42001:2023 Annex A controls, specifically A.5 (assessing impacts of AI systems) and A.6 (AI system life cycle, including documentation and event logging).
- Establish a stakeholder communication plan for AI risk and safety disclosures, modeled on NIST's transparency-first approach to NCST investigations.
- Engage external audit readiness support if your organization is targeting ISO 42001 certification in 2026. With over 200 clients served and a 100% first-time audit pass rate, Certify Consulting can accelerate your path to certification without the guesswork.
The Bigger Picture: NIST as the Common Thread
It is worth stepping back to appreciate what NIST represents in the U.S. standards and safety ecosystem. Whether investigating a collapsed condominium tower in Florida, publishing cybersecurity frameworks for critical infrastructure, or releasing an AI Risk Management Framework used by enterprises across six continents, NIST operates from a consistent set of values: empirical rigor, stakeholder inclusion, actionable recommendations, and transparent accountability.
Key takeaway: NIST's institutional role in U.S. safety governance — spanning physical infrastructure, cybersecurity, and artificial intelligence — positions its annual Congressional reports as leading indicators of federal regulatory priorities that AI governance professionals should monitor closely.
The FY 2025 annual report to Congress on NCST investigations is not an isolated bureaucratic filing. It is a data point in an ongoing pattern of NIST demonstrating to Congress — and to the public — that complex, high-stakes safety investigations require sustained investment, methodological discipline, and a long-term commitment to getting the answers right.
Organizations building AI governance frameworks should aspire to the same standard.
How Certify Consulting Can Help
At Certify Consulting, I work with organizations across industries to build AI management systems that are audit-ready, operationally sustainable, and aligned with ISO 42001:2023 from the ground up. The lessons embedded in NIST's NCST investigations — multi-causal risk analysis, leadership accountability, transparent documentation, and continual improvement — are not just philosophical talking points. They are the practical foundations of a certification-ready AI governance program.
With 8+ years of experience in management system certification and a 100% first-time audit pass rate across 200+ clients, Certify Consulting brings both the technical depth and the practical implementation experience to accelerate your ISO 42001 journey.
Ready to build an AI governance system that reflects genuine safety rigor — not just checkbox compliance? Explore our ISO 42001 implementation services at certify.consulting and connect with our team today.
Frequently Asked Questions
What did NIST's FY 2025 annual report to Congress cover?
NIST's FY 2025 annual report to Congress, submitted in March 2026, summarized progress made under the National Construction Safety Team (NCST) Act during fiscal year 2025. It included updates on active investigations, with notable focus on the ongoing Champlain Towers South collapse investigation in Surfside, Florida.
What is the National Construction Safety Team (NCST) Act?
The NCST Act (Public Law 107-231) authorizes NIST to deploy investigative teams following significant building failures or disasters. The resulting investigations aim to identify root causes and produce recommendations for improving building codes and safety practices — without assigning legal liability.
How is NIST's construction safety work relevant to AI governance?
NIST leads both physical safety investigations under the NCST Act and the AI Risk Management Framework (AI RMF 1.0). The investigative methodology used in NCST investigations — multi-causal root-cause analysis, systemic risk identification, leadership accountability, and transparent reporting — directly parallels the requirements of ISO 42001:2023 for AI management systems.
What is the current status of the Champlain Towers South investigation?
As of the FY 2025 annual report, NIST continues to advance technical analysis phases of the Champlain Towers South investigation. The collapse occurred in June 2021, and the complexity of multi-factor failure analysis has required a multi-year investigative timeline. Final findings and recommendations are expected to inform national building codes.
How should AI governance leaders respond to NIST's FY 2025 report?
AI governance leaders should use the report as a prompt to review their AI incident response plans, audit monitoring coverage under ISO 42001:2023 clause 9.1, and ensure leadership accountability structures are documented per clause 5.1. Organizations targeting ISO 42001 certification in 2026 should begin gap assessments and external audit preparation immediately.
Sources: NIST FY 2025 Annual Report to Congress on NCST Progress | ISO 42001:2023 | NIST AI RMF 1.0 | National Academies of Sciences, Engineering, and Medicine (2023)
Jared Clark
Principal Consultant, Certify Consulting
Jared Clark is the founder of Certify Consulting, helping organizations achieve and maintain compliance with international standards and regulatory requirements.