In a 2024 strategy document the US NRC said rapid advances in AI have “tremendous potential to change how the nuclear industry and the NRC operate”. The organisation is at the mid-point of a FY2023–2027 Strategic Plan that has focused not only on the potential functions that AI can serve, but also on the organisation’s own capabilities to review and evaluate the application of AI, maintain awareness of technological innovations and ensure that its use is safe and secure. 

A Strategic Plan set out in April 2024 has five goals: 

  • Ensure NRC readiness for regulatory decision-making
  • Establish an organisational framework to review AI applications
  • Strengthen and expand AI partnerships
  • Cultivate an AI-proficient workforce
  • Pursue use cases to build an AI foundation across the NRC.

To support these goals, the agency is strengthening AI expertise among the NRC staff, staying ahead of technological innovations in AI, and building collaborative efforts with other Federal agencies and counterpart agencies in other countries. 

Key to the strategy is an evaluation of use cases. The NRC staff identified 61 potential use cases where AI might improve organisational efficiency or enhance experiences and services for agency stakeholders. Of these 61, the staff identified 36 that align with the capabilities of current AI tools; the others could be addressed using non-AI solutions. 

To implement these, the agency says it will need: a sound data strategy and data management programme; AI governance; an information technology infrastructure that supports AI development; and a skilled AI workforce. The next steps in the implementation plan for the NRC are: 

  • Develop an enterprise-wide AI strategy to advance the use of AI within the agency
  • Prepare AI governance to ensure responsible and trustworthy AI implementation 
  • Mature the agency data management programme
  • Strengthen AI talent by strategically hiring and by upskilling the existing workforce
  • Allocate resources to support the integration of AI tools as part of the IT infrastructure.

The NRC’s Enterprisewide Strategy on Artificial Intelligence

The Commission also has to invest in foundational tools, such as generative AI services that integrate with current applications and with the Agencywide Documents Access and Management System (ADAMS) search technology. 
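
As a purely illustrative sketch of what such an integration might look like, the example below retrieves matching records from a stand-in document store and assembles them into a prompt for a generative model. The Document class, the search_documents and build_prompt helpers, and the sample accession numbers are all hypothetical; nothing here reflects an actual ADAMS or NRC interface.

```python
# Hypothetical sketch of a retrieval-augmented generative AI workflow.
# The document store, search function and prompt builder are stand-ins;
# they are NOT an actual ADAMS or NRC API.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


# Stand-in corpus; in practice this would be results from a search service.
CORPUS = [
    Document("ML23100A001", "Regulatory guide discussing periodic testing of safety systems."),
    Document("ML23100A002", "Inspection report covering control room habitability requirements."),
]


def search_documents(query: str, corpus: list[Document], top_k: int = 3) -> list[Document]:
    """Very simple keyword match standing in for a real search back end."""
    terms = query.lower().split()
    scored = [(sum(t in d.text.lower() for t in terms), d) for d in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]


def build_prompt(query: str, hits: list[Document]) -> str:
    """Assemble retrieved passages into a prompt for a generative model."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in hits)
    return f"Answer using only the documents below.\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    question = "Which documents discuss periodic testing?"
    prompt = build_prompt(question, search_documents(question, CORPUS))
    print(prompt)  # In a real service this prompt would be sent to an LLM.
```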

Gap analysis 

The NRC’s AI programme has been informed by a report, Regulatory Framework Gap Assessment For The Use Of Artificial Intelligence In Nuclear Applications, by the Southwest Research Institute (SWRI).

In carrying out the AI regulatory gap analysis (AIRGA), SWRI used regulatory guides (RGs), which it noted were “frequently more detailed than regulations”. The analysis first examined whether the guides had potential gaps or details inconsistent with the use of AI. Using this set of potential gaps, it then examined the NRC regulations for potential conflicts with the use of AI. 

The analysis sought answers to three questions for each regulatory gap: 

  • Can AI technologies be used within the scope of the RG? 
  • Is the RG flexible enough to allow the use of AI? 
  • Does the RG provide adequate guidance to evaluate the use of AI? 

The analysis identified 71 RGs with potential gaps, which were classified into eight categories: 

Gap 1: Implied Manual Actions: This refers to statements in RGs implying manual actions by humans. Operators and technicians are implicitly referred to in the RG, either to take an action or to perform a task. AI technologies offer alternatives that could execute those actions without human intervention, which would potentially conflict with statements in RGs. 

Gap 2: Special Computations: AI techniques can be used in support of special computations, particularly when databases exist that could be used for machine learning. In general, the guidance was considered insufficient to evaluate computations using AI techniques. 

Gap 3: Preoperational and Initial Testing Programmes: RGs for preoperational and initial testing programmes recommend specific systems to be tested prior to operation and as part of any initial testing programme. If AI systems are used in safety systems, it is expected those systems would need to be fully tested, including tests of software malfunction and fail-safe design, with consideration of special risks. AI systems may require additional pre-operational testing to complement the criticality, risk, hazard and security analyses. Regardless of whether AI systems are used in safety or non-safety systems, AI systems should be thoroughly tested from a cybersecurity standpoint. 

Gap 4: Habitability Conditions under Autonomous Operations: Some RGs describe acceptable methods for ensuring habitable conditions in power plant control rooms under normal and accident situations, such as low radiation, clean air and sufficient oxygen. Methods for ensuring habitability are in general also sufficient to protect critical equipment, but if AI systems were used to achieve higher levels of autonomous operation, up to full autonomy, habitability guidance could be refocused on recommendations to protect equipment. 

Gap 5: Periodic Testing, Monitoring, and Reporting: This is similar to the general manual actions addressed in Gap 1, but is related to manual actions for periodic testing, monitoring, and surveillance. It is the category with the most RGs. 

Gap 6: Software for Safety-Related Applications: RGs related to software development, control and procurement for safety-related applications were adequate for AI as well as other software, but complementary guidance may be needed.

Gap 7: Radiation Safety Support: There may be commercial incentives to use large language models (LLMs) and other AI technologies to support activities traditionally assigned to radiation safety professionals and technicians. Digital advisors could keep track of relevant federal and state regulations, monitoring programmes and recommended actions, and may write reports. However, RGs imply that specific activities and tasks can only be executed by certified professionals, mirroring underlying NRC requirements. 

Gap 8: Training and Human Factors Engineering: If AI systems were successfully used in nuclear power plants, the role of operators may change. If such circumstances were allowed, it is unclear whether training programmes should be updated to cover only the controls still operated by humans, and whether AI systems should include switches to hand control back to operators under special circumstances. The scope of training programmes may require examination in light of the functions and actions taken on by AI systems. 

Overall, the main potential for regulatory conflict with AI technologies lies in regulations that explicitly or implicitly involve actions by humans, which could alternatively be executed by AI systems. However, most regulatory requirements do not specify the role of humans, only that actions should be completed. 

Similarly, the regulations do not generally specify methods for executing computations, so there is flexibility to use AI techniques, except for those governing modelling of the emergency core cooling system in nuclear reactors and fracture toughness models. In those cases, the regulations call for physics-based models satisfying special attributes. 

SWRI says that rather than explicitly introducing AI statements in RGs to address potential gaps, it may be more practical to consider developing new RGs that could address cross-cutting issues. 

But it also said RGs and software development standards may need to be extended to recognise that AI systems have special features. For example, machine learning requires abundant data, raising questions about data sufficiency, quality and representation of a range of conditions and multiple states of the system. Unique attributes of AI technologies draw attention to issues related to systematic testing and the level of documentation of verification, validation and AI system confidence activities. There is always a possibility of anomalous outputs by AI systems (sometimes referred to as hallucinations). Systematic fail-safe design must therefore include active identification of cases where input data differs substantially from that used during model development, active identification of anomalous outputs, and options to mitigate or correct errors and stop them propagating. 
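
As a rough illustration of what such fail-safe checks might look like in practice (a minimal sketch, not anything prescribed by the SWRI report), the snippet below flags inputs that fall well outside the range seen during model development and predictions that fall outside a physically plausible band. The z-score threshold, the plausible output range and the toy model are all illustrative assumptions.

```python
# Minimal sketch of fail-safe checks around an AI model's inputs and outputs.
# Thresholds and the plausible output range are illustrative assumptions.

import numpy as np


def fit_input_statistics(training_inputs: np.ndarray):
    """Record per-feature mean and standard deviation from the training data."""
    return training_inputs.mean(axis=0), training_inputs.std(axis=0) + 1e-12


def input_is_out_of_distribution(x, mean, std, z_limit: float = 4.0) -> bool:
    """Flag inputs that lie far outside the range seen during model development."""
    z_scores = np.abs((x - mean) / std)
    return bool(np.any(z_scores > z_limit))


def output_is_anomalous(prediction: float, plausible_range=(0.0, 1.0)) -> bool:
    """Flag predictions outside a physically plausible band (illustrative bounds)."""
    low, high = plausible_range
    return not (low <= prediction <= high)


def guarded_predict(model, x, mean, std):
    """Return the model prediction only when both checks pass; otherwise defer."""
    if input_is_out_of_distribution(x, mean, std):
        return None, "input outside training distribution - defer to human review"
    y = float(model(x))
    if output_is_anomalous(y):
        return None, "anomalous output - defer to human review"
    return y, "ok"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 3))
    mean, std = fit_input_statistics(train)
    model = lambda x: 1.0 / (1.0 + np.exp(-x.sum()))  # toy stand-in model
    print(guarded_predict(model, np.array([0.1, -0.2, 0.3]), mean, std))
    print(guarded_predict(model, np.array([25.0, 0.0, 0.0]), mean, std))
```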

Next steps

With AI readily available in commercial and open-source software, guidance is needed on how to evaluate computations using AI technologies and on the level of supporting documentation needed. Confidence depends on the quality of the data and on whether trends exist in the data that could be synthesised by an AI system. A standard approach in machine learning (ML) is to set aside data for verification, checking that the prediction error on the verification dataset is similar to the error on the training dataset, as in the sketch below. Recommendations from AI practitioners are needed on additional systematic approaches and data analyses that would enhance confidence in predictions, which could be captured in general guidance. 
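
As a minimal sketch of that hold-out practice (assuming scikit-learn and synthetic data purely for illustration), the example below sets aside a verification set and compares the prediction error on the training data with the error on the held-out data; a large gap would suggest the model has not generalised.

```python
# Sketch of the standard hold-out verification check described above.
# The data here are synthetic; in practice they would come from the
# experimental or computational database being modelled.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(size=(1000, 4))
y = X[:, 0] * 2.0 + np.sin(3.0 * X[:, 1]) + rng.normal(scale=0.05, size=1000)

# Set aside a verification (hold-out) dataset before any training.
X_train, X_verify, y_train, y_verify = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

train_error = mean_absolute_error(y_train, model.predict(X_train))
verify_error = mean_absolute_error(y_verify, model.predict(X_verify))

print(f"training error:     {train_error:.4f}")
print(f"verification error: {verify_error:.4f}")

# Confidence is higher when the two errors are of comparable size;
# a much larger verification error suggests the model does not generalise.
```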

Despite the growing interest in establishing standards for the development, deployment and use of AI systems, there are no widely accepted or universally adopted standards. Existing standards typically cover areas such as ethics, safety and transparency. They aim to provide a common framework for developers, users and regulators to assess and compare AI system performance and trustworthiness, but it is difficult to derive practical guidance from them. 

Since the Strategic Plan was published last year, the Commission has continued to pursue its five goals. For example, in expanding AI partnerships, in September last year it jointly published an ‘AI Principles’ paper with the Canadian Nuclear Safety Commission and the UK’s Office for Nuclear Regulation. The document outlines guiding principles to consider when using AI in nuclear facilities and with nuclear materials. 

In the coming year the Commission also plans to move the AI issue forward with a number of initiatives as part of a strategy to engage with the inspection community, communicate with industry stakeholders and further develop the regulatory framework. The NRC’s plans for the year include publishing a strategy for identifying and removing barriers to the use of AI and improving AI maturity, as well as broadening the scope of generative AI training for agency staff and contractors and developing rules of behaviour for the use of generative AI tools.