The CAA has published its i) response to emerging AI-enabled automation (CAP3064 here), ii) strategy for regulating AI in the UK aviation industry (CAP3064A here), and iii) strategy for how it will use AI. Here we summarise the first two - how the CAA plans to regulate AI rather than use it.

The impact of AI

The CAA had already published its framework for AI, consisting of:

  1. a common language for AI;
  2. building trust in AI;
  3. an AI technology outlook.

This is the ‘foundational AI framework that will ensure consistency and enable capability enhancement across regulatory and operational domains’. It underpins the strategies for regulating AI in aerospace and the CAA's use of AI.

In the CAA's response to emerging AI-enabled automation, it provides an overview of how AI could influence the CAA, having previously produced a horizon-scanning paper on AI use cases in aviation (CAP3019 here).

The CAA recognises there are two main challenges of AI use for regulation:

  • assuring the robustness of AI-based software - given the sophistication of AI systems, it can be difficult to verify their decision-making processes and to prove that the AI will operate safely and securely across all scenarios, including as the system evolves through learning;
  • people & AI working together - how to ensure that humans and AI work effectively together, with adequate trust in the AI.

The CAA's readiness for regulating AI, and AI's potential impact, will be assessed against the eight critical elements of effective safety and security oversight described by the International Civil Aviation Organization:

  1. Establishing comprehensive laws that cover both safety and security aspects of civil aviation.
  2. Enacting regulations that address safety and security requirements for aviation activities.
  3. Establishing competent authorities responsible for both safety and security oversight.
  4. Ensuring personnel involved in oversight are adequately qualified and trained covering both safety and security aspects.
  5. Providing guidance materials to both the industry and inspectorate while providing the tools that cover safety and security oversight activities.
  6. Implementing processes for licensing, certifying, and approving entities, personnel, and equipment regarding both safety and security.
  7. Conducting regular monitoring and inspection activities to ensure compliance with both safety and security regulations.
  8. Establishing mechanisms for addressing safety and security issues, incidents, and accidents promptly and effectively.

The CAA seeks to align with the UK government's pro-innovation approach to AI regulation, whilst also considering international AI regulations and standards.

What next

The CAA's ‘flight plan’ for regulating AI includes:

  • Pre-flight checks in 2025-2026 - developing capability, regulatory research, mapping standards & regulations, collaborating & partnering;
  • Taxi & Take-off in 2026-2028 - subject to the above, the CAA aspires to the following types of activity: launching regulatory projects with the sector; sponsoring regulatory and policy research; systemic reviews across CAA's regulatory responsibilities.

The above will then inform the CAA's AI Portfolio, which will be the delivery body for the strategies. The CAA will establish an AI Strategy & Portfolio Hub, providing guidance, promoting sandboxing and working groups, and ensuring the effective implementation of the UK's AI regulatory principles.

As part of all this, the CAA aims to integrate AI governance into existing governance frameworks.

Whilst no formal consultation is open, the CAA states that it welcomes feedback as it continually monitors and evolves its strategy.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team.