Dec 6, 2025

UAS Knowledge Series: What EASA’s First AI Regulation Means for Drone Operators

EASA has taken a major step in defining how Artificial Intelligence can be safely integrated into aviation, including unmanned systems. With the release of NPA 2025-07(B), the agency introduces its first regulatory framework for AI, offering guidance on how to demonstrate the safety and trustworthiness of AI components in aeronautical products — including drones.

A Shift Toward Risk-Based, Scalable AI Oversight

Instead of creating an entirely new category for AI systems, EASA proposes a performance- and risk-based framework that sits within existing regulatory pathways. This includes detailed specifications, Acceptable Means of Compliance (AMC), and Guidance Material (GM) tailored to how critical an AI component is within a drone’s architecture.

The focus is on enabling innovation while maintaining safety and public trust. The proposal addresses a wide range of AI use cases, from perception models for object detection to higher-level autonomy such as navigation and flight control in drone-in-a-box systems. As AI becomes more embedded in operations such as inspections, security, and emergency response, a harmonised framework becomes essential.

What Makes AI “Trustworthy”?

EASA outlines seven dimensions of AI trustworthiness:

  • Robustness

  • Explainability

  • Transparency

  • Human Oversight

  • Data Quality

  • Reliability

  • Safety

These elements scale with the criticality of the AI system. A low-risk support tool (e.g. AI for pre-flight diagnostics) will be subject to lighter requirements than an autonomous mission controller used in beyond visual line-of-sight (BVLOS) operations.
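
How this scaling could work in practice is easiest to see with a small example. The sketch below maps an assumed criticality level to the evidence an operator might be asked to produce; the level names and evidence items are illustrative assumptions on our part, not classifications taken from NPA 2025-07(B).

```python
# Illustrative only: the level names and evidence items are assumptions
# for this sketch, not classifications from NPA 2025-07(B).
EVIDENCE_BY_CRITICALITY: dict[str, list[str]] = {
    "support_tool": [  # e.g. AI-assisted pre-flight diagnostics
        "documented intended use",
        "basic performance testing",
    ],
    "advisory": [  # e.g. object detection that informs a human pilot
        "training and validation data description",
        "robustness testing against unusual inputs",
    ],
    "safety_critical": [  # e.g. an autonomous BVLOS mission controller
        "full data quality and traceability records",
        "quantified robustness and reliability targets",
        "human-override design and monitoring plan",
        "continuous in-service performance monitoring",
    ],
}

def required_evidence(level: str) -> list[str]:
    """Return the evidence items assumed for a given criticality level."""
    return EVIDENCE_BY_CRITICALITY[level]
```

The specific items matter less than the shape: the same trustworthiness dimensions apply throughout, while the depth of evidence grows with the role the AI plays.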

The trustworthiness criteria will become an integral part of the compliance argument for Design Verification, Type Certification, and operational authorisations, especially when AI plays a safety-critical role.

Why This Matters for Drone Operators

Drone manufacturers and enterprise operators already rely on AI for advanced tasks: think of real-time object tracking, predictive maintenance, or autonomous flight planning. EASA’s proposed rules make it clear that such capabilities can no longer be treated as “black boxes.” Regulators will require visibility into how these systems are trained, tested, and monitored.

If you’re planning to deploy AI-driven functionality in a safety- or mission-critical setting, you’ll need:

  • Documented design intent and model behaviour

  • Evidence of robustness and explainability

  • Human override mechanisms

  • Traceability of inputs, outputs, and decisions (a sketch of what this could look like follows below)
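
The traceability point in particular benefits from a concrete picture. Below is a minimal sketch of what a per-decision trace record could capture; the field names and structure are our own assumptions, not a schema defined by EASA.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionTrace:
    """One auditable record per AI inference made during a flight.

    Field names are illustrative assumptions, not an EASA-defined schema.
    """
    model_id: str                 # which model produced the decision
    model_version: str            # exact version, for reproducibility
    timestamp_utc: str            # when the inference ran
    inputs: dict                  # e.g. sensor frame reference, telemetry
    output: dict                  # e.g. detected object class and confidence
    human_override: bool = False  # whether an operator overrode the output

def new_trace(model_id: str, model_version: str,
              inputs: dict, output: dict) -> AIDecisionTrace:
    """Build a trace record stamped with the current UTC time."""
    return AIDecisionTrace(
        model_id=model_id,
        model_version=model_version,
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        inputs=inputs,
        output=output,
    )

# Example: record a detection so it can be replayed during an audit.
trace = new_trace(
    model_id="obstacle-detector",
    model_version="1.4.2",
    inputs={"frame": "cam0/000132.jpg", "altitude_m": 42.0},
    output={"class": "powerline", "confidence": 0.91},
)
print(json.dumps(asdict(trace), indent=2))
```

Storing one such record per inference gives auditors a replayable chain from sensor input to system decision, and a natural place to attach evidence of human oversight.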

How AirHub Can Support You

At AirHub, we support both the technical deployment and the regulatory integration of AI in drone operations.

With our software, you can:

  • Integrate AI tools into the AirHub platform via API or SDK

  • Log inference data and sensor inputs during flights (see the sketch after this list)

  • Maintain a complete operational trace of decisions and performance

  • Centralise documentation to support future audits or certification
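
As an example of what logging inference data during a flight could look like, the sketch below posts one record to a logging endpoint. The endpoint, client code, and payload are hypothetical placeholders for illustration, not the actual AirHub API or SDK.

```python
import json
import urllib.request

# Hypothetical endpoint for illustration; the real AirHub API/SDK may differ.
LOG_ENDPOINT = "https://platform.example.invalid/api/flights/{flight_id}/ai-logs"

def log_inference(flight_id: str, record: dict) -> None:
    """POST one AI inference record so it is stored with the flight log."""
    req = urllib.request.Request(
        LOG_ENDPOINT.format(flight_id=flight_id),
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Fire-and-forget for brevity; production code should handle failures
    # and buffer records locally so the operational trace stays complete.
    urllib.request.urlopen(req)

# Usage: call after each inference during the flight.
log_inference("FLT-2025-0142", {
    "model": "obstacle-detector:1.4.2",
    "input_ref": "cam0/000133.jpg",
    "output": {"class": "bird", "confidence": 0.77},
})
```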

With our consultancy services, we help you:

  • Translate EASA’s trustworthiness principles into practical documentation

  • Align your ConOps, SORA, and risk assessments with AI features

  • Prepare for Design Verification or Certification submissions

Whether you’re developing an autonomous inspection workflow or deploying AI-driven situational awareness tools for security, we help bridge the gap between operational value and regulatory compliance.

What’s Next?

NPA 2025-07(B) is still in the consultation phase, and stakeholders can submit comments until June 2026. However, it’s already clear that AI systems will require a structured approach to risk, performance, and documentation, just like any other safety-critical system in aviation.

Drone operators that adopt these principles early will not only be better prepared for compliance, but will also gain a competitive edge by scaling autonomous operations safely and confidently.