The Nuance Gap: Why we expanded our AI risk assessment model

Regulatory compliance is usually binary. A device either passes EMC testing or it doesn’t. It either has a unique password (RED 3.3) or it doesn’t.

The EU AI Act is different. It brings a level of subjective complexity that is foreign to most embedded engineering workflows. When we stress-tested our original, streamlined compliance model against real-world Edge AI architectures, we identified a significant “Risk Gap.”

We found that standard questions like “Does your device use AI?” were insufficient. They generated false negatives: clearing devices as safe when, in reality, they carried significant legal exposure under Articles 12, 14, and 15 of the new regulation.

To close this gap, we have updated our Device Prophet engine with a more granular logic matrix. Below is a gap analysis of the specific architectural blind spots we identified, and why we now require deeper data points to validate your roadmap.


Gap #1: The “Influence” Distinction (Article 6)

The Previous Assumption: Many developers assume that if their AI is not powering a medical device or a car, it is automatically “Minimal Risk.”

The Regulatory Reality: The classification depends heavily on the degree of influence the AI has over the decision.

We introduced the specific question “Does the AI directly influence safety-critical or diagnostic decisions?” to distinguish between three states that look identical in code but are legally distinct:

  • Insight (Minimal Risk): A camera counts people in a room to adjust HVAC. The decision is non-critical.
  • Supportive (High Risk potential): A camera highlights a potential safety hazard on a dashboard, but a human guard decides whether to trigger the alarm.
  • Direct (High Risk): A camera detects a safety hazard and automatically locks a machine door.

A “Direct” influence makes the AI a safety component. Under the AI Act (and the Machinery Regulation), this triggers a full third-party conformity assessment. If your architecture relies on the AI making autonomous decisions without a hard-coded “sanity check” or human override, you move from self-certification to a mandatory audit.
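
The architectural difference is easier to see in code. Below is a minimal C sketch contrasting a “direct” control loop with a “supportive” one that adds a hard-coded sanity check and a human confirmation gate. Every type, function, and threshold name in it (ai_inference_t, lock_machine_door(), the 0.85 confidence gate) is an illustrative assumption, not part of our engine or a prescribed implementation.

    /* Sketch: how "direct" vs. "supportive" AI influence differs in firmware.
       All names below are illustrative stubs, not a real device API. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool  hazard_detected;   /* model classification result */
        float confidence;        /* 0.0 .. 1.0                  */
    } ai_inference_t;

    /* Hypothetical hardware and operator-interface stubs. */
    static void lock_machine_door(void)        { puts("door locked"); }
    static void raise_dashboard_alert(void)    { puts("alert shown to operator"); }
    static bool independent_limit_switch(void) { return true;  }  /* non-AI sensor  */
    static bool operator_confirmed(void)       { return false; }  /* human decision */

    /* Direct influence: the AI output alone drives the actuator,
       so the model acts as a safety component. */
    static void control_loop_direct(const ai_inference_t *out)
    {
        if (out->hazard_detected)
            lock_machine_door();
    }

    /* Supportive influence: a hard-coded sanity check plus a human
       confirmation gate the actuation; the AI only informs the decision. */
    static void control_loop_supportive(const ai_inference_t *out)
    {
        if (out->hazard_detected && out->confidence > 0.85f &&
            independent_limit_switch())
            raise_dashboard_alert();

        if (operator_confirmed())    /* explicit human override path */
            lock_machine_door();
    }

    int main(void)
    {
        ai_inference_t result = { .hazard_detected = true, .confidence = 0.92f };
        control_loop_direct(&result);
        control_loop_supportive(&result);
        return 0;
    }

The point is not the specific thresholds but where the decision authority sits: the moment the door lock fires on the model output alone, the legal classification changes.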


Gap #2: The Hardware Deficit (Article 12 & Logging)

The Previous Assumption: Embedded engineers typically view logging as a debug feature, often disabled in production to save Flash write cycles and battery life.

The Regulatory Reality: Article 12 of the AI Act requires High-Risk AI systems to enable “automatic recording of events” (logging) over the system’s lifetime, specifically to trace system functioning and detect anomalies.

We added the check “Does the AI system’s logging meet EU AI Act traceability requirements?” because this is often a hardware constraint, not just a software decision.

The gap manifests in two common scenarios:

  • If you selected a microcontroller with limited external Flash memory (e.g., 256 KB), you may physically lack the storage required to maintain the “logs of operation” mandated by law.
  • If your device relies on a cloud connection to offload logs, what happens when the connection drops? The regulation requires continuous monitoring capability.

This is a critical “Shift Left” moment: if you don’t budget for the BOM cost of extra memory for compliance logging today, you cannot “firmware update” your way out of it in 2027.
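
As a back-of-the-envelope illustration of that budgeting exercise, the sketch below sizes a hypothetical 256 KB log partition against a 32-byte event record and one inference summary every 30 seconds. All three numbers are assumptions chosen for illustration, not figures taken from the regulation.

    /* Sketch: sizing a persistent event log against a fixed flash budget.
       Every constant here is an illustrative assumption. */
    #include <stdio.h>

    #define LOG_RECORD_SIZE   32u              /* bytes per event record   */
    #define LOG_FLASH_BUDGET  (256u * 1024u)   /* dedicated log partition  */
    #define EVENTS_PER_HOUR   120u             /* one summary every 30 s   */

    int main(void)
    {
        unsigned capacity_records = LOG_FLASH_BUDGET / LOG_RECORD_SIZE;  /* 8192 */
        unsigned records_per_day  = EVENTS_PER_HOUR * 24u;               /* 2880 */
        unsigned retention_days   = capacity_records / records_per_day;

        printf("records that fit   : %u\n", capacity_records);
        printf("records per day    : %u\n", records_per_day);
        printf("full days retained : %u\n", retention_days);

        /* With these assumptions the ring buffer wraps after roughly 2.8 days,
           which is unlikely to satisfy an auditor if the device must also
           buffer logs locally through cloud outages. */
        return 0;
    }

Under those assumptions the log wraps in under three days, so either the record format shrinks, the partition grows, or your retention story has to lean on a connectivity guarantee you can actually defend.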


Gap #3: The “Poisoning” Blind Spot (Article 15)

The Previous Assumption: Security is handled by the network stack (TLS, Secure Boot). The AI model itself is just data.

The Regulatory Reality: Article 15 introduces a requirement for “Accuracy, Robustness, and Cybersecurity.” Crucially, this includes defense against data poisoning and adversarial attacks.

We now explicitly ask: “Is model poisoning defense (input validation) enabled?”

Standard IoT security prevents hackers from accessing the device. It does not prevent a malicious actor from tricking the sensors.

Example: A smart sign reader that misinterprets a speed limit because of a specially crafted sticker (an adversarial patch) placed on the sign.

If your system acts on raw sensor data without an intermediate validation layer (e.g., checking whether the input data falls within a statistical norm), you fail the “Robustness” requirement. This requires extra compute power on the edge to run sanitization algorithms, another hardware sizing consideration that is often missed.
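
A validation layer does not have to be exotic. The sketch below shows one hypothetical pre-inference gate that rejects frames whose basic statistics (mean brightness, spread) fall outside an envelope derived from the training distribution. The thresholds and function names are assumptions for illustration; real deployments typically combine several such checks.

    /* Sketch: a pre-inference plausibility gate for image frames.
       Thresholds and names are illustrative assumptions. */
    #include <math.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FRAME_PIXELS 64u

    /* Envelope assumed to be derived offline from the training distribution. */
    #define MEAN_MIN    30.0f
    #define MEAN_MAX   220.0f
    #define STDDEV_MIN   5.0f   /* near-zero spread often means a covered lens */

    static bool input_is_plausible(const uint8_t *frame, size_t n)
    {
        float sum = 0.0f, sum_sq = 0.0f;
        for (size_t i = 0; i < n; i++) {
            sum    += frame[i];
            sum_sq += (float)frame[i] * (float)frame[i];
        }
        float mean = sum / (float)n;
        float var  = sum_sq / (float)n - mean * mean;
        float sd   = sqrtf(var > 0.0f ? var : 0.0f);

        return mean >= MEAN_MIN && mean <= MEAN_MAX && sd >= STDDEV_MIN;
    }

    int main(void)
    {
        uint8_t frame[FRAME_PIXELS];
        for (size_t i = 0; i < FRAME_PIXELS; i++)
            frame[i] = (uint8_t)(100 + (i % 40));   /* synthetic in-range frame */

        if (input_is_plausible(frame, FRAME_PIXELS))
            puts("frame accepted -> run inference");
        else
            puts("frame rejected -> log anomaly, skip actuation");
        return 0;
    }

Rejected frames should still be recorded as anomalies, which ties this requirement back to the Article 12 logging budget discussed above.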


Gap #4: The Open-Source Liability Trap

The Previous Assumption: “I am using an open-source model (e.g., from HuggingFace), so the liability for the dataset lies with the original creator.”

The Regulatory Reality: The AI Act places the burden on the Provider, defined as the entity that places the system on the market under its own name or trademark.

We added the trigger: “AI model sourcing: Open-source vs. Proprietary?”

If you integrate an open-source model into your commercial device, you inherit the liability:

  • You must demonstrate that the training data complied with copyright law.
  • You must demonstrate the model does not contain unacceptable bias.

If you use a “Black Box” model whose training data is undocumented, you cannot legally complete the Technical Documentation (Article 11) required for CE marking. Using open-source AI without a “Data Bill of Materials” is a supply chain risk that can block a sale outright.
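
One pragmatic way to keep that provenance auditable is to compile a machine-readable record of it into the firmware alongside the Technical Documentation. The sketch below shows one hypothetical shape for such a record; the field names, identifiers, and URL are illustrative placeholders, not a standard schema.

    /* Sketch: a model provenance record compiled into the firmware so the
       Technical Documentation can be traced to the shipped binary.
       All field names and values are hypothetical. */
    #include <stdio.h>

    typedef struct {
        const char *model_name;
        const char *model_version;
        const char *source;            /* open-source repo or internal registry */
        const char *license;
        const char *dataset_reference; /* where the training-data docs live     */
        const char *bias_report;       /* internal bias evaluation identifier   */
    } model_provenance_t;

    static const model_provenance_t MODEL_PROVENANCE = {
        .model_name        = "person-detect",
        .model_version     = "1.4.2",
        .source            = "https://example.org/models/person-detect",
        .license           = "Apache-2.0",
        .dataset_reference = "DATASET-DOC-017",
        .bias_report       = "BIAS-EVAL-009",
    };

    int main(void)
    {
        printf("%s %s (%s, %s)\n",
               MODEL_PROVENANCE.model_name, MODEL_PROVENANCE.model_version,
               MODEL_PROVENANCE.source, MODEL_PROVENANCE.license);
        return 0;
    }

If you cannot fill in fields like the dataset reference or the bias evaluation for a model you downloaded, that is exactly the gap the new sourcing question is designed to surface.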


Conclusion: Complexity is the Price of Entry

We understand that answering more questions adds friction to your workflow. However, the EU AI Act has transformed AI from a software feature into a regulated component with specific hardware and process requirements.

By expanding our logic engine to cover influence classification, logging constraints, robustness validation, and sourcing due diligence, we provide a “Digital Twin” of the auditor’s checklist. It is better to uncover these gaps in the design phase, where they cost pennies to fix, than during the certification phase, where they cost millions.


Regulatory References & Further Reading

  • EU AI Act (Regulation 2024/1689): EUR-Lex Official Text - The full text of the AI Act, including Article 6 (Classification), Article 12 (Record-keeping), and Article 15 (Accuracy and Robustness).
  • ETSI GR SAI 005: ETSI AI Threat Mitigation - Technical guidance on mitigation strategies for the AI threat landscape, including poisoning defenses.
  • ISO/IEC 42001: ISO AI Management System - The international standard for Artificial Intelligence Management Systems, providing a framework for organizational compliance.