The EU AI Act's phased applicability has reached the parts that matter to security teams. From 2 August 2026, the obligations for high-risk AI systems, including the cybersecurity provisions, apply in earnest. This article is for the security leader whose company is on either side of the AI line: building an AI-driven product, or running AI systems that touch protected data, regulated processes or critical infrastructure.
Article 15 — accuracy, robustness and cybersecurity
Article 15 is the part of the AI Act that pulls security teams into scope. For high-risk AI systems, providers must "design and develop" the system to achieve "an appropriate level of accuracy, robustness and cybersecurity," with measures including:
- Resilience to attempts to alter the system's use, output or performance through unauthorised manipulation of input data — that is, adversarial input attacks.
- Resilience to model inversion, model extraction and membership inference, where the input data or training data is sensitive.
- Logging of events relevant to security, with traceability of input and output data (a minimal logging sketch follows this list).
- A defined level of accuracy, declared in the technical documentation, and a process for monitoring drift.
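On the logging point, the shape of the audit record matters more than the tooling. Below is a minimal sketch in Python; the function name and field set are illustrative, not anything the Act prescribes. It hashes input and output payloads rather than storing them raw, attaches a model version and event ID, and emits structured JSON that can be shipped to a SIEM and correlated later.

```python
import hashlib
import json
import logging
import time
import uuid

# Structured security-event logger; in production this feeds a SIEM.
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_inference_event(model_version: str, input_data: bytes, output_data: bytes) -> str:
    """Record an inference event with hashes of input and output for traceability.

    Hashing keeps the audit trail linkable to stored artefacts without
    duplicating potentially sensitive payloads in the log stream.
    """
    event_id = str(uuid.uuid4())
    event = {
        "event_id": event_id,
        "timestamp": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_data).hexdigest(),
        "output_sha256": hashlib.sha256(output_data).hexdigest(),
    }
    logger.info(json.dumps(event))
    return event_id

# Example: log a single prediction round-trip.
log_inference_event("fraud-model-1.3.0", b'{"amount": 912.50}', b'{"score": 0.87}')
```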
Adversarial-ML threat modelling for the security team
The AI-specific threats relevant to Article 15 are the well-documented ones:
- Adversarial input attacks: crafted inputs that cause misclassification or unsafe outputs. Mitigations include input validation, ensemble decisions, and bounded-input testing during evaluation (a probing sketch follows this list).
- Data-poisoning attacks: manipulated training data that corrupts the model. Mitigations include training-data provenance controls, supplier vetting and integrity checks on the training pipeline (see the manifest sketch below).
- Model extraction: queries that reconstruct the model. Mitigations include rate-limiting, output perturbation, and watermarking for sensitive models (sketched below, together with the next item).
- Model inversion / membership inference: queries that reconstruct training data. Mitigations include differential privacy in training, output noise, and access control.
- Prompt injection (for LLM-driven systems): adversarial inputs that override system prompts. Mitigations include input sanitisation, output validation, and bounded tool calling (a screening sketch follows this list).
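On the adversarial-input item: where the team only has query access to the model, a crude but useful probe is to sample perturbations inside a bounded epsilon-ball and measure how often the prediction flips. The sketch below uses NumPy and assumes a `predict` callable returning a class label; gradient-based attacks (FGSM, PGD), for example via the Adversarial Robustness Toolbox, are the stronger follow-up when white-box access exists.

```python
import numpy as np

def bounded_input_robustness(predict, x, epsilon=0.05, trials=200, seed=0):
    """Fraction of bounded random perturbations that flip the model's prediction.

    `predict` maps a feature vector to a class label. Random search is a coarse
    stand-in for gradient-based attacks when only query access is available.
    """
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    flips = sum(
        predict(np.clip(x + rng.uniform(-epsilon, epsilon, size=x.shape), 0.0, 1.0)) != baseline
        for _ in range(trials)
    )
    return flips / trials

def toy_model(v):
    # Stand-in classifier: thresholds the mean of a normalised feature vector.
    return int(v.mean() > 0.5)

x = np.full(16, 0.505)  # a point near the decision boundary
print(bounded_input_robustness(toy_model, x))  # a high flip rate signals fragility worth documenting
```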
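On data poisoning, the integrity check on the training pipeline can start as a hash manifest captured when the dataset is approved and re-verified on every training run. The sketch below uses only the standard library; the paths and file layout are assumptions.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Hash every file in the training-data directory to pin its current state."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }

def verify_manifest(data_dir: str, manifest_path: str) -> list:
    """Return files whose content has changed since the manifest was recorded."""
    recorded = json.loads(Path(manifest_path).read_text())
    current = build_manifest(data_dir)
    return [path for path, digest in recorded.items() if current.get(path) != digest]

# In the training pipeline: fail the run if any source file drifted from the
# manifest captured when the dataset was approved.
# changed = verify_manifest("data/train", "manifests/train-2025-10.json")
# assert not changed, f"Training data integrity check failed: {changed}"
```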
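On model extraction, and on inversion and membership inference, two of the controls listed above (query throttling and output perturbation) reduce to a few lines each. The window size, query ceiling and noise scale below are placeholders to be tuned against the model's sensitivity and legitimate traffic patterns.

```python
import random
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100
_query_log = defaultdict(list)

def allow_query(api_key: str) -> bool:
    """Sliding-window rate limit: high-volume querying is the precondition for extraction."""
    now = time.time()
    recent = [t for t in _query_log[api_key] if now - t < WINDOW_SECONDS]
    _query_log[api_key] = recent
    if len(recent) >= MAX_QUERIES_PER_WINDOW:
        return False
    _query_log[api_key].append(now)
    return True

def perturb_score(score: float, noise_scale: float = 0.01, decimals: int = 2) -> float:
    """Round and jitter returned confidences so exact decision boundaries are harder to map."""
    noisy = score + random.gauss(0.0, noise_scale)
    return round(min(max(noisy, 0.0), 1.0), decimals)
```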
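On prompt injection, pattern screening and a tool allowlist are the floor, not the ceiling: both are easy to evade and belong alongside LLM-based classification of inputs and validation of outputs. The patterns and tool names below are illustrative.

```python
import re

# Patterns that commonly signal an attempt to override the system prompt.
# Pattern matching alone is easy to evade; pair it with model-based screening.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]
ALLOWED_TOOLS = {"search_kb", "create_ticket"}  # bounded tool calling

def screen_user_input(text: str) -> bool:
    """Return False if the input matches a known injection pattern."""
    return not any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

def validate_tool_call(tool_name: str) -> bool:
    """Only tools on the allowlist may be invoked, whatever the model asks for."""
    return tool_name in ALLOWED_TOOLS
```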
High-risk systems — the practical list
Annex III lists the high-risk categories. The ones that matter most to security and regulated-industry buyers:
- Biometric identification and categorisation.
- Critical infrastructure (critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity).
- Education and vocational training (admission, assessment).
- Employment (recruitment, performance evaluation).
- Access to essential private and public services (credit scoring, life and health insurance pricing, benefits eligibility).
- Law enforcement (predictive policing, evidence reliability assessment).
- Migration, asylum, border control management.
- Administration of justice and democratic processes.
What the technical documentation must show
The technical documentation under Annex IV is extensive. The cybersecurity-relevant elements:
- A description of the elements of the AI system and the development process — including the training methodology, the data sources and the validation procedure.
- A description of the cybersecurity measures, with traceability to identified threats.
- The post-market monitoring plan, with metrics and triggers for review (a plan sketch follows this list).
- Risk management documentation linking identified risks to mitigations and residual risks.
- For systems trained with personal data, the lawful basis and the technical privacy measures (links into GDPR Article 32 and the GDPR-AI-Act interplay).
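For the post-market monitoring plan, one workable format is a machine-readable plan that records the declared accuracy from the technical documentation alongside the thresholds that trigger a review. The field names and values below are illustrative, not Annex IV terminology.

```python
# Illustrative shape of a post-market monitoring plan entry; names and
# thresholds are assumptions, to be set per system.
MONITORING_PLAN = {
    "declared_accuracy": 0.94,          # the figure stated in the technical documentation
    "metrics": ["accuracy", "false_positive_rate", "input_distribution_shift"],
    "review_triggers": {
        "accuracy_drop": 0.02,          # absolute drop below declared accuracy
        "fpr_ceiling": 0.05,
        "serious_incident": True,       # any serious incident forces a review
    },
    "review_cadence_days": 90,
}

def needs_review(observed_accuracy: float, observed_fpr: float) -> bool:
    """Flag a review when live performance breaches the documented thresholds."""
    triggers = MONITORING_PLAN["review_triggers"]
    return (
        observed_accuracy < MONITORING_PLAN["declared_accuracy"] - triggers["accuracy_drop"]
        or observed_fpr > triggers["fpr_ceiling"]
    )
```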
How this lands inside an existing security programme
For organisations already running NIS2- or ISO 27001-aligned programmes, the AI Act adds two operational layers:
- AI-specific threat modelling, integrated into your existing threat-modelling cadence. This is incremental; it is not a separate function.
- AI-specific testing — adversarial-input evaluation, prompt-injection testing for LLM systems — added to the existing pentest scope.
For organisations that have not yet stood up an information-security programme, the AI Act becomes the forcing function for one. The technical documentation cannot be assembled credibly without an underlying ISMS.
Where Sandline fits
We run adversarial-ML and prompt-injection testing as part of penetration testing engagements, integrate AI-specific risk into existing vulnerability management programmes, and produce the cybersecurity portion of the AI Act technical documentation. We do not handle AI bias audits or fundamental-rights impact assessments; for that, you need specialists with a different mandate.
