Paper deep dive
Security, privacy, and agentic AI in a regulatory view: From definitions and distinctions to provisions and reflections
Shiliang Zhang, Sabita Maharjan
Intelligence
Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 96%
Last extracted: 3/22/2026, 6:10:59 AM
Summary
This paper provides a comprehensive review and regulatory analysis of the evolving European Union (EU) AI landscape between 2024 and 2025, focusing on the intersection of security, privacy, and agentic AI. It clarifies critical definitions, deconstructs regulatory interpretations, and identifies gaps in current provisions for autonomous agentic systems, offering recommendations to align legal obligations with emerging algorithmic agency.
Entities (5)
Relation Signals (3)
Shiliang Zhang → authored → Security, privacy, and agentic AI in a regulatory view
confidence 100% · Shiliang Zhang and Sabita Maharjan are with Department of Informatics, University of Oslo
European Union → issued → EU AI Act
confidence 100% · The European Union (EU)... has introduced frameworks like the EU AI Act
EU AI Act → regulates → Agentic AI
confidence 90% · The regulatory landscape is striving to adapt... to categorize and mitigate AI risks.
Cypher Suggestions (2)
- Find all regulations issued by the European Union related to AI. · confidence 90% · unvalidated
  MATCH (r:Regulation)-[:ISSUED_BY]->(b:RegulatoryBody {name: 'European Union'}) WHERE r.topic CONTAINS 'AI' RETURN r
- Identify technologies regulated by the EU AI Act. · confidence 85% · unvalidated
  MATCH (t:Technology)-[:REGULATED_BY]->(r:Regulation {name: 'EU AI Act'}) RETURN t
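The Cypher suggestions above are marked unvalidated. A quick way to sanity-check the query logic against the extracted relation signals is an in-memory emulation; the sketch below (Python) mirrors the first suggestion. The node labels, property names, and tiny dataset are illustrative assumptions, not a validated Neo4j run.

```python
# Minimal in-memory emulation of:
#   MATCH (r:Regulation)-[:ISSUED_BY]->(b:RegulatoryBody {name: ...})
#   WHERE r.topic CONTAINS ... RETURN r
# Dataset and property names are illustrative assumptions.

nodes = [
    {"id": "eu", "label": "RegulatoryBody", "name": "European Union"},
    {"id": "aia", "label": "Regulation", "name": "EU AI Act", "topic": "AI risk categorization"},
    {"id": "gdpr", "label": "Regulation", "name": "GDPR", "topic": "data protection"},
]
edges = [
    {"from": "aia", "to": "eu", "type": "ISSUED_BY"},
    {"from": "gdpr", "to": "eu", "type": "ISSUED_BY"},
]

def regulations_issued_by(body_name: str, topic_contains: str) -> list[str]:
    """Return names of Regulation nodes issued by the named body whose topic matches."""
    by_id = {n["id"]: n for n in nodes}
    results = []
    for e in edges:
        if e["type"] != "ISSUED_BY":
            continue
        src, dst = by_id[e["from"]], by_id[e["to"]]
        if (dst["label"] == "RegulatoryBody" and dst["name"] == body_name
                and src["label"] == "Regulation" and topic_contains in src.get("topic", "")):
            results.append(src["name"])
    return results

print(regulations_issued_by("European Union", "AI"))  # only the EU AI Act matches here
```

Against a real graph, the same check would run as the Cypher query itself; this emulation only confirms that the traversal pattern and filter express the intended question.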
Abstract
The rapid proliferation of artificial intelligence (AI) technologies has led to a dynamic regulatory landscape, where legislative frameworks strive to keep pace with technical advancements. As AI paradigms shift towards greater autonomy, specifically in the form of agentic AI, it becomes increasingly challenging to precisely articulate regulatory stipulations. This challenge is even more acute in the domains of security and privacy, where the capabilities of autonomous agents often blur traditional legal and technical boundaries. This paper reviews the evolving European Union (EU) AI regulatory provisions via analyzing 24 relevant documents published between 2024 and 2025. From this review, we provide a clarification of critical definitions. We deconstruct the regulatory interpretations of security, privacy, and agentic AI, distinguishing them from closely related concepts to resolve ambiguity. We synthesize the reviewed documents to articulate the current state of regulatory provisions targeting different types of AI, particularly those related to security and privacy aspects. We analyze and reflect on the existing provisions in the regulatory dimension to better align security and privacy obligations with AI and agentic behaviors. These insights serve to inform policymakers, developers, and researchers on compliance and AI governance in a society of increasing algorithmic agency.
Tags
Links
- Source: https://arxiv.org/abs/2603.18914v1
- Canonical: https://arxiv.org/abs/2603.18914v1
Full Text
51,106 characters extracted from source content.
Security, privacy, and agentic AI in a regulatory view: From definitions and distinctions to provisions and reflections
Shiliang Zhang∗, Sabita Maharjan∗
March 20, 2026

The rapid proliferation of artificial intelligence (AI) technologies has led to a dynamic regulatory landscape, where legislative frameworks strive to keep pace with technical advancements. As AI paradigms shift towards greater autonomy, specifically in the form of agentic AI, it becomes increasingly challenging to precisely articulate regulatory stipulations. This challenge is even more acute in the domains of security and privacy, where the capabilities of autonomous agents often blur traditional legal and technical boundaries. This paper reviews the evolving European Union (EU) AI regulatory provisions via analyzing 24 relevant documents published between 2024 and 2025. From this review, we provide a clarification of critical definitions. We deconstruct the regulatory interpretations of security, privacy, and agentic AI, distinguishing them from closely related concepts to resolve ambiguity. We synthesize the reviewed documents to articulate the current state of regulatory provisions targeting different types of AI, particularly those related to security and privacy aspects. We analyze and reflect on the existing provisions in the regulatory dimension to better align security and privacy obligations with AI and agentic behaviors. These insights serve to inform policymakers, developers, and researchers on compliance and AI governance in a society of increasing algorithmic agency.

1 Introduction

Artificial intelligence (AI) has undergone rapid evolution across scientific, industrial, and societal domains, shifting from pretrained predictive models to powerful systems that can plan, act, and adapt in open-ended environments [1].
AI mechanisms, models, systems, and applications now permeate energy [2], healthcare [3], finance [4], manufacturing [5], transportation [6], automation [7] and vehicles [8], government services [9], and knowledge work [10]. More recently, the rise of autonomous agentic AI [11] has expanded the scope of applications, enabling complex workflows through autonomous orchestration of APIs, software tools, and data sources.

∗ Shiliang Zhang and Sabita Maharjan are with the Department of Informatics, University of Oslo, Norway (e-mail: shilianz, sabita@uio.no).

Unlike their predecessors, which passively respond to human prompts, agentic AI possesses the capability to perceive, reason, and proactively execute complex, multi-step goals with minimal human intervention [12]. This transition from AI as a tool to AI as an active agent is rapidly reshaping various domains of human life. However, this increasing AI autonomy introduces challenges, particularly regarding security and privacy [13–15]. As AI agents are increasingly granted direct access to external tools, databases, and APIs to fulfill their objectives, the attack surface expands significantly [16]. Agentic behaviors introduce security vulnerabilities, such as prompt injection propagating across multi-agent ecosystems [17], unauthorized data exfiltration during autonomous collaboration [18], and the potential for agents to bypass safety guardrails in pursuit of misaligned sub-goals [19]. Supply chain risks intensify when agents rely on third-party libraries, APIs, and model components [20]. Poisoning or dependency confusion can compromise agent outputs and actions [21]. The privacy implications are equally critical. Privacy risks extend beyond model-centric leakage to pipeline-centric exposure [22].
Agents that persistently act on behalf of users might require access to personal context and sensitive data [23], raising issues about data minimization, consent management, and the black-box nature of autonomous decision-making. Long-horizon tasks and memory components increase the likelihood of retaining sensitive information beyond legitimate use [24]. Tool calls can cross jurisdictional boundaries and contractual frameworks, raising compliance questions under data protection regimes [25]. In response to these disruptions, the regulatory landscape is striving to adapt. The European Union (EU), a global forerunner in digital governance, has introduced frameworks like the EU AI Act [26] to categorize and mitigate AI risks. Complementary EU instruments and frameworks reinforce privacy and security baselines that are applicable to AI systems. Examples include the GDPR for data protection [27], the NIS2 Directive for network and information system security [28], the Cyber Resilience Act for secure-by-design products with digital elements [29], and the Data Act for data access and sharing [30]. Between 2024 and 2025, the EU issued a range of regulatory documents aimed at operationalizing the AI Act and harmonizing it with existing privacy and security law [31]. These materials explore definitions of AI systems, clarify risk categories, propose conformity assessment procedures, and outline cybersecurity and data governance requirements. However, conceptual ambiguity persists around key terms - security, privacy, personal data, generative AI, general-purpose AI, large language models, agentic AI - especially when applied to AI systems that can act in unanticipated ways. This ambiguity makes it difficult for practitioners to interpret obligations [32]. A lack of distinctions can lead to inconsistent compliance practices and regulatory gaps.
Furthermore, while high-level principles for privacy and security exist, specific regulatory provisions that address the unique risks of agentic AI remain fragmented and less articulated. There is a need to clarify how existing privacy and security mandates apply to systems that can act independently of real-time human oversight. Articulating the regulatory provisions that pertain to privacy and security in the context of agentic AI is necessary, and is anticipated to bridge the gap between abstract mandates and implementable controls.

To address this gap, this paper provides a review and regulatory analysis of the evolving EU landscape concerning agentic AI. We focus on the intersection of security, privacy, and agentic AI. Our contributions are summarized as follows: (i) We analyze 24 relevant EU AI regulatory documents published between 2024 and 2025. Based on this review, we deconstruct and clarify the critical definitions of privacy, security, and agentic AI, and distinguish them from closely related concepts to resolve regulatory ambiguities. (ii) We synthesize the regulatory documents to articulate the provisions targeting agentic AI. We map these provisions against the technical capabilities of agents to identify where the law is robust and where it remains porous. (iii) We reflect on and discuss the existing provisions. We extract regulatory recommendations to help policymakers, developers, and researchers align security and privacy obligations with the reality of increasing algorithmic agency.

The remainder of this paper is organized as follows. Section 2 overviews the reviewed regulatory documents. Section 3 provides the definitions and distinctions of key regulatory concepts related to privacy, security, and agentic AI. Section 4 articulates and reflects on privacy and security provisions towards AI systems, with a focus on the analysis for agentic AI, as well as suggestions for EU regulatory provisions with a focus on agentic AI. We conclude this work in Section 5.
2 Overview of the EU AI regulatory documents

We review 24 EU regulatory documents related to AI from 2024 to 2025. The reviewed documents include EU regulations, EU Commission implementing regulations, European Commission (EC) proposals, EC communications, EC opinions, EC Council decisions, EC staff working documents, etc. We list all the reviewed regulatory documents in Table 1 with their full name, publication date, short name used in this paper, and link to the original document file. The listed regulatory documents range from institutional aspects (e.g., the establishment of the AI Office in the EU), ethical aspects (e.g., equality and human rights concerns with AI), AI development plans and strategies, and EU AI initiatives, to technological aspects (e.g., challenges, risks, and opportunities). The regulatory provisions over time reflect the rapid advancement in AI. That is, while most documents discussed AI systems throughout 2024-2025, later regulatory documents extended their scope to generative AI and general-purpose AI, and increasingly discussed large language models (LLMs). It is from Oct. 2025 that the concept of agentic AI is formally mentioned in EU regulatory documents, as shown in Figure 1. In this figure, we also mark whether general or dedicated provisions are provided by those documents regarding security and privacy. We observe that while most of those regulatory documents address privacy and security in general terms, e.g., the requirements of compliance with privacy and personal data protection regulated in the GDPR, stipulations for specific AI systems are sparse, both for privacy and security. In the following sections, we extract critical regulatory definitions, distinguish closely related concepts, and summarize and reflect on provisions regarding privacy, security, and agentic AI, based on those reviewed documents.
Table 1: List of EU AI regulatory documents reviewed in this paper (2024-2025)

- EC communication on Unlocking Data for AI | 2025-11-19 | European Commission Communication from the commission to the European Parliament and the Council Data Union Strategy Unlocking Data for AI (link)
- Proposal for Digital Omnibus on AI | 2025-11-19 | European Commission Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI) (link)
- EC Commission Staff Working Document on Digital Omnibus on AI | 2025-11-19 | Commission Staff Working Document Accompanying the documents Proposal for a Regulation of the European Parliament and of the Council Amending Regulations (EU) 2016/679, (EU) 2018/1724, (EU) 2018/1725, (EU) 2023/2854 and Directives 2002/58/EC, (EU) 2022/2555 and (EU) 2022/2557 as regards the simplification of the digital legislative framework, and repealing Regulations (EU) 2018/1807, (EU) 2019/1150, (EU) 2022/868, and Directive (EU) 2019/1024 (Digital Omnibus) Amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI) (link)
- Decision on Draft Recommendation on equality and AI | 2025-11-17 | Council Decision (EU) 2025/2350 of 13 November 2025 on the position to be taken on behalf of the European Union within the Committee of Ministers of the Council of Europe on the Draft Recommendation on equality and artificial intelligence (link)
- EC communication on Apply AI Strategy | 2025-10-08 | European Commission Communication from the commission to the European Parliament and the Council Apply AI Strategy (link)
- EC communication on European Strategy for AI in Science | 2025-10-08 | European Commission Communication from the commission to the European Parliament and the Council A European Strategy for Artificial Intelligence in Science Paving the way for the Resource for AI Science in Europe (RAISE) (link)
- EC proposal for a decision on equality and AI | 2025-09-19 | Proposal for a Council Decision on the position to be taken on behalf of the European Union on the Draft Recommendation of the Committee of Ministers of the Council of Europe on equality and artificial intelligence (link)
- EuroHPC initiative for trustworthy AI | 2025-09-17 | Official Journal of the European Union – EuroHPC initiative for start-ups to boost European leadership in trustworthy Artificial Intelligence – European Parliament legislative resolution of 24 April 2024 on the proposal for a Council regulation amending Regulation (EU) 2021/1173 as regards an EuroHPC initiative for start-ups to boost European leadership in trustworthy Artificial Intelligence (COM(2024)0029 – C9-0013/2024 – 2024/0016(CNS)) (Special legislative procedure – consultation) (link)
- EC proposal for a decision on AI and human rights, democracy, and the rule of law | 2025-06-03 | EC Proposal for a Council Decision on the conclusion, on behalf of the European Union, of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (link)
- EC communication on AI Continent Action Plan | 2025-04-09 | Communication From the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions AI Continent Action Plan (link)
- Opinion on harnessing the potential and mitigating the risks of AI | 2025-03-21 | Opinion of the European Economic and Social Committee – Pro-worker AI: levers for harnessing the potential and mitigating the risks of AI in connection with employment and labour market policies (own-initiative opinion) (link)
- Commission Implementing Regulation (EU) 2025/454 | 2025-03-10 | Commission Implementing Regulation (EU) 2025/454 of 7 March 2025 laying down the rules for the application of Regulation (EU) 2024/1689 of the European Parliament and of the Council as regards the establishment of a scientific panel of independent experts in the field of artificial intelligence (link)
- EC opinion on challenges and opportunities of AI in public sector | 2025-01-24 | Opinion of the European Committee of the Regions – Challenges and opportunities of artificial intelligence in the public sector: defining the role of regional and local authorities (link)
- Opinion on general-purpose AI and secure AI technology for the future | 2025-01-10 | Opinion of the European Economic and Social Committee a) General-purpose AI: way forward after the AI Act (exploratory opinion requested by the European Commission) b) A secure technology for the future: Artificial Intelligence (exploratory opinion requested by the Hungarian Presidency) – INT/1055 (link)
- Opinion on ethical AI and access to supercomputing for start-ups | 2024-12-04 | Opinion of the European Committee of the Regions – Ethical Artificial Intelligence and access to supercomputing for start-ups (link)
- Decision on signing Europe framework on AI, human rights, democracy and the rule of law | 2024-09-04 | Council Decision (EU) 2024/2218 of 28 August 2024 on the signing, on behalf of the European Union, of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (link)
- Regulation (EU) 2024/1689 (AI Act) | 2024-07-12 | Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) (link)
- Proposal for signing Europe framework on AI, human rights, democracy and the rule of law | 2024-06-26 | Proposal for a Council Decision on the signing, on behalf of the European Union, of the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (link)
- Council Regulation (EU) 2024/1732 in EuroHPC initiative to boost trustworthy AI | 2024-06-19 | Council Regulation (EU) 2024/1732 of 17 June 2024 amending Regulation (EU) 2021/1173 as regards a EuroHPC initiative for start-ups in order to boost European leadership in trustworthy artificial intelligence (link)
- Report on EU AI ambitions | 2024-05-31 | Special report 08/2024: EU Artificial intelligence ambition – Stronger governance and increased, more focused investment essential going forward (link)
- EC proposal for EuroHPC initiative to boost trustworthy AI | 2024-02-16 | Proposal for a Council Regulation amending Regulation (EU) 2021/1173 as regards an EuroHPC initiative for start-ups to boost European leadership in trustworthy Artificial Intelligence (link)
- Commission decision on establishing European AI Office | 2024-02-14 | Commission Decision of 24 January 2024 establishing the European Artificial Intelligence Office (link)
- EC communication on boosting startups and innovation in trustworthy AI | 2024-01-24 | Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on boosting startups and innovation in trustworthy artificial intelligence (link)
- Amendments adopted for the proposal on AI Act | 2024-01-23 | Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 — C9-0146/2021 — 2021/0106(COD)) (link)
[Figure 1 shows the reviewed documents on a timeline from 2024-01-23 (Amendments adopted for the proposal on AI Act) to 2025-11-19 (EC communication on Unlocking Data for AI); the dates and short names match Table 1.]

Figure 1: An overview of the EU AI regulatory documents and their relevance with privacy, security, and agentic AI. The word “General” indicates the presence of privacy/security provisions that do not target any specific AI systems.
The word “Dedicated” indicates the presence of privacy/security provisions that target one or more specific AI systems, and the provision of specific measures/obligations/recommendations towards such AI systems.

3 Definitions and distinctions from a regulation perspective

This section provides regulatory definitions for the concepts related to AI systems, privacy, and security, and distinguishes those concepts from closely related ones to eliminate uncertainties and ambiguities. We also provide examples of the concepts for intuitive understanding.

3.1 Concepts for AI systems and examples

From the reviewed regulatory documents, we extract the definitions for different types of AI concepts, including AI systems, generative AI (GAI), general-purpose AI (GPAI), large language models (LLMs), agentic AI, and other relevant concepts, shown in Table 2. We also list examples for each type of AI concept, and discuss their differences and the implications of those differences from a regulatory perspective.

Table 2: Regulatory definitions of different types of AI and exemplifications

- AI system: “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” (EU AI Act, Article 3 (1)) Examples: spam filter, intelligent traffic control system, automatic grammar check, etc.
- LLMs: “Advanced AI models that excel in understanding and generating human-like language.” (EC Communication on boosting startups and innovation in trustworthy AI, 3.2) Examples: Llama, OpenAI models, and other models from Hugging Face.
- GAI: “Systems such as sophisticated large language models that can create new content, ranging from text to images, by learning from extensive training data.” (Opinion on harnessing the potential and mitigating the risks of AI, chapter 2) Examples: Segment Anything Playground, Midjourney, etc.
- GPAI model: “An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.” (EU AI Act, Article 3 (63)) Examples: ChatGPT base model, Gemini 3 Pro; large GAI models are also a typical example of a GPAI model.
- GPAI system: “An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.” (EU AI Act, Article 3 (66)) Examples: ChatGPT Enterprise, Google Gemini, MS Copilot, etc.
- Agentic AI: “AI systems that can independently make decisions and take actions. This enables agents to understand language, reason about tasks, take actions autonomously to achieve predefined objectives, and interact with the world around them, orchestrating interactions including with humans.” (EC Communication on Unlocking Data for AI, chapter 2) Examples: Devin AI, Google Colab.
- AI Factory: “A centralised or distributed entity that provides an AI supercomputing service infrastructure which is composed of an AI-optimised supercomputer or an AI partition of a supercomputer, an associated data centre, dedicated access and AI-oriented supercomputing services, and which attracts and pools talent to provide the competences required to use the supercomputers for AI.” (Council Regulation (EU) 2024/1732, Article 1 (1) (a) (3b)) Examples: Barcelona Supercomputing Center, LuxProvide, Jülich Supercomputing Centre, Advanced Computing Austria.
From the definitions and examples in Table 2, we notice a critical distinction between “model” and “system”, as indicated in the definitions for GPAI model and GPAI system. In particular, a model represents the upstream component in AI, e.g., the neural network weights and architecture, and its definition emphasizes integration into downstream systems. In comparison, a system refers to the deployed product, i.e., a model wrapped with a user interface and system prompts, and its definition emphasizes the direct use of AI. This distinction leads to a difference in regulatory implications. Providers of GPAI models generally face obligations related to technical documentation, software copyrights, transparency of training data, and model evaluation. In the case of GPAI systems, the providers - who might be distinct from the model providers - can face obligations in the deployment context. E.g., they might need to ensure that the system does not generate illegal content, to label AI output, to conduct risk impact assessments, etc., so as to guarantee that the use of AI models respects human rights and mitigates ethical and economic risks.

Among those definitions of AI concepts, agentic AI introduces a significant difference. While GAI and GPAI infer how to generate content based on input, agentic AI infers how to take action to achieve an objective. The key difference is the orchestration performed by agentic AI, i.e., its interactions with the world (e.g., APIs, platforms, humans) rather than simply delivering a prediction or solution regardless of whether it really works. Due to agentic AI’s ability to make decisions independently and interact with the world, it can pose higher systemic risks. Therefore, regulatory scrutiny focusing on guardrails will likely be needed, so as to ensure that an AI agent does not autonomously conduct harmful behaviors or cyber attacks in its decision making.
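The model/system split discussed above can be made concrete as a lookup from provider role to obligation category. The sketch below is an illustrative encoding of the obligation categories named in this section; the role keys and obligation strings are our own shorthand labels, not AI Act text, and this is not a legal checklist.

```python
# Illustrative mapping of the obligation categories discussed above to the
# GPAI "model" vs "system" provider roles. Labels are our own shorthand.
OBLIGATIONS = {
    "gpai_model": [          # upstream: weights + architecture
        "technical documentation",
        "training-data transparency",
        "copyright compliance",
        "model evaluation",
    ],
    "gpai_system": [         # downstream: deployed product (UI, system prompts)
        "prevent illegal content generation",
        "label AI-generated output",
        "risk impact assessment",
    ],
}

def obligations_for(role: str) -> list[str]:
    """Return the sketched obligation categories for a provider role."""
    try:
        return OBLIGATIONS[role]
    except KeyError:
        raise ValueError(f"unknown role: {role!r}; expected one of {sorted(OBLIGATIONS)}")

# A provider shipping both a base model and a wrapped product would face both sets.
print(obligations_for("gpai_model"))
```

The point of the structure is simply that the two roles carry disjoint obligation sets, and that a single organization acting in both roles inherits the union of the two.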
3.2 Privacy, security, and closely relevant concepts

Privacy and security are not new topics in the evolution of technology and digitization. In this paper we examine how such aspects are handled in the area of AI, particularly agentic AI. Before diving into the details, we lay the foundation of basic concepts related to privacy and security, and provide distinctions between the concepts to avoid regulatory uncertainties. In particular, we examine how the reviewed EU AI regulatory documents define or refer to those concepts, aiming to establish the relevant context, as shown in Table 3. Note that the protection of personal data is at the core of privacy protection regulations, e.g., Regulation (EU) 2016/679 (GDPR). Therefore, in Table 3 we list concepts related to personal data when it comes to regulatory privacy concepts. From the regulatory definition of personal data, it is evident that modern AI systems are highly likely to access and make use of personal data in their service provision, particularly in cases of personal use of AI tools such as healthcare-related services and personalized recommendations.

We also notice that EU regulations frequently refer to information systems when discussing security. We note that the relationship between information systems and AI systems is fundamental. AI represents a specialized layer of logic and inference that resides within the broader context of an information system, and AI cannot provide services without an information system. An AI system can be viewed as an application of an AI model integrated into an information system with computation power, data storage, and network resources.

Table 3: Regulatory definitions of concepts related to privacy and security in the reviewed EU documents and exemplifications

- Fundamental right to personal data protection: Referred to the GDPR (Proposal for signing Europe framework on AI, human rights, democracy and the rule of law) and to Article 8(1) of the Charter of Fundamental Rights of the European Union (Regulation (EU) 2016/679): “Everyone has the right to the protection of personal data concerning him or her. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.” (Charter of Fundamental Rights of the European Union) Examples: N/A
- Privacy and data governance: “AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.” (Amendments adopted for the proposal on AI Act) Examples: N/A
- Personal data: The definition is referred to Article 4, point (1), of Regulation (EU) 2016/679 (EU AI Act): “Any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” (Regulation (EU) 2016/679) Examples: phone number, ID number, name, home address, bank account, biometric data such as facial images or dactyloscopic data.
- Non-personal data: Referred to as data other than personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679 (EU AI Act). Examples: locally produced weather forecasts, energy consumption predictions, demand forecasts of buildings.
- Security: There is no universal, stand-alone definition of security specifically for AI and information systems, to the best of our knowledge. Below we provide the most relevant regulatory definitions closely related to security. Examples: N/A
- Network and information security: “The ability of a network or an information system to resist, at a given level of confidence, accidental events or unlawful or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted data and the related services offered by or accessible via those networks and systems.” (Regulation (EU) No 526/2013) Examples: N/A
- Cybersecurity: Referred to Regulation (EU) 2019/881 (EU AI Act): “The activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats.” (Regulation (EU) 2019/881) Examples: N/A
- Information system: “Computers and electronic communication networks, as well as electronic data stored, processed, retrieved or transmitted by them for the purposes of their operation, use, protection and maintenance.” (Regulation (EC) No 460/2004) Examples: telecommunication networks, banking systems, industrial control systems.
- Systemic risk: “A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.” (EU AI Act) Examples: AI tools that massively produce realistic deep-fake figures, news, or coordinated disinformation campaigns on social media.

Further, the presence of AI, especially agentic AI, increases the security attack surface of information systems.
This is because agentic AI is no longer just a "resident" or "tenant" of the information system, but a "user" or "operator" that can interact with the information system autonomously, e.g., by executing code or modifying databases. Therefore, an attack on the agentic AI becomes a direct security breach of the information system, in the form of network and information security or cybersecurity incidents, or other new types of issues. Information systems are traditionally audited through code reviews and network logs. However, because LLMs and GPAI are "probabilistic", or black boxes, they introduce non-deterministic risks into otherwise deterministic information systems. Regulation now requires "AI-specific" cybersecurity measures, such as input/output filtering and adversarial testing. An AI system is viewed as a high-level functional layer that operates within the broader context of an information system. While the information system provides the capabilities for operation, the AI system provides the capabilities for inference. As AI moves from passive generation (LLMs) to active orchestration (agentic AI), the legal distinction between "the tool" and "the user" of the information system continues to blur, requiring a unified approach to digital governance.

4 Regulatory provisions

In this section, we look into privacy and security provisions from the reviewed EU AI regulatory documents. We analyze and reflect on the provisions pertinent to different types of AI systems, and examine the exact status of regulatory provisions targeting agentic AI.

4.1 Privacy provisions

We list the provisions towards privacy and personal data protection for AI systems and GPAI in Table 4. Note that there are no provisions specifically targeting privacy for GAI and LLMs. From the table, we observe that the regulatory documents integrate the principles of privacy and personal data protection laid down in existing regulations into AI-related regulatory documents to guide how AI should be used.
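The "AI-specific" cybersecurity measures discussed above, input/output filtering around model calls and guarding agent-initiated actions on the host system, can be sketched minimally as follows. The blocklists, the allow-list, and the generate() stand-in are our own illustrative assumptions, not a compliance mechanism from any of the reviewed documents.

```python
INPUT_BLOCKLIST = ("ignore previous instructions", "reveal your system prompt")
OUTPUT_BLOCKLIST = ("BEGIN PRIVATE KEY",)
ALLOWED_ACTIONS = {"read_document", "send_summary"}  # the agent may not, e.g., modify the database

def generate(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"model output for: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Input filtering: reject likely prompt-injection attempts before inference.
    lowered = prompt.lower()
    if any(marker in lowered for marker in INPUT_BLOCKLIST):
        raise ValueError("input filter: possible prompt injection")
    output = generate(prompt)
    # Output filtering: suppress sensitive material before it leaves the system.
    if any(marker in output for marker in OUTPUT_BLOCKLIST):
        raise ValueError("output filter: sensitive content suppressed")
    return output

def guarded_action(action: str) -> str:
    # Treating the agent as an "operator" of the information system:
    # only pre-approved actions may touch the host.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} not on the allow-list")
    return f"executed {action}"

print(guarded_generate("summarise the AI Act"))
print(guarded_action("read_document"))
try:
    guarded_action("drop_table")
except PermissionError as err:
    print(err)
```

Pattern-based filters like these are only one layer; adversarial testing, also named in the regulation, probes the same boundaries systematically rather than relying on fixed lists.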
We also notice that while there are sufficient privacy provisions targeting AI systems, those dedicated to specific types of AI are rare. AI providers and practitioners, regardless of what type of AI they are using, can in general follow the provisions for AI systems in their practice, so as to pursue regulatory compliance. However, more specific provisions for different types of AI can help reduce regulatory uncertainties and promote AI literacy[1], thus facilitating development in AI. While the above analysis concerns privacy provisions for non-agentic AI, we note that for now, there are no provisions dedicated to agentic AI. Therefore, it is currently not clear whether specific measures should be taken for agentic AI when it comes to regulatory privacy in the EU.

[1] "AI literacy" means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause (Article 3(56) of Regulation (EU) 2024/1689, the AI Act).

Table 4: Privacy and personal data protection provisions for different types of AI (not an exhaustive list)

AI systems:
- AI systems should make best efforts to respect general principles ... in line with the Charter of Fundamental Rights of the European Union ..., including the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, ... (Amendments adopted for the proposal on AI Act)
- The indiscriminate and untargeted scraping of biometric data from social media or CCTV footage to create or expand facial recognition databases adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy. The use of AI systems with this intended purpose should therefore be prohibited (Amendments adopted for the proposal on AI Act)
- Throughout the recruitment process ... of persons ..., such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, ... . AI systems used to monitor the performance and behaviour of these persons may also undermine the essence of their fundamental rights to data protection and privacy. This Regulation applies without prejudice to Union and Member State competences to provide for more specific rules for the use of AI systems in the employment context (Amendments adopted for the proposal on AI Act)
- ... for the training, validation and testing of AI systems ... the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner (Amendments adopted for the proposal on AI Act)
- The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. ... data minimisation and data protection by design and by default ... are essential when the processing of data involves significant risks to the fundamental rights of individuals. Providers and users of AI systems should implement state-of-the-art technical and organisational measures ... . Such measures should include not only anonymisation and encryption, but also the use of increasingly available technology ... (Amendments adopted for the proposal on AI Act)
- To the extent that it is strictly necessary for the purposes of ensuring negative bias detection and correction in relation to the high-risk AI systems, the providers ... may exceptionally process special categories of personal data ... subject to appropriate safeguards ..., including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving ... (Amendments adopted for the proposal on AI Act)
- Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply with all requirements resulting from Article 10 of Directive (EU) 2016/680 (EU AI Act)
- ... suggests including and documenting privacy and data security measures ... in the development and training of AI. The inclusion and documentation of robust privacy and data security measures, including encryption, access controls, and regular audits, protect sensitive data from cyber threats (Opinion on ethical AI and access to supercomputing for start-ups)

GPAI:
- General-purpose AI models could pose systemic risks ... In particular, international approaches have so far identified the need to pay attention to risks ... the facilitation of disinformation or harming privacy with threats to democratic values and human rights (EU AI Act)
- ... the AI Act allows providers of high-risk AI systems to exceptionally use sensitive personal data – which is otherwise prohibited by the GDPR – for the purpose of bias detection and correction. This facilitates effective AI training and testing. The possibility of relying on this legal basis should be extended to providers of all AI systems and general-purpose AI models (EC Commission Staff Working Document on Digital Omnibus on AI)
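Two of the technical measures named in the provisions above, data minimisation and pseudonymisation (a common precursor to anonymisation), can be sketched concretely. The keyed-hash scheme and the field names are our own illustrative assumptions; the key is hard-coded only for the example and must be kept secret and rotatable in any real deployment.

```python
import hashlib
import hmac

SECRET_KEY = b"example-only-key"  # for illustration only; never hard-code a real key

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, hard-to-reverse token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def minimise(record: dict, needed: set) -> dict:
    """Data minimisation: keep only the fields the processing purpose requires."""
    return {k: v for k, v in record.items() if k in needed}

raw = {"name": "Alice", "id_number": "01019912345", "diagnosis_code": "J06.9"}
row = minimise(raw, {"id_number", "diagnosis_code"})   # drop the name entirely
row["id_number"] = pseudonymise(row["id_number"])      # tokenise the remaining identifier
print(row)
```

Note that pseudonymised data generally remains personal data under the GDPR, since re-identification is possible for whoever holds the key; the sketch illustrates risk reduction, not an exit from the regulation's scope.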
Though, similarly, agentic AI practitioners and providers can refer to the general privacy provisions for compliance purposes, the interpretation of general rules can lead to ambiguities and uncertainties that hinder their business. This gap reflects the rapid technological development in AI and the need for swift follow-up on the regulation side, as well as the need for contextualized provisions for privacy, particularly for the emerging concept of agentic AI. In fact, there is learned experience in contextualizing general regulations in specific areas, e.g., privacy and personal data protection in smart grids. The solution there was regulatory efforts by dedicated EU expert groups, which drafted regulatory documents to interpret and contextualize privacy in the smart grid. Their regulatory documents, e.g., the Data Protection Impact Assessment (DPIA) template for smart grid[2], are recognized by EU regulations; combined with the general privacy regulations, they formulate full-fledged regulatory privacy and personal data protection in the smart grid. Drawing on this experience, we suggest dedicated efforts on the regulation side that can improve regulatory literacy and mitigate uncertainties.

[2] https://energy.ec.europa.eu/document/download/e93b8-1bda-4bdc-ac64-7edd6d0e60bc_en?filename=dpia_for_publication_2018.pdf, accessed 2026-01-15.

Table 5: Security provisions for different types of AI (not an exhaustive list)

AI systems:
- ... AI Act will guarantee the use of trustworthy AI and ensure the transparency, safety and required human oversight. In addition, complementary regulations ensuring cybersecurity and privacy are key ... mitigating the risk of potential misuse ... particularly in contexts like biowarfare (EC Communication on boosting startups and innovation in trustworthy AI)
- High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity (EU AI Act)
- To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure (EU AI Act)
- ... the assessment of the cybersecurity risks associated with a product with digital elements classified as a high-risk AI system ... should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI-specific vulnerabilities such as data poisoning or adversarial attacks ... (EU AI Act)

GAI:
- Generative AI can exponentially increase the capacity to learn and replicate patterns found in cyber threats ... thereby assisting cybersecurity professionals ... generative AI can also be used by cybercriminals to organise sophisticated cyber-attacks ... internal security actors will also need to be well equipped to address the use of generative AI by cybercriminals (EC Communication on boosting startups and innovation in trustworthy AI)

GPAI:
- The providers of general-purpose AI models presenting systemic risks should be subject ... to obligations aimed at identifying and mitigating those risks and ensuring an adequate level of cybersecurity protection, regardless of whether it is provided as a standalone model or embedded in an AI system or a product (EU AI Act)
- ... providers (of general-purpose AI models) should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider accidental model leakage, unauthorised releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. That protection could be facilitated by securing model weights, algorithms, servers, and data sets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls ... (EU AI Act)

4.2 Security provisions

We show security provisions for AI from the reviewed regulatory documents in Table 5. Similar to the case of privacy, security is not well contextualized across different types of AI. For now, dedicated provisions are available for the concepts of AI system, GAI, and GPAI, while they are missing for LLMs and agentic AI. Even so, practitioners of LLMs and agentic AI can apply the security provisions for the other types of AI in their practice, though subject to regulatory uncertainty. Furthermore, we notice that the security provisions for AI are generally risk oriented. That is, those provisions either target high-risk AI systems, or systemic risks potentially posed by GAI or GPAI. Therefore, we envision that the definition boundaries for such risks and systems will be critical, as will the interpretation and exemplification of those definitions, which can give AI practitioners the intuitive understanding that is crucial for their business. We notice that while a general regulatory boundary for "high-risk AI systems" is available, it would be even more useful if a more granular illustration, interpretation, or exemplification for different types of AI were provided.

5 Conclusions

This work reviews EU AI regulatory documents published between 2024 and 2025.
We review regulatory definitions related to privacy, security, and agentic AI, and distinguish them from closely related concepts. We examine the regulatory provisions for privacy, security, and agentic AI, and analyze and reflect on the status of these provisions for AI, especially for the emerging concept of agentic AI. Our study reveals that, though applicable provisions exist, the contextualization of the provisions for AI, particularly for different types of AI, remains in its early stage. We also show the connections between the provisions for AI and those for general information systems and domain-agnostic areas, so as to bridge AI provisions with a broader context of regulatory compliance. We envision that future efforts are needed to mitigate regulatory uncertainties by differentiating, interpreting, and articulating privacy and security provisions for AI in a more granular manner.

Acknowledgment

This work was supported by the PriTEM project funded by UiO:Energy Convergence Environments.