Paper deep dive
The Internet of Physical AI Agents: Interoperability, Longevity, and the Cost of Getting It Wrong
Roberto Morabito, Mallik Tatipamula
Abstract
The Internet has evolved by progressively expanding what humanity connects: first computers, then people, and later billions of devices through the Internet of Things (IoT). While IoT succeeded in digitizing perception at scale, it also exposed fundamental limitations, including fragmentation, weak security, limited autonomy, and poor long-term sustainability. Today, advances in edge hardware, sensing, connectivity, and artificial intelligence enable a new phase: the Internet of Physical AI Agents. Unlike IoT devices that primarily sense and report, Physical AI Agents perceive, reason, and act in real time, operating autonomously and cooperatively across safety-critical domains such as disaster response, healthcare, industrial automation, and mobility. However, embedding fast-evolving AI capabilities into long-lived physical infrastructure introduces new architectural risks, particularly around interoperability, lifecycle management, and premature ossification. This article revisits lessons from IoT and Internet evolution, and articulates design principles for building resilient, evolvable, and trustworthy agentic systems. We present an architectural blueprint encompassing agentic identity, secure agent-to-agent communication, semantic interoperability, policy-governed runtimes, and observability-driven governance. We argue that treating evolution, trust, and interoperability as first-class requirements is essential to avoid hard-coding today's assumptions into tomorrow's intelligent infrastructure, and to prevent the high technical and economic cost of getting it wrong.
Links
- Source: https://arxiv.org/abs/2603.15900v1
- Canonical: https://arxiv.org/abs/2603.15900v1
Full Text
The Internet of Physical AI Agents: Interoperability, Longevity, and the Cost of Getting It Wrong

Roberto Morabito∗ and Mallik Tatipamula†
∗EURECOM, France; †Ericsson Silicon Valley, USA
roberto.morabito@eurecom.fr, mallik.tatipamula@ericsson.com

Index Terms—Physical AI Agents, Agentic AI, Interoperability, Networked Systems, Edge Computing, 6G Networks
I. INTRODUCTION

The Internet has never been just a network. It has always been a story about what humanity decides is worth connecting. It began by linking research computers across a handful of universities. With the Web, it connected people, turning connectivity into a social fabric through email, messaging, and digital communities. The Internet of Things (IoT) extended this reach to billions of devices, embedding connectivity into homes, cities, factories, and vehicles. Each phase expanded the Internet's role, from computation, to communication, to perception. We now stand at the edge of another transition: the Internet of Physical AI Agents.

[Footnote: A related version of this work is currently under review for publication in an IEEE magazine. ∗Roberto Morabito is an Assistant Professor in the Communication Systems Department at EURECOM, France. †Mallik Tatipamula is CTO at Ericsson, Silicon Valley, with a distinguished 35-year career spanning Nortel, Motorola, Cisco, Juniper, F5 Networks, and Ericsson.]

Unlike IoT devices, which primarily sensed and reported, physical AI agents will perceive, reason, and act in the real world. They are not endpoints streaming telemetry into distant clouds. They are autonomous actors that close the loop between sensing and action. They move, decide, cooperate, and intervene. A drone does not merely observe a wildfire; it predicts its spread and coordinates suppression. A medical device does not simply report vitals; it adapts therapy in real time. A robot does not wait for instructions; it reorganizes production when demand shifts. If IoT connected things, the Internet of Physical AI Agents connects actors.

This transition is not incremental. It represents a shift from a network of sensors to a network of embodied intelligence. From passive data collection to distributed agency. From monitoring the world to shaping it. Yet history offers a cautionary tale.
IoT arrived with enormous promise: smart cities, autonomous infrastructure, self-managing industries. What emerged instead was a fragmented ecosystem of proprietary platforms, battery-hungry devices, insecure endpoints, and complex lifecycle management and integrations. Many deployments stalled. Others delivered data but little action. At planetary scale, even small design mistakes became systemic liabilities.

We cannot afford to repeat those mistakes. Physical AI agents will operate in safety-critical domains such as healthcare, transportation, disaster response, energy, and manufacturing. Their failures will not merely drop packets. They will endanger lives. Their longevity will be measured not in software release cycles, but in decades of operation in the physical world. And once deployed at scale, their protocols and interfaces will be extremely difficult to evolve.

This is where the next Internet must be built differently. The Internet succeeded because it was designed for longevity: simple core abstractions, layered architectures, open standards, and continuous evolution without breaking compatibility. At the same time, it also taught us a hard lesson: protocols ossify [1]. Once deployed at global scale, even flawed mechanisms become nearly impossible to replace. We still carry design decisions from the 1980s in today's networks. Transport Layer Security (TLS) itself, despite multiple iterations, has become a cautionary example of how difficult it is to evolve security once it is deeply embedded in infrastructure [2].
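The ossification lesson has a concrete engineering counterpart: protocols survive evolution only when implementations tolerate what they do not yet understand. The following minimal sketch (all names, fields, and version numbers here are invented for illustration, not taken from the paper) shows two such defenses for a hypothetical agent message envelope: version negotiation and unknown-field tolerance.

```python
import json

# Illustrative only: an invented forward-compatible agent message envelope.

SUPPORTED_VERSIONS = {1, 2}  # versions this runtime can interpret

def negotiate_version(peer_versions):
    """Pick the highest version both sides support, or fail explicitly."""
    common = SUPPORTED_VERSIONS & set(peer_versions)
    if not common:
        raise ValueError("no mutually supported protocol version")
    return max(common)

def parse_message(raw: str) -> dict:
    """Parse an envelope, preserving unknown fields instead of rejecting them.

    Tolerating unrecognized fields lets newer peers add extensions without
    breaking older deployments, a classic defense against ossification.
    """
    msg = json.loads(raw)
    known = {"version", "sender", "intent"}
    extensions = {k: v for k, v in msg.items() if k not in known}
    return {
        "version": msg["version"],
        "sender": msg["sender"],
        "intent": msg.get("intent"),
        "extensions": extensions,  # carried along, not dropped
    }

# A peer that already speaks a future v3 can still interoperate at v2,
# and its extra "urgency" field survives the round trip.
v = negotiate_version([2, 3])
m = parse_message('{"version": 2, "sender": "drone-7", '
                  '"intent": "survey", "urgency": "high"}')
```

The design choice being sketched is the same one that kept HTTP and TLS extensible for decades: strict in what you require, liberal in what you preserve.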
arXiv:2603.15900v1 [cs.NI] 16 Mar 2026

[Fig. 1. Lifecycle mismatch between fast-moving AI artifacts and slow-moving physical agents increases operational debt and accelerates agentic ossification. The figure contrasts the fast AI/model lifecycle (train and align, fine-tune, patch, upgrade, over weeks to months; fast-moving artifacts: models, prompts, tool APIs) with the slow physical agent lifecycle (deployment and commissioning, field operation, recertification windows, maintenance and aging, over months to years; slow-moving infrastructure: hardware, certification, deployment, regulation). The mismatch yields uneven patching, compatibility drift, fragile systems, and agentic ossification.]

The Internet of Physical AI Agents risks an even more dangerous form of ossification: agentic ossification, where identity, communication, safety, autonomy mechanisms, and even AI model interfaces become locked into proprietary silos or prematurely standardized abstractions before their long-term implications are fully understood. Unlike traditional Internet endpoints, Physical AI Agents must operate in safety-critical domains such as healthcare, transportation, disaster response, energy, and manufacturing. Their failures do not merely drop packets; they endanger lives.

This risk is fundamentally driven by a mismatch in lifecycles, illustrated in Figure 1. Modern AI artifacts, i.e., models, prompts, reasoning frameworks, and tool APIs, evolve on timescales of days or weeks [3], while physical agents (e.g., robots, medical devices, vehicles, and industrial systems) are deployed, certified, and operated over months, years, or even decades (in a similar fashion to cyber-physical systems [4]). Once intelligence is embedded into such long-lived systems, evolution itself becomes constrained by physical, regulatory, and operational realities. The core systems insight is simple but profound.
Intelligence cannot be embedded into long-lived infrastructure without explicitly redesigning the lifecycle through which it evolves. If evolution is not treated as a first-class architectural concern, Physical AI systems will either accumulate unbounded operational risk or become effectively impossible to adapt.

These tensions are not only technical but economic. In fact, frequent partial upgrades inflate OPEX through maintenance, recertification, and fleet fragmentation, while accelerated hardware replacement to keep pace with AI evolution drives unsustainable CAPEX.

At the same time, we are already observing a rapid proliferation of agentic protocols, tool APIs, and orchestration frameworks, many driven by corporate platforms rather than by open architectural requirements. This mirrors early Internet dynamics. QUIC, for example, emerged from Google as a highly successful transport protocol, yet required years of standardization and significant redesign before becoming an Internet primitive [5]. In the agentic world, the stakes are higher: premature dominance of proprietary protocols could hard-code architectural constraints into the world's intelligent infrastructure before the ecosystem has matured.

If we get this wrong, we will not merely inherit IoT's fragmentation; we will embed it permanently into the fabric of intelligent systems. And unlike web browsers or smartphones, physical agents cannot simply be replaced every two years.

This article argues that the Internet of Physical AI Agents must therefore be designed from day one for: (i) autonomy with reflexes, (ii) interoperability without lock-in, (iii) security without exception, (iv) observability without opacity, and (v) evolution without ossification. To do so, we must first understand what IoT got right—and where it failed.

II. LESSONS FROM IOT

The IoT promised a world in which every object, from thermostats to trucks, would stream data into a global nervous system.
Billions of devices now sense and transmit telemetry, digitizing sectors as diverse as logistics, agriculture, energy, and urban planning. By sheer scale alone, IoT changed how the physical world is measured. But it did not change how the world acts. IoT succeeded in digitizing perception, but it failed to deliver agency. The result is an Internet rich in data but poor in reflexes, i.e., an infrastructure that observes everything and intervenes slowly. In many deployments, value never materialized beyond dashboards and alerts. In others, complexity and maintenance costs quietly erased the promised return on investment. For the Internet of Physical AI Agents, IoT is not a template. It is a warning.

Devices Designed for Demos, Not for Decades

Early IoT devices were constrained by energy, form factor, and cost. Smart meters required bulky batteries. Environmental sensors needed constant recharging. Industrial deployments demanded frequent maintenance. In theory, devices were meant to be "deploy and forget". In practice, they became "deploy and service".

This created a structural mismatch between ambition and reality. IoT was envisioned as planetary-scale infrastructure, but it was engineered more like consumer electronics. Hardware lifecycles were measured in months. Infrastructure lifecycles were measured in decades.

For physical AI agents, this gap is even more dangerous. A drone fleet, a medical robot, or an autonomous vehicle cannot be treated as disposable hardware. These systems must operate reliably for years in harsh, often inaccessible environments. Their hardware, software, and AI models must evolve gracefully without constant physical intervention. Longevity is not a feature. It is a first-order design requirement.

A Nervous System Without Reflexes

Most IoT devices were designed as passive sensors. They collected data and transmitted it upstream for processing. Intelligence lived in centralized clouds. Action, when it existed at all, arrived late.
In this sense, IoT became the nerve endings of a digital nervous system, but without reflexes. For time-critical domains, this architecture proved fragile. Fire detection systems could detect smoke but could not respond. Industrial sensors could flag anomalies but could not prevent failures. Smart cities could observe congestion but could not react in real time.

Physical AI agents must close this loop. They must sense, reason, and act locally while coordinating globally. Reflexes must live at the edge. The cloud should provide strategy, learning, and large-scale optimization, but not micromanagement of real-world dynamics. Without reflexes, autonomy is an illusion.

Fragmentation as a Feature, Not a Bug

IoT was never one Internet. It was thousands of disconnected ecosystems [6]. Every vendor shipped its own stack. Every platform defined its own APIs. Devices spoke mutually unintelligible dialects. Smart home systems could not interoperate. Industrial deployments were locked into single vendors. Cross-domain integration was rare and expensive. Fragmentation was not an accident. It was a business model.

The result was a patchwork of silos rather than a unified fabric. Integration costs dominated deployment costs. Innovation slowed as ecosystems hardened around proprietary interfaces.

For Physical AI Agents, fragmentation would be catastrophic. A drone from one vendor must coordinate with a robot from another. A medical agent must trust a hospital system across borders. A disaster response fleet must assemble dynamically from heterogeneous providers. This requires a shared architectural substrate, not a marketplace of incompatible platforms. Interoperability is not optional. It is existential.

Security Treated as a Patch, Not a Foundation

IoT security was largely retrofitted. Devices shipped with hardcoded credentials, unpatchable firmware, and minimal isolation. At scale, these weaknesses became systemic vulnerabilities.
The Mirai botnet demonstrated the consequences [7]. Millions of insecure cameras and DVRs were weaponized into a global attack platform. What looked like isolated design shortcuts became Internet-scale liabilities.

Physical AI agents will be far more powerful than IoT sensors. A compromised thermostat is an inconvenience. A compromised autonomous vehicle is a weapon. Security cannot be a layer added later. It must be embedded in hardware, firmware, models, runtimes, identities, and protocols. Trust must be verifiable. Updates must be cryptographically controlled. Autonomy must be bounded by policy. In an Internet of actors, insecurity is not a bug. It is an existential threat.

Scale Without Value

IoT was driven by numbers. Tens of billions of devices. Trillions of data points. Zettabytes of telemetry. But scale alone does not create value. Many deployments stalled because they lacked clear return on investment, were too complex to manage, or delivered little beyond monitoring. Data without action proved insufficient. Dashboards without automation became operational burdens.

Physical AI must reverse this logic. Value must come first. Autonomy must deliver measurable outcomes: faster response times, lower costs, safer operations, higher resilience. Scale should follow proven impact, not precede it.

The Hidden Cost: Lifecycle Fragility

IoT systems, and cyber-physical systems more broadly, carry an implicit long-lived requirement [4]. However, perhaps IoT's most underestimated failure was lifecycle management. Devices were deployed with little consideration for long-term software maintenance, security patching, hardware aging, or evolving protocols. Firmware updates failed. Certificates expired. Backend services changed. Devices were abandoned in the field. At small scale, these failures were tolerable. At global scale, they became unmanageable. Physical AI agents compound this challenge.
Their cognitive core, the AI models themselves, evolves far faster than physical infrastructure. Models are retrained, aligned, compressed, and replaced on timescales measured in weeks. Physical systems evolve on timescales measured in years. Without disciplined lifecycle architecture, agentic systems will drift out of compatibility, security, and policy compliance. What begins as innovation ends as operational debt.

Lesson Learned

The IoT digitized perception, but not agency. It scaled endpoints, but not trust. It collected data, but delivered limited autonomy. Table I summarizes the architectural shift required to move from IoT to the Internet of Physical AI Agents.

The limitations we highlighted should not be read as failures of vision, but as consequences of timing. IoT emerged in an era when low-power AI accelerators did not exist, energy harvesting was immature, connectivity was not deterministic, and edge computing was still in its infancy. Expecting reflexive autonomy from a technology stack built for telemetry is like expecting real-time video conferencing from dial-up modems. In the same way that Wi-Fi transformed what was once a wired-only Internet, today's advances in AI, sensing, materials, and connectivity finally make it possible to close the loop between perception and action. The Internet of Physical AI Agents is not a correction of IoT; rather, it is its natural successor, enabled by a generation of technologies that simply did not exist before. If IoT taught us how to sense the world, Physical AI must teach us how to responsibly act within it.

III. FROM IOT TO PHYSICAL AI AGENTS

Physical AI agents represent the next leap in the Internet's evolution. They are not a new category of connected devices.
TABLE I
FROM IOT TO THE INTERNET OF PHYSICAL AI AGENTS: ARCHITECTURAL LESSONS

Dimension          | Internet of Things (IoT) | Internet of Physical AI Agents
Role in the system | Digitized perception     | Embodied intelligence
Function           | Sense and report         | Perceive, reason, and act
Architecture       | Reporting pipeline       | Reflexive control system
Intelligence       | Centralized in the cloud | Distributed across agents and edge
Interoperability   | Fragmented ecosystems    | Open, interoperable fabric
Trust model        | Retrofitted security     | Security by design
Lifecycle          | Short-lived products     | Long-lived infrastructure
Evolution          | Vendor-driven stacks     | Open, evolvable architecture
Failure mode       | Data without action      | Bounded autonomy with accountability
Scaling model      | Billions of endpoints    | Planetary-scale agency
Risk               | Insecure devices         | Unsafe autonomy
Long-term threat   | Fragmentation            | Agentic ossification

They are a new class of networked entities:
• Where IoT devices sense and transmit, agents perceive, decide, and act.
• Where IoT endpoints report, agents intervene.
• Where IoT connects things, agents connect intelligence to the physical world.

This is not simply an upgrade in capability. It is a shift in the Internet's role from a medium of information exchange to a fabric of distributed cognition and action. Three properties distinguish Physical AI Agents from the IoT systems that came before.

Autonomy. IoT digitized perception. Agents embody cognition. They observe their environment, reason over goals and constraints, and take action without waiting for centralized control. Autonomy is not an optimization; it is the defining property.

Integration. IoT separated sensing, networking, and intelligence into loosely coupled pipelines. Agents collapse this stack into a closed loop. Sensing, communication, learning, and control operate as a single reflexive system.

Collaboration. IoT devices spoke to clouds. Agents speak to each other. They form distributed multi-agent systems that coordinate, negotiate, and adapt in real time.
In this sense, Physical AI Agents are not endpoints. They are participants.

Digital AI Agents vs. Physical AI Agents

We are already living in a world populated by digital AI agents: coding copilots write software, workflow agents orchestrate business processes, trading bots move markets, and customer-service agents negotiate with millions of users every day. These systems live entirely in software, yet they already influence economies, institutions, and knowledge flows. They operate at machine speed, reason over complex state, and interact with both humans and other agents.

Physical AI agents extend this intelligence into the real world. A self-driving vehicle is not just a robot with wheels. It is a mobile decision system operating under uncertainty, interacting with humans, infrastructure, and other autonomous systems. A drone fleet is not a collection of flying cameras, but a distributed sensing-and-action platform. A surgical robot is not a mechanical tool, but a cognitive system embedded in a human feedback loop.

Physical AI agents are thus intelligence in motion. They bring all the challenges of digital AI (reasoning, alignment, robustness, security) into domains where mistakes have physical consequences.

The Next Internet Primitive

Every major phase of the Internet introduced a new primitive: (i) the early Internet introduced the host, (ii) the Web introduced the resource, (iii) mobile introduced the user-in-motion, (iv) IoT introduced the sensor. Physical AI introduces the agent.

An agent is not just a device with an IP address. It is an entity with goals, perception, memory, policies, and the ability to act. It has identity, reputation, and accountability. It participates in workflows. It collaborates. It negotiates. It makes decisions. Once agents become first-class citizens of the Internet, everything changes:

• Routing is no longer just about packets; it is about missions.
• Security is no longer just about channels; it is about delegated authority.
• Naming is no longer just about endpoints; it is about accountable actors.
• Observability is no longer just about traffic; it is about decisions.

This is why Physical AI Agents cannot be treated as another vertical industry or another application domain. They require a rethinking of Internet architecture itself.

A Familiar Transition

This transition mirrors earlier Internet inflection points. First, the Web moved computing out of research labs and into everyday life. Later, mobile broadband freed the Internet from fixed locations. Finally, cloud computing abstracted infrastructure into programmable services.

Physical AI agents extend intelligence beyond the cloud and into the physical fabric of society. Just as the Internet once connected computers, then people, then things, it is now connecting actors. And just as the Web required new abstractions (URLs, browsers, HTTP), and mobile required new ones (mobility management, radio access, handover), Physical AI will require its own architectural foundations: identity for agents, protocols for coordination, runtimes for autonomy, and governance for safety.

The Internet has gone through this kind of transition before. Each time, success depended not on any single technology, but on architectural discipline: simplicity at the core, intelligence at the edge, layered abstractions, open standards, and neutral governance. The same principles must guide the Internet of Physical AI Agents.

IV. ENABLERS: WHY NOW?

The Internet of Physical AI Agents is not a speculative vision. It is a convergence that is already underway. For decades, the idea of autonomous, cooperative machines operating at global scale remained constrained by fundamental limitations in computing, energy, connectivity, and sensing. IoT emerged before edge intelligence was practical.
Robotics matured before reliable, low-latency global connectivity existed. Artificial intelligence advanced rapidly, but for a long time it could only be deployed efficiently inside hyperscale data centers.

Those constraints are now dissolving. A set of independent technology curves, spanning hardware, materials, networking, and AI, has reached a point of alignment. For the first time, it is possible to build physical systems that are compact, intelligent, connected, and autonomous at planetary scale. What once required bespoke infrastructure and centralized control can now be deployed as a distributed, self-organizing fabric of intelligent agents.

Intelligent Devices: From Endpoints to Cognitive Systems

Modern devices are no longer simple network endpoints. They have evolved into full-fledged computing platforms capable of hosting sophisticated perception and reasoning pipelines. Smartphones, drones, vehicles, robots, and industrial controllers now integrate GPUs, NPUs, and domain-specific accelerators that enable on-device execution of complex AI models [8]. Computer vision allows agents to interpret their physical surroundings, while speech and language models enable natural interaction with humans and other machines. Planning and control models support decision-making under uncertainty, and reinforcement learning enables continuous adaptation in dynamic environments. In effect, these systems are no longer merely connected. They are cognitive.

In addition, Edge AI has crossed a critical threshold. Inference workloads that once required cloud-scale infrastructure can now be executed locally within tight latency and energy budgets. This shift enables real-time reflexes, privacy-preserving intelligence, and resilience under intermittent connectivity. It transforms the edge from a passive data source into an active decision-making layer [9].

Generative AI as the Cognitive Layer

A second inflection point comes from the rise of generative AI.
Large language models, multimodal foundation models, and embodied AI systems are redefining how machines reason, plan, and interact with their environment. These models go beyond classification and detection. They interpret context, generate explanations, decompose tasks, and coordinate actions [10]. For physical agents, generative AI becomes a cognitive layer that enables high-level reasoning and semantic interoperability. It supports natural language interaction with humans, allows agents to reason over goals and constraints, and provides a shared abstraction layer across heterogeneous hardware and software stacks.

Just as the Web introduced a universal content layer for the Internet, generative models introduce a universal reasoning layer. When embedded into physical agents, these models transform robots, vehicles, and industrial systems from scripted machines into adaptive collaborators that can understand intent, negotiate trade-offs, and coordinate missions across distributed fleets.

New Materials for Compact, Sustainable Design

Energy and form factor have long been the Achilles' heel of autonomous systems. IoT deployments were constrained by batteries, maintenance cycles, and physical size, limiting their scalability and operational lifetime.

That constraint is now being relaxed. Advances in metamaterials enable compact, high-performance antennas and sensors, while energy harvesting technologies, spanning solar, thermal, piezoelectric, and RF sources, allow devices to scavenge power from their environment [11]. Combined with ultra-low-power electronics, these technologies extend operational lifetimes from months to years and, in some cases, enable batteryless operation.
The result is a new class of sustainable agents that can be deployed at scale: lightweight drones with extended endurance, long-lived sensors embedded into infrastructure, wearable and implantable medical devices, and industrial robots capable of operating for years without constant human intervention. These systems are not only intelligent. They are practical.

Next-Generation Connectivity as an Intelligent Fabric

Connectivity is undergoing a similar transformation. It is no longer defined solely by throughput. Determinism, latency, and reliability have become first-class requirements. 5G URLLC and emerging 6G architectures provide ultra-reliable, low-latency communication with bounded delay, enabling reflex-grade control loops across distributed agents [12]. Network slicing allows deterministic performance for safety-critical missions, while distributed cloud and edge computing push computation closer to where action happens.

Integrated Sensing and Communication (ISAC) further collapses radar, perception, and connectivity into a unified stack, turning the network itself into part of the sensing system [13]. In this model, the network is no longer a passive transport substrate. It becomes an active participant in perception, coordination, and control. It carries not only data, but intent, context, and mission state. It enables agents to coordinate in real time with strong performance guarantees.

Closing the Loop

Individually, each of these advances is significant. Together, they close the loop between perception, reasoning, communication, and action. Perception is now available at the edge, where physical interaction occurs. Reasoning can be executed in near-real time under strict latency constraints. Communication provides deterministic guarantees for coordination. Action is supported by reflexive control. Learning operates continuously across fleets, enabling collective adaptation. This is the control loop that IoT could never complete.
Where IoT produced telemetry, Physical AI produces decisions. Where IoT required humans in the loop, Physical AI enables autonomy in the loop. Where IoT scaled sensing, Physical AI scales agency.

A System Transition, Not a Product Cycle

This moment resembles earlier inflection points in the Internet's evolution. The Web did not emerge from a single browser. The mobile Internet was not created by a single phone. Cloud computing was not born from a single data center. Each required a convergence of hardware, software, networking, and economic forces.

The Internet of Physical AI Agents represents the same kind of transition. It is not a product category, a vertical market, or a platform. It is a new layer of civilization's infrastructure, one that embeds intelligence directly into the physical world. For the first time, the technological conditions to build it are finally in place.

V. DESIGN PRINCIPLES FOR THE INTERNET OF PHYSICAL AI AGENTS

The IoT taught us valuable lessons. Rather than treating them merely as cautionary tales, we can reframe them into positive design principles that should guide the Internet of Physical AI Agents toward long-term success. This is not about building better products. It is about building global infrastructure. Physical AI Agents will operate in safety-critical domains, across borders, across vendors, and across decades. They must survive hardware evolution, model evolution, protocol evolution, and regulatory evolution. They must remain secure long after their original designers have moved on. And they must evolve without ossifying.

If the Internet of Physical AI Agents is to succeed as a planetary-scale system, it must be built on a small set of architectural principles that prioritize longevity, interoperability, safety, and continuous evolution.
Compact, Affordable, and Sustainable by Design

Where IoT devices were often bulky, battery-limited, and maintenance-heavy, Physical AI Agents must be designed from the ground up for compactness, energy efficiency, and long-term sustainability. Advances in energy harvesting, lightweight metamaterials, and low-power AI accelerators make it possible to deploy agents that operate for extended periods without human intervention. But this must be treated as a design constraint, not a future optimization. A wildfire drone that can patrol autonomously for hours without recharging is not a luxury, but rather a baseline requirement for planetary-scale deployment. Physical AI Agents will be embedded in forests, oceans, cities, factories, hospitals, and vehicles. Many will operate in environments where replacement is costly or impossible. Longevity must be engineered into their hardware, software, and AI lifecycle from day one. Sustainable design is not only an environmental goal. It is an architectural necessity.

Autonomy with Reflexes, Not Automation Without Agency

IoT systems collected data but relied on centralized processing and human intervention. Physical AI Agents must embody reflexes. This means embedding perception, reasoning, and control directly into the agent. Computer vision, real-time decision models, and integrated sensing and communication (ISAC) must operate as a closed loop. Agents should act locally while coordinating globally. In precision agriculture, drones should not merely capture crop images and upload them for later analysis. They should autonomously adjust irrigation, apply fertilizer, or trigger pest control in real time. In disaster response, agents should not wait for cloud decisions when every second matters. Autonomy is not about removing humans. It is about enabling machines to operate safely, responsibly, and predictably when humans cannot be in the loop. Without reflexes, autonomy is an illusion.
Interoperability Through Open Standards

Fragmentation was one of IoT’s most damaging failures. Proprietary APIs, incompatible stacks, and vendor lock-in turned what should have been a global fabric into a marketplace of silos. The Internet of Physical AI Agents must be built on open, interoperable frameworks from the start. This includes:
• Agentic identities that provide universal, verifiable identity across vendors and domains
• Common semantic formats for knowledge representation and reasoning
• Standardized APIs and protocols for agent-to-agent and agent-to-cloud collaboration

At the same time, experience from Internet standardization shows that openness alone is not sufficient. Poorly timed or prematurely scoped standards can be as damaging as proprietary ones, locking in flawed abstractions and inhibiting evolution. Lessons from past IETF standardization efforts highlight how technical, political, and economic pressures can derail otherwise sound designs when consensus is forced too early or driven by narrow interests [14]. Establishing standards early must therefore be a strong priority, but with an explicit focus on stable abstractions, incremental deployment, and long-term evolvability. Done correctly, this can create an environment in which drones, robots, vehicles, and medical devices—regardless of manufacturer—can cooperate seamlessly in real-world missions without hard-coding today’s assumptions into tomorrow’s infrastructure.

Security and Trust as First-Class Requirements

Security must not be retrofitted. It must be intrinsic. Every Physical AI Agent should carry cryptographically verifiable credentials, employ zero-trust architectures, and support secure boot, remote attestation, and authenticated updates. Just as HTTPS became the default for web traffic, secure agent communication must be the default for autonomous systems. Trustworthy identity is not only a technical requirement. It is a societal prerequisite.
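The credential requirements above can be made concrete with a minimal sketch. All names here (`AgentCredential`, `issue`, `verify`) are illustrative, not part of any existing standard, and a real deployment would use asymmetric signatures (e.g., Ed25519) anchored in a hardware root of trust rather than the shared-key HMAC used below for brevity.

```python
import hashlib
import hmac
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AgentCredential:
    agent_id: str        # globally unique, vendor-neutral identifier
    vendor: str
    capabilities: list   # e.g. ["thermal-imaging", "water-drop"]
    expires_at: float    # Unix timestamp; lifecycle revocation is omitted here
    signature: str = ""

    def payload(self) -> bytes:
        # Canonical byte representation of everything except the signature.
        d = asdict(self)
        d.pop("signature")
        return json.dumps(d, sort_keys=True).encode()

def issue(cred: AgentCredential, registry_key: bytes) -> AgentCredential:
    """Registry signs the credential (HMAC stands in for a real signature)."""
    cred.signature = hmac.new(registry_key, cred.payload(), hashlib.sha256).hexdigest()
    return cred

def verify(cred: AgentCredential, registry_key: bytes) -> bool:
    """Check the signature and the expiry; both must hold."""
    fresh = hmac.new(registry_key, cred.payload(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(fresh, cred.signature) and time.time() < cred.expires_at

cred = issue(
    AgentCredential("drone-42", "acme", ["thermal-imaging"], time.time() + 3600),
    b"registry-key",
)
assert verify(cred, b"registry-key")
assert not verify(cred, b"wrong-key")
```

The important design point is not the cryptography but the shape: a credential binds an identifier to capabilities and an expiry, and any relying party can check it without contacting the issuing vendor.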
A world of autonomous machines requires public confidence. That confidence depends on transparency, accountability, and provable security properties. Without trust, there will be no adoption. Without adoption, there will be no Internet of Physical AI Agents.

High-Value Use Cases Before Hyperscale

IoT was driven by device counts. Tens of billions of sensors were promised long before value was demonstrated. Physical AI must invert this logic. Instead of chasing scale for its own sake, it should focus on high-value, mission-critical applications where autonomy delivers measurable benefits: (i) Disaster response (wildfires, floods, earthquakes), (ii) Healthcare (autonomous intervention devices and monitoring), (iii) Industry 5.0 (collaborative robots and adaptive factories), and (iv) Smart mobility (coordinated fleets of vehicles and drones). Scaling should follow trust. Trust should follow reliability. And reliability should follow disciplined engineering. Planetary-scale autonomy must be earned.

Reliability, Governance, and Sustainability

Physical AI Agents must operate at carrier-grade reliability. Failures in healthcare, transportation, or emergency response are unacceptable. Autonomy must degrade gracefully. Systems must be fault-tolerant, self-healing, and resilient to partial failures. Equally important is governance. The Internet succeeded because its core protocols were not owned by any single company or government. Neutral, transparent institutions such as the IETF and ICANN created global trust and enabled borderless interoperability [15]. The Internet of Physical AI Agents will require similar stewardship: neutral registries, open standards bodies, transparency logs, certification frameworks, and global governance structures that ensure safety, fairness, and accountability. Finally, sustainability must be designed in from the start. Low-power designs, circular materials, responsible manufacturing, and lifecycle accountability are not optional.
Billions of agents will inhabit the physical world. Their environmental footprint must be as carefully engineered as their intelligence.

Fig. 2. Layered Reference Architecture for the Internet of Physical AI Agents.
Layer | Role | Building Blocks
Governance & Observability | Accountability, audit, evolution | Transparency logs, registries, policy engines, digital twins
Execution & Control | Safe autonomy and orchestration | Universal runtimes, policy enforcement, safety interlocks
Semantic Intelligence | Shared reasoning and meaning | Ontologies, knowledge graphs, semantic schemas
Communication Fabric | Coordination and reflexes | Agent-to-agent protocols, QoS, ISAC, deterministic networking
Agent Substrate | Identity and trust | Agentic IDs, hardware roots of trust, lifecycle management

VI. TOWARD AN INTERNET OF PHYSICAL AI AGENTS: AN ARCHITECTURAL BLUEPRINT

Just as the Internet and the Web succeeded because of a small set of foundational building blocks—such as TCP/IP for networking, HTTP for data exchange, DNS for naming, and the browser as a universal runtime—the Internet of Physical AI Agents requires its own architectural substrate. The longevity of the Internet has been widely attributed to its layered design and to architectural principles that deliberately kept the core simple while allowing innovation and intelligence to evolve at the edges [16]. This substrate must support autonomy without chaos, intelligence without opacity, scale without fragmentation, and evolution without ossification. It must be simple enough to deploy globally, yet expressive enough to support safety-critical autonomy. Most importantly, it must be designed as long-lived infrastructure, not as a fast-moving software stack. Rather than defining dozens of protocols and interfaces, we argue for a small number of architectural layers with stable abstractions between them. Each layer should evolve independently, while interoperating through well-defined interfaces, just as the Internet did.
A Layered Architecture for Physical AI Agents

We envision a layered architecture composed of five foundational layers, as shown in Figure 2.

Agent Substrate: Identity, Trust, and Lifecycle. At the foundation lies the agent itself. Every physical AI agent—whether a drone, vehicle, robot, or medical device—must be a first-class Internet entity with a globally verifiable identity. This identity must be cryptographically bound to hardware trust anchors and support the full lifecycle of an agent: registration, delegation, suspension, revocation, and decommissioning. Without secure and persistent identity, there can be no authentication, no authorization, no accountability, and no trust. This layer establishes the notion of the agent as an accountable actor rather than a disposable endpoint.

Fig. 3. Reference control-loop architecture for Physical AI Agents, integrating local reflexes, fleet-level coordination, and governance feedback under explicit safety and trust constraints.

Communication Fabric: Coordination and Reflexes. Above identity sits the communication fabric. Agents must communicate directly with each other and with infrastructure, not only with centralized clouds. This fabric must support mutual authentication, encrypted multimodal exchange (video, LiDAR, telemetry), quality-of-service guarantees, and degraded-mode operation under failure. This is the equivalent of TCP/IP and HTTP for physical AI systems: a lightweight, low-latency, safety-aware coordination layer that enables distributed reflexes. In this model, the network is not merely a transport substrate. It is an integral part of the control loop.

Semantic Intelligence: Meaning, Not Just Data. IoT systems exchanged data. Physical AI agents must exchange meaning. Agents must reason over shared representations of the world, using common ontologies, knowledge graphs, and semantic schemas. A drone should communicate “probable wildfire, 87% confidence, wind-driven spread toward sector B” rather than simply reporting a temperature anomaly. This layer provides the shared cognitive substrate that allows heterogeneous fleets to reason coherently across domains, vendors, and geographies. Without semantic interoperability, collaboration collapses into translation overhead.

Execution & Control: Runtimes, Policies, and Safety. Physical AI needs a universal execution environment, just as the Web needed the browser. This layer provides secure, sandboxed runtimes for deploying agent logic, enforcing policies, and orchestrating fleets. It governs energy budgets, geofencing, safety constraints, model execution, and actuation boundaries.
It supports confidential computing, remote attestation, and verifiable execution. It enforces fail-safe interlocks, human-in-the-loop escalation, kill-switches, and degraded safe modes. This is where autonomy becomes governable.

Governance & Observability: Accountability and Evolution. Finally, autonomy must be observable, auditable, and governable. This layer provides:
• Telemetry schemas for agent health and decisions
• Distributed tracing across multi-agent missions
• Tamper-evident logs for audit and forensics
• Digital twins for simulation and patch validation
• Service-level objectives for autonomy and safety
• Policy feedback loops with human oversight

Just as importantly, this layer provides neutral stewardship: registries, transparency logs, conformance testing, and multi-stakeholder governance bodies similar to the IETF and ICANN. This is what prevents fragmentation, lock-in, and agentic ossification.

Why This Matters

This architecture is intentionally minimal. It avoids monolithic stacks, vendor-specific frameworks, and hard-coding model interfaces that will inevitably evolve. Instead, it provides stable abstractions around identity, communication, semantics, execution, and governance—allowing models, tools, and hardware to evolve independently without breaking the system. This is how the Internet survived five decades of technological disruption. It is also how the Internet of Physical AI Agents can avoid premature ossification.

From Architecture to Operation

Figure 3 translates the architectural blueprint into an operational perspective, showing how a Physical AI Agent behaves as a closed-loop system embedded within a broader networked and governance fabric. At the agent level, autonomy is realized through a tight local loop in which perception informs local reasoning, proposed actions are validated against explicit policy constraints, and actuation closes the loop through interaction with the physical environment.
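This local loop (perceive, reason, check against policy, act or escalate) can be sketched in a few lines. The names and the two policy fields (a geofence and a magnitude limit) are hypothetical stand-ins for whatever constraint set a real runtime would enforce.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str
    target: tuple      # (x, y) position in some mission frame
    magnitude: float   # e.g. payload released, thrust applied

@dataclass
class Policy:
    geofence: tuple    # ((xmin, ymin), (xmax, ymax))
    max_magnitude: float

def within_policy(a: Action, p: Policy) -> bool:
    (xmin, ymin), (xmax, ymax) = p.geofence
    x, y = a.target
    return xmin <= x <= xmax and ymin <= y <= ymax and a.magnitude <= p.max_magnitude

def reflex_step(observation, propose, policy, actuate, escalate):
    """One pass of the local loop: perceive -> reason -> policy check -> act.
    Out-of-policy actions are never actuated; they are escalated instead."""
    action = propose(observation)          # local reasoning
    if within_policy(action, policy):
        return actuate(action)             # safe action on the environment
    return escalate(action)                # block + human-in-the-loop

policy = Policy(geofence=((0, 0), (100, 100)), max_magnitude=5.0)
log = []
result = reflex_step(
    {"hotspot": (40, 60)},
    propose=lambda obs: Action("water_drop", obs["hotspot"], 3.0),
    policy=policy,
    actuate=lambda a: log.append(("executed", a.kind)) or "done",
    escalate=lambda a: log.append(("escalated", a.kind)) or "blocked",
)
assert result == "done" and log == [("executed", "water_drop")]
```

The design point this illustrates is separation of concerns: the proposal function can be an arbitrary (and rapidly evolving) model, while the policy check remains a small, auditable gate that the runtime, not the model, controls.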
This reflexive behavior is complemented by a coordination loop, where agents exchange intent and state over a deterministic communication fabric and rely on edge orchestration to support task allocation, collective adaptation, and mission-level re-planning. Importantly, identity, attestation, observability, and audit are not external services layered on top of autonomy, but integral elements of the control loop itself. Decision traces and key events feed governance mechanisms that refine policies, thresholds, and constraints over time, with human oversight remaining explicit rather than implicit. By separating fast local reflexes, distributed coordination, and slower governance feedback, this control-loop abstraction enables Physical AI Agents to operate autonomously without losing accountability, and to evolve without freezing interfaces prematurely. The following case studies illustrate how this operational model materializes across diverse, safety-critical domains.

VII. CASE STUDIES: PHYSICAL AI AGENTS IN ACTION

The Internet of Physical AI Agents is not a distant vision. Early forms of it are already emerging across disaster response, healthcare, industry, and mobility. These domains expose the limitations of IoT-style architectures and demonstrate why reflexive, collaborative, and accountable autonomy is becoming a necessity rather than a luxury. Each case study illustrates a different facet of the agentic Internet: perception and action at the edge, real-time coordination, semantic interoperability, and system-level resilience.

Wildfire Response: Reflexes at Planetary Scale

Wildfires are among the most time-critical disasters humanity faces. A fire can double in size in minutes. Smoke spreads faster than human response teams can mobilize. Terrain, wind, and vegetation interact in unpredictable ways. In an IoT-centric model, forests are equipped with sensors that detect smoke, temperature, and humidity.
These sensors report telemetry to centralized servers, where analytics systems attempt to infer fire conditions and alert emergency services. By the time a response is triggered, valuable time has already been lost. In a Physical AI Agent world, wildfire response becomes a distributed reflex system. Autonomous drones patrol forest corridors, continuously mapping terrain using thermal vision, LiDAR, and atmospheric sensing. When smoke is detected, nearby agents collaborate to triangulate the ignition point, predict fire spread using local wind and vegetation models, and coordinate suppression strategies. Ground robots deploy fire retardants. Aerial agents perform targeted water drops. Edge compute nodes run predictive simulations, while cloud models provide strategic forecasts. Each agent operates autonomously, yet cooperates through a shared semantic model of the environment. Identity, trust, and policy ensure that only authorized responders participate. Observability pipelines provide real-time audit trails for public agencies.
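The shared semantic model the drones cooperate through can be sketched as a structured event rather than raw telemetry, echoing the "probable wildfire, 87% confidence" example from the semantic layer. The field names and schema below are illustrative; a real system would bind them to a shared ontology.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class WildfireEvent:
    event_type: str        # term drawn from a shared ontology, not a sensor reading
    confidence: float      # 0..1 estimate attached by the reporting agent
    location: tuple        # (lat, lon) of the estimated ignition point
    spread_direction: str  # e.g. "sector-B", in a mission-agreed frame
    wind_speed_mps: float  # context the receiver can reason over directly

    def to_message(self) -> str:
        """Serialize for the agent-to-agent channel."""
        return json.dumps(asdict(self))

evt = WildfireEvent("probable_wildfire", 0.87, (38.42, -122.71), "sector-B", 12.5)
decoded = json.loads(evt.to_message())
assert decoded["event_type"] == "probable_wildfire"
```

The contrast with IoT is that the receiver gets a claim with attached uncertainty and context, not a temperature value it must re-interpret; any compliant agent can act on the event without knowing which vendor's sensor produced it.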
[Sequence diagram: an Incident Commander, a Drone Fleet of perception agents, the network fabric (URLLC/slicing/ISAC), an edge mission orchestrator, ground/aerial suppression units, and an audit/observability pipeline interact through Detect, Decide, Act, Adapt, and Observe phases, with authenticated identities, policy enforcement, and revocation/kill-switch readiness assumed throughout.]

Fig. 4. Distributed wildfire response system based on Physical AI Agents. Autonomous drones detect and classify fire events, coordinate through an edge mission orchestrator, and trigger suppression actions over a deterministic communication fabric with identity, policy, and audit controls.
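The audit controls referenced here assume tamper-evident logging. A minimal way to get tamper evidence is a hash chain, sketched below; this is an illustration only, omitting the signatures, persistence, and distribution a production transparency log would need.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry's digest covers its predecessor's
    digest, so any retroactive edit breaks every later link in the chain."""

    def __init__(self):
        self.entries = []       # list of (body, digest) pairs
        self._prev = "genesis"  # digest of the (empty) chain head

    def record(self, event: dict) -> str:
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + body).encode()).hexdigest()
        self.entries.append((body, digest))
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Recompute the whole chain; False if any entry was altered."""
        prev = "genesis"
        for body, digest in self.entries:
            if hashlib.sha256((prev + body).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.record({"event": "detection", "agent": "drone-42"})
log.record({"event": "suppression", "agent": "robot-7"})
assert log.verify()
log.entries[0] = ('{"event": "nothing happened"}', log.entries[0][1])  # tamper
assert not log.verify()
```

Publishing only the latest digest to an external registry is enough for a public agency to later detect whether any recorded decision was rewritten.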
[Sequence diagram: Patient, continuous glucose sensor (CGM), insulin pump acting as a Physical AI Agent, runtime safety-policy guardrails, overseeing clinician, and tamper-evident audit log in a continuous control loop; doses within policy are approved and delivered, while out-of-bounds or uncertain cases are blocked with a safe fallback, a patient alert, and clinician escalation. Authenticated updates, staged rollout with rollback, and a fail-safe mode on sensor issues are assumed.]

Fig. 5. Closed-loop insulin delivery system implemented as a Physical AI Agent, integrating continuous sensing, on-device prediction and control, safety policy enforcement, secure lifecycle management, and auditability.

Figure 4 shows how a wildfire response mission is executed as a distributed reflex system, where perception, reasoning, coordination, and intervention operate as a closed loop across autonomous drones, edge orchestrators, and suppression robots.

Healthcare: Closed-Loop Medicine

Healthcare is fundamentally a control problem. It is about sensing, diagnosis, intervention, and continuous adjustment. Yet most medical IoT systems stop at monitoring. Wearables stream heart rate, glucose levels, or oxygen saturation to dashboards. Clinicians interpret trends and decide when to intervene. In emergency scenarios, minutes matter. In chronic care, delays accumulate into systemic risk. Physical AI Agents enable closed-loop medicine. Consider an autonomous insulin delivery system. A wearable sensor continuously monitors glucose. An embedded AI model predicts metabolic response. The pump adjusts dosage in real time.
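The two policy branches of Figure 5 (approve a bounded micro-dose, or block with a safe fallback) can be sketched as a single guardrail function. The thresholds, units, and names below are illustrative only and are not clinical guidance.

```python
def check_dose(proposed_units: float, recent_glucose_mgdl: float,
               max_units: float = 2.0, hypo_threshold_mgdl: float = 80.0):
    """Runtime guardrail for a proposed insulin micro-dose.
    Returns (verdict, reason): 'approve' only when both bounds hold,
    otherwise 'block' with the safe-fallback rationale."""
    if recent_glucose_mgdl < hypo_threshold_mgdl:
        # Hypoglycemia prevention dominates everything the model proposes.
        return ("block", "hypoglycemia risk: alert patient, escalate to clinician")
    if proposed_units > max_units:
        # Rate limit caps any single delivery regardless of model confidence.
        return ("block", f"dose {proposed_units}U exceeds rate limit {max_units}U")
    return ("approve", f"deliver {proposed_units}U micro-dose")

assert check_dose(1.5, 140.0)[0] == "approve"
assert check_dose(3.0, 140.0)[0] == "block"   # rate limit
assert check_dose(1.0, 70.0)[0] == "block"    # hypo prevention
```

As in the general control loop, the prediction model may change with every update, but this small gate stays fixed, reviewable, and auditable, which is what makes regulator sign-off tractable.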
The agent reasons over patient-specific profiles, historical data, and safety constraints. When uncertainty increases, it escalates to clinicians. When anomalies arise, it triggers intervention. In surgical robotics, agentic systems adapt to patient vitals during procedures. In intensive care, agents manage ventilators, infusion pumps, and monitoring systems as a coordinated ensemble. These systems are not tools. They are clinical collaborators. Identity and authentication ensure that only certified devices and clinicians participate. Semantic models encode medical knowledge. Runtimes enforce safety policies. Observability provides auditability for regulators and hospitals. Figure 5 illustrates how a closed-loop insulin delivery system can be implemented as a safety-governed Physical AI Agent, where prediction, control, policy enforcement, and auditability are integrated into a single accountable medical device.

Industry 5.0: Adaptive, Self-Organizing Factories

Industrial IoT transformed factories into sensor-rich environments. Machines report status. Predictive maintenance reduces downtime. Dashboards visualize production flows. But decision-making remains largely centralized and manual. Physical AI Agents enable factories that reason and adapt as distributed systems. Collaborative robots negotiate task allocation in real time. When a machine fails, nearby agents reconfigure workflows. When demand shifts, production lines reorganize themselves. Energy consumption is optimized dynamically. Safety systems respond instantly to hazardous conditions. A factory becomes a multi-agent system. Each robot has identity, policies, and goals. They coordinate through agent-to-agent protocols. Semantic knowledge describes production processes and safety constraints. Runtimes enforce operational boundaries. Digital twins simulate changes before deployment. This is not automation. It is industrial cognition.
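Real-time task negotiation among collaborative robots can be approximated, in its simplest form, by a greedy auction in which each task goes to the cheapest available bidder. This is a sketch under strong simplifying assumptions (static costs, one task per robot, no re-bidding); real factories would use richer market or contract-net protocols.

```python
def allocate(tasks, robots, cost):
    """Greedy auction: assign each task, in order, to the lowest-cost
    robot that has not yet been claimed by an earlier task."""
    assignment, busy = {}, set()
    for task in tasks:
        candidates = [r for r in robots if r not in busy]
        if not candidates:
            break  # more tasks than robots: remaining tasks wait
        winner = min(candidates, key=lambda r: cost(r, task))
        assignment[task] = winner
        busy.add(winner)
    return assignment

# Hypothetical example: cost is distance along a line between
# robot home positions and task stations.
positions = {"r1": 0, "r2": 10, "r3": 20}
stations = {"weld": 2, "inspect": 18}
plan = allocate(["weld", "inspect"], positions,
                lambda r, t: abs(positions[r] - stations[t]))
assert plan == {"weld": "r1", "inspect": "r3"}
```

When a robot fails, removing it from the candidate pool and re-running the auction is exactly the "nearby agents reconfigure workflows" behavior described above, realized without any central scheduler.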
Urban Mobility: Coordinated Intelligence on the Move

Urban mobility is one of the most complex systems humanity operates. Millions of vehicles, pedestrians, traffic signals, and infrastructure components interact in dense, dynamic environments. IoT enabled ride-hailing, fleet tracking, and traffic analytics. But coordination remains limited. Physical AI Agents enable cities that move as coherent systems. Autonomous vehicles negotiate routes collaboratively to minimize congestion and energy consumption. Drone fleets manage logistics and emergency delivery. Traffic infrastructure participates as an intelligent agent, coordinating signals with vehicle flows. Public transport adapts in real time to demand. Each agent reasons locally while sharing intent and context globally. Identity and trust ensure safe interaction. Policies enforce city regulations. Observability enables accountability. Mobility becomes a distributed control system rather than a collection of independent actors.

What These Systems Have in Common

Across these domains, a common architecture emerges. Agents possess verifiable identities. They (i) communicate through deterministic and secure fabrics, (ii) share semantic models of their environment, (iii) execute under policy-governed runtimes, and (iv) operate under continuous observation and audit. This is not a collection of applications. It is a new Internet layer. The Internet of Physical AI Agents is already taking shape. The question is not whether it will exist, but whether it will be built as an open, trustworthy, and evolvable infrastructure—or as a fragmented patchwork of proprietary platforms. The answer will determine whether autonomy becomes a public good or a private asset.

VIII. CONCLUSION

The IoT connected the world’s sensors. The Internet of Physical AI Agents will connect the world’s intelligence.
By learning from IoT’s limitations—bulky devices, lack of reflexes, fragmentation, weak security—we can design a new system that is compact, responsive, interoperable, and sustainable. With advances in device hardware, ISAC, smart materials, and intelligent connectivity, the tools to realize this vision are already here. The stakes are high. Physical AI Agents will operate in healthcare, transportation, industry, and disaster response—domains where failures cost lives. If built on open, secure, and interoperable foundations, they can unlock unprecedented benefits. If trapped in silos and walled gardens, they risk amplifying IoT’s failures at planetary scale. The Internet’s history gives us the blueprint. This time, we must build it right.

REFERENCES

[1] M. Honda, Y. Nishida, C. Raiciu, A. Greenhalgh, M. Handley, and H. Tokuda, “Is it still possible to extend TCP?” in Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference, 2011, pp. 181–194.
[2] K. Wolsing, J. Rüth, K. Wehrle, and O. Hohlfeld, “A performance perspective on web optimized protocol stacks: TCP+TLS+HTTP/2 vs. QUIC,” in Proceedings of the 2019 Applied Networking Research Workshop, 2019, pp. 1–7.
[3] X. Tang, X. Li, Y. Ding, M. Song, and Y. Bu, “The pace of artificial intelligence innovations: Speed, talent, and trial-and-error,” Journal of Informetrics, vol. 14, no. 4, p. 101094, 2020.
[4] A. Bennaceur, C. Ghezzi, K. Tei, T. Kehrer, D. Weyns, R. Calinescu, S. Dustdar, Z. Hu, S. Honiden, F. Ishikawa et al., “Modelling and analysing resilient cyber-physical systems,” in 2019 IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). IEEE, 2019, pp. 70–76.
[5] A. Langley, A. Riddoch, A. Wilk, A. Vicente, C. Krasic, D. Zhang, F. Yang, F. Kouranov, I. Swett, J.
Iyengar et al., “The QUIC transport protocol: Design and Internet-scale deployment,” in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, 2017, pp. 183–196.
[6] S. Mumtaz, A. Alsohaily, Z. Pang, A. Rayes, K. F. Tsang, and J. Rodriguez, “Massive Internet of Things for industrial applications: Addressing wireless IIoT connectivity challenges and ecosystem fragmentation,” IEEE Industrial Electronics Magazine, vol. 11, no. 1, pp. 28–33, 2017.
[7] M. Antonakakis, T. April, M. Bailey, M. Bernhard, E. Bursztein, J. Cochran, Z. Durumeric, J. A. Halderman, L. Invernizzi, M. Kallitsis et al., “Understanding the Mirai botnet,” in 26th USENIX Security Symposium (USENIX Security 17), 2017, pp. 1093–1110.
[8] X. Wang, Z. Tang, J. Guo, T. Meng, C. Wang, T. Wang, and W. Jia, “Empowering edge intelligence: A comprehensive survey on on-device AI models,” ACM Computing Surveys, vol. 57, no. 9, pp. 1–39, 2025.
[9] T. Meuser, L. Lovén, M. Bhuyan, S. G. Patil, S. Dustdar, A. Aral, S. Bayhan, C. Becker, E. De Lara, A. Y. Ding et al., “Revisiting edge AI: Opportunities and challenges,” IEEE Internet Computing, vol. 28, no. 4, pp. 49–59, 2024.
[10] F. Khoramnejad and E. Hossain, “Generative AI for the optimization of next-generation wireless networks: Basics, state-of-the-art, and open challenges,” IEEE Communications Surveys & Tutorials, 2025.
[11] Z. Li, J. Pan, H. Hu, and H. Zhu, “Recent advances in new materials for 6G communications,” Advanced Electronic Materials, vol. 8, no. 3, p. 2100978, 2022.
[12] M. E. Haque, F. Tariq, M. R. Khandaker, K.-K. Wong, and Y. Zhang, “A survey of scheduling in 5G URLLC and outlook for emerging 6G systems,” IEEE Access, vol. 11, pp. 34372–34396, 2023.
[13] A. Liu, Z. Huang, M. Li, Y. Wan, W. Li, T. X. Han, C. Liu, R. Du, D. K. P. Tan, J. Lu et al., “A survey on fundamental limits of integrated sensing and communication,” IEEE Communications Surveys & Tutorials, vol. 24, no. 2, pp. 994–1034, 2022.
[14] M. Welzl, J.
Ott, C. Perkins, S. Islam, and D. Kutscher, “How not to IETF: Lessons learned from failed standardization attempts,” in 2023 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). IEEE, 2023, pp. 427–432.
[15] C. Jennings. (2026, January) Agentic AI communications: Identifying the standards we need. Internet Engineering Task Force (IETF). Accessed: 2026-01-28. [Online]. Available: https://www.ietf.org/blog/agentic-ai-standards/
[16] J. H. Saltzer, D. P. Reed, and D. D. Clark, “End-to-end arguments in system design,” ACM Transactions on Computer Systems (TOCS), vol. 2, no. 4, pp. 277–288, 1984.