Paper deep dive
Report on NSF Workshop on Science of Safe AI
Rajeev Alur, Corina Păsăreanu, Greg Durrett, Hadas Kress-Gazit, René Vidal
Summary
The report summarizes the NSF Workshop on the Science of Safe AI held in February 2025, which aimed to establish a research agenda for developing safe, trustworthy, and transparent AI-enabled systems. It identifies four key pillars for AI safety: human-AI interaction, ML theory, foundation models, and formal methods, emphasizing the need to treat safety as a first-class design objective across diverse domains like robotics, healthcare, and autonomous systems.
Relation Signals (3)
NSF → funded → SLES
confidence 100% · Safe Learning Enabled Systems (SLES) program at NSF
NSF Workshop on Science of Safe AI → held at → University of Pennsylvania
confidence 100% · workshop, held at University of Pennsylvania
Rajeev Alur → organized → NSF Workshop on Science of Safe AI
confidence 100% · Workshop Organizer: Rajeev Alur
Cypher Suggestions (2)
Find all researchers associated with the NSF SLES program. · confidence 90% · unvalidated
MATCH (p:Person)-[:PARTICIPATED_IN]->(w:Workshop {name: 'NSF Workshop on Science of Safe AI'}) RETURN p.name
Identify research areas related to AI safety. · confidence 85% · unvalidated
MATCH (a:ResearchField {name: 'AI Safety'})<-[:FOCUSES_ON]-(r:ResearchArea) RETURN r.name
Abstract
Abstract: Recent advances in machine learning, particularly the emergence of foundation models, are leading to new opportunities to develop technology-based solutions to societal problems. However, the reasoning and inner workings of today's complex AI models are not transparent to the user, and there are no safety guarantees regarding their predictions. Consequently, to fulfill the promise of AI, we must address the following scientific challenge: how to develop AI-based systems that are not only accurate and performant but also safe and trustworthy? The criticality of safe operation is particularly evident for autonomous systems for control and robotics, and was the catalyst for the Safe Learning Enabled Systems (SLES) program at NSF. For the broader class of AI applications, such as users interacting with chatbots and clinicians receiving treatment recommendations, safety is, while no less important, less well-defined with context-dependent interpretations. This motivated the organization of a day-long workshop, held at the University of Pennsylvania on February 26, 2025, to bring together investigators funded by the NSF SLES program with a broader pool of researchers studying AI safety. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop. The report articulates a new research agenda focused on developing theory, methods, and tools that will provide the foundations of the next generation of AI-enabled systems.
Links
- Source: https://arxiv.org/abs/2506.22492
- Canonical: https://arxiv.org/abs/2506.22492
PDF not stored locally. Use the link above to view on the source site.
Full Text
arXiv:2506.22492v1 [cs.CY] 24 Jun 2025

Science of Safe AI
Report based on NSF Workshop at University of Pennsylvania, Philadelphia, on February 26, 2025

Workshop Organizer: Rajeev Alur (University of Pennsylvania).

Discussion Leads: Corina Păsăreanu (NASA Ames and CMU), Greg Durrett (The University of Texas at Austin), Hadas Kress-Gazit (Cornell University), and René Vidal (University of Pennsylvania).

Participants: Aaron Roth (Penn), Aditya Akella (UT Austin), Andrea Bajcsy (CMU), Andreas Malikopoulos (Cornell), Anindya Banerjee (NSF), Anushri Dixit (UCLA), Armando Solar-Lezama (MIT), Avi Shinnar (IBM Research), Cang Ye (NSF), Chaowei Xiao (Wisconsin, Madison), Cho-Jui Hsieh (UCLA), Christopher Yang (NSF), Claire Tomlin (UC Berkeley), Dan Roth (Penn), Daniel Brown (Utah), Dinesh Jayaraman (Penn), Dung Tran (Univ. Nebraska), Eric Atkinson (Binghamton), Esin Tureci (Princeton), Feras Saad (CMU), George Pappas (Penn), Hamed Hassani (Penn), Han Zhao (UIUC), Hanghang Tong (UIUC), Hanlin Zhang (Harvard), Haoze Wu (Amherst College), Huan Zhang (UIUC), Insup Lee (Penn), Jaime Fernandez Fisac (Princeton), Jan Hoffmann (CMU), Jia Deng (Princeton), Jie Yang (NSF), Joydeep Biswas (UT Austin), Kai Shu (Emory), Kai-Wei Chang (UCLA), Kyriakos Vamvoudakis (Georgia Tech), Lars Lindemann (USC), Loris D'Antoni (UC San Diego), Madhur Behl (Virginia), Madhusudan Parthasarathy (UIUC), Mahdi Khalili (Ohio State), Mayur Naik (Penn), Michael Littman (NSF), Ming Jin (Virginia Tech), Mohammad Fadiheh (Stanford), Mohit Bansal (UNC Chapel Hill), Momotaz Begum (New Hampshire), Moshe Vardi (Rice), Naira Hovakimyan (UIUC), Nikolai Matni (Penn), Olga Russakovsky (Princeton), Osbert Bastani (Penn), Ruzena Bajcsy (Penn), Sandhya Saisubramanian (Oregon State), Sanghamitra Dutta (Maryland), Sayan Mitra (UIUC), Shangtong Zhang (Virginia), Sharon Li (Wisconsin, Madison), Shenlong Wang (UIUC), Shreyas Kousik (Georgia Tech), Sorin Draghici (NSF), Sourya Dey (Galois), Steven Holtzen (Northeastern), Surbhi Goel (Penn), Tang Li (Delaware), Taylor Johnson (Vanderbilt), ThanhVu Nguyen (George Mason), Thema Monroe-White (George Mason), Tianyi Zhang (Purdue), Weiming Xiang (Augusta), Xi Peng (Univ. Delaware), Xian Yu (Ohio State), Xiaofeng Wang (South Carolina), Xueru Zhang (Ohio State), Yongming Liu (Arizona State), Ziwei Zhu (George Mason), and Ziyu Yao (George Mason).

1 Executive Summary

Recent advances in machine learning, particularly the emergence of foundation models, are leading to new opportunities to develop technology-based solutions to societal problems. However, the reasoning and inner workings of today's complex AI models are not transparent to the user, and there are no safety guarantees regarding their predictions. Consequently, to fulfill the promise of AI, we must address the following scientific challenge: how to develop AI-based systems that are not only accurate and performant but also safe and trustworthy? The criticality of safe operation is particularly evident for autonomous systems for control and robotics, and was the catalyst for the Safe Learning Enabled Systems (SLES) program at NSF. For the broader class of AI applications, such as users interacting with chatbots and clinicians receiving treatment recommendations, safety is, while no less important, less well-defined with context-dependent interpretations. This motivated the organization of a day-long workshop, held at the University of Pennsylvania on February 26, 2025, to bring together investigators funded by the NSF SLES program with a broader pool of researchers studying AI safety. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop. The report articulates a new research agenda focused on developing theory, methods, and tools that will provide the foundations of the next generation of AI-enabled systems.
Safety-First Agenda: Traditionally, the key design requirement for learning algorithms has been high accuracy. The science of safe AI should make safety a first-class design objective in learning algorithms and architectures. Key questions for system design are then: What is the suitable definition of safety in a given application context? How do we integrate these safety requirements in the design of learning algorithms? How do analysis tools check whether a system satisfies such safety requirements? What are the possible attacks on systems to make them unsafe, and how do we defend against such attacks? These are as-yet unsolved and challenging problems requiring long-term research. Given the difficulty of monetizing the value of safety, it is less of a priority for the AI industry. This makes AI safety a particularly worthwhile focus for academic research funded by government agencies.

Cross-Disciplinary Methods: Researchers from different communities are just beginning to address this challenge from their own perspectives. Arguably, effective principles and tools for AI safety will bring together ideas from four distinct communities: (1) human-AI interaction researchers addressing what safety guarantees are desired for adequate levels of trust in different application contexts, (2) ML theory researchers designing new learning algorithms and theoretical tools for safety assurance, (3) foundation models researchers designing, training, and deploying new generations of learning systems with integrated safety objectives and defenses against potential attacks, and (4) formal methods researchers developing analysis tools for verifying that systems meet these requirements.

[Figure: diagram relating ML Theory, Formal Methods, Foundation Models, and Human-AI Interaction, with AI Safety at their intersection.]

A research program focused on AI safety will be a catalyst to foster the cross-disciplinary and cross-community collaboration necessary to address the safety challenge.
Research Directions: There is a rich variety of promising technical directions that can potentially contribute to AI safety research. Representative problems include: How to define safety requirements that are both context-dependent and computationally analyzable? How to rigorously quantify the uncertainty associated with predictions by learning algorithms? How to integrate symbolic reasoning in neural architectures to improve assurance guarantees? How to develop monitoring algorithms to audit AI-based systems for adherence to formal requirements? How should safety be analyzed and communicated with different people with varying levels of understanding of the system? What kinds of attacks and defenses need to be understood for "AI agents" and long-horizon reasoning models like DeepSeek-R1? What role does model interpretability play in helping defend against attacks on LLM systems? How to design comprehensive benchmarks that can be used to evaluate safety of AI systems in different contexts? How to certify AI systems for different types of safety guarantees?

1 Introduction

Recent advances in machine learning are leading to novel AI-based solutions to challenging computational problems. Yet the state-of-the-art models do not provide adequate safety guarantees, and can make occasional mistakes on even simple problems. Indeed, for every headline capturing the public imagination about the promise of AI, there is a cautionary headline about its vulnerabilities. The resulting lack of trust is a daunting obstacle to the adoption of AI-based systems in high-stakes settings such as autonomous robots and clinical decision making. Developing principles, methods, and tools to ensure safety of AI-based systems is thus critical to realizing the promise of AI for technology-based solutions to problems of societal importance.
Ensuring safety of autonomous systems requires more than improving accuracy, efficiency, and scalability: it requires ensuring that systems are robust to unexpected situations, and monitoring them for anomalous or unsafe behavior. This motivated the launch of the Safe Learning-Enabled Systems (SLES) program by the US National Science Foundation, in partnership with Open Philanthropy and Good Ventures. In the first two iterations of the call for proposals, in 2023 and 2024, the program selected investigators for research to advance rigorous approaches to safety of autonomous systems that incorporate learning algorithms. This report is the result of a workshop organized to bring together these investigators to discuss the results of their research. The workshop was held at the University of Pennsylvania in Philadelphia on February 26, 2025, hosted by ASSET, Penn Engineering's Center for Trustworthy AI.

Most of the SLES investigators have their roots in control theory, cyber-physical systems, or formal methods. An important goal of the workshop was to bring additional perspectives to AI safety. We identified the following research areas as also critical to the safety of AI systems: (1) theory aimed at improving design-time guarantees of learning algorithms, and (2) enhancements to foundation models aimed at improving factuality and logical reasoning. Consequently, many researchers in machine learning theory and natural language processing focused on these problems were also invited to participate in the meeting.

The workshop program included invited talks and poster presentations by SLES investigators. Invited speakers were chosen to represent the breadth of topics relevant to AI safety: Aaron Roth (Penn) on quantifying uncertainty for predictions by AI models, Olga Russakovsky (Princeton) on the interplay among data, models, and society, and Moshe Vardi (Rice) on safety verification of systems that include learning components.
The core of the workshop was centered around discussion by participants organized in the following four working groups:

1. Defining Safety. The discussion focused on the following questions: What type of safety assurance do people need when systems are deployed in everyday use vs. in critical decision-making? What are examples of safety properties that can be formalized rigorously? What are opportunities and challenges in the context of existing research?

2. Design for Safety. The discussion focused on the following questions: What are the current trends in incorporating safety as a design goal, in addition to accuracy, during training? Which aspects are currently unexplored, and what are the new opportunities?

3. Safety Analysis. This group considered the following questions: What type of formal guarantees are possible using verification, testing, and monitoring tools? What are the trade-offs between worst-case guarantees and statistical guarantees? What are the new challenges and opportunities?

4. Attacks and Defenses. Discussion questions were: What are new types of attacks on LLMs, VLMs, and AI agents? What are the potential defenses?

The subsequent sections of this report summarize the discussion in each of these four working groups, identifying limitations of the current techniques and opportunities for future research.

2 Defining Safety

Safety is a term used often in the context of AI and autonomous systems; however, there does not exist one concise definition of what people mean by "safety". Here, we first describe one way to taxonomize the different notions of safety, then discuss how context affects the definition of safety, and then outline challenges and opportunities for future work.

2.1 Types of safety

We can roughly group the different definitions of safety that were discussed by the participants into three categories: safety with respect to the system state, with respect to adversaries, and with respect to people interacting with the system.
Safety with respect to system states: Safety here can be defined at different levels of abstraction, from grounded explicit constraints such as "an autonomous car never drives above the speed limit" to abstract notions such as "bad things do not happen". On the grounded side, the community has considered safety as: (i) definitions of safe sets in the state space of the system, for example a quadrotor does not hit the ceiling, (ii) reach-avoid sets and different temporal logics that include constraints, goals, and reactions to events, (iii) absence of outliers, and (iv) empirical measures, such as miles driven without accident. On the more abstract side, participants defined safety as "avoiding harm" and "avoiding catastrophic failures". There was discussion regarding avoiding small inconveniences that compound into catastrophic failure.

Safety with respect to adversaries: Here, the discussion identified robustness and resilience as key elements of safety. A system is considered safe if it is resilient to adversarial perturbations, for example in the context of neural network verification. Another view is defining safety with respect to an attack model.

Safety with respect to people: Safety of AI systems, be they virtual or physical, has to be addressed from the perspective of the individual people and communities interacting with them. A system can be safe based on the above definitions, but still be perceived as unsafe by people. Here, safety is tied to explainability and transparency; a system is safe if it can reason about and communicate its expected behavior, its own reasoning, and its limitations. For example, ML models that give a "correct" prediction based on "wrong" reasons should be considered unsafe.
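The grounded notions above, safe sets and reach-avoid specifications, can be made concrete in a few lines. Below is a minimal sketch, assuming illustrative altitude bounds and a goal band that are not from the report:

```python
# Sketch: checking a trajectory against a safe set and a reach-avoid
# specification. The bounds and traces are illustrative, not from the report.

CEILING = 3.0       # safe set: altitude must stay strictly below the ceiling
FLOOR = 0.0         # ...and at or above the floor
GOAL = (1.8, 2.2)   # reach-avoid: altitude must eventually enter this band

def in_safe_set(alt: float) -> bool:
    """Grounded safety: the quadrotor never hits the floor or the ceiling."""
    return FLOOR <= alt < CEILING

def satisfies_reach_avoid(trace) -> bool:
    """True iff every state is safe AND some state reaches the goal band."""
    reached = False
    for alt in trace:
        if not in_safe_set(alt):
            return False          # safety violated: spec fails immediately
        if GOAL[0] <= alt <= GOAL[1]:
            reached = True        # goal band visited at least once
    return reached

trace_ok = [0.5, 1.0, 1.6, 2.0, 1.9]   # stays safe, reaches the band
trace_bad = [0.5, 1.0, 2.5, 3.1]       # exits the safe set (hits ceiling)

print(satisfies_reach_avoid(trace_ok))   # True
print(satisfies_reach_avoid(trace_bad))  # False
```

The point of the sketch is that grounded safety notions compose: the reach-avoid check reuses the safe-set predicate, exactly as temporal-logic specifications build on atomic state constraints.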
2.2 Safety depends on context

Despite the different definitions of safety that emerged during the discussion, there was consensus among participants that safety and the degree of harm that may result from unsafe actions are context dependent; the same action may be a minor inconvenience in one context, and a catastrophic failure in another. A few representative aspects include:

• Who the system interacts with: An autonomous robot spilling water in a lab is less harmful than the same robot spilling water in the house of a person needing assistance.

• Scale: individual vs. societal harm; one bad driver will cause less harm than a fleet of autonomous vehicles that all choose unsafe actions due to a problem with their decision-making AI.

• Type of system: safety is a concept that is used for hardware, software, and overall systems; in programming languages it could mean type safety, in generative AI it could mean no hallucinations, and in autonomous aircraft it could mean collision avoidance.

2.3 Safety standards

Participants brought up that there are different ongoing efforts to define standards for safety in different contexts. These include:

1. Autonomous driving: UL 4600 [1]
2. NIST AI RMF [2]
3. FDA guidance [3]
4. ISO/IEC TR 5469:2024 - Artificial intelligence — Functional safety and AI systems [4]
5. Aerospace: EUROCAE ER-027 / SAE AIR6987: Artificial Intelligence in Aeronautical Systems: Taxonomy [5]
6. Aerospace: FAA AI Safety Assurance roadmap [6]

[1] https://users.ece.cmu.edu/~koopman/ul4600/index.html
[2] https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

2.4 New challenges and opportunities

In the context of defining what safety is, the participants identified the following research directions:

Specification languages: What are appropriate languages that enable capturing context-dependent and rich safety specifications?
Formal methods already enable this in some contexts, but not all; for example, (i) how can one define a language that includes both absolute and empirical/statistical definitions, (ii) how can one capture abstract properties such as perceived safety, and (iii) how can one define safety properties over semantic and non-semantic inputs and outputs?

Benchmarks and resources: Since safety is context dependent, creating a community resource in the form of competitions or benchmarks that contain a variety of safety problems and metrics would enable researchers from different communities to more easily share ideas and approaches.

3 Design for Safety

As discussed in the previous section, safety is an overloaded and multifaceted term in AI, with context-dependent interpretations and implications. What safety means and how it is best achieved varies greatly depending on the domain: robustness to adversarial inputs in vision tasks, safe actuation in robotics, reliable inference in healthcare, or trustworthy interaction with humans. Consequently, the design principles for safe AI systems must adapt to these differing contexts, reflecting the specific threats, operational constraints, and expectations unique to each domain. In this section, we discuss current approaches to embedding safety as a design goal alongside accuracy, examine whether safety should be treated as a constraint, loss term, or architectural choice, and highlight opportunities for future research.
[3] https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
[4] https://www.iso.org/standard/81283.html
[5] https://www.eurocae.net/news/posts/2024/december/eurocae-and-sae-international-publish-landmark-ai-taxonomy-document-for-the-global-aeronautical-industry/
[6] https://www.faa.gov/media/82891

3.1 Design for Safety of Machine Learning Systems

In machine learning systems, safety is often equated with robustness to input perturbations, ranging from adversarial attacks in vision systems, to prompt jailbreaks in large language models (LLMs), to impersonation threats in speech recognition. The prevailing design trend frames safety through robust training objectives: adversarial training, distributionally robust optimization, and learning subject to robustness or safety constraints. Complementary to these are analysis tools like Lipschitz-constrained learning, certification methods (e.g., randomized smoothing, interval bound propagation), and formal verification.

However, there is a growing recognition that many of these robustness strategies operate in narrow and artificial threat models. For example, adversarial attacks on computer vision systems are often restricted to ℓp-norm bounded perturbations, while practical perturbations are due to rain, fog, or man-made modifications to the scene. Likewise, jailbreaking attacks on LLMs remain limited to the hand-crafted design of jailbreaking prompts. Another pressing need is open-set robustness: the ability to generalize safety guarantees to threats not seen during training. Additionally, there is increasing interest in bridging robustness and explainability, as safety failures often stem from unintelligible model behavior.
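The intuition behind one of the certification methods named above, randomized smoothing, can be shown on a toy scale: classify under Gaussian input noise and take a majority vote, so the smoothed decision is stable under small input perturbations. The 1-D base classifier, noise level, and sample count below are invented for illustration; a real certificate would additionally bound the vote probabilities to derive a certified radius.

```python
# Sketch of the randomized-smoothing idea on a toy 1-D classifier.
# All numbers here are illustrative, not from the literature.
import random

random.seed(0)

def base_classifier(x: float) -> int:
    # Toy base classifier with a brittle decision boundary at 0.
    return 1 if x > 0.0 else 0

def smoothed_classifier(x: float, sigma: float = 0.5, n: int = 1000) -> int:
    # Majority vote of the base classifier under Gaussian input noise.
    # (Sketch only: a real certificate also lower-bounds the winning
    # vote probability to compute a certified perturbation radius.)
    votes = sum(base_classifier(x + random.gauss(0.0, sigma)) for _ in range(n))
    return 1 if votes > n // 2 else 0

print(smoothed_classifier(1.0))   # 1: far from the boundary, vote is stable
print(smoothed_classifier(-1.0))  # 0
```

Points near the boundary get an unstable vote, which is precisely what the certified radius quantifies in the real method.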
3.2 Design for Safety of Control Systems

In control systems, such as those used in robotics, aerospace, and autonomous vehicles, safety has historically been framed in terms of forward invariance of safe sets, with system behavior guaranteed to remain within these bounds under nominal conditions. Regulatory agencies like the FAA or TSA often serve as certifying bodies. Modern learning-based controllers challenge this paradigm, introducing components (e.g., vision systems or neural network policies) that are difficult to formally verify.

Key design challenges include moving beyond known safe sets to anticipating and responding to unknown, out-of-distribution events. Control systems must be adaptive, making safety-critical decisions in response to novel threats without explicit pre-specification. There is also a need for new certification paradigms that can evaluate systems where perception and control are deeply intertwined, for example, visual input guiding aircraft autopilots. Importantly, safety must also integrate human-in-the-loop expertise, leveraging the judgment of pilots or operators who may override the system under exceptional conditions.

3.3 Design for Safety of Healthcare Systems

In the healthcare domain, safety and effectiveness are traditionally assessed through clinical trials and statistical validation, which are certified by regulatory agencies such as the FDA. However, existing methods were designed for structured, low-dimensional data, and are often inadequate for AI systems trained on high-dimensional, constantly evolving datasets.

A significant gap lies in developing statistical tools that support model retraining as new patient data arrives, while still preserving rigorous evaluation standards. Designing safe AI for healthcare also demands explainability, as clinicians must trust and understand model decisions. Privacy is another essential axis, with risks such as model inversion, membership inference, and data leakage.
Finally, clinical AI systems must account for individual variation: each new patient might be out of distribution when compared to the training data, requiring models to reason about uncertainty and applicability on a per-case basis.

3.4 Design for Safety of Human-AI Interaction

Human-AI interaction introduces unique safety challenges. Designers must assume that users will behave unpredictably, making errors, gaming systems, or employing them in unintended ways. Accordingly, safety must be proactive and resilient, ensuring that regardless of user behavior, the AI system does not cause harm. This is particularly salient in robotics, where physical safety is at stake, but also applies to LLMs, recommender systems, and medical devices.

Current approaches emphasize safe defaults and fail-safe modes, yet open questions remain. How do we design systems that degrade gracefully under failure? How do we define and measure safety in interactions involving trust, autonomy, and co-adaptation? Moreover, safety is not always a scalar; it can be a multi-criteria objective with conflicting or incomparable tradeoffs. Addressing this requires new frameworks that can reason over partial orders, prioritize safety objectives dynamically, and handle incompatibility between goals.

3.5 Opportunities

Designing for safe AI is a complex, multi-domain challenge. Across all domains, there is an urgent need to treat safety as more than an afterthought or static constraint: it must be a first-class design objective, embedded into the core of learning, architecture, evaluation, and deployment. Promising research directions include unifying robustness with explainability, enabling adaptive safety under distribution shift, and developing certification methods that keep pace with learning-based systems. Above all, safety must be contextualized, not just in terms of threats, but in terms of users, environments, and evolving system behavior.
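The observation that safety can be a multi-criteria objective over a partial order rather than a scalar can be sketched with Pareto dominance. The criteria and numbers below are invented for illustration (lower is better on each criterion):

```python
# Sketch: comparing candidate policies under a multi-criteria safety
# objective using Pareto dominance (a partial order). Criteria and
# values are purely illustrative.

def dominates(a, b) -> bool:
    """a dominates b iff a is at least as safe on every criterion
    and strictly safer on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (collision_risk, privacy_risk, user_harm) for three candidate policies
policies = {
    "A": (0.10, 0.20, 0.10),
    "B": (0.20, 0.30, 0.20),
    "C": (0.05, 0.40, 0.10),
}

print(dominates(policies["A"], policies["B"]))  # True: A is safer on all criteria
print(dominates(policies["A"], policies["C"]))  # False: A and C are incomparable
```

Incomparable pairs like A and C are exactly the case the section describes: no scalar ranking exists without an explicit, context-dependent prioritization of the criteria.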
4 Safety Analysis

4.1 Current Techniques and Their Limitations

We begin by highlighting current techniques and challenges for the safety analysis of AI systems. Testing and simulation are widely used, but they provide no formal guarantees and can face issues such as manual effort in labeling test data, lack of adequate coverage metrics, or lack of relevance to real-world deployment settings. Formal verification, on the other hand, provides guarantees, and there are many tools available; however, it suffers from serious scalability challenges and often cannot deal with semantic properties, which typically do not have an analytical form. Probabilistic and statistical verification are more natural, due to the uncertainty that arises from the models, but can only provide probabilistic guarantees. Run-time monitoring can be used to observe and check system behavior during operation, but is limited by the lack of ground truth at run-time. See [6, 11, 13] for recent surveys on safety analysis for AI systems.

Moral concerns are less studied, but should be central to a safety analysis. Defining what constitutes safety is particularly challenging in the context of autonomous systems, healthcare, or military applications. The concept of safety might mean different things in different settings, and it is important to incorporate ethical considerations into the safety analysis of AI. Incorporating physics into AI models may be a way to prevent models from producing erratic outputs, or "white noise", by grounding them in physical laws or principles.

4.2 Formal guarantees

In terms of formal guarantees, several types are possible, including worst-case guarantees, probabilistic guarantees, and confidence intervals; counterexamples are valuable artifacts of formal verification and falsification techniques, while run-time techniques provide complementary assurance.
Worst-case Guarantees: Worst-case guarantees can be obtained with formal verification tools and provide strong assurance that the analyzed system behaves safely, even in the most extreme, worst-case scenarios. Example properties that can be checked formally include local robustness, i.e., invariance of a neural network output with respect to small, norm-bounded perturbations, or input-output properties, as in the popular ACAS Xu benchmark [9], a family of networks designed to prevent mid-air collisions between unmanned aircraft systems (UAS) and other aircraft.

The annual International Neural Networks Verification Competition (VNN-COMP) [7] provides a forum for researchers to evaluate their neural network verification tools on an increasing set of standardized benchmarks, to enable progress within the domain and to understand current limitations. However, these types of guarantees are generally difficult to obtain due to scalability challenges and the need to make assumptions about the environment.

[7] https://sites.google.com/view/vnn2025

Probabilistic Guarantees: Probabilistic guarantees are more feasible for complex systems, and they more naturally address uncertainty in the environment. Probabilistic properties state that desired properties of a system hold with a certain probability, and can be checked using probabilistic verification or statistical, sampling-based techniques. However, they do not provide absolute guarantees about safety, so they may miss rare, low-probability events. Confidence intervals can give a range of possible outcomes with a defined level of confidence, strengthening the probabilistic results.

These techniques rely on assumptions about input distributions, which may not hold in practice; furthermore, the methods are brittle in handling distribution shifts.
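The shape of such a statistical guarantee with a confidence interval can be sketched with Monte Carlo estimation plus a Hoeffding bound. The stochastic system and all parameters below are invented for illustration:

```python
# Sketch: statistical safety verification by sampling, with a Hoeffding
# confidence interval. The toy stochastic system is illustrative only.
import math
import random

random.seed(1)

def simulate_episode() -> bool:
    """Toy stochastic system: safe iff a noisy drifting state stays below 2.0."""
    x = 0.0
    for _ in range(10):
        x += random.gauss(0.1, 0.2)
        if x > 2.0:
            return False
    return True

def estimate_safety(n: int = 10000, delta: float = 0.01):
    """Estimate P(safe) from n rollouts; return (estimate, half-width eps)
    such that |p_hat - p| <= eps with probability >= 1 - delta (Hoeffding)."""
    safe = sum(simulate_episode() for _ in range(n))
    p_hat = safe / n
    eps = math.sqrt(math.log(2 / delta) / (2 * n))
    return p_hat, eps

p_hat, eps = estimate_safety()
print(f"P(safe) in [{p_hat - eps:.3f}, {p_hat + eps:.3f}] with 99% confidence")
```

Note what the sketch does not give: rare failure modes below the interval's resolution are exactly the "low-probability events" such guarantees can miss, and the result is only as good as the assumed input distribution.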
It is also difficult to integrate probabilistic or statistical guarantees when reasoning about larger systems, particularly when those systems involve both AI and traditional software. ML systems can make mistakes, even with low probabilities, which complicates the integration of probabilistic guarantees into systems that rely on traditional verification methods.

Runtime Assurance: Complementary runtime verification techniques offer formal assurance by monitoring executions of a deployed learning-enabled system against desired safety properties, often specified in temporal logics. Shielding techniques can be used to guide learning and further enforce such properties at runtime. Further, safety guardrails can be used to check or enforce conditions on the inputs or outputs of a machine learning model, to detect and mitigate different types of risks.

Testing and Falsification Techniques: While testing and simulation cannot provide formal guarantees, they remain crucial for the reliability and safety of critical systems, where failures can have severe consequences. Although various techniques have been developed to create test suites, requirements-based testing for DNNs remains largely unexplored, and adequacy metrics are still missing. Various falsification and heuristic-search techniques combine the power of formal methods with the efficiency of testing techniques, with the goal of finding rare scenarios that lead to critical violations in cyber-physical systems.

4.3 New challenges and opportunities

Safety analysis for AI systems is challenging due to the scalability of formal methods, the uncertainty in AI models and environments, and the difficulty in defining safety in diverse applications.
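The runtime-assurance idea above, monitoring a deployed system's execution against a temporal safety property, can be sketched as an online monitor for an invariant ("always") property. The speed limit and trace are illustrative:

```python
# Sketch: a runtime monitor for the temporal safety property
# G (speed <= LIMIT), i.e. "globally, speed never exceeds the limit".
# The limit and the trace are illustrative, not from the report.

LIMIT = 30.0

class AlwaysMonitor:
    """Online monitor for an invariant: records the first violating step."""
    def __init__(self, predicate):
        self.predicate = predicate
        self.violated_at = None   # index of the first violating observation

    def observe(self, i: int, value) -> bool:
        """Feed one observation; return True while the property still holds."""
        if self.violated_at is None and not self.predicate(value):
            self.violated_at = i
        return self.violated_at is None

monitor = AlwaysMonitor(lambda speed: speed <= LIMIT)
trace = [12.0, 28.5, 30.0, 31.2, 25.0]
verdicts = [monitor.observe(i, s) for i, s in enumerate(trace)]
print(verdicts)             # [True, True, True, False, False]
print(monitor.violated_at)  # 3: the step that broke the limit
```

A shield would extend this pattern by intervening (overriding the control action) at the step where the monitor's verdict would otherwise flip to False.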
Several areas of research have been identified by the workshop participants to address these challenges:

Neuro-symbolic programming: One notable area is neuro-symbolic programming [8, 9], which seeks to combine the strengths of neural networks and symbolic reasoning, to enable explainability and verifiability [4].

[8] https://neus-2025.github.io/
[9] https://www.nsf.gov/events/neurosymbolic-systems-trustworthy-ai

Trustworthy AI agents: The concept of trustworthy AI agents was also raised, highlighting the need for agentic AI systems that are reliable, predictable, and aligned with ethical standards. Trustworthy AI agents can be realized by integrating formal verification in the agentic workflow, ensuring that the agents' behavior adheres to predefined specifications, and detecting potential errors or vulnerabilities.

Multi-modal and large language models: Multi-modal models that combine different types of data (e.g., vision, text, and speech) are another promising avenue for improving the specification and control of system properties. For instance, one can use the text modality to define and check natural-language requirements on the image modality. The use of Large Language Models (LLMs) to probe embedding spaces was suggested as a way to test how AI models handle complex, high-dimensional data and uncertainties. The possibility of using LLMs to elicit requirements, generate proof objects, or create diverse scenarios for training and testing purposes was also discussed as a promising avenue for future research.

Benchmarks: The participants also pointed out the need for realistic testbeds and benchmarks [10]. Further, the participants emphasized the need for new dynamic benchmarks and verification methods that can reflect the complexities of the real world, as traditional verification often assumes static, unrealistic environments.
Human-AI collaboration: Human-AI collaboration was emphasized as essential, especially in fields such as cybersecurity, where AI could help identify vulnerabilities. AI systems should be trained with both statistical and worst-case guarantees in mind, but there is also the concern that overapproximating worst-case scenarios could lead to systems that are overly cautious and consequently less safe.

Other challenges and opportunities: Other challenges include the lack of precise model assumptions, which can lead to system failures if the assumptions are not valid in practice. Making these assumptions explicit is crucial to improving the reliability of the system. Adversarial robustness is another issue, as AI systems can often fail when exposed to previously unseen attacks, even if they perform well on known ones. Defining what constitutes "safety" in AI systems is still a major challenge. Safety definitions need to be clearer, especially in fields such as autonomous driving or medical devices, where the potential for harm is high. These issues are described in depth in the previous sections, but they are also very relevant for safety analysis. Finally, the participants believe that close collaboration between the ML and verification communities offers promising avenues forward.

Footnote 10: https://trustllmbenchmark.github.io/TrustLLM-Website/

5 Attacks and Defenses

An emerging body of work has focused on attacks on and defenses for generative AI systems specifically, particularly large language models (LLMs), vision-language models (VLMs), and AI agents, whether embodied or virtual. We believe that these merit their own discussion given the outsized role they occupy in discussions of safety.

5.1 Overview of Concepts

We highlight important concepts for defining the scope of attacks and defenses on current AI systems.
First, what is the goal of the attack? Is it to compromise a model's performance, a model's fairness, a model's explainability, user data or privacy, or something else? Second, what domain of model is being attacked? Is it a large language model (LLM), a vision-language model (VLM), a foundation model for robotics, a code model, a tool-calling model, or a multi-agent system? Finally, what is assumed about the system being attacked? Do we assume white-box or black-box access?

Within these questions, several specific aspects of attacks and defenses emerge. These fall under the category of "Safety with respect to adversaries" defined in Section 2.1.

Attacks: There are many categories of attacks. A first is prompt injection or jailbreaking attacks, which attack a model's performance at inference time [16]. These attacks involve an adversary circumventing safeguards in place to get LLMs to perform actions that are unsafe from the perspective of the system developer. Second, data poisoning attacks involve changing training data to induce unsafe behavior in a model, such as injecting a trigger that enables some unsafe behavior by changing a small part of the LLM's training data [1, 7, 2]. Finally, model extraction or data extraction attacks target privacy, either of the developer's model parameters, to understand how the model was built [3], or of sensitive user data that the model may have been trained on [12]. Some of these are well studied, but we believe they are understudied in the areas of tool use and robotics foundation models, where new applications of foundation models lead to new possible threats.

Defenses: There are many categories of defenses. Defense starts with red-teaming, which enables identifying failures in the model by probing possible failure modes [10].
One defense is adversarial training, which defends the model against possible failure modes by specifically inoculating against those failures, or by more generally improving robustness to perturbations of prompts [15, 14]. An attractive approach is to bound neural network behavior formally [8]; however, it is difficult to achieve strong bounds for LLMs in practice. Finally, we can "work around" failure modes by implementing neurosymbolic approaches or by building models into pipelines that isolate any potential harm from the model in the overall system. A related idea is to educate users of a model, so that when they see potentially erroneous or dangerous outputs, the harm is minimized; this approach isolates society itself from the model's ill effects. Several of these defenses can be improved through model interpretability, that is, understanding how a black-box neural network model functions internally.

Within this space of topics, we highlight four topics that we think are particularly worthy of future study. These represent either emerging areas of attacks and defenses on cutting-edge applications or understudied aspects of the attack-defense landscape. They are not meant to preclude the value of other research along the directions mentioned above.

5.2 Agentic, tool-use, and robotic system vulnerabilities

Models that interact with external systems or with the real world pose additional threats beyond systems that only generate text or images. A model could be attacked either through prompt injection to trigger undesirable behavior or through manipulation of its training data to include a backdoor. Defending against such malicious uses requires guaranteeing that API calls are safe. How do we define safety, how do we guarantee it, and is it conceivably feasible?

Although prompt injection is well studied, we believe the idea of data poisoning for these systems is underexplored.
For instance, training data could be manipulated such that (a) a robot receives a special signal that tells it to do something harmful, or (b) data injected from the Internet into a VLM causes actual downstream harm in some system. In this space, a major gap is benchmarking. Some studies exist [5], but we believe more should be done. What should a benchmark look like when thinking about robots or agents and ensuring safety with respect to the LLMs composing these APIs? This could require defining a canonical architecture for an LLM/VLM/foundation model and a platform or API it interacts with. What are the representative examples or use cases? How do we build up simulators to explore this? Are they reusable across groups without access to particular robotic platforms?

5.3 Applications of interpretability

Interpretability is an intriguing path toward defending AI systems. At its core, the idea is that understanding these systems may enable us to mount more effective defenses. We envision several ingredients here.

First, continued work on analyzing LLM internals can be useful. For instance, by identifying precise "circuits" that get activated during the forward pass, we can understand what inputs might activate harmful behaviors. An attacker could use this knowledge to do some kind of prompt injection in a more targeted way: we know what capability in the model we want to "derail", and understanding the model lets us derail it more effectively. But conversely, the same understanding may enable defenses as well.

Second, this work can be broadened to consider human-in-the-loop systems. When we use interpretability as a way of analyzing systems for their potential to be attacked, very expert humans are still required to do this. Is there a way for domain experts to be able to use interpretability tools and understand the shortcomings or attack surfaces of their models/applications?
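A highly simplified sketch of the internals-analysis idea above: a linear probe scores hidden-layer activations against a direction associated with a harmful behavior, and inputs whose score exceeds a threshold are flagged for review. The direction vector, threshold, and activations below are invented toy values, not the report's method or any real model's internals.

```python
# Toy activation probe for interpretability-based defense. A learned
# "harmful behavior" direction (invented here) is compared against a
# model's hidden activations; high alignment flags the input.

def probe_score(activation, direction):
    """Dot product of an activation vector with a probe direction."""
    return sum(a * d for a, d in zip(activation, direction))

def flag_harmful(activation, direction, threshold=0.5):
    """Flag inputs whose activations align strongly with the
    harmful-behavior direction; threshold is an illustrative value."""
    return probe_score(activation, direction) > threshold
```

In practice the direction would itself be learned (e.g., from labeled harmful/benign activations), and the same machinery could serve an attacker searching for inputs that maximize the score, which is exactly the dual use discussed above.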
Across these ideas, challenges include: (1) generalizability of findings across models, datasets, tasks, and domains; and (2) the need for more mature interpretability approaches, as current studies are sometimes limited to toy examples and applications.

5.4 Multi-agent foundation model systems

One emerging area is multi-agent foundation model systems. One example so far is cooperative groups of agents. For instance, ensembles of models, or models that use separate verification and correction processes, can both be construed as multiple models cooperating. What attacks and defenses are possible in the context of such systems? How does the robustness of these systems compare to monolithic systems? Although these workflows are currently largely static, we expect this to change as the constituent agents become more powerful and more independent. For instance, groups of agents could collaborate to figure out their own strengths and weaknesses and defend against attacks. They might also need to mitigate "overpersuasion", or being convinced by an erroneous agent. Robustness against attacks emerges from knowing which agent can handle which task best. Such multi-agent systems may also be more robust against data poisoning attacks.

We can also consider competitive or independent agentic systems. For instance, in a world where users each have LLM agents that negotiate with each other to find time for meetings, the incentives of these systems are not all aligned. A rogue actor in this environment could inject unwanted behavior into the system by behaving badly. Conversely, a group of agents could band together to mitigate such poor behavior, particularly in non-zero-sum settings. We see intriguing connections between this kind of work and information ecosystems among humans: do they propagate or attenuate misinformation?
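The overpersuasion concern above can be illustrated with a minimal cooperative ensemble (a toy assumption, not a method from the report): answers from several agents are cross-checked by majority vote, so a single erroneous or adversarial agent cannot override agreement among the others.

```python
# Toy majority-vote aggregation over a group of agents. With an odd
# number of agents, one rogue agent cannot sway the outcome. The agent
# functions and query below are illustrative stand-ins.

from collections import Counter

def aggregate(agents, query):
    """Query every agent and return the most common answer."""
    answers = [agent(query) for agent in agents]
    return Counter(answers).most_common(1)[0][0]
```

Real multi-agent defenses would weight votes by task-specific competence, echoing the point above that robustness emerges from knowing which agent handles which task best.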
5.5 Long-term autonomous LLM systems

Finally, we consider the rise of "long chain-of-thought" systems like DeepSeek-R1 as an opportunity for further research. These systems are bridging from single-turn interaction (ChatGPT) to long-term interaction with the world, including revision of beliefs and lifelong learning within a single inference trace. We believe these represent a new frontier of systems. When should models make parameter updates vs. summarize what they've figured out so far as a discrete memory? How could attacks on these models develop as research matures? Is their reasoning more robust or less robust to attacks than that of other reasoning models?

References

[1] D. Bowen, B. Murphy, W. Cai, D. Khachaturov, A. Gleave, and K. Pelrine. Data poisoning in LLMs: Jailbreak-tuning and scaling laws. 2024.

[2] D. Bowen, B. Murphy, W. Cai, D. Khachaturov, A. Gleave, and K. Pelrine. Scaling trends for data poisoning in LLMs. In AAAI Conference on Artificial Intelligence, 2025.

[3] N. Carlini, D. Paleka, K. D. Dvijotham, T. Steinke, J. Hayase, A. F. Cooper, K. Lee, M. Jagielski, M. Nasr, A. Conmy, E. Wallace, D. Rolnick, and F. Tramèr. Stealing part of a production language model. ArXiv, abs/2403.06634, 2024.

[4] S. Chaudhuri, K. Ellis, O. Polozov, R. Singh, A. Solar-Lezama, and Y. Yue. Neurosymbolic programming. Found. Trends Program. Lang., 7(3):158-243, 2021.

[5] Z. Chen, Z. Xiang, C. Xiao, D. Song, and B. Li. AgentPoison: Red-teaming LLM agents via poisoning memory or knowledge bases. ArXiv, abs/2407.12784, 2024.

[6] D. Dalrymple, J. Skalse, Y. Bengio, S. Russell, M. Tegmark, S. Seshia, S. Omohundro, C. Szegedy, B. Goldhaber, N. Ammann, A. Abate, J. Halpern, C. Barrett, D. Zhao, T. Zhi-Xuan, J. Wing, and J. Tenenbaum. Towards guaranteed safe AI: A framework for ensuring robust and reliable AI systems. CoRR, abs/2405.06624, 2024.

[7] P. He, H. Xu, Y. Xing, H. Liu, M. Yamada, and J. Tang. Data poisoning for in-context learning. ArXiv, abs/2402.02160, 2024.

[8] R. Jia, A. Raghunathan, K. Göksel, and P. Liang. Certified robustness to adversarial word substitutions. ArXiv, abs/1909.00986, 2019.

[9] G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer. Reluplex: A calculus for reasoning about deep neural networks. Form. Methods Syst. Des., 60(1):87-116, July 2021.

[10] M. Mazeika, L. Phan, X. Yin, A. Zou, Z. Wang, N. Mu, E. Sakhaee, N. Li, S. Basart, B. Li, D. Forsyth, and D. Hendrycks. HarmBench: A standardized evaluation framework for automated red teaming and robust refusal. ArXiv, abs/2402.04249, 2024.

[11] S. Mitra, C. S. Pasareanu, P. Prabhakar, S. A. Seshia, R. Mangal, Y. Li, C. Watson, D. Gopinath, and H. Yu. Formal verification techniques for vision-based autonomous systems - A survey. In Principles of Verification: Cycling the Probabilistic Landscape - Essays Dedicated to Joost-Pieter Katoen on the Occasion of His 60th Birthday, Part I, volume 15262 of LNCS, pages 89-108. Springer, 2024.

[12] V. Patil, P. Hase, and M. Bansal. Can sensitive information be deleted from LLMs? Objectives for defending against extraction attacks. ArXiv, abs/2309.17410, 2023.

[13] S. A. Seshia, D. Sadigh, and S. S. Sastry. Toward verified artificial intelligence. Commun. ACM, 65(7):46-55, June 2022.

[14] A. Sheshadri, A. Ewart, P. Guo, A. Lynch, C. Wu, V. Hebbar, H. Sleight, A. C. Stickland, E. Perez, D. Hadfield-Menell, and S. Casper. Latent adversarial training improves robustness to persistent harmful behaviors in LLMs. 2024.

[15] S. Xhonneux, A. Sordoni, S. Günnemann, G. Gidel, and L. Schwinn. Efficient adversarial training in LLMs with continuous attacks. ArXiv, abs/2405.15589, 2024.

[16] S. Yi, Y. Liu, Z. Sun, T. Cong, X. He, J. Song, K. Xu, and Q. Li. Jailbreak attacks and defenses against large language models: A survey. ArXiv, abs/2407.04295, 2024.